Compare commits

...

6 Commits

Author SHA1 Message Date
Ubbe
d95aef7665 fix(copilot): stream timeout, long-running tool polling, and CreateAgent UI refresh (#12070)
Fixes an issue where agent generation completes on the backend but the UI never updates to show the result.

### Changes 🏗️


- **Stream start timeout (12s):** If the backend doesn't begin streaming
within 12 seconds of submitting a message, the stream is aborted and a
destructive toast is shown to the user.
- **Long-running tool polling:** Added `useLongRunningToolPolling` hook
that polls the session endpoint every 1.5s while a tool output is in an
operating state (`operation_started` / `operation_pending` /
`operation_in_progress`). When the backend completes, messages are
refreshed so the UI reflects the final result.
- **CreateAgent UI improvements:** Replaced the orbit loader / progress
bar with a mini-game, made the accordion expand by default for saved
agents, and improved the saved-agent card with an image, icons, and links
that open in new tabs.
- **Backend tweaks:** Added `image_url` to `CreateAgentToolOutput` and
made minor model/service updates for the dummy agent generator (see the
sketch below).
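
For reference, a rough, self-contained sketch of the dummy-generator gating this PR relies on. The real implementation lives in the service-client diff further down; the `Settings`/`_Config` stubs here are simplified stand-ins for illustration only.

```python
# Rough sketch of the dummy-mode gating added to the Agent Generator service
# client (enabled via AGENTGENERATOR_USE_DUMMY=true). Names mirror the diff;
# the Settings/_Config stubs are simplified stand-ins.
import asyncio
import logging
from dataclasses import dataclass, field
from typing import Any

logger = logging.getLogger(__name__)
_dummy_mode_warned = False


@dataclass
class _Config:
    agentgenerator_host: str = ""
    agentgenerator_use_dummy: bool = True  # AGENTGENERATOR_USE_DUMMY=true


@dataclass
class Settings:
    config: _Config = field(default_factory=_Config)


def _is_dummy_mode(settings: Settings) -> bool:
    """True when dummy mode is enabled; warn only once to keep logs readable."""
    global _dummy_mode_warned
    if settings.config.agentgenerator_use_dummy and not _dummy_mode_warned:
        logger.warning(
            "Agent Generator in DUMMY MODE - mock responses only, not for production!"
        )
        _dummy_mode_warned = True
    return settings.config.agentgenerator_use_dummy


async def decompose_goal_dummy(description: str) -> dict[str, Any]:
    # Stand-in for the canned response returned by the new dummy module.
    return {"type": "instructions", "steps": [{"description": description}]}


async def decompose_goal(settings: Settings, description: str) -> dict[str, Any]:
    if _is_dummy_mode(settings):  # short-circuit before any HTTP call
        return await decompose_goal_dummy(description)
    raise NotImplementedError("real HTTP call elided in this sketch")


if __name__ == "__main__":
    print(asyncio.run(decompose_goal(Settings(), "Summarize my inbox daily")))
```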

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Send a message and verify the stream starts within 12s or a toast appears
  - [x] Trigger agent creation and verify the UI updates when the backend completes
  - [x] Verify the saved-agent card renders correctly with image, links, and icons

---------

Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 20:06:40 +00:00
Nicholas Tindle
cb166dd6fb feat(blocks): Store sandbox files to workspace (#12073)
Store files created by sandbox blocks (Claude Code, Code Executor) to
the user's workspace for persistence across runs.

### Changes 🏗️

- **New `sandbox_files.py` utility** (`backend/util/sandbox_files.py`)
  - Shared module for extracting files from E2B sandboxes
  - Stores files to workspace via `store_media_file()` (includes virus scanning, size limits)
  - Returns `SandboxFileOutput` with path, content, and `workspace_ref` (see the usage sketch after this list)

- **Claude Code block** (`backend/blocks/claude_code.py`)
  - Added `workspace_ref` field to `FileOutput` schema
  - Replaced inline `_extract_files()` with shared utility
  - Files from working directory now stored to workspace automatically

- **Code Executor block** (`backend/blocks/code_executor.py`)
  - Added `files` output field to `ExecuteCodeBlock.Output`
  - Creates `/output` directory in sandbox before execution
  - Extracts all files (text + binary) from `/output` after execution
  - Updated `execute_code()` to support file extraction with `extract_files` param
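
Roughly, a sandbox-backed block hands off extraction and storage like this; the keyword arguments follow the `extract_and_store_sandbox_files` signature in the diff below, and the sandbox, execution context, and timestamp are assumed to come from the calling block.

```python
# Sketch of a block-side call into the new shared utility; the import and
# keyword arguments follow the signature shown in the sandbox_files.py diff
# below. The caller supplies the E2B sandbox and execution context.
from backend.util.sandbox_files import SandboxFileOutput, extract_and_store_sandbox_files


async def collect_run_outputs(
    sandbox,                      # e2b AsyncSandbox owned by the block
    working_directory: str,       # directory to scan for created/modified files
    execution_context,            # backend.executor.utils.ExecutionContext
    start_timestamp: str | None,  # captured before execution to scope the search
) -> list[dict]:
    files: list[SandboxFileOutput] = await extract_and_store_sandbox_files(
        sandbox=sandbox,
        working_directory=working_directory,
        execution_context=execution_context,
        since_timestamp=start_timestamp,
        text_only=False,  # include binaries, as the Code Executor block now does
    )
    # Blocks yield plain dicts on their `files` output.
    return [f.model_dump() for f in files]
```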

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create agent with Claude Code block, have it create a file, verify `workspace_ref` in output
  - [x] Create agent with Code Executor block, write file to `/output`, verify `workspace_ref` in output
  - [x] Verify files persist in workspace after sandbox disposal
  - [x] Verify binary files (images, etc.) work correctly in Code Executor
  - [x] Verify existing graphs using `content` field still work (backward compat)

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - this is purely additive backend
code.

---

**Related:** Closes SECRT-1931

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Medium Risk**
> Adds automatic extraction and workspace storage of sandbox-written
files (including binaries for code execution), which can affect output
payload size, performance, and file-handling edge cases.
> 
> **Overview**
> **Sandbox blocks now persist generated files to workspace.** A new
shared utility (`backend/util/sandbox_files.py`) extracts files from an
E2B sandbox (scoped by a start timestamp) and stores them via
`store_media_file`, returning `SandboxFileOutput` with `workspace_ref`.
> 
> `ClaudeCodeBlock` replaces its inline file-scraping logic with this
utility and updates the `files` output schema to include
`workspace_ref`.
> 
> `ExecuteCodeBlock` adds a `files` output and extends the executor
mixin to optionally extract/store files (text + binary) when an
`execution_context` is provided; related mocks/tests and docs are
updated accordingly.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
343854c0cf. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
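
To make the new output shape concrete, a single `files` entry looks roughly like the dict below; the field names come from `SandboxFileOutput` in the diff, while the workspace id, MIME type, and byte count are made up. For binaries, `content` holds a placeholder or a data-URI fallback, so consumers would typically prefer `workspace_ref` when it is set.

```python
# Illustrative `files` entry (field names from SandboxFileOutput in the diff
# below; the workspace id, MIME type, and byte count are hypothetical).
file_entry = {
    "path": "/home/user/chart.png",
    "relative_path": "chart.png",
    "name": "chart.png",
    "content": "[Binary file: 2048 bytes]",          # placeholder for binary content
    "workspace_ref": "workspace://abc123#image/png",  # None if workspace storage failed
}

# Prefer the workspace reference when present; fall back to inline content
# (real text for text files, or a data-URI fallback for binaries).
source = file_entry["workspace_ref"] or file_entry["content"]
print(source)
```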

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 15:56:59 +00:00
Swifty
3d31f62bf1 Revert "added feature request tooling"
This reverts commit b8b6c9de23.
2026-02-12 16:39:24 +01:00
Swifty
b8b6c9de23 added feature request tooling 2026-02-12 16:38:17 +01:00
Abhimanyu Yadav
4f6055f494 refactor(frontend): remove default expiration date from API key credentials form (#12092)
### Changes 🏗️

Removed the default expiration date for API keys in the credentials
modal. Previously, API keys were set to expire the next day by default,
but now the expiration date field starts empty, allowing users to
explicitly choose whether they want to set an expiration date.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Open the API key credentials modal and verify the expiration date field is empty by default
  - [x] Test creating an API key with and without an expiration date
  - [x] Verify both scenarios work correctly

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Removed the default expiration date for API key credentials in the
credentials modal. Previously, API keys were automatically set to expire
the next day at midnight. Now the expiration date field starts empty,
allowing users to explicitly choose whether to set an expiration.

- Removed `getDefaultExpirationDate()` helper function that calculated
tomorrow's date
- Changed default `expiresAt` value from calculated date to empty string
- Backend already supports optional expiration (`expires_at?: number`),
so no backend changes needed
- Form submission correctly handles empty expiration by passing
`undefined` to the API
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with minimal risk
- The changes are straightforward and well-contained. The refactor
removes a helper function and changes a default value. The backend API
already supports optional expiration dates, and the form submission
logic correctly handles empty values by passing undefined. The change
improves UX by not forcing a default expiration date on users.
- No files require special attention
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
2026-02-12 12:57:06 +00:00
Otto
695a185fa1 fix(frontend): remove fixed min-height from CoPilot message container (#12091)
## Summary

Removes the `min-h-screen` class from `ConversationContent` in
ChatMessagesContainer, which was causing fixed height layout issues in
the CoPilot chat interface.

## Changes

- Removed `min-h-screen` from ConversationContent className

## Linear

Fixes [SECRT-1944](https://linear.app/autogpt/issue/SECRT-1944)

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Removes the `min-h-screen` (100vh) class from `ConversationContent` that
was causing the chat message container to enforce a minimum viewport
height. The parent container already handles height constraints with
`h-full min-h-0` and flexbox layout, so the fixed minimum height was
creating layout conflicts. The component now properly grows within its
flex container using `flex-1`.
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with minimal risk
- The change removes a single problematic CSS class that was causing
fixed height layout issues. The parent container already handles height
constraints properly with flexbox, and removing min-h-screen allows the
component to size correctly within its flex parent. This is a targeted,
low-risk bug fix with no logic changes.
- No files require special attention
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
2026-02-12 12:46:29 +00:00
18 changed files with 1434 additions and 228 deletions

View File

@@ -0,0 +1,154 @@
"""Dummy Agent Generator for testing.
Returns mock responses matching the format expected from the external service.
Enable via AGENTGENERATOR_USE_DUMMY=true in settings.
WARNING: This is for testing only. Do not use in production.
"""
import asyncio
import logging
import uuid
from typing import Any
logger = logging.getLogger(__name__)
# Dummy decomposition result (instructions type)
DUMMY_DECOMPOSITION_RESULT: dict[str, Any] = {
"type": "instructions",
"steps": [
{
"description": "Get input from user",
"action": "input",
"block_name": "AgentInputBlock",
},
{
"description": "Process the input",
"action": "process",
"block_name": "TextFormatterBlock",
},
{
"description": "Return output to user",
"action": "output",
"block_name": "AgentOutputBlock",
},
],
}
# Block IDs from backend/blocks/io.py
AGENT_INPUT_BLOCK_ID = "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b"
AGENT_OUTPUT_BLOCK_ID = "363ae599-353e-4804-937e-b2ee3cef3da4"
def _generate_dummy_agent_json() -> dict[str, Any]:
"""Generate a minimal valid agent JSON for testing."""
input_node_id = str(uuid.uuid4())
output_node_id = str(uuid.uuid4())
return {
"id": str(uuid.uuid4()),
"version": 1,
"is_active": True,
"name": "Dummy Test Agent",
"description": "A dummy agent generated for testing purposes",
"nodes": [
{
"id": input_node_id,
"block_id": AGENT_INPUT_BLOCK_ID,
"input_default": {
"name": "input",
"title": "Input",
"description": "Enter your input",
"placeholder_values": [],
},
"metadata": {"position": {"x": 0, "y": 0}},
},
{
"id": output_node_id,
"block_id": AGENT_OUTPUT_BLOCK_ID,
"input_default": {
"name": "output",
"title": "Output",
"description": "Agent output",
"format": "{output}",
},
"metadata": {"position": {"x": 400, "y": 0}},
},
],
"links": [
{
"id": str(uuid.uuid4()),
"source_id": input_node_id,
"sink_id": output_node_id,
"source_name": "result",
"sink_name": "value",
"is_static": False,
},
],
}
async def decompose_goal_dummy(
description: str,
context: str = "",
library_agents: list[dict[str, Any]] | None = None,
) -> dict[str, Any]:
"""Return dummy decomposition result."""
logger.info("Using dummy agent generator for decompose_goal")
return DUMMY_DECOMPOSITION_RESULT.copy()
async def generate_agent_dummy(
instructions: dict[str, Any],
library_agents: list[dict[str, Any]] | None = None,
operation_id: str | None = None,
task_id: str | None = None,
) -> dict[str, Any]:
"""Return dummy agent JSON after a simulated delay."""
logger.info("Using dummy agent generator for generate_agent (30s delay)")
await asyncio.sleep(30)
return _generate_dummy_agent_json()
async def generate_agent_patch_dummy(
update_request: str,
current_agent: dict[str, Any],
library_agents: list[dict[str, Any]] | None = None,
operation_id: str | None = None,
task_id: str | None = None,
) -> dict[str, Any]:
"""Return dummy patched agent (returns the current agent with updated description)."""
logger.info("Using dummy agent generator for generate_agent_patch")
patched = current_agent.copy()
patched["description"] = (
f"{current_agent.get('description', '')} (updated: {update_request})"
)
return patched
async def customize_template_dummy(
template_agent: dict[str, Any],
modification_request: str,
context: str = "",
) -> dict[str, Any]:
"""Return dummy customized template (returns template with updated description)."""
logger.info("Using dummy agent generator for customize_template")
customized = template_agent.copy()
customized["description"] = (
f"{template_agent.get('description', '')} (customized: {modification_request})"
)
return customized
async def get_blocks_dummy() -> list[dict[str, Any]]:
"""Return dummy blocks list."""
logger.info("Using dummy agent generator for get_blocks")
return [
{"id": AGENT_INPUT_BLOCK_ID, "name": "AgentInputBlock"},
{"id": AGENT_OUTPUT_BLOCK_ID, "name": "AgentOutputBlock"},
]
async def health_check_dummy() -> bool:
"""Always returns healthy for dummy service."""
return True

View File

@@ -12,8 +12,19 @@ import httpx
from backend.util.settings import Settings
from .dummy import (
customize_template_dummy,
decompose_goal_dummy,
generate_agent_dummy,
generate_agent_patch_dummy,
get_blocks_dummy,
health_check_dummy,
)
logger = logging.getLogger(__name__)
_dummy_mode_warned = False
def _create_error_response(
error_message: str,
@@ -90,10 +101,26 @@ def _get_settings() -> Settings:
return _settings
def is_external_service_configured() -> bool:
"""Check if external Agent Generator service is configured."""
def _is_dummy_mode() -> bool:
"""Check if dummy mode is enabled for testing."""
global _dummy_mode_warned
settings = _get_settings()
return bool(settings.config.agentgenerator_host)
is_dummy = bool(settings.config.agentgenerator_use_dummy)
if is_dummy and not _dummy_mode_warned:
logger.warning(
"Agent Generator running in DUMMY MODE - returning mock responses. "
"Do not use in production!"
)
_dummy_mode_warned = True
return is_dummy
def is_external_service_configured() -> bool:
"""Check if external Agent Generator service is configured (or dummy mode)."""
settings = _get_settings()
return bool(settings.config.agentgenerator_host) or bool(
settings.config.agentgenerator_use_dummy
)
def _get_base_url() -> str:
@@ -137,6 +164,9 @@ async def decompose_goal_external(
- {"type": "error", "error": "...", "error_type": "..."} on error
Or None on unexpected error
"""
if _is_dummy_mode():
return await decompose_goal_dummy(description, context, library_agents)
client = _get_client()
if context:
@@ -226,6 +256,11 @@ async def generate_agent_external(
Returns:
Agent JSON dict, {"status": "accepted"} for async, or error dict {"type": "error", ...} on error
"""
if _is_dummy_mode():
return await generate_agent_dummy(
instructions, library_agents, operation_id, task_id
)
client = _get_client()
# Build request payload
@@ -297,6 +332,11 @@ async def generate_agent_patch_external(
Returns:
Updated agent JSON, clarifying questions dict, {"status": "accepted"} for async, or error dict on error
"""
if _is_dummy_mode():
return await generate_agent_patch_dummy(
update_request, current_agent, library_agents, operation_id, task_id
)
client = _get_client()
# Build request payload
@@ -383,6 +423,11 @@ async def customize_template_external(
Returns:
Customized agent JSON, clarifying questions dict, or error dict on error
"""
if _is_dummy_mode():
return await customize_template_dummy(
template_agent, modification_request, context
)
client = _get_client()
request = modification_request
@@ -445,6 +490,9 @@ async def get_blocks_external() -> list[dict[str, Any]] | None:
Returns:
List of block info dicts or None on error
"""
if _is_dummy_mode():
return await get_blocks_dummy()
client = _get_client()
try:
@@ -478,6 +526,9 @@ async def health_check() -> bool:
if not is_external_service_configured():
return False
if _is_dummy_mode():
return await health_check_dummy()
client = _get_client()
try:

View File

@@ -1,10 +1,10 @@
import json
import shlex
import uuid
from typing import Literal, Optional
from typing import TYPE_CHECKING, Literal, Optional
from e2b import AsyncSandbox as BaseAsyncSandbox
from pydantic import BaseModel, SecretStr
from pydantic import SecretStr
from backend.blocks._base import (
Block,
@@ -20,6 +20,13 @@ from backend.data.model import (
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.sandbox_files import (
SandboxFileOutput,
extract_and_store_sandbox_files,
)
if TYPE_CHECKING:
from backend.executor.utils import ExecutionContext
class ClaudeCodeExecutionError(Exception):
@@ -174,22 +181,15 @@ class ClaudeCodeBlock(Block):
advanced=True,
)
class FileOutput(BaseModel):
"""A file extracted from the sandbox."""
path: str
relative_path: str # Path relative to working directory (for GitHub, etc.)
name: str
content: str
class Output(BlockSchemaOutput):
response: str = SchemaField(
description="The output/response from Claude Code execution"
)
files: list["ClaudeCodeBlock.FileOutput"] = SchemaField(
files: list[SandboxFileOutput] = SchemaField(
description=(
"List of text files created/modified by Claude Code during this execution. "
"Each file has 'path', 'relative_path', 'name', and 'content' fields."
"Each file has 'path', 'relative_path', 'name', 'content', and 'workspace_ref' fields. "
"workspace_ref contains a workspace:// URI if the file was stored to workspace."
)
)
conversation_history: str = SchemaField(
@@ -252,6 +252,7 @@ class ClaudeCodeBlock(Block):
"relative_path": "index.html",
"name": "index.html",
"content": "<html>Hello World</html>",
"workspace_ref": None,
}
],
),
@@ -267,11 +268,12 @@ class ClaudeCodeBlock(Block):
"execute_claude_code": lambda *args, **kwargs: (
"Created index.html with hello world content", # response
[
ClaudeCodeBlock.FileOutput(
SandboxFileOutput(
path="/home/user/index.html",
relative_path="index.html",
name="index.html",
content="<html>Hello World</html>",
workspace_ref=None,
)
], # files
"User: Create a hello world HTML file\n"
@@ -294,7 +296,8 @@ class ClaudeCodeBlock(Block):
existing_sandbox_id: str,
conversation_history: str,
dispose_sandbox: bool,
) -> tuple[str, list["ClaudeCodeBlock.FileOutput"], str, str, str]:
execution_context: "ExecutionContext",
) -> tuple[str, list[SandboxFileOutput], str, str, str]:
"""
Execute Claude Code in an E2B sandbox.
@@ -449,14 +452,18 @@ class ClaudeCodeBlock(Block):
else:
new_conversation_history = turn_entry
# Extract files created/modified during this run
files = await self._extract_files(
sandbox, working_directory, start_timestamp
# Extract files created/modified during this run and store to workspace
sandbox_files = await extract_and_store_sandbox_files(
sandbox=sandbox,
working_directory=working_directory,
execution_context=execution_context,
since_timestamp=start_timestamp,
text_only=True,
)
return (
response,
files,
sandbox_files, # Already SandboxFileOutput objects
new_conversation_history,
current_session_id,
sandbox_id,
@@ -471,140 +478,6 @@ class ClaudeCodeBlock(Block):
if dispose_sandbox and sandbox:
await sandbox.kill()
async def _extract_files(
self,
sandbox: BaseAsyncSandbox,
working_directory: str,
since_timestamp: str | None = None,
) -> list["ClaudeCodeBlock.FileOutput"]:
"""
Extract text files created/modified during this Claude Code execution.
Args:
sandbox: The E2B sandbox instance
working_directory: Directory to search for files
since_timestamp: ISO timestamp - only return files modified after this time
Returns:
List of FileOutput objects with path, relative_path, name, and content
"""
files: list[ClaudeCodeBlock.FileOutput] = []
# Text file extensions we can safely read as text
text_extensions = {
".txt",
".md",
".html",
".htm",
".css",
".js",
".ts",
".jsx",
".tsx",
".json",
".xml",
".yaml",
".yml",
".toml",
".ini",
".cfg",
".conf",
".py",
".rb",
".php",
".java",
".c",
".cpp",
".h",
".hpp",
".cs",
".go",
".rs",
".swift",
".kt",
".scala",
".sh",
".bash",
".zsh",
".sql",
".graphql",
".env",
".gitignore",
".dockerfile",
"Dockerfile",
".vue",
".svelte",
".astro",
".mdx",
".rst",
".tex",
".csv",
".log",
}
try:
# List files recursively using find command
# Exclude node_modules and .git directories, but allow hidden files
# like .env and .gitignore (they're filtered by text_extensions later)
# Filter by timestamp to only get files created/modified during this run
safe_working_dir = shlex.quote(working_directory)
timestamp_filter = ""
if since_timestamp:
timestamp_filter = f"-newermt {shlex.quote(since_timestamp)} "
find_result = await sandbox.commands.run(
f"find {safe_working_dir} -type f "
f"{timestamp_filter}"
f"-not -path '*/node_modules/*' "
f"-not -path '*/.git/*' "
f"2>/dev/null"
)
if find_result.stdout:
for file_path in find_result.stdout.strip().split("\n"):
if not file_path:
continue
# Check if it's a text file we can read
is_text = any(
file_path.endswith(ext) for ext in text_extensions
) or file_path.endswith("Dockerfile")
if is_text:
try:
content = await sandbox.files.read(file_path)
# Handle bytes or string
if isinstance(content, bytes):
content = content.decode("utf-8", errors="replace")
# Extract filename from path
file_name = file_path.split("/")[-1]
# Calculate relative path by stripping working directory
relative_path = file_path
if file_path.startswith(working_directory):
relative_path = file_path[len(working_directory) :]
# Remove leading slash if present
if relative_path.startswith("/"):
relative_path = relative_path[1:]
files.append(
ClaudeCodeBlock.FileOutput(
path=file_path,
relative_path=relative_path,
name=file_name,
content=content,
)
)
except Exception:
# Skip files that can't be read
pass
except Exception:
# If file extraction fails, return empty results
pass
return files
def _escape_prompt(self, prompt: str) -> str:
"""Escape the prompt for safe shell execution."""
# Use single quotes and escape any single quotes in the prompt
@@ -617,6 +490,7 @@ class ClaudeCodeBlock(Block):
*,
e2b_credentials: APIKeyCredentials,
anthropic_credentials: APIKeyCredentials,
execution_context: "ExecutionContext",
**kwargs,
) -> BlockOutput:
try:
@@ -637,6 +511,7 @@ class ClaudeCodeBlock(Block):
existing_sandbox_id=input_data.sandbox_id,
conversation_history=input_data.conversation_history,
dispose_sandbox=input_data.dispose_sandbox,
execution_context=execution_context,
)
yield "response", response

View File

@@ -1,5 +1,5 @@
from enum import Enum
from typing import Any, Literal, Optional
from typing import TYPE_CHECKING, Any, Literal, Optional
from e2b_code_interpreter import AsyncSandbox
from e2b_code_interpreter import Result as E2BExecutionResult
@@ -20,6 +20,13 @@ from backend.data.model import (
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.sandbox_files import (
SandboxFileOutput,
extract_and_store_sandbox_files,
)
if TYPE_CHECKING:
from backend.executor.utils import ExecutionContext
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
@@ -85,6 +92,9 @@ class CodeExecutionResult(MainCodeExecutionResult):
class BaseE2BExecutorMixin:
"""Shared implementation methods for E2B executor blocks."""
# Default working directory in E2B sandboxes
WORKING_DIR = "/home/user"
async def execute_code(
self,
api_key: str,
@@ -95,14 +105,21 @@ class BaseE2BExecutorMixin:
timeout: Optional[int] = None,
sandbox_id: Optional[str] = None,
dispose_sandbox: bool = False,
execution_context: Optional["ExecutionContext"] = None,
extract_files: bool = False,
):
"""
Unified code execution method that handles all three use cases:
1. Create new sandbox and execute (ExecuteCodeBlock)
2. Create new sandbox, execute, and return sandbox_id (InstantiateCodeSandboxBlock)
3. Connect to existing sandbox and execute (ExecuteCodeStepBlock)
Args:
extract_files: If True and execution_context provided, extract files
created/modified during execution and store to workspace.
""" # noqa
sandbox = None
files: list[SandboxFileOutput] = []
try:
if sandbox_id:
# Connect to existing sandbox (ExecuteCodeStepBlock case)
@@ -118,6 +135,12 @@ class BaseE2BExecutorMixin:
for cmd in setup_commands:
await sandbox.commands.run(cmd)
# Capture timestamp before execution to scope file extraction
start_timestamp = None
if extract_files:
ts_result = await sandbox.commands.run("date -u +%Y-%m-%dT%H:%M:%S")
start_timestamp = ts_result.stdout.strip() if ts_result.stdout else None
# Execute the code
execution = await sandbox.run_code(
code,
@@ -133,7 +156,24 @@ class BaseE2BExecutorMixin:
stdout_logs = "".join(execution.logs.stdout)
stderr_logs = "".join(execution.logs.stderr)
return results, text_output, stdout_logs, stderr_logs, sandbox.sandbox_id
# Extract files created/modified during this execution
if extract_files and execution_context:
files = await extract_and_store_sandbox_files(
sandbox=sandbox,
working_directory=self.WORKING_DIR,
execution_context=execution_context,
since_timestamp=start_timestamp,
text_only=False, # Include binary files too
)
return (
results,
text_output,
stdout_logs,
stderr_logs,
sandbox.sandbox_id,
files,
)
finally:
# Dispose of sandbox if requested to reduce usage costs
if dispose_sandbox and sandbox:
@@ -238,6 +278,12 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
description="Standard output logs from execution"
)
stderr_logs: str = SchemaField(description="Standard error logs from execution")
files: list[SandboxFileOutput] = SchemaField(
description=(
"Files created or modified during execution. "
"Each file has path, name, content, and workspace_ref (if stored)."
),
)
def __init__(self):
super().__init__(
@@ -259,23 +305,30 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
("results", []),
("response", "Hello World"),
("stdout_logs", "Hello World\n"),
("files", []),
],
test_mock={
"execute_code": lambda api_key, code, language, template_id, setup_commands, timeout, dispose_sandbox: ( # noqa
"execute_code": lambda api_key, code, language, template_id, setup_commands, timeout, dispose_sandbox, execution_context, extract_files: ( # noqa
[], # results
"Hello World", # text_output
"Hello World\n", # stdout_logs
"", # stderr_logs
"sandbox_id", # sandbox_id
[], # files
),
},
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
execution_context: "ExecutionContext",
**kwargs,
) -> BlockOutput:
try:
results, text_output, stdout, stderr, _ = await self.execute_code(
results, text_output, stdout, stderr, _, files = await self.execute_code(
api_key=credentials.api_key.get_secret_value(),
code=input_data.code,
language=input_data.language,
@@ -283,6 +336,8 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
setup_commands=input_data.setup_commands,
timeout=input_data.timeout,
dispose_sandbox=input_data.dispose_sandbox,
execution_context=execution_context,
extract_files=True,
)
# Determine result object shape & filter out empty formats
@@ -296,6 +351,8 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
yield "stdout_logs", stdout
if stderr:
yield "stderr_logs", stderr
# Always yield files (empty list if none)
yield "files", [f.model_dump() for f in files]
except Exception as e:
yield "error", str(e)
@@ -393,6 +450,7 @@ class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin):
"Hello World\n", # stdout_logs
"", # stderr_logs
"sandbox_id", # sandbox_id
[], # files
),
},
)
@@ -401,7 +459,7 @@ class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin):
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
_, text_output, stdout, stderr, sandbox_id = await self.execute_code(
_, text_output, stdout, stderr, sandbox_id, _ = await self.execute_code(
api_key=credentials.api_key.get_secret_value(),
code=input_data.setup_code,
language=input_data.language,
@@ -500,6 +558,7 @@ class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin):
"Hello World\n", # stdout_logs
"", # stderr_logs
sandbox_id, # sandbox_id
[], # files
),
},
)
@@ -508,7 +567,7 @@ class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin):
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
results, text_output, stdout, stderr, _ = await self.execute_code(
results, text_output, stdout, stderr, _, _ = await self.execute_code(
api_key=credentials.api_key.get_secret_value(),
code=input_data.step_code,
language=input_data.language,

View File

@@ -0,0 +1,288 @@
"""
Shared utilities for extracting and storing files from E2B sandboxes.
This module provides common file extraction and workspace storage functionality
for blocks that run code in E2B sandboxes (Claude Code, Code Executor, etc.).
"""
import base64
import logging
import mimetypes
import shlex
from dataclasses import dataclass
from typing import TYPE_CHECKING
from pydantic import BaseModel
from backend.util.file import store_media_file
from backend.util.type import MediaFileType
if TYPE_CHECKING:
from e2b import AsyncSandbox as BaseAsyncSandbox
from backend.executor.utils import ExecutionContext
logger = logging.getLogger(__name__)
# Text file extensions that can be safely read and stored as text
TEXT_EXTENSIONS = {
".txt",
".md",
".html",
".htm",
".css",
".js",
".ts",
".jsx",
".tsx",
".json",
".xml",
".yaml",
".yml",
".toml",
".ini",
".cfg",
".conf",
".py",
".rb",
".php",
".java",
".c",
".cpp",
".h",
".hpp",
".cs",
".go",
".rs",
".swift",
".kt",
".scala",
".sh",
".bash",
".zsh",
".sql",
".graphql",
".env",
".gitignore",
".dockerfile",
"Dockerfile",
".vue",
".svelte",
".astro",
".mdx",
".rst",
".tex",
".csv",
".log",
}
class SandboxFileOutput(BaseModel):
"""A file extracted from a sandbox and optionally stored in workspace."""
path: str
"""Full path in the sandbox."""
relative_path: str
"""Path relative to the working directory."""
name: str
"""Filename only."""
content: str
"""File content as text (for backward compatibility)."""
workspace_ref: str | None = None
"""Workspace reference (workspace://{id}#mime) if stored, None otherwise."""
@dataclass
class ExtractedFile:
"""Internal representation of an extracted file before storage."""
path: str
relative_path: str
name: str
content: bytes
is_text: bool
async def extract_sandbox_files(
sandbox: "BaseAsyncSandbox",
working_directory: str,
since_timestamp: str | None = None,
text_only: bool = True,
) -> list[ExtractedFile]:
"""
Extract files from an E2B sandbox.
Args:
sandbox: The E2B sandbox instance
working_directory: Directory to search for files
since_timestamp: ISO timestamp - only return files modified after this time
text_only: If True, only extract text files (default). If False, extract all files.
Returns:
List of ExtractedFile objects with path, content, and metadata
"""
files: list[ExtractedFile] = []
try:
# Build find command
safe_working_dir = shlex.quote(working_directory)
timestamp_filter = ""
if since_timestamp:
timestamp_filter = f"-newermt {shlex.quote(since_timestamp)} "
find_result = await sandbox.commands.run(
f"find {safe_working_dir} -type f "
f"{timestamp_filter}"
f"-not -path '*/node_modules/*' "
f"-not -path '*/.git/*' "
f"2>/dev/null"
)
if not find_result.stdout:
return files
for file_path in find_result.stdout.strip().split("\n"):
if not file_path:
continue
# Check if it's a text file
is_text = any(file_path.endswith(ext) for ext in TEXT_EXTENSIONS)
# Skip non-text files if text_only mode
if text_only and not is_text:
continue
try:
# Read file content as bytes
content = await sandbox.files.read(file_path, format="bytes")
if isinstance(content, str):
content = content.encode("utf-8")
elif isinstance(content, bytearray):
content = bytes(content)
# Extract filename from path
file_name = file_path.split("/")[-1]
# Calculate relative path
relative_path = file_path
if file_path.startswith(working_directory):
relative_path = file_path[len(working_directory) :]
if relative_path.startswith("/"):
relative_path = relative_path[1:]
files.append(
ExtractedFile(
path=file_path,
relative_path=relative_path,
name=file_name,
content=content,
is_text=is_text,
)
)
except Exception as e:
logger.debug(f"Failed to read file {file_path}: {e}")
continue
except Exception as e:
logger.warning(f"File extraction failed: {e}")
return files
async def store_sandbox_files(
extracted_files: list[ExtractedFile],
execution_context: "ExecutionContext",
) -> list[SandboxFileOutput]:
"""
Store extracted sandbox files to workspace and return output objects.
Args:
extracted_files: List of files extracted from sandbox
execution_context: Execution context for workspace storage
Returns:
List of SandboxFileOutput objects with workspace refs
"""
outputs: list[SandboxFileOutput] = []
for file in extracted_files:
# Decode content for text files (for backward compat content field)
if file.is_text:
try:
content_str = file.content.decode("utf-8", errors="replace")
except Exception:
content_str = ""
else:
content_str = f"[Binary file: {len(file.content)} bytes]"
# Build data URI (needed for storage and as binary fallback)
mime_type = mimetypes.guess_type(file.name)[0] or "application/octet-stream"
data_uri = f"data:{mime_type};base64,{base64.b64encode(file.content).decode()}"
# Try to store in workspace
workspace_ref: str | None = None
try:
result = await store_media_file(
file=MediaFileType(data_uri),
execution_context=execution_context,
return_format="for_block_output",
)
if result.startswith("workspace://"):
workspace_ref = result
elif not file.is_text:
# Non-workspace context (graph execution): store_media_file
# returned a data URI — use it as content so binary data isn't lost.
content_str = result
except Exception as e:
logger.warning(f"Failed to store file {file.name} to workspace: {e}")
# For binary files, fall back to data URI to prevent data loss
if not file.is_text:
content_str = data_uri
outputs.append(
SandboxFileOutput(
path=file.path,
relative_path=file.relative_path,
name=file.name,
content=content_str,
workspace_ref=workspace_ref,
)
)
return outputs
async def extract_and_store_sandbox_files(
sandbox: "BaseAsyncSandbox",
working_directory: str,
execution_context: "ExecutionContext",
since_timestamp: str | None = None,
text_only: bool = True,
) -> list[SandboxFileOutput]:
"""
Extract files from sandbox and store them in workspace.
This is the main entry point combining extraction and storage.
Args:
sandbox: The E2B sandbox instance
working_directory: Directory to search for files
execution_context: Execution context for workspace storage
since_timestamp: ISO timestamp - only return files modified after this time
text_only: If True, only extract text files
Returns:
List of SandboxFileOutput objects with content and workspace refs
"""
extracted = await extract_sandbox_files(
sandbox=sandbox,
working_directory=working_directory,
since_timestamp=since_timestamp,
text_only=text_only,
)
return await store_sandbox_files(extracted, execution_context)

View File

@@ -368,6 +368,10 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
default=600,
description="The timeout in seconds for Agent Generator service requests (includes retries for rate limits)",
)
agentgenerator_use_dummy: bool = Field(
default=False,
description="Use dummy agent generator responses for testing (bypasses external service)",
)
enable_example_blocks: bool = Field(
default=False,

View File

@@ -25,6 +25,7 @@ class TestServiceConfiguration:
"""Test that external service is not configured when host is empty."""
mock_settings = MagicMock()
mock_settings.config.agentgenerator_host = ""
mock_settings.config.agentgenerator_use_dummy = False
with patch.object(service, "_get_settings", return_value=mock_settings):
assert service.is_external_service_configured() is False

View File

@@ -159,7 +159,7 @@ export const ChatMessagesContainer = ({
return (
<Conversation className="min-h-0 flex-1">
<ConversationContent className="flex min-h-screen flex-1 flex-col gap-6 px-3 py-6">
<ConversationContent className="flex flex-1 flex-col gap-6 px-3 py-6">
{isLoading && messages.length === 0 && (
<div className="flex min-h-full flex-1 items-center justify-center">
<LoadingSpinner className="text-neutral-600" />

View File

@@ -1,10 +0,0 @@
import { parseAsString, useQueryState } from "nuqs";
export function useCopilotSessionId() {
const [urlSessionId, setUrlSessionId] = useQueryState(
"sessionId",
parseAsString,
);
return { urlSessionId, setUrlSessionId };
}

View File

@@ -0,0 +1,126 @@
import { getGetV2GetSessionQueryKey } from "@/app/api/__generated__/endpoints/chat/chat";
import { useQueryClient } from "@tanstack/react-query";
import type { UIDataTypes, UIMessage, UITools } from "ai";
import { useCallback, useEffect, useRef } from "react";
import { convertChatSessionMessagesToUiMessages } from "../helpers/convertChatSessionToUiMessages";
const OPERATING_TYPES = new Set([
"operation_started",
"operation_pending",
"operation_in_progress",
]);
const POLL_INTERVAL_MS = 1_500;
/**
* Detects whether any message contains a tool part whose output indicates
* a long-running operation is still in progress.
*/
function hasOperatingTool(
messages: UIMessage<unknown, UIDataTypes, UITools>[],
) {
for (const msg of messages) {
for (const part of msg.parts) {
if (!part.type.startsWith("tool-")) continue;
const toolPart = part as { output?: unknown };
if (!toolPart.output) continue;
const output =
typeof toolPart.output === "string"
? safeParse(toolPart.output)
: toolPart.output;
if (
output &&
typeof output === "object" &&
"type" in output &&
OPERATING_TYPES.has((output as { type: string }).type)
) {
return true;
}
}
}
return false;
}
function safeParse(value: string): unknown {
try {
return JSON.parse(value);
} catch {
return null;
}
}
/**
* Polls the session endpoint while any tool is in an "operating" state
* (operation_started / operation_pending / operation_in_progress).
*
* When the session data shows the tool output has changed (e.g. to
* agent_saved), it calls `setMessages` with the updated messages.
*/
export function useLongRunningToolPolling(
sessionId: string | null,
messages: UIMessage<unknown, UIDataTypes, UITools>[],
setMessages: (
updater: (
prev: UIMessage<unknown, UIDataTypes, UITools>[],
) => UIMessage<unknown, UIDataTypes, UITools>[],
) => void,
) {
const queryClient = useQueryClient();
const intervalRef = useRef<ReturnType<typeof setInterval> | null>(null);
const stopPolling = useCallback(() => {
if (intervalRef.current) {
clearInterval(intervalRef.current);
intervalRef.current = null;
}
}, []);
const poll = useCallback(async () => {
if (!sessionId) return;
// Invalidate the query cache so the next fetch gets fresh data
await queryClient.invalidateQueries({
queryKey: getGetV2GetSessionQueryKey(sessionId),
});
// Fetch fresh session data
const data = queryClient.getQueryData<{
status: number;
data: { messages?: unknown[] };
}>(getGetV2GetSessionQueryKey(sessionId));
if (data?.status !== 200 || !data.data.messages) return;
const freshMessages = convertChatSessionMessagesToUiMessages(
sessionId,
data.data.messages,
);
if (!freshMessages || freshMessages.length === 0) return;
// Update when the long-running tool completed
if (!hasOperatingTool(freshMessages)) {
setMessages(() => freshMessages);
stopPolling();
}
}, [sessionId, queryClient, setMessages, stopPolling]);
useEffect(() => {
const shouldPoll = hasOperatingTool(messages);
// Always clear any previous interval first so we never leak timers
// when the effect re-runs due to dependency changes (e.g. messages
// updating as the LLM streams text after the tool call).
stopPolling();
if (shouldPoll && sessionId) {
intervalRef.current = setInterval(() => {
poll();
}, POLL_INTERVAL_MS);
}
return () => {
stopPolling();
};
}, [messages, sessionId, poll, stopPolling]);
}

View File

@@ -1,24 +1,30 @@
"use client";
import { WarningDiamondIcon } from "@phosphor-icons/react";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import {
BookOpenIcon,
CheckFatIcon,
PencilSimpleIcon,
WarningDiamondIcon,
} from "@phosphor-icons/react";
import type { ToolUIPart } from "ai";
import NextLink from "next/link";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ProgressBar } from "../../components/ProgressBar/ProgressBar";
import {
ContentCardDescription,
ContentCodeBlock,
ContentGrid,
ContentHint,
ContentLink,
ContentMessage,
} from "../../components/ToolAccordion/AccordionContent";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useAsymptoticProgress } from "../../hooks/useAsymptoticProgress";
import {
ClarificationQuestionsCard,
ClarifyingQuestion,
} from "./components/ClarificationQuestionsCard";
import { MiniGame } from "./components/MiniGame/MiniGame";
import {
AccordionIcon,
formatMaybeJson,
@@ -52,7 +58,7 @@ function getAccordionMeta(output: CreateAgentToolOutput) {
const icon = <AccordionIcon />;
if (isAgentSavedOutput(output)) {
return { icon, title: output.agent_name };
return { icon, title: output.agent_name, expanded: true };
}
if (isAgentPreviewOutput(output)) {
return {
@@ -78,6 +84,7 @@ function getAccordionMeta(output: CreateAgentToolOutput) {
return {
icon,
title: "Creating agent, this may take a few minutes. Sit back and relax.",
expanded: true,
};
}
return {
@@ -107,8 +114,6 @@ export function CreateAgentTool({ part }: Props) {
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output));
const progress = useAsymptoticProgress(isOperating);
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
@@ -152,31 +157,53 @@ export function CreateAgentTool({ part }: Props) {
<ToolAccordion {...getAccordionMeta(output)}>
{isOperating && (
<ContentGrid>
<ProgressBar value={progress} className="max-w-[280px]" />
<MiniGame />
<ContentHint>
This could take a few minutes, grab a coffee
This could take a few minutes, play while you wait!
</ContentHint>
</ContentGrid>
)}
{isAgentSavedOutput(output) && (
<ContentGrid>
<ContentMessage>{output.message}</ContentMessage>
<div className="flex flex-wrap gap-2">
<ContentLink href={output.library_agent_link}>
Open in library
</ContentLink>
<ContentLink href={output.agent_page_link}>
Open in builder
</ContentLink>
<div className="rounded-xl border border-border/60 bg-card p-4 shadow-sm">
<div className="flex items-baseline gap-2">
<CheckFatIcon
size={18}
weight="regular"
className="relative top-1 text-green-500"
/>
<Text
variant="body-medium"
className="text-blacks mb-2 text-[16px]"
>
{output.message}
</Text>
</div>
<ContentCodeBlock>
{truncateText(
formatMaybeJson({ agent_id: output.agent_id }),
800,
)}
</ContentCodeBlock>
</ContentGrid>
<div className="mt-3 flex flex-wrap gap-4">
<Button variant="outline" size="small">
<NextLink
href={output.library_agent_link}
className="inline-flex items-center gap-1.5"
target="_blank"
rel="noopener noreferrer"
>
<BookOpenIcon size={14} weight="regular" />
Open in library
</NextLink>
</Button>
<Button variant="outline" size="small">
<NextLink
href={output.agent_page_link}
target="_blank"
rel="noopener noreferrer"
className="inline-flex items-center gap-1.5"
>
<PencilSimpleIcon size={14} weight="regular" />
Open in builder
</NextLink>
</Button>
</div>
</div>
)}
{isAgentPreviewOutput(output) && (

View File

@@ -0,0 +1,21 @@
"use client";
import { useMiniGame } from "./useMiniGame";
export function MiniGame() {
const { canvasRef } = useMiniGame();
return (
<div
className="w-full overflow-hidden rounded-md bg-background text-foreground"
style={{ border: "1px solid #d17fff" }}
>
<canvas
ref={canvasRef}
tabIndex={0}
className="block w-full outline-none"
style={{ imageRendering: "pixelated" }}
/>
</div>
);
}

View File

@@ -0,0 +1,579 @@
import { useEffect, useRef } from "react";
/* ------------------------------------------------------------------ */
/* Constants */
/* ------------------------------------------------------------------ */
const CANVAS_HEIGHT = 150;
const GRAVITY = 0.55;
const JUMP_FORCE = -9.5;
const BASE_SPEED = 3;
const SPEED_INCREMENT = 0.0008;
const SPAWN_MIN = 70;
const SPAWN_MAX = 130;
const CHAR_SIZE = 18;
const CHAR_X = 50;
const GROUND_PAD = 20;
const STORAGE_KEY = "copilot-minigame-highscore";
// Colors
const COLOR_BG = "#E8EAF6";
const COLOR_CHAR = "#263238";
const COLOR_BOSS = "#F50057";
// Boss
const BOSS_SIZE = 36;
const BOSS_ENTER_SPEED = 2;
const BOSS_LEAVE_SPEED = 3;
const BOSS_SHOOT_COOLDOWN = 90;
const BOSS_SHOTS_TO_EVADE = 5;
const BOSS_INTERVAL = 20; // every N score
const PROJ_SPEED = 4.5;
const PROJ_SIZE = 12;
/* ------------------------------------------------------------------ */
/* Types */
/* ------------------------------------------------------------------ */
interface Obstacle {
x: number;
width: number;
height: number;
scored: boolean;
}
interface Projectile {
x: number;
y: number;
speed: number;
evaded: boolean;
type: "low" | "high";
}
interface BossState {
phase: "inactive" | "entering" | "fighting" | "leaving";
x: number;
targetX: number;
shotsEvaded: number;
cooldown: number;
projectiles: Projectile[];
bob: number;
}
interface GameState {
charY: number;
vy: number;
obstacles: Obstacle[];
score: number;
highScore: number;
speed: number;
frame: number;
nextSpawn: number;
running: boolean;
over: boolean;
groundY: number;
boss: BossState;
bossThreshold: number;
}
/* ------------------------------------------------------------------ */
/* Helpers */
/* ------------------------------------------------------------------ */
function randInt(min: number, max: number) {
return Math.floor(Math.random() * (max - min + 1)) + min;
}
function readHighScore(): number {
try {
return parseInt(localStorage.getItem(STORAGE_KEY) || "0", 10) || 0;
} catch {
return 0;
}
}
function writeHighScore(score: number) {
try {
localStorage.setItem(STORAGE_KEY, String(score));
} catch {
/* noop */
}
}
function makeBoss(): BossState {
return {
phase: "inactive",
x: 0,
targetX: 0,
shotsEvaded: 0,
cooldown: 0,
projectiles: [],
bob: 0,
};
}
function makeState(groundY: number): GameState {
return {
charY: groundY - CHAR_SIZE,
vy: 0,
obstacles: [],
score: 0,
highScore: readHighScore(),
speed: BASE_SPEED,
frame: 0,
nextSpawn: randInt(SPAWN_MIN, SPAWN_MAX),
running: false,
over: false,
groundY,
boss: makeBoss(),
bossThreshold: BOSS_INTERVAL,
};
}
function gameOver(s: GameState) {
s.running = false;
s.over = true;
if (s.score > s.highScore) {
s.highScore = s.score;
writeHighScore(s.score);
}
}
/* ------------------------------------------------------------------ */
/* Projectile collision — shared between fighting & leaving phases */
/* ------------------------------------------------------------------ */
/** Returns true if the player died. */
function tickProjectiles(s: GameState): boolean {
const boss = s.boss;
for (const p of boss.projectiles) {
p.x -= p.speed;
if (!p.evaded && p.x + PROJ_SIZE < CHAR_X) {
p.evaded = true;
boss.shotsEvaded++;
}
// Collision
if (
!p.evaded &&
CHAR_X + CHAR_SIZE > p.x &&
CHAR_X < p.x + PROJ_SIZE &&
s.charY + CHAR_SIZE > p.y &&
s.charY < p.y + PROJ_SIZE
) {
gameOver(s);
return true;
}
}
boss.projectiles = boss.projectiles.filter((p) => p.x + PROJ_SIZE > -20);
return false;
}
/* ------------------------------------------------------------------ */
/* Update */
/* ------------------------------------------------------------------ */
function update(s: GameState, canvasWidth: number) {
if (!s.running) return;
s.frame++;
// Speed only ramps during regular play
if (s.boss.phase === "inactive") {
s.speed = BASE_SPEED + s.frame * SPEED_INCREMENT;
}
// ---- Character physics (always active) ---- //
s.vy += GRAVITY;
s.charY += s.vy;
if (s.charY + CHAR_SIZE >= s.groundY) {
s.charY = s.groundY - CHAR_SIZE;
s.vy = 0;
}
// ---- Trigger boss ---- //
if (s.boss.phase === "inactive" && s.score >= s.bossThreshold) {
s.boss.phase = "entering";
s.boss.x = canvasWidth + 10;
s.boss.targetX = canvasWidth - BOSS_SIZE - 40;
s.boss.shotsEvaded = 0;
s.boss.cooldown = BOSS_SHOOT_COOLDOWN;
s.boss.projectiles = [];
s.obstacles = [];
}
// ---- Boss: entering ---- //
if (s.boss.phase === "entering") {
s.boss.bob = Math.sin(s.frame * 0.05) * 3;
s.boss.x -= BOSS_ENTER_SPEED;
if (s.boss.x <= s.boss.targetX) {
s.boss.x = s.boss.targetX;
s.boss.phase = "fighting";
}
return; // no obstacles while entering
}
// ---- Boss: fighting ---- //
if (s.boss.phase === "fighting") {
s.boss.bob = Math.sin(s.frame * 0.05) * 3;
// Shoot
s.boss.cooldown--;
if (s.boss.cooldown <= 0) {
const isLow = Math.random() < 0.5;
s.boss.projectiles.push({
x: s.boss.x - PROJ_SIZE,
y: isLow ? s.groundY - 14 : s.groundY - 70,
speed: PROJ_SPEED,
evaded: false,
type: isLow ? "low" : "high",
});
s.boss.cooldown = BOSS_SHOOT_COOLDOWN;
}
if (tickProjectiles(s)) return;
// Boss defeated?
if (s.boss.shotsEvaded >= BOSS_SHOTS_TO_EVADE) {
s.boss.phase = "leaving";
s.score += 5; // bonus
s.bossThreshold = s.score + BOSS_INTERVAL;
}
return;
}
// ---- Boss: leaving ---- //
if (s.boss.phase === "leaving") {
s.boss.bob = Math.sin(s.frame * 0.05) * 3;
s.boss.x += BOSS_LEAVE_SPEED;
// Still check in-flight projectiles
if (tickProjectiles(s)) return;
if (s.boss.x > canvasWidth + 50) {
s.boss = makeBoss();
s.nextSpawn = s.frame + randInt(SPAWN_MIN / 2, SPAWN_MAX / 2);
}
return;
}
// ---- Regular obstacle play ---- //
if (s.frame >= s.nextSpawn) {
s.obstacles.push({
x: canvasWidth + 10,
width: randInt(10, 16),
height: randInt(20, 48),
scored: false,
});
s.nextSpawn = s.frame + randInt(SPAWN_MIN, SPAWN_MAX);
}
for (const o of s.obstacles) {
o.x -= s.speed;
if (!o.scored && o.x + o.width < CHAR_X) {
o.scored = true;
s.score++;
}
}
s.obstacles = s.obstacles.filter((o) => o.x + o.width > -20);
for (const o of s.obstacles) {
const oY = s.groundY - o.height;
if (
CHAR_X + CHAR_SIZE > o.x &&
CHAR_X < o.x + o.width &&
s.charY + CHAR_SIZE > oY
) {
gameOver(s);
return;
}
}
}
/* ------------------------------------------------------------------ */
/* Drawing */
/* ------------------------------------------------------------------ */
function drawBoss(ctx: CanvasRenderingContext2D, s: GameState, bg: string) {
const bx = s.boss.x;
const by = s.groundY - BOSS_SIZE + s.boss.bob;
// Body
ctx.save();
ctx.fillStyle = COLOR_BOSS;
ctx.globalAlpha = 0.9;
ctx.beginPath();
ctx.roundRect(bx, by, BOSS_SIZE, BOSS_SIZE, 4);
ctx.fill();
ctx.restore();
// Eyes
ctx.save();
ctx.fillStyle = bg;
const eyeY = by + 13;
ctx.beginPath();
ctx.arc(bx + 10, eyeY, 4, 0, Math.PI * 2);
ctx.fill();
ctx.beginPath();
ctx.arc(bx + 26, eyeY, 4, 0, Math.PI * 2);
ctx.fill();
ctx.restore();
// Angry eyebrows
ctx.save();
ctx.strokeStyle = bg;
ctx.lineWidth = 2;
ctx.beginPath();
ctx.moveTo(bx + 5, eyeY - 7);
ctx.lineTo(bx + 14, eyeY - 4);
ctx.stroke();
ctx.beginPath();
ctx.moveTo(bx + 31, eyeY - 7);
ctx.lineTo(bx + 22, eyeY - 4);
ctx.stroke();
ctx.restore();
// Zigzag mouth
ctx.save();
ctx.strokeStyle = bg;
ctx.lineWidth = 1.5;
ctx.beginPath();
ctx.moveTo(bx + 10, by + 27);
ctx.lineTo(bx + 14, by + 24);
ctx.lineTo(bx + 18, by + 27);
ctx.lineTo(bx + 22, by + 24);
ctx.lineTo(bx + 26, by + 27);
ctx.stroke();
ctx.restore();
}
function drawProjectiles(ctx: CanvasRenderingContext2D, boss: BossState) {
ctx.save();
ctx.fillStyle = COLOR_BOSS;
ctx.globalAlpha = 0.8;
for (const p of boss.projectiles) {
if (p.evaded) continue;
ctx.beginPath();
ctx.arc(
p.x + PROJ_SIZE / 2,
p.y + PROJ_SIZE / 2,
PROJ_SIZE / 2,
0,
Math.PI * 2,
);
ctx.fill();
}
ctx.restore();
}
function draw(
ctx: CanvasRenderingContext2D,
s: GameState,
w: number,
h: number,
fg: string,
started: boolean,
) {
ctx.fillStyle = COLOR_BG;
ctx.fillRect(0, 0, w, h);
// Ground
ctx.save();
ctx.strokeStyle = fg;
ctx.globalAlpha = 0.15;
ctx.setLineDash([4, 4]);
ctx.beginPath();
ctx.moveTo(0, s.groundY);
ctx.lineTo(w, s.groundY);
ctx.stroke();
ctx.restore();
// Character
ctx.save();
ctx.fillStyle = COLOR_CHAR;
ctx.globalAlpha = 0.85;
ctx.beginPath();
ctx.roundRect(CHAR_X, s.charY, CHAR_SIZE, CHAR_SIZE, 3);
ctx.fill();
ctx.restore();
// Eyes
ctx.save();
ctx.fillStyle = COLOR_BG;
ctx.beginPath();
ctx.arc(CHAR_X + 6, s.charY + 7, 2.5, 0, Math.PI * 2);
ctx.fill();
ctx.beginPath();
ctx.arc(CHAR_X + 12, s.charY + 7, 2.5, 0, Math.PI * 2);
ctx.fill();
ctx.restore();
// Obstacles
ctx.save();
ctx.fillStyle = fg;
ctx.globalAlpha = 0.55;
for (const o of s.obstacles) {
ctx.fillRect(o.x, s.groundY - o.height, o.width, o.height);
}
ctx.restore();
// Boss + projectiles
if (s.boss.phase !== "inactive") {
drawBoss(ctx, s, COLOR_BG);
drawProjectiles(ctx, s.boss);
}
// Score HUD
ctx.save();
ctx.fillStyle = fg;
ctx.globalAlpha = 0.5;
ctx.font = "bold 11px monospace";
ctx.textAlign = "right";
ctx.fillText(`Score: ${s.score}`, w - 12, 20);
ctx.fillText(`Best: ${s.highScore}`, w - 12, 34);
if (s.boss.phase === "fighting") {
ctx.fillText(
`Evade: ${s.boss.shotsEvaded}/${BOSS_SHOTS_TO_EVADE}`,
w - 12,
48,
);
}
ctx.restore();
// Prompts
if (!started && !s.running && !s.over) {
ctx.save();
ctx.fillStyle = fg;
ctx.globalAlpha = 0.5;
ctx.font = "12px sans-serif";
ctx.textAlign = "center";
ctx.fillText("Click or press Space to play while you wait", w / 2, h / 2);
ctx.restore();
}
if (s.over) {
ctx.save();
ctx.fillStyle = fg;
ctx.globalAlpha = 0.7;
ctx.font = "bold 13px sans-serif";
ctx.textAlign = "center";
ctx.fillText("Game Over", w / 2, h / 2 - 8);
ctx.font = "11px sans-serif";
ctx.fillText("Click or Space to restart", w / 2, h / 2 + 10);
ctx.restore();
}
}
/* ------------------------------------------------------------------ */
/* Hook */
/* ------------------------------------------------------------------ */
export function useMiniGame() {
const canvasRef = useRef<HTMLCanvasElement>(null);
const stateRef = useRef<GameState | null>(null);
const rafRef = useRef(0);
const startedRef = useRef(false);
useEffect(() => {
const canvas = canvasRef.current;
if (!canvas) return;
const container = canvas.parentElement;
if (container) {
canvas.width = container.clientWidth;
canvas.height = CANVAS_HEIGHT;
}
const groundY = canvas.height - GROUND_PAD;
stateRef.current = makeState(groundY);
const style = getComputedStyle(canvas);
let fg = style.color || "#71717a";
// -------------------------------------------------------------- //
// Jump //
// -------------------------------------------------------------- //
function jump() {
const s = stateRef.current;
if (!s) return;
if (s.over) {
const hs = s.highScore;
const gy = s.groundY;
stateRef.current = makeState(gy);
stateRef.current.highScore = hs;
stateRef.current.running = true;
startedRef.current = true;
return;
}
if (!s.running) {
s.running = true;
startedRef.current = true;
return;
}
// Only jump when on the ground
if (s.charY + CHAR_SIZE >= s.groundY) {
s.vy = JUMP_FORCE;
}
}
function onKey(e: KeyboardEvent) {
if (e.code === "Space" || e.key === " ") {
e.preventDefault();
jump();
}
}
function onClick() {
canvas?.focus();
jump();
}
// -------------------------------------------------------------- //
// Loop //
// -------------------------------------------------------------- //
function loop() {
const s = stateRef.current;
if (!canvas || !s) return;
const ctx = canvas.getContext("2d");
if (!ctx) return;
update(s, canvas.width);
draw(ctx, s, canvas.width, canvas.height, fg, startedRef.current);
rafRef.current = requestAnimationFrame(loop);
}
rafRef.current = requestAnimationFrame(loop);
canvas.addEventListener("click", onClick);
canvas.addEventListener("keydown", onKey);
const observer = new ResizeObserver((entries) => {
for (const entry of entries) {
canvas.width = entry.contentRect.width;
canvas.height = CANVAS_HEIGHT;
if (stateRef.current) {
stateRef.current.groundY = canvas.height - GROUND_PAD;
}
const cs = getComputedStyle(canvas);
fg = cs.color || fg;
}
});
if (container) observer.observe(container);
return () => {
cancelAnimationFrame(rafRef.current);
canvas.removeEventListener("click", onClick);
canvas.removeEventListener("keydown", onKey);
observer.disconnect();
};
}, []);
return { canvasRef };
}

View File

@@ -1,10 +1,14 @@
import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat";
import { toast } from "@/components/molecules/Toast/use-toast";
import { useBreakpoint } from "@/lib/hooks/useBreakpoint";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";
import { useEffect, useMemo, useState } from "react";
import { useEffect, useMemo, useRef, useState } from "react";
import { useChatSession } from "./useChatSession";
import { useLongRunningToolPolling } from "./hooks/useLongRunningToolPolling";
const STREAM_START_TIMEOUT_MS = 12_000;
export function useCopilotPage() {
const { isUserLoading, isLoggedIn } = useSupabase();
@@ -52,6 +56,24 @@ export function useCopilotPage() {
transport: transport ?? undefined,
});
// Abort the stream if the backend doesn't start sending data within 12s.
const stopRef = useRef(stop);
stopRef.current = stop;
useEffect(() => {
if (status !== "submitted") return;
const timer = setTimeout(() => {
stopRef.current();
toast({
title: "Stream timed out",
description: "The server took too long to respond. Please try again.",
variant: "destructive",
});
}, STREAM_START_TIMEOUT_MS);
return () => clearTimeout(timer);
}, [status]);
useEffect(() => {
if (!hydratedMessages || hydratedMessages.length === 0) return;
setMessages((prev) => {
@@ -60,6 +82,11 @@ export function useCopilotPage() {
});
}, [hydratedMessages, setMessages]);
// Poll session endpoint when a long-running tool (create_agent, edit_agent)
// is in progress. When the backend completes, the session data will contain
// the final tool output — this hook detects the change and updates messages.
useLongRunningToolPolling(sessionId, messages, setMessages);
// Clear messages when session is null
useEffect(() => {
if (!sessionId) setMessages([]);

View File

@@ -30,6 +30,7 @@ export function APIKeyCredentialsModal({
const {
form,
isLoading,
isSubmitting,
supportsApiKey,
providerName,
schemaDescription,
@@ -138,7 +139,12 @@ export function APIKeyCredentialsModal({
/>
)}
/>
<Button type="submit" className="min-w-68">
<Button
type="submit"
className="min-w-68"
loading={isSubmitting}
disabled={isSubmitting}
>
Add API Key
</Button>
</form>

View File

@@ -4,6 +4,7 @@ import {
CredentialsMetaInput,
} from "@/lib/autogpt-server-api/types";
import { zodResolver } from "@hookform/resolvers/zod";
import { useState } from "react";
import { useForm, type UseFormReturn } from "react-hook-form";
import { z } from "zod";
@@ -26,6 +27,7 @@ export function useAPIKeyCredentialsModal({
}: Args): {
form: UseFormReturn<APIKeyFormValues>;
isLoading: boolean;
isSubmitting: boolean;
supportsApiKey: boolean;
provider?: string;
providerName?: string;
@@ -33,6 +35,7 @@ export function useAPIKeyCredentialsModal({
onSubmit: (values: APIKeyFormValues) => Promise<void>;
} {
const credentials = useCredentials(schema, siblingInputs);
const [isSubmitting, setIsSubmitting] = useState(false);
const formSchema = z.object({
apiKey: z.string().min(1, "API Key is required"),
@@ -40,48 +43,42 @@ export function useAPIKeyCredentialsModal({
expiresAt: z.string().optional(),
});
function getDefaultExpirationDate(): string {
const tomorrow = new Date();
tomorrow.setDate(tomorrow.getDate() + 1);
tomorrow.setHours(0, 0, 0, 0);
const year = tomorrow.getFullYear();
const month = String(tomorrow.getMonth() + 1).padStart(2, "0");
const day = String(tomorrow.getDate()).padStart(2, "0");
const hours = String(tomorrow.getHours()).padStart(2, "0");
const minutes = String(tomorrow.getMinutes()).padStart(2, "0");
return `${year}-${month}-${day}T${hours}:${minutes}`;
}
const form = useForm<APIKeyFormValues>({
resolver: zodResolver(formSchema),
defaultValues: {
apiKey: "",
title: "",
expiresAt: getDefaultExpirationDate(),
expiresAt: "",
},
});
async function onSubmit(values: APIKeyFormValues) {
if (!credentials || credentials.isLoading) return;
const expiresAt = values.expiresAt
? new Date(values.expiresAt).getTime() / 1000
: undefined;
const newCredentials = await credentials.createAPIKeyCredentials({
api_key: values.apiKey,
title: values.title,
expires_at: expiresAt,
});
onCredentialsCreate({
provider: credentials.provider,
id: newCredentials.id,
type: "api_key",
title: newCredentials.title,
});
setIsSubmitting(true);
try {
const expiresAt = values.expiresAt
? new Date(values.expiresAt).getTime() / 1000
: undefined;
const newCredentials = await credentials.createAPIKeyCredentials({
api_key: values.apiKey,
title: values.title,
expires_at: expiresAt,
});
onCredentialsCreate({
provider: credentials.provider,
id: newCredentials.id,
type: "api_key",
title: newCredentials.title,
});
} finally {
setIsSubmitting(false);
}
}
return {
form,
isLoading: !credentials || credentials.isLoading,
isSubmitting,
supportsApiKey: !!credentials?.supportsApiKey,
provider: credentials?.provider,
providerName:

View File

@@ -563,7 +563,7 @@ The block supports conversation continuation through three mechanisms:
|--------|-------------|------|
| error | Error message if execution failed | str |
| response | The output/response from Claude Code execution | str |
| files | List of text files created/modified by Claude Code during this execution. Each file has 'path', 'relative_path', 'name', and 'content' fields. | List[FileOutput] |
| files | List of text files created/modified by Claude Code during this execution. Each file has 'path', 'relative_path', 'name', 'content', and 'workspace_ref' fields. workspace_ref contains a workspace:// URI if the file was stored to workspace. | List[SandboxFileOutput] |
| conversation_history | Full conversation history including this turn. Pass this to conversation_history input to continue on a fresh sandbox if the previous sandbox timed out. | str |
| session_id | Session ID for this conversation. Pass this back along with sandbox_id to continue the conversation. | str |
| sandbox_id | ID of the sandbox instance. Pass this back along with session_id to continue the conversation. This is None if dispose_sandbox was True (sandbox was disposed). | str |

View File

@@ -215,6 +215,7 @@ The sandbox includes pip and npm pre-installed. Set timeout to limit execution t
| response | Text output (if any) of the main execution result | str |
| stdout_logs | Standard output logs from execution | str |
| stderr_logs | Standard error logs from execution | str |
| files | Files created or modified during execution. Each file has path, name, content, and workspace_ref (if stored). | List[SandboxFileOutput] |
### Possible use case
<!-- MANUAL: use_case -->