Compare commits


4 Commits

Author SHA1 Message Date
Otto-AGPT
cdeefb8621 fix(copilot): Use correct OpenRouter reasoning API format
Addresses review comments from CodeRabbit and Sentry:

- Change reasoning format from {"enabled": True} (invalid) to
  {"max_tokens": config.thinking_budget_tokens} per OpenRouter docs
- Add missing thinking_budget_tokens config field (default: 10000)
- Extract duplicate code into _apply_thinking_config() helper function
- Update description from 'adaptive' to 'extended' thinking for clarity

References:
- OpenRouter reasoning docs: https://openrouter.ai/docs/reasoning-tokens
2026-02-11 13:54:57 +00:00
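Editor's note: a minimal sketch of the request-body change this commit describes. The "reasoning" field and "max_tokens" key are confirmed by the diff below; the literal budget value stands in for config.thinking_budget_tokens and is illustrative only, not the service code.

# Old format sent to OpenRouter, flagged as invalid in review:
#   extra_body["reasoning"] = {"enabled": True}

# Corrected format, per https://openrouter.ai/docs/reasoning-tokens:
extra_body: dict[str, dict] = {}
extra_body["reasoning"] = {"max_tokens": 10000}  # stand-in for config.thinking_budget_tokens
print(extra_body)  # {'reasoning': {'max_tokens': 10000}}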
Swifty
ba6d585170 update settings 2026-02-10 16:08:21 +01:00
Swifty
90eac56525 Merge branch 'dev' into fix/enable-extended-thinking 2026-02-10 15:26:40 +01:00
Otto
75f8772f8a feat(copilot): Enable extended thinking for Claude models
Adds configuration to enable Anthropic's extended thinking feature via
OpenRouter. This keeps the model's chain-of-thought reasoning internal
rather than outputting it to users.

Configuration:
- thinking_enabled: bool (default: True)
- thinking_budget_tokens: int (default: 10000)

The thinking config is only applied to Anthropic models (detected via
model name containing 'anthropic').

Fixes the issue where the CoPilot prompt expects thinking mode but it
wasn't enabled on the API side, causing internal reasoning to leak
into user-facing responses.
2026-02-10 13:58:57 +00:00
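Editor's note: a hedged sketch of the gating behavior this commit describes. The helper mirrors the _apply_thinking_config function seen later in this compare, but it takes plain parameters instead of the real ChatConfig object, and the model slugs below are examples, not values taken from the codebase.

def apply_thinking_config(
    extra_body: dict,
    model: str,
    thinking_enabled: bool = True,        # mirrors ChatConfig.thinking_enabled
    thinking_budget_tokens: int = 10000,  # mirrors ChatConfig.thinking_budget_tokens
) -> None:
    # Anthropic models are detected by model name; everything else is left untouched.
    if thinking_enabled and "anthropic" in model.lower():
        extra_body["reasoning"] = {"max_tokens": thinking_budget_tokens}

body: dict = {}
apply_thinking_config(body, "anthropic/claude-sonnet-4")  # example slug
assert body == {"reasoning": {"max_tokens": 10000}}

body = {}
apply_thinking_config(body, "openai/gpt-4o")  # example slug
assert body == {}  # non-Anthropic models get no reasoning config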
46 changed files with 186 additions and 566 deletions

View File

@@ -22,7 +22,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.workflow_run.head_branch }}
          fetch-depth: 0

View File

@@ -30,7 +30,7 @@ jobs:
       actions: read  # Required for CI access
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          fetch-depth: 1

View File

@@ -40,7 +40,7 @@ jobs:
       actions: read  # Required for CI access
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          fetch-depth: 1

View File

@@ -58,7 +58,7 @@ jobs:
       # your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL

View File

@@ -27,7 +27,7 @@ jobs:
       # If you do not check out your code, Copilot will do this for you.
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          submodules: true

View File

@@ -23,7 +23,7 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          fetch-depth: 1

View File

@@ -23,7 +23,7 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          fetch-depth: 0

View File

@@ -28,7 +28,7 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          fetch-depth: 1

View File

@@ -25,7 +25,7 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.git_ref || github.ref_name }}
@@ -52,7 +52,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Trigger deploy workflow
-        uses: peter-evans/repository-dispatch@v4
+        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.DEPLOY_TOKEN }}
          repository: Significant-Gravitas/AutoGPT_cloud_infrastructure

View File

@@ -17,7 +17,7 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          ref: ${{ github.ref_name || 'master' }}
@@ -45,7 +45,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Trigger deploy workflow
-        uses: peter-evans/repository-dispatch@v4
+        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.DEPLOY_TOKEN }}
          repository: Significant-Gravitas/AutoGPT_cloud_infrastructure

View File

@@ -68,7 +68,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          submodules: true

View File

@@ -82,7 +82,7 @@ jobs:
       - name: Dispatch Deploy Event
         if: steps.check_status.outputs.should_deploy == 'true'
-        uses: peter-evans/repository-dispatch@v4
+        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.DISPATCH_TOKEN }}
          repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
@@ -110,7 +110,7 @@ jobs:
       - name: Dispatch Undeploy Event (from comment)
         if: steps.check_status.outputs.should_undeploy == 'true'
-        uses: peter-evans/repository-dispatch@v4
+        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.DISPATCH_TOKEN }}
          repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
@@ -168,7 +168,7 @@ jobs:
           github.event_name == 'pull_request' &&
           github.event.action == 'closed' &&
           steps.check_pr_close.outputs.should_undeploy == 'true'
-        uses: peter-evans/repository-dispatch@v4
+        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.DISPATCH_TOKEN }}
          repository: Significant-Gravitas/AutoGPT_cloud_infrastructure

View File

@@ -31,7 +31,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4

      - name: Check for component changes
        uses: dorny/paths-filter@v3
@@ -71,7 +71,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v6
@@ -107,7 +107,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          fetch-depth: 0
@@ -148,7 +148,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          submodules: recursive
@@ -277,7 +277,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          submodules: recursive

View File

@@ -29,7 +29,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v6
@@ -63,7 +63,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6
+        uses: actions/checkout@v4
        with:
          submodules: recursive

View File

@@ -11,7 +11,7 @@ jobs:
     steps:
       # - name: Wait some time for all actions to start
       #   run: sleep 30
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@v4
      #   with:
      #     fetch-depth: 0
      - name: Set up Python

View File

@@ -96,7 +96,13 @@ class ChatConfig(BaseSettings):
     # Extended thinking configuration for Claude models
     thinking_enabled: bool = Field(
         default=True,
-        description="Enable adaptive thinking for Claude models via OpenRouter",
+        description="Enable extended thinking for Claude models via OpenRouter",
+    )
+    thinking_budget_tokens: int = Field(
+        default=10000,
+        ge=1000,
+        le=100000,
+        description="Maximum tokens for extended thinking (budget_tokens for Claude)",
     )

     @field_validator("api_key", mode="before")

View File

@@ -10,8 +10,6 @@ from typing import Any
 from pydantic import BaseModel, Field

-from backend.util.json import dumps as json_dumps
-

 class ResponseType(str, Enum):
     """Types of streaming responses following AI SDK protocol."""
@@ -195,18 +193,6 @@ class StreamError(StreamBaseResponse):
         default=None, description="Additional error details"
     )

-    def to_sse(self) -> str:
-        """Convert to SSE format, only emitting fields required by AI SDK protocol.
-
-        The AI SDK uses z.strictObject({type, errorText}) which rejects
-        any extra fields like `code` or `details`.
-        """
-        data = {
-            "type": self.type.value,
-            "errorText": self.errorText,
-        }
-        return f"data: {json_dumps(data)}\n\n"
-

 class StreamHeartbeat(StreamBaseResponse):
     """Heartbeat to keep SSE connection alive during long-running operations.

View File

@@ -80,6 +80,19 @@ settings = Settings()
 client = openai.AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)


+def _apply_thinking_config(extra_body: dict[str, Any], model: str) -> None:
+    """Apply extended thinking configuration for Anthropic models via OpenRouter.
+
+    OpenRouter's reasoning API expects either:
+    - {"max_tokens": N} for explicit token budget
+    - {"effort": "high"} for automatic budget
+    See: https://openrouter.ai/docs/reasoning-tokens
+    """
+    if config.thinking_enabled and "anthropic" in model.lower():
+        extra_body["reasoning"] = {"max_tokens": config.thinking_budget_tokens}
+
+
 langfuse = get_client()

 # Redis key prefix for tracking running long-running operations
@@ -1066,9 +1079,8 @@ async def _stream_chat_chunks(
             :128
         ]  # OpenRouter limit

-        # Enable adaptive thinking for Anthropic models via OpenRouter
-        if config.thinking_enabled and "anthropic" in model.lower():
-            extra_body["reasoning"] = {"enabled": True}
+        # Enable extended thinking for Anthropic models via OpenRouter
+        _apply_thinking_config(extra_body, model)

         api_call_start = time_module.perf_counter()
         stream = await client.chat.completions.create(
@@ -1833,9 +1845,8 @@ async def _generate_llm_continuation(
     if session_id:
         extra_body["session_id"] = session_id[:128]

-    # Enable adaptive thinking for Anthropic models via OpenRouter
-    if config.thinking_enabled and "anthropic" in config.model.lower():
-        extra_body["reasoning"] = {"enabled": True}
+    # Enable extended thinking for Anthropic models via OpenRouter
+    _apply_thinking_config(extra_body, config.model)

     retry_count = 0
     last_error: Exception | None = None
@@ -1967,9 +1978,8 @@ async def _generate_llm_continuation_with_streaming(
     if session_id:
         extra_body["session_id"] = session_id[:128]

-    # Enable adaptive thinking for Anthropic models via OpenRouter
-    if config.thinking_enabled and "anthropic" in config.model.lower():
-        extra_body["reasoning"] = {"enabled": True}
+    # Enable extended thinking for Anthropic models via OpenRouter
+    _apply_thinking_config(extra_body, config.model)

     # Make streaming LLM call (no tools - just text response)
     from typing import cast

View File

@@ -1,152 +0,0 @@
"""Dummy Agent Generator for testing.
Returns mock responses matching the format expected from the external service.
Enable via AGENTGENERATOR_USE_DUMMY=true in settings.
WARNING: This is for testing only. Do not use in production.
"""
import logging
import uuid
from typing import Any
logger = logging.getLogger(__name__)
# Dummy decomposition result (instructions type)
DUMMY_DECOMPOSITION_RESULT: dict[str, Any] = {
"type": "instructions",
"steps": [
{
"description": "Get input from user",
"action": "input",
"block_name": "AgentInputBlock",
},
{
"description": "Process the input",
"action": "process",
"block_name": "TextFormatterBlock",
},
{
"description": "Return output to user",
"action": "output",
"block_name": "AgentOutputBlock",
},
],
}
# Block IDs from backend/blocks/io.py
AGENT_INPUT_BLOCK_ID = "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b"
AGENT_OUTPUT_BLOCK_ID = "363ae599-353e-4804-937e-b2ee3cef3da4"
def _generate_dummy_agent_json() -> dict[str, Any]:
"""Generate a minimal valid agent JSON for testing."""
input_node_id = str(uuid.uuid4())
output_node_id = str(uuid.uuid4())
return {
"id": str(uuid.uuid4()),
"version": 1,
"is_active": True,
"name": "Dummy Test Agent",
"description": "A dummy agent generated for testing purposes",
"nodes": [
{
"id": input_node_id,
"block_id": AGENT_INPUT_BLOCK_ID,
"input_default": {
"name": "input",
"title": "Input",
"description": "Enter your input",
"placeholder_values": [],
},
"metadata": {"position": {"x": 0, "y": 0}},
},
{
"id": output_node_id,
"block_id": AGENT_OUTPUT_BLOCK_ID,
"input_default": {
"name": "output",
"title": "Output",
"description": "Agent output",
"format": "{output}",
},
"metadata": {"position": {"x": 400, "y": 0}},
},
],
"links": [
{
"id": str(uuid.uuid4()),
"source_id": input_node_id,
"sink_id": output_node_id,
"source_name": "result",
"sink_name": "value",
"is_static": False,
},
],
}
async def decompose_goal_dummy(
description: str,
context: str = "",
library_agents: list[dict[str, Any]] | None = None,
) -> dict[str, Any]:
"""Return dummy decomposition result."""
logger.info("Using dummy agent generator for decompose_goal")
return DUMMY_DECOMPOSITION_RESULT.copy()
async def generate_agent_dummy(
instructions: dict[str, Any],
library_agents: list[dict[str, Any]] | None = None,
operation_id: str | None = None,
task_id: str | None = None,
) -> dict[str, Any]:
"""Return dummy agent JSON."""
logger.info("Using dummy agent generator for generate_agent")
return _generate_dummy_agent_json()
async def generate_agent_patch_dummy(
update_request: str,
current_agent: dict[str, Any],
library_agents: list[dict[str, Any]] | None = None,
operation_id: str | None = None,
task_id: str | None = None,
) -> dict[str, Any]:
"""Return dummy patched agent (returns the current agent with updated description)."""
logger.info("Using dummy agent generator for generate_agent_patch")
patched = current_agent.copy()
patched["description"] = (
f"{current_agent.get('description', '')} (updated: {update_request})"
)
return patched
async def customize_template_dummy(
template_agent: dict[str, Any],
modification_request: str,
context: str = "",
) -> dict[str, Any]:
"""Return dummy customized template (returns template with updated description)."""
logger.info("Using dummy agent generator for customize_template")
customized = template_agent.copy()
customized["description"] = (
f"{template_agent.get('description', '')} (customized: {modification_request})"
)
return customized
async def get_blocks_dummy() -> list[dict[str, Any]]:
"""Return dummy blocks list."""
logger.info("Using dummy agent generator for get_blocks")
return [
{"id": AGENT_INPUT_BLOCK_ID, "name": "AgentInputBlock"},
{"id": AGENT_OUTPUT_BLOCK_ID, "name": "AgentOutputBlock"},
]
async def health_check_dummy() -> bool:
"""Always returns healthy for dummy service."""
return True

View File

@@ -12,19 +12,8 @@ import httpx
 from backend.util.settings import Settings

-from .dummy import (
-    customize_template_dummy,
-    decompose_goal_dummy,
-    generate_agent_dummy,
-    generate_agent_patch_dummy,
-    get_blocks_dummy,
-    health_check_dummy,
-)
-
 logger = logging.getLogger(__name__)

-_dummy_mode_warned = False
-

 def _create_error_response(
     error_message: str,
@@ -101,26 +90,10 @@
     return _settings


-def _is_dummy_mode() -> bool:
-    """Check if dummy mode is enabled for testing."""
-    global _dummy_mode_warned
-    settings = _get_settings()
-    is_dummy = bool(settings.config.agentgenerator_use_dummy)
-    if is_dummy and not _dummy_mode_warned:
-        logger.warning(
-            "Agent Generator running in DUMMY MODE - returning mock responses. "
-            "Do not use in production!"
-        )
-        _dummy_mode_warned = True
-    return is_dummy
-
-
 def is_external_service_configured() -> bool:
-    """Check if external Agent Generator service is configured (or dummy mode)."""
+    """Check if external Agent Generator service is configured."""
     settings = _get_settings()
-    return bool(settings.config.agentgenerator_host) or bool(
-        settings.config.agentgenerator_use_dummy
-    )
+    return bool(settings.config.agentgenerator_host)


 def _get_base_url() -> str:
@@ -164,9 +137,6 @@ async def decompose_goal_external(
     - {"type": "error", "error": "...", "error_type": "..."} on error
       Or None on unexpected error
     """
-    if _is_dummy_mode():
-        return await decompose_goal_dummy(description, context, library_agents)
-
     client = _get_client()

     if context:
@@ -256,11 +226,6 @@ async def generate_agent_external(
     Returns:
         Agent JSON dict, {"status": "accepted"} for async, or error dict {"type": "error", ...} on error
     """
-    if _is_dummy_mode():
-        return await generate_agent_dummy(
-            instructions, library_agents, operation_id, task_id
-        )
-
     client = _get_client()

     # Build request payload
@@ -332,11 +297,6 @@ async def generate_agent_patch_external(
     Returns:
         Updated agent JSON, clarifying questions dict, {"status": "accepted"} for async, or error dict on error
     """
-    if _is_dummy_mode():
-        return await generate_agent_patch_dummy(
-            update_request, current_agent, library_agents, operation_id, task_id
-        )
-
     client = _get_client()

     # Build request payload
@@ -423,11 +383,6 @@ async def customize_template_external(
     Returns:
         Customized agent JSON, clarifying questions dict, or error dict on error
     """
-    if _is_dummy_mode():
-        return await customize_template_dummy(
-            template_agent, modification_request, context
-        )
-
     client = _get_client()

     request = modification_request
@@ -490,9 +445,6 @@ async def get_blocks_external() -> list[dict[str, Any]] | None:
     Returns:
         List of block info dicts or None on error
     """
-    if _is_dummy_mode():
-        return await get_blocks_dummy()
-
     client = _get_client()

     try:
@@ -526,9 +478,6 @@ async def health_check() -> bool:
     if not is_external_service_configured():
         return False

-    if _is_dummy_mode():
-        return await health_check_dummy()
-
     client = _get_client()

     try:

View File

@@ -21,71 +21,43 @@ logger = logging.getLogger(__name__)
 class HumanInTheLoopBlock(Block):
     """
-    Pauses execution and waits for human approval or rejection of the data.
+    This block pauses execution and waits for human approval or modification of the data.

-    When executed, this block creates a pending review entry and sets the node execution
-    status to REVIEW. The execution remains paused until a human user either approves
-    or rejects the data.
+    When executed, it creates a pending review entry and sets the node execution status
+    to REVIEW. The execution will remain paused until a human user either:
+    - Approves the data (with or without modifications)
+    - Rejects the data

-    **How it works:**
-    - The input data is presented to a human reviewer
-    - The reviewer can approve or reject (and optionally modify the data if editable)
-    - On approval: the data flows out through the `approved_data` output pin
-    - On rejection: the data flows out through the `rejected_data` output pin
-
-    **Important:** The output pins yield the actual data itself, NOT status strings.
-    The approval/rejection decision determines WHICH output pin fires, not the value.
-    You do NOT need to compare the output to "APPROVED" or "REJECTED" - simply connect
-    downstream blocks to the appropriate output pin for each case.
-
-    **Example usage:**
-    - Connect `approved_data` → next step in your workflow (data was approved)
-    - Connect `rejected_data` → error handling or notification (data was rejected)
+    This is useful for workflows that require human validation or intervention before
+    proceeding to the next steps.
     """

     class Input(BlockSchemaInput):
-        data: Any = SchemaField(
-            description="The data to be reviewed by a human user. "
-            "This exact data will be passed through to either approved_data or "
-            "rejected_data output based on the reviewer's decision."
-        )
+        data: Any = SchemaField(description="The data to be reviewed by a human user")
         name: str = SchemaField(
-            description="A descriptive name for what this data represents. "
-            "This helps the reviewer understand what they are reviewing.",
+            description="A descriptive name for what this data represents",
         )
         editable: bool = SchemaField(
-            description="Whether the human reviewer can edit the data before "
-            "approving or rejecting it",
+            description="Whether the human reviewer can edit the data",
             default=True,
             advanced=True,
         )

     class Output(BlockSchemaOutput):
         approved_data: Any = SchemaField(
-            description="Outputs the input data when the reviewer APPROVES it. "
-            "The value is the actual data itself (not a status string like 'APPROVED'). "
-            "If the reviewer edited the data, this contains the modified version. "
-            "Connect downstream blocks here for the 'approved' workflow path."
+            description="The data when approved (may be modified by reviewer)"
         )
         rejected_data: Any = SchemaField(
-            description="Outputs the input data when the reviewer REJECTS it. "
-            "The value is the actual data itself (not a status string like 'REJECTED'). "
-            "If the reviewer edited the data, this contains the modified version. "
-            "Connect downstream blocks here for the 'rejected' workflow path."
+            description="The data when rejected (may be modified by reviewer)"
        )
        review_message: str = SchemaField(
-            description="Optional message provided by the reviewer explaining their "
-            "decision. Only outputs when the reviewer provides a message; "
-            "this pin does not fire if no message was given.",
-            default="",
+            description="Any message provided by the reviewer", default=""
        )

    def __init__(self):
        super().__init__(
            id="8b2a7b3c-6e9d-4a5f-8c1b-2e3f4a5b6c7d",
-            description="Pause execution for human review. Data flows through "
-            "approved_data or rejected_data output based on the reviewer's decision. "
-            "Outputs contain the actual data, not status strings.",
+            description="Pause execution and wait for human approval or modification of data",
            categories={BlockCategory.BASIC},
            input_schema=HumanInTheLoopBlock.Input,
            output_schema=HumanInTheLoopBlock.Output,

View File

@@ -743,11 +743,6 @@ class GraphModel(Graph, GraphMeta):
             # For invalid blocks, we still raise immediately as this is a structural issue
             raise ValueError(f"Invalid block {node.block_id} for node #{node.id}")

-        if block.disabled:
-            raise ValueError(
-                f"Block {node.block_id} is disabled and cannot be used in graphs"
-            )
-
         node_input_mask = (
             nodes_input_masks.get(node.id, {}) if nodes_input_masks else {}
         )

View File

@@ -213,9 +213,6 @@ async def execute_node(
         block_name=node_block.name,
     )

-    if node_block.disabled:
-        raise ValueError(f"Block {node_block.id} is disabled and cannot be executed")
-
     # Sanity check: validate the execution input.
     input_data, error = validate_exec(node, data.inputs, resolve_input=False)
     if input_data is None:

View File

@@ -364,44 +364,6 @@ def _remove_orphan_tool_responses(
     return result


-def validate_and_remove_orphan_tool_responses(
-    messages: list[dict],
-    log_warning: bool = True,
-) -> list[dict]:
-    """
-    Validate tool_call/tool_response pairs and remove orphaned responses.
-
-    Scans messages in order, tracking all tool_call IDs. Any tool response
-    referencing an ID not seen in a preceding message is considered orphaned
-    and removed. This prevents API errors like Anthropic's "unexpected tool_use_id".
-
-    Args:
-        messages: List of messages to validate (OpenAI or Anthropic format)
-        log_warning: Whether to log a warning when orphans are found
-
-    Returns:
-        A new list with orphaned tool responses removed
-    """
-    available_ids: set[str] = set()
-    orphan_ids: set[str] = set()
-
-    for msg in messages:
-        available_ids |= _extract_tool_call_ids_from_message(msg)
-        for resp_id in _extract_tool_response_ids_from_message(msg):
-            if resp_id not in available_ids:
-                orphan_ids.add(resp_id)
-
-    if not orphan_ids:
-        return messages
-
-    if log_warning:
-        logger.warning(
-            f"Removing {len(orphan_ids)} orphan tool response(s): {orphan_ids}"
-        )
-
-    return _remove_orphan_tool_responses(messages, orphan_ids)
-
-
 def _ensure_tool_pairs_intact(
     recent_messages: list[dict],
     all_messages: list[dict],
@@ -761,13 +723,6 @@ async def compress_context(
     # Filter out any None values that may have been introduced
     final_msgs: list[dict] = [m for m in msgs if m is not None]

-    # ---- STEP 6: Final tool-pair validation ---------------------------------
-    # After all compression steps, verify that every tool response has a
-    # matching tool_call in a preceding assistant message. Remove orphans
-    # to prevent API errors (e.g., Anthropic's "unexpected tool_use_id").
-    final_msgs = validate_and_remove_orphan_tool_responses(final_msgs)
-
     final_count = sum(_msg_tokens(m, enc) for m in final_msgs)
     error = None
     if final_count + reserve > target_tokens:

View File

@@ -368,10 +368,6 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
         default=600,
         description="The timeout in seconds for Agent Generator service requests (includes retries for rate limits)",
     )
-    agentgenerator_use_dummy: bool = Field(
-        default=False,
-        description="Use dummy agent generator responses for testing (bypasses external service)",
-    )
     enable_example_blocks: bool = Field(
         default=False,

View File

@@ -46,14 +46,14 @@ pycares = ">=4.9.0,<5"

 [[package]]
 name = "aiofiles"
-version = "25.1.0"
+version = "24.1.0"
 description = "File support for asyncio."
 optional = false
-python-versions = ">=3.9"
+python-versions = ">=3.8"
 groups = ["main"]
 files = [
-    {file = "aiofiles-25.1.0-py3-none-any.whl", hash = "sha256:abe311e527c862958650f9438e859c1fa7568a141b22abcd015e120e86a85695"},
-    {file = "aiofiles-25.1.0.tar.gz", hash = "sha256:a8d728f0a29de45dc521f18f07297428d56992a742f0cd2701ba86e44d23d5b2"},
+    {file = "aiofiles-24.1.0-py3-none-any.whl", hash = "sha256:b4ec55f4195e3eb5d7abd1bf7e061763e864dd4954231fb8539a0ef8bb8260e5"},
+    {file = "aiofiles-24.1.0.tar.gz", hash = "sha256:22a075c9e5a3810f0c2e48f3008c94d68c65d763b9b03857924c99e57355166c"},
 ]

 [[package]]
@@ -8440,4 +8440,4 @@ cffi = ["cffi (>=1.17,<2.0) ; platform_python_implementation != \"PyPy\" and pyt
 [metadata]
 lock-version = "2.1"
 python-versions = ">=3.10,<3.14"
-content-hash = "c06e96ad49388ba7a46786e9ea55ea2c1a57408e15613237b4bee40a592a12af"
+content-hash = "fc135114e01de39c8adf70f6132045e7d44a19473c1279aee0978de65aad1655"

View File

@@ -76,7 +76,7 @@ yt-dlp = "2025.12.08"
 zerobouncesdk = "^1.1.2"
 # NOTE: please insert new dependencies in their alphabetical location
 pytest-snapshot = "^0.9.0"
-aiofiles = "^25.1.0"
+aiofiles = "^24.1.0"
 tiktoken = "^0.12.0"
 aioclamd = "^1.0.0"
 setuptools = "^80.9.0"

View File

@@ -1,11 +1,11 @@
 "use client";

+import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
 import { SidebarProvider } from "@/components/ui/sidebar";
 import { ChatContainer } from "./components/ChatContainer/ChatContainer";
 import { ChatSidebar } from "./components/ChatSidebar/ChatSidebar";
 import { MobileDrawer } from "./components/MobileDrawer/MobileDrawer";
 import { MobileHeader } from "./components/MobileHeader/MobileHeader";
-import { ScaleLoader } from "./components/ScaleLoader/ScaleLoader";
 import { useCopilotPage } from "./useCopilotPage";

 export function CopilotPage() {
@@ -34,11 +34,7 @@ export function CopilotPage() {
   } = useCopilotPage();

   if (isUserLoading || !isLoggedIn) {
-    return (
-      <div className="fixed inset-0 z-50 flex items-center justify-center bg-[#f8f8f9]">
-        <ScaleLoader className="text-neutral-400" />
-      </div>
-    );
+    return <LoadingSpinner size="large" cover />;
   }

   return (

View File

@@ -10,9 +10,8 @@ import {
   MessageResponse,
 } from "@/components/ai-elements/message";
 import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
-import { toast } from "@/components/molecules/Toast/use-toast";
 import { ToolUIPart, UIDataTypes, UIMessage, UITools } from "ai";
-import { useEffect, useRef, useState } from "react";
+import { useEffect, useState } from "react";
 import { CreateAgentTool } from "../../tools/CreateAgent/CreateAgent";
 import { EditAgentTool } from "../../tools/EditAgent/EditAgent";
 import { FindAgentsTool } from "../../tools/FindAgents/FindAgents";
@@ -122,7 +121,6 @@
   isLoading,
 }: ChatMessagesContainerProps) => {
   const [thinkingPhrase, setThinkingPhrase] = useState(getRandomPhrase);
-  const lastToastTimeRef = useRef(0);

   useEffect(() => {
     if (status === "submitted") {
@@ -130,20 +128,6 @@
     }
   }, [status]);

-  // Show a toast when a new error occurs, debounced to avoid spam
-  useEffect(() => {
-    if (!error) return;
-    const now = Date.now();
-    if (now - lastToastTimeRef.current < 3_000) return;
-    lastToastTimeRef.current = now;
-    toast({
-      variant: "destructive",
-      title: "Something went wrong",
-      description:
-        "The assistant encountered an error. Please try sending your message again.",
-    });
-  }, [error]);
-
   const lastMessage = messages[messages.length - 1];
   const lastAssistantHasVisibleContent =
     lastMessage?.role === "assistant" &&
@@ -159,10 +143,10 @@
   return (
     <Conversation className="min-h-0 flex-1">
-      <ConversationContent className="flex min-h-screen flex-1 flex-col gap-6 px-3 py-6">
+      <ConversationContent className="gap-6 px-3 py-6">
         {isLoading && messages.length === 0 && (
-          <div className="flex min-h-full flex-1 items-center justify-center">
-            <LoadingSpinner className="text-neutral-600" />
+          <div className="flex flex-1 items-center justify-center">
+            <LoadingSpinner size="large" className="text-neutral-400" />
           </div>
         )}
         {messages.map((message, messageIndex) => {
@@ -279,12 +263,8 @@
           </Message>
         )}
         {error && (
-          <div className="rounded-lg bg-red-50 p-4 text-sm text-red-700">
-            <p className="font-medium">Something went wrong</p>
-            <p className="mt-1 text-red-600">
-              The assistant encountered an error. Please try sending your
-              message again.
-            </p>
+          <div className="rounded-lg bg-red-50 p-3 text-red-600">
+            Error: {error.message}
           </div>
         )}
       </ConversationContent>

View File

@@ -121,8 +121,8 @@ export function ChatSidebar() {
           className="mt-4 flex flex-col gap-1"
         >
           {isLoadingSessions ? (
-            <div className="flex min-h-[30rem] items-center justify-center py-4">
-              <LoadingSpinner size="small" className="text-neutral-600" />
+            <div className="flex items-center justify-center py-4">
+              <LoadingSpinner size="small" className="text-neutral-400" />
             </div>
           ) : sessions.length === 0 ? (
             <p className="py-4 text-center text-sm text-neutral-500">

View File

@@ -1,35 +0,0 @@
.loader {
width: 48px;
height: 48px;
display: inline-block;
position: relative;
}
.loader::after,
.loader::before {
content: "";
box-sizing: border-box;
width: 100%;
height: 100%;
border-radius: 50%;
background: currentColor;
position: absolute;
left: 0;
top: 0;
animation: animloader 2s linear infinite;
}
.loader::after {
animation-delay: 1s;
}
@keyframes animloader {
0% {
transform: scale(0);
opacity: 1;
}
100% {
transform: scale(1);
opacity: 0;
}
}

View File

@@ -1,16 +0,0 @@
import { cn } from "@/lib/utils";
import styles from "./ScaleLoader.module.css";
interface Props {
size?: number;
className?: string;
}
export function ScaleLoader({ size = 48, className }: Props) {
return (
<div
className={cn(styles.loader, className)}
style={{ width: size, height: size }}
/>
);
}

View File

@@ -30,7 +30,7 @@ export function ContentCard({
   return (
     <div
       className={cn(
-        "min-w-0 rounded-lg bg-gradient-to-r from-purple-500/30 to-blue-500/30 p-[1px]",
+        "rounded-lg bg-gradient-to-r from-purple-500/30 to-blue-500/30 p-[1px]",
         className,
       )}
    >

View File

@@ -4,6 +4,7 @@ import { WarningDiamondIcon } from "@phosphor-icons/react";
 import type { ToolUIPart } from "ai";
 import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
 import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
+import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader";
 import { ProgressBar } from "../../components/ProgressBar/ProgressBar";
 import {
   ContentCardDescription,
@@ -48,7 +49,12 @@
 interface Props {
   part: CreateAgentToolPart;
 }

-function getAccordionMeta(output: CreateAgentToolOutput) {
+function getAccordionMeta(output: CreateAgentToolOutput): {
+  icon: React.ReactNode;
+  title: React.ReactNode;
+  titleClassName?: string;
+  description?: string;
+} {
   const icon = <AccordionIcon />;
   if (isAgentSavedOutput(output)) {
@@ -67,7 +73,6 @@
       icon,
       title: "Needs clarification",
       description: `${questions.length} question${questions.length === 1 ? "" : "s"}`,
-      expanded: true,
     };
   }
   if (
@@ -76,7 +81,7 @@
     isOperationInProgressOutput(output)
   ) {
     return {
-      icon,
+      icon: <OrbitLoader size={32} />,
       title: "Creating agent, this may take a few minutes. Sit back and relax.",
     };
   }
@@ -92,23 +97,18 @@
 export function CreateAgentTool({ part }: Props) {
   const text = getAnimationText(part);
   const { onSend } = useCopilotChatActions();
-
   const isStreaming =
     part.state === "input-streaming" || part.state === "input-available";
-
   const output = getCreateAgentToolOutput(part);
-
   const isError =
     part.state === "output-error" || (!!output && isErrorOutput(output));
-
   const isOperating =
     !!output &&
     (isOperationStartedOutput(output) ||
       isOperationPendingOutput(output) ||
       isOperationInProgressOutput(output));
-
   const progress = useAsymptoticProgress(isOperating);

   const hasExpandableContent =
     part.state === "output-available" &&
     !!output &&
@@ -149,7 +149,10 @@
       </div>

       {hasExpandableContent && output && (
-        <ToolAccordion {...getAccordionMeta(output)}>
+        <ToolAccordion
+          {...getAccordionMeta(output)}
+          defaultExpanded={isOperating || isClarificationNeededOutput(output)}
+        >
           {isOperating && (
             <ContentGrid>
               <ProgressBar value={progress} className="max-w-[280px]" />

View File

@@ -146,7 +146,10 @@ export function EditAgentTool({ part }: Props) {
       </div>

       {hasExpandableContent && output && (
-        <ToolAccordion {...getAccordionMeta(output)}>
+        <ToolAccordion
+          {...getAccordionMeta(output)}
+          defaultExpanded={isOperating || isClarificationNeededOutput(output)}
+        >
           {isOperating && (
             <ContentGrid>
               <ProgressBar value={progress} className="max-w-[280px]" />

View File

@@ -61,7 +61,14 @@ export function RunAgentTool({ part }: Props) {
       </div>

       {hasExpandableContent && output && (
-        <ToolAccordion {...getAccordionMeta(output)}>
+        <ToolAccordion
+          {...getAccordionMeta(output)}
+          defaultExpanded={
+            isRunAgentExecutionStartedOutput(output) ||
+            isRunAgentSetupRequirementsOutput(output) ||
+            isRunAgentAgentDetailsOutput(output)
+          }
+        >
           {isRunAgentExecutionStartedOutput(output) && (
             <ExecutionStartedCard output={output} />
           )}

View File

@@ -10,7 +10,7 @@
   WarningDiamondIcon,
 } from "@phosphor-icons/react";
 import type { ToolUIPart } from "ai";
-import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader";
+import { SpinnerLoader } from "../../components/SpinnerLoader/SpinnerLoader";

 export interface RunAgentInput {
   username_agent_slug?: string;
@@ -171,7 +171,7 @@
     );
   }
   if (isStreaming) {
-    return <OrbitLoader size={24} />;
+    return <SpinnerLoader size={40} className="text-neutral-700" />;
   }
   return <PlayIcon size={14} weight="regular" className="text-neutral-400" />;
 }
@@ -203,7 +203,7 @@ export function getAccordionMeta(output: RunAgentToolOutput): {
       ? output.status.trim()
       : "started";
     return {
-      icon,
+      icon: <SpinnerLoader size={28} className="text-neutral-700" />,
       title: output.graph_name,
       description: `Status: ${statusText}`,
     };

View File

@@ -55,7 +55,13 @@ export function RunBlockTool({ part }: Props) {
       </div>

       {hasExpandableContent && output && (
-        <ToolAccordion {...getAccordionMeta(output)}>
+        <ToolAccordion
+          {...getAccordionMeta(output)}
+          defaultExpanded={
+            isRunBlockBlockOutput(output) ||
+            isRunBlockSetupRequirementsOutput(output)
+          }
+        >
           {isRunBlockBlockOutput(output) && <BlockOutputCard output={output} />}
           {isRunBlockSetupRequirementsOutput(output) && (

View File

@@ -8,7 +8,7 @@
   WarningDiamondIcon,
 } from "@phosphor-icons/react";
 import type { ToolUIPart } from "ai";
-import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader";
+import { SpinnerLoader } from "../../components/SpinnerLoader/SpinnerLoader";

 export interface RunBlockInput {
   block_id?: string;
@@ -120,7 +120,7 @@
     );
   }
   if (isStreaming) {
-    return <OrbitLoader size={24} />;
+    return <SpinnerLoader size={40} className="text-neutral-700" />;
   }
   return <PlayIcon size={14} weight="regular" className="text-neutral-400" />;
 }
@@ -149,7 +149,7 @@
   if (isRunBlockBlockOutput(output)) {
     const keys = Object.keys(output.outputs ?? {});
     return {
-      icon,
+      icon: <SpinnerLoader size={32} className="text-neutral-700" />,
       title: output.block_name,
       description:
         keys.length > 0

View File

@@ -3,6 +3,7 @@ import { useBreakpoint } from "@/lib/hooks/useBreakpoint";
 import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
 import { useChat } from "@ai-sdk/react";
 import { DefaultChatTransport } from "ai";
+import { useRouter } from "next/navigation";
 import { useEffect, useMemo, useState } from "react";

 import { useChatSession } from "./useChatSession";
@@ -10,6 +11,7 @@ export function useCopilotPage() {
   const { isUserLoading, isLoggedIn } = useSupabase();
   const [isDrawerOpen, setIsDrawerOpen] = useState(false);
   const [pendingMessage, setPendingMessage] = useState<string | null>(null);
+  const router = useRouter();

   const {
     sessionId,
@@ -52,6 +54,10 @@
     transport: transport ?? undefined,
   });

+  useEffect(() => {
+    if (!isUserLoading && !isLoggedIn) router.replace("/login");
+  }, [isUserLoading, isLoggedIn]);
+
   useEffect(() => {
     if (!hydratedMessages || hydratedMessages.length === 0) return;
     setMessages((prev) => {

View File

@@ -1,8 +1,11 @@
 import { environment } from "@/services/environment";
 import { getServerAuthToken } from "@/lib/autogpt-server-api/helpers";
 import { NextRequest } from "next/server";
-import { normalizeSSEStream, SSE_HEADERS } from "../../../sse-helpers";

+/**
+ * SSE Proxy for chat streaming.
+ * Supports POST with context (page content + URL) in the request body.
+ */
 export async function POST(
   request: NextRequest,
   { params }: { params: Promise<{ sessionId: string }> },
@@ -20,14 +23,17 @@
     );
   }

+  // Get auth token from server-side session
   const token = await getServerAuthToken();

+  // Build backend URL
   const backendUrl = environment.getAGPTServerBaseUrl();
   const streamUrl = new URL(
     `/api/chat/sessions/${sessionId}/stream`,
     backendUrl,
   );

+  // Forward request to backend with auth header
   const headers: Record<string, string> = {
     "Content-Type": "application/json",
     Accept: "text/event-stream",
@@ -57,15 +63,14 @@
     });
   }

-  if (!response.body) {
-    return new Response(
-      JSON.stringify({ error: "Empty response from chat service" }),
-      { status: 502, headers: { "Content-Type": "application/json" } },
-    );
-  }
-
-  return new Response(normalizeSSEStream(response.body), {
-    headers: SSE_HEADERS,
+  // Return the SSE stream directly
+  return new Response(response.body, {
+    headers: {
+      "Content-Type": "text/event-stream",
+      "Cache-Control": "no-cache, no-transform",
+      Connection: "keep-alive",
+      "X-Accel-Buffering": "no",
+    },
   });
 } catch (error) {
   console.error("SSE proxy error:", error);
@@ -82,6 +87,13 @@
   }
 }

+/**
+ * Resume an active stream for a session.
+ *
+ * Called by the AI SDK's `useChat(resume: true)` on page load.
+ * Proxies to the backend which checks for an active stream and either
+ * replays it (200 + SSE) or returns 204 No Content.
+ */
 export async function GET(
   _request: NextRequest,
   { params }: { params: Promise<{ sessionId: string }> },
@@ -112,6 +124,7 @@
       headers,
     });

+    // 204 = no active stream to resume
    if (response.status === 204) {
      return new Response(null, { status: 204 });
    }
@@ -124,13 +137,12 @@
       });
     }

-    if (!response.body) {
-      return new Response(null, { status: 204 });
-    }
-
-    return new Response(normalizeSSEStream(response.body), {
+    return new Response(response.body, {
       headers: {
-        ...SSE_HEADERS,
+        "Content-Type": "text/event-stream",
+        "Cache-Control": "no-cache, no-transform",
+        Connection: "keep-alive",
+        "X-Accel-Buffering": "no",
         "x-vercel-ai-ui-message-stream": "v1",
       },
     });

View File

@@ -1,72 +0,0 @@
export const SSE_HEADERS = {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache, no-transform",
Connection: "keep-alive",
"X-Accel-Buffering": "no",
} as const;
export function normalizeSSEStream(
input: ReadableStream<Uint8Array>,
): ReadableStream<Uint8Array> {
const decoder = new TextDecoder();
const encoder = new TextEncoder();
let buffer = "";
return input.pipeThrough(
new TransformStream<Uint8Array, Uint8Array>({
transform(chunk, controller) {
buffer += decoder.decode(chunk, { stream: true });
const parts = buffer.split("\n\n");
buffer = parts.pop() ?? "";
for (const part of parts) {
const normalized = normalizeSSEEvent(part);
controller.enqueue(encoder.encode(normalized + "\n\n"));
}
},
flush(controller) {
if (buffer.trim()) {
const normalized = normalizeSSEEvent(buffer);
controller.enqueue(encoder.encode(normalized + "\n\n"));
}
},
}),
);
}
function normalizeSSEEvent(event: string): string {
const lines = event.split("\n");
const dataLines: string[] = [];
const otherLines: string[] = [];
for (const line of lines) {
if (line.startsWith("data: ")) {
dataLines.push(line.slice(6));
} else {
otherLines.push(line);
}
}
if (dataLines.length === 0) return event;
const dataStr = dataLines.join("\n");
try {
const parsed = JSON.parse(dataStr) as Record<string, unknown>;
if (parsed.type === "error") {
const normalized = {
type: "error",
errorText:
typeof parsed.errorText === "string"
? parsed.errorText
: "An unexpected error occurred",
};
const newData = `data: ${JSON.stringify(normalized)}`;
return [...otherLines.filter((l) => l.length > 0), newData].join("\n");
}
} catch {
// Not valid JSON — pass through as-is
}
return event;
}

View File

@@ -1,8 +1,20 @@
 import { environment } from "@/services/environment";
 import { getServerAuthToken } from "@/lib/autogpt-server-api/helpers";
 import { NextRequest } from "next/server";
-import { normalizeSSEStream, SSE_HEADERS } from "../../../sse-helpers";

+/**
+ * SSE Proxy for task stream reconnection.
+ *
+ * This endpoint allows clients to reconnect to an ongoing or recently completed
+ * background task's stream. It replays missed messages from Redis Streams and
+ * subscribes to live updates if the task is still running.
+ *
+ * Client contract:
+ * 1. When receiving an operation_started event, store the task_id
+ * 2. To reconnect: GET /api/chat/tasks/{taskId}/stream?last_message_id={idx}
+ * 3. Messages are replayed from the last_message_id position
+ * 4. Stream ends when "finish" event is received
+ */
 export async function GET(
   request: NextRequest,
   { params }: { params: Promise<{ taskId: string }> },
@@ -12,12 +24,15 @@
   const lastMessageId = searchParams.get("last_message_id") || "0-0";

   try {
+    // Get auth token from server-side session
     const token = await getServerAuthToken();

+    // Build backend URL
     const backendUrl = environment.getAGPTServerBaseUrl();
     const streamUrl = new URL(`/api/chat/tasks/${taskId}/stream`, backendUrl);
     streamUrl.searchParams.set("last_message_id", lastMessageId);

+    // Forward request to backend with auth header
     const headers: Record<string, string> = {
       Accept: "text/event-stream",
       "Cache-Control": "no-cache",
@@ -41,12 +56,14 @@
       });
     }

-    if (!response.body) {
-      return new Response(null, { status: 204 });
-    }
-
-    return new Response(normalizeSSEStream(response.body), {
-      headers: SSE_HEADERS,
+    // Return the SSE stream directly
+    return new Response(response.body, {
+      headers: {
+        "Content-Type": "text/event-stream",
+        "Cache-Control": "no-cache, no-transform",
+        Connection: "keep-alive",
+        "X-Accel-Buffering": "no",
+      },
     });
   } catch (error) {
     console.error("Task stream proxy error:", error);

View File

@@ -6,7 +6,6 @@ import { SupabaseClient } from "@supabase/supabase-js";
 export const PROTECTED_PAGES = [
   "/auth/authorize",
   "/auth/integrations",
-  "/copilot",
   "/monitor",
   "/build",
   "/onboarding",

View File

@@ -61,7 +61,7 @@ Below is a comprehensive list of all available blocks, categorized by their prim
 | [Get List Item](block-integrations/basic.md#get-list-item) | Returns the element at the given index |
 | [Get Store Agent Details](block-integrations/system/store_operations.md#get-store-agent-details) | Get detailed information about an agent from the store |
 | [Get Weather Information](block-integrations/basic.md#get-weather-information) | Retrieves weather information for a specified location using OpenWeatherMap API |
-| [Human In The Loop](block-integrations/basic.md#human-in-the-loop) | Pause execution for human review |
+| [Human In The Loop](block-integrations/basic.md#human-in-the-loop) | Pause execution and wait for human approval or modification of data |
 | [List Is Empty](block-integrations/basic.md#list-is-empty) | Checks if a list is empty |
 | [List Library Agents](block-integrations/system/library_operations.md#list-library-agents) | List all agents in your personal library |
 | [Note](block-integrations/basic.md#note) | A visual annotation block that displays a sticky note in the workflow editor for documentation and organization purposes |

View File

@@ -975,7 +975,7 @@ A travel planning application could use this block to provide users with curren
 ## Human In The Loop

 ### What it is
-Pause execution for human review. Data flows through approved_data or rejected_data output based on the reviewer's decision. Outputs contain the actual data, not status strings.
+Pause execution and wait for human approval or modification of data

 ### How it works
 <!-- MANUAL: how_it_works -->
@@ -988,18 +988,18 @@ This enables human oversight at critical points in automated workflows, ensurin
 | Input | Description | Type | Required |
 |-------|-------------|------|----------|
-| data | The data to be reviewed by a human user. This exact data will be passed through to either approved_data or rejected_data output based on the reviewer's decision. | Data | Yes |
-| name | A descriptive name for what this data represents. This helps the reviewer understand what they are reviewing. | str | Yes |
-| editable | Whether the human reviewer can edit the data before approving or rejecting it | bool | No |
+| data | The data to be reviewed by a human user | Data | Yes |
+| name | A descriptive name for what this data represents | str | Yes |
+| editable | Whether the human reviewer can edit the data | bool | No |

 ### Outputs

 | Output | Description | Type |
 |--------|-------------|------|
 | error | Error message if the operation failed | str |
-| approved_data | Outputs the input data when the reviewer APPROVES it. The value is the actual data itself (not a status string like 'APPROVED'). If the reviewer edited the data, this contains the modified version. Connect downstream blocks here for the 'approved' workflow path. | Approved Data |
-| rejected_data | Outputs the input data when the reviewer REJECTS it. The value is the actual data itself (not a status string like 'REJECTED'). If the reviewer edited the data, this contains the modified version. Connect downstream blocks here for the 'rejected' workflow path. | Rejected Data |
-| review_message | Optional message provided by the reviewer explaining their decision. Only outputs when the reviewer provides a message; this pin does not fire if no message was given. | str |
+| approved_data | The data when approved (may be modified by reviewer) | Approved Data |
+| rejected_data | The data when rejected (may be modified by reviewer) | Rejected Data |
+| review_message | Any message provided by the reviewer | str |

 ### Possible use case
 <!-- MANUAL: use_case -->