Compare commits

..

15 Commits

Author SHA1 Message Date
Zamil Majdy
5efb80d47b fix(backend/chat): Address PR review comments for Claude SDK integration
- Add StreamFinish after ErrorMessage in response adapter
- Switch str.replace to str.removeprefix in security hooks
- Apply max_context_messages limit as safety guard in history formatting
- Add empty prompt guard before sending to SDK
- Sanitize error messages to avoid exposing internal details
- Fix fire-and-forget asyncio.create_task by storing task reference (see the pattern sketch below)
- Fix tool_calls population on assistant messages
- Rewrite Anthropic fallback to persist messages and merge consecutive roles
- Only use ANTHROPIC_API_KEY for fallback (not OpenRouter keys)
- Fix IndexError when tool result content list is empty
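
The fire-and-forget fix follows the standard asyncio pattern of keeping a strong reference to spawned tasks; a minimal sketch (the surrounding service code is elided):

```python
import asyncio
from typing import Any

# Strong references keep spawned tasks alive until they finish; without
# them, a fire-and-forget task can be garbage-collected mid-run.
_background_tasks: set[asyncio.Task[Any]] = set()

def spawn(coro: Any) -> asyncio.Task[Any]:
    task = asyncio.create_task(coro)
    _background_tasks.add(task)
    task.add_done_callback(_background_tasks.discard)  # drop ref when done
    return task
```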
2026-02-06 13:25:10 +04:00
Zamil Majdy
b49d8e2cba fix lock 2026-02-06 13:19:53 +04:00
Zamil Majdy
452544530d feat(chat/sdk): Enable native SDK context compaction
- Remove manual truncation in conversation history formatting
- SDK's automatic compaction handles context limits intelligently
- Add observability hooks:
  - PreCompact: Log when SDK triggers context compaction
  - PostToolUse: Log successful tool executions
  - PostToolUseFailure: Log and debug failed tool executions
- Update config: increase max_context_messages (SDK handles compaction)
2026-02-06 12:44:48 +04:00
Zamil Majdy
32ee7e6cf8 fix(chat): Remove aggressive stale task detection
The 60-second timeout was too aggressive and could incorrectly mark
legitimate long-running tool calls as stale. Relying on Redis TTL
(1 hour) for cleanup is sufficient and more reliable.
2026-02-06 11:45:54 +04:00
Zamil Majdy
670663c406 Merge dev and resolve poetry.lock conflict 2026-02-06 11:40:41 +04:00
Zamil Majdy
0dbe4cf51e feat(backend/chat): Add Claude Agent SDK integration for CoPilot
This PR adds Claude Agent SDK as the default backend for CoPilot chat completions,
replacing the direct OpenAI API integration.

Key changes:
- Add Claude Agent SDK service layer with MCP tool adapter
- Fix message persistence after tool calls (messages no longer disappear on refresh)
- Add OpenRouter tracing for session title generation
- Add security hooks for user context validation
- Add Anthropic fallback when SDK is not available
- Clean up excessive debug logging
2026-02-06 11:38:17 +04:00
Nicholas Tindle
29ee85c86f fix: add virus scanning to WorkspaceManager.write_file() (#11990)
## Summary

Adds virus scanning at the `WorkspaceManager.write_file()` layer for
defense in depth.

## Problem

Previously, virus scanning was only performed at entry points:
- `store_media_file()` in `backend/util/file.py`
- `WriteWorkspaceFileTool` in
`backend/api/features/chat/tools/workspace_files.py`

This created a trust boundary where any new caller of
`WorkspaceManager.write_file()` would need to remember to scan first.

## Solution

Add `scan_content_safe()` call directly in
`WorkspaceManager.write_file()` before persisting to storage. This
ensures all content is scanned regardless of the caller.
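
A minimal sketch of the centralized check (the `scan_content_safe` import is from this PR, but its exact signature and the `WorkspaceManager` internals shown here are assumptions):

```python
from backend.util.virus_scanner import scan_content_safe  # per the PR

class WorkspaceManager:
    async def write_file(self, path: str, content: bytes) -> None:
        self._validate_size(content)  # existing file-size validation
        # New: scan every write, regardless of caller (defense in depth).
        # The exact scan_content_safe signature is an assumption.
        await scan_content_safe(content, filename=path)
        await self._persist(path, content)  # storage + DB persistence

    def _validate_size(self, content: bytes) -> None: ...
    async def _persist(self, path: str, content: bytes) -> None: ...
```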

## Changes

- Added import for `scan_content_safe` from `backend.util.virus_scanner`
- Added virus scan call after file size validation, before storage

## Testing

Existing tests should pass. The scan is a no-op in test environments
where ClamAV isn't running.

Closes https://linear.app/autogpt/issue/OPEN-2993

---

> [!NOTE]
> **Medium Risk**
> Introduces a new required async scan step in the workspace write path,
which can add latency or cause new failures if the scanner/ClamAV is
misconfigured or unavailable.
> 
> **Overview**
> Adds a **defense-in-depth** virus scan to
`WorkspaceManager.write_file()` by invoking `scan_content_safe()` after
file-size validation and before any storage/database persistence.
> 
> This centralizes scanning so any caller writing workspace files gets
the same malware check without relying on upstream entry points to
remember to scan.
2026-02-06 04:38:32 +00:00
Nicholas Tindle
85b6520710 feat(blocks): Add video editing blocks (#11796)
This PR adds general-purpose video editing blocks for the AutoGPT
Platform, enabling automated video production workflows like documentary
creation, marketing videos, tutorial assembly, and content repurposing.

### Changes 🏗️


**New blocks added in `backend/blocks/video/`:**
- `VideoDownloadBlock` - Download videos from URLs (YouTube, Vimeo, news
sites, direct links) using yt-dlp
- `VideoClipBlock` - Extract time segments from videos with start/end
time validation
- `VideoConcatBlock` - Merge multiple video clips with optional
transitions (none, crossfade, fade_black)
- `VideoTextOverlayBlock` - Add text overlays/captions with positioning
and timing options
- `VideoNarrationBlock` - Generate AI narration via ElevenLabs and mix
with video audio (replace, mix, or ducking modes)

**Dependencies required:**
- `yt-dlp` - For video downloading
- `moviepy` - For video editing operations

**Implementation details** (see the sketch after this list):
- All blocks follow the SDK pattern with proper error handling and
exception chaining
- Proper resource cleanup in `finally` blocks to prevent memory leaks
- Input validation (e.g., end_time > start_time)
- Test mocks included for CI
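
A rough sketch of the validation-plus-cleanup pattern these blocks use (moviepy 1.x's `VideoFileClip` API is assumed; the real blocks wrap this in the SDK schema and workspace file handling):

```python
from moviepy.editor import VideoFileClip  # moviepy 1.x API assumed

def clip_video(input_path: str, output_path: str, start: float, end: float) -> str:
    if end <= start:  # input validation, as in VideoClipBlock
        raise ValueError("end_time must be greater than start_time")
    clip = None
    try:
        clip = VideoFileClip(input_path)
        clip.subclip(start, end).write_videofile(output_path, logger=None)
        return output_path
    except OSError as e:
        raise RuntimeError(f"Video clipping failed: {e}") from e  # chained
    finally:
        if clip is not None:
            clip.close()  # release ffmpeg readers to prevent leaks
```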

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Blocks follow the SDK pattern with
`BlockSchemaInput`/`BlockSchemaOutput`
  - [x] Resource cleanup is implemented in `finally` blocks
  - [x] Exception chaining is properly implemented
  - [x] Input validation is in place
  - [x] Test mocks are provided for CI environments

#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

N/A - No configuration changes required.


---

> [!NOTE]
> **Medium Risk**
> Adds new multimedia blocks that invoke ffmpeg/MoviePy and introduces
new external dependencies (plus container packages), which can impact
runtime stability and resource usage; download/overlay blocks are
present but disabled due to sandbox/policy concerns.
> 
> **Overview**
> Adds a new `backend.blocks.video` module with general-purpose video
workflow blocks (download, clip, concat w/ transitions, loop, add-audio,
text overlay, and ElevenLabs-powered narration), including shared
utilities for codec selection, filename cleanup, and an ffmpeg-based
chapter-strip workaround for MoviePy.
> 
> Extends credentials/config to support ElevenLabs
(`ELEVENLABS_API_KEY`, provider enum, system credentials, and cost
config) and adds new dependencies (`elevenlabs`, `yt-dlp`) plus Docker
runtime packages (`ffmpeg`, `imagemagick`).
> 
> Improves file/reference handling end-to-end by embedding MIME types in
`workspace://...#mime` outputs and updating frontend rendering to detect
video vs image from MIME fragments (and broaden supported audio/video
extensions), with optional enhanced output rendering behind a feature
flag in the legacy builder UI.

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-05 22:22:33 +00:00
Bently
bfa942e032 feat(platform): Add Claude Opus 4.6 model support (#11983)
## Summary
Adds support for Anthropic's newly released Claude Opus 4.6 model.

## Changes
- Added `claude-opus-4-6` to the `LlmModel` enum
- Added model metadata: 200K context window (1M beta), **128K max output
tokens**
- Added block cost config (same pricing tier as Opus 4.5: $5/MTok input,
$25/MTok output)
- Updated chat config default model to Claude Opus 4.6

## Model Details
From [Anthropic's
docs](https://docs.anthropic.com/en/docs/about-claude/models):
- **API ID:** `claude-opus-4-6`
- **Context window:** 200K tokens (1M beta)
- **Max output:** 128K tokens (up from 64K on Opus 4.5)
- **Extended thinking:** Yes
- **Adaptive thinking:** Yes (new, Opus 4.6 exclusive)
- **Knowledge cutoff:** May 2025 (reliable), Aug 2025 (training)
- **Pricing:** $5/MTok input, $25/MTok output (same as Opus 4.5)
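
A sketch of what the enum and metadata additions above might look like (the field and type names here are assumptions; the platform's actual registry types differ):

```python
from dataclasses import dataclass
from enum import Enum

@dataclass(frozen=True)
class ModelMetadata:  # assumed shape, for illustration only
    context_window: int
    max_output_tokens: int
    input_cost_per_mtok: float
    output_cost_per_mtok: float

class LlmModel(str, Enum):
    CLAUDE_OPUS_4_5 = "claude-opus-4-5"
    CLAUDE_OPUS_4_6 = "claude-opus-4-6"  # new entry

MODEL_METADATA = {
    LlmModel.CLAUDE_OPUS_4_6: ModelMetadata(
        context_window=200_000,     # 1M with the beta header
        max_output_tokens=128_000,  # up from 64K on Opus 4.5
        input_cost_per_mtok=5.0,
        output_cost_per_mtok=25.0,
    ),
}
```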

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2026-02-05 19:19:51 +00:00
Otto
11256076d8 fix(frontend): Rename "Tasks" tab to "Agents" in navbar (#11982)
## Summary
Renames the "Tasks" tab in the navbar to "Agents" per the Figma design.

## Changes
- `Navbar.tsx`: Changed label from "Tasks" to "Agents"

<img width="1069" height="153" alt="image"
src="https://github.com/user-attachments/assets/3869d2a2-9bd9-4346-b650-15dabbdb46c4"
/>


## Why
- "Tasks" was incorrectly named and confusing for users trying to find
their agent builds
- Matches the Figma design

## Linear Ticket
Fixes [SECRT-1894](https://linear.app/autogpt/issue/SECRT-1894)

## Related
- [SECRT-1865](https://linear.app/autogpt/issue/SECRT-1865) - Find and
Manage Existing/Unpublished or Recent Agent Builds Is Unintuitive
2026-02-05 17:54:39 +00:00
Bently
3ca2387631 feat(blocks): Implement Text Encode block (#11857)
## Summary
Implements a `TextEncoderBlock` that encodes plain text into escape
sequences (the reverse of `TextDecoderBlock`).

## Changes

### Block Implementation
- Added `encoder_block.py` with `TextEncoderBlock` in
`autogpt_platform/backend/backend/blocks/`
- Uses `codecs.encode(text, "unicode_escape").decode("utf-8")` for encoding (see the round-trip sketch below)
- Mirrors the structure and patterns of the existing `TextDecoderBlock`
- Categorised as `BlockCategory.TEXT`
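
A quick round-trip illustrating the codec the block relies on:

```python
import codecs

text = 'Line one\nLine two\t"quoted"'

# Encoding turns control characters into visible escape sequences.
encoded = codecs.encode(text, "unicode_escape").decode("utf-8")
print(encoded)  # Line one\nLine two\t"quoted"

# TextDecoderBlock performs the reverse operation.
assert codecs.decode(encoded, "unicode_escape") == text
```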

### Documentation
- Added Text Encoder section to
`docs/integrations/block-integrations/text.md` (the auto-generated docs
file for TEXT category blocks)
- Expanded "How it works" with technical details on the encoding method,
validation, and edge cases
- Added 3 structured use cases per docs guidelines: JSON payload
preparation, Config/ENV generation, Snapshot fixtures
- Added Text Encoder to the overview table in
`docs/integrations/README.md`
- Removed standalone `encoder_block.md` (TEXT category blocks belong in
`text.md` per `CATEGORY_FILE_MAP` in `generate_block_docs.py`)

### Documentation Formatting (CodeRabbit feedback)
- Added blank lines around markdown tables (MD058)
- Added `text` language tags to fenced code blocks (MD040)
- Restructured use case section with bold headings per coding guidelines

## How Docs Were Synced
The `check-docs-sync` CI job runs `poetry run python scripts/generate_block_docs.py --check`, which expects blocks to be
documented in category-grouped files. Since `TextEncoderBlock` uses
`BlockCategory.TEXT`, the `CATEGORY_FILE_MAP` maps it to `text.md` — not
a standalone file. The block entry was added to `text.md` following the
exact format used by the generator (with `<!-- MANUAL -->` markers for
hand-written sections).
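
The mapping presumably looks something like this (shape assumed from the PR description; only the TEXT entry is confirmed):

```python
# Assumed shape of the mapping in generate_block_docs.py; the real keys
# are BlockCategory enum members, not strings, and only TEXT is confirmed.
CATEGORY_FILE_MAP = {
    "TEXT": "text.md",    # TextEncoderBlock is documented here
    "BASIC": "basic.md",  # hypothetical additional entry
}
```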

## Related Issue
Fixes #11111

---------

Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: lif <19658300+majiayu000@users.noreply.github.com>
Co-authored-by: Aryan Kaul <134673289+aryancodes1@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-05 17:31:02 +00:00
Otto
ed07f02738 fix(copilot): edit_agent updates existing agent instead of creating duplicate (#11981)
## Summary

When editing an agent via CoPilot's `edit_agent` tool, the code was
always creating a new `LibraryAgent` entry instead of updating the
existing one to point to the new graph version. This caused duplicate
agents to appear in the user's library.

## Changes

In `save_agent_to_library()` (sketched below):
- When `is_update=True`, now checks if there's an existing library agent
for the graph using `get_library_agent_by_graph_id()`
- If found, uses `update_agent_version_in_library()` to update the
existing library agent to point to the new version
- Falls back to creating a new library agent if no existing one is found
(e.g., if editing a graph that wasn't added to library yet)
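
A condensed sketch of the new flow (the helper names follow the PR, but their signatures are assumptions and stubs stand in for the real library-DB code):

```python
# Stubs stand in for the real library-DB helpers named in the PR.
async def get_library_agent_by_graph_id(user_id: str, graph_id: str): ...
async def update_agent_version_in_library(user_id: str, graph_id: str, version: int): ...
async def create_library_agent(graph, user_id: str): ...

async def save_agent_to_library(graph, user_id: str, is_update: bool):
    if is_update:
        existing = await get_library_agent_by_graph_id(user_id, graph.id)
        if existing:
            # Repoint the existing library entry at the new graph version
            # instead of creating a duplicate.
            return await update_agent_version_in_library(
                user_id, graph.id, graph.version
            )
    # Graph was never added to the library: fall back to creating an entry.
    return await create_library_agent(graph, user_id)
```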

## Testing

- Verified lint/format checks pass
- Plan reviewed and approved by Staff Engineer Plan Reviewer agent

## Related

Fixes [SECRT-1857](https://linear.app/autogpt/issue/SECRT-1857)

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2026-02-05 15:02:26 +00:00
Swifty
b121030c94 feat(frontend): Add progress indicator during agent generation [SECRT-1883] (#11974)
## Summary
- Add asymptotic progress bar that appears during long-running chat
tasks
- Progress bar shows after 10 seconds with "Working on it..." label and
percentage
- Uses half-life formula (sketched below): ~50% at 30s, ~75% at 60s, ~87.5% at 90s, etc.
- Creates the classic "game loading bar" effect that never reaches 100%
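
The curve itself is one line; a Python rendering of the math (the real component is TypeScript in the frontend):

```python
HALF_LIFE_S = 30.0

def progress(elapsed_s: float) -> float:
    """Asymptotic progress: approaches but never reaches 100%."""
    return 1.0 - 0.5 ** (elapsed_s / HALF_LIFE_S)

for t in (30, 60, 90, 300):
    print(f"{t:>3}s -> {progress(t):.1%}")  # 50.0%, 75.0%, 87.5%, 99.9%
```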



https://github.com/user-attachments/assets/3c59289e-793c-4a08-b3fc-69e1eef28b1f



## Test plan
- [x] Start a chat that triggers agent generation
- [x] Wait 10+ seconds for the progress bar to appear
- [x] Verify progress bar is centered with label and percentage
- [x] Verify progress follows expected timing (~50% at 30s)
- [x] Verify progress bar disappears when task completes

---------

Co-authored-by: Otto <otto@agpt.co>
2026-02-05 15:37:51 +01:00
Swifty
c22c18374d feat(frontend): Add ready-to-test prompt after agent creation [SECRT-1882] (#11975)
## Summary
- Add special UI prompt when agent is successfully created in chat
- Show "Agent Created Successfully" with agent name
- Provide two action buttons:
- **Run with example values**: Sends chat message asking AI to run with
placeholders
- **Run with my inputs**: Opens RunAgentModal for custom input
configuration
- After run/schedule, automatically send chat message with execution
details for AI monitoring



https://github.com/user-attachments/assets/b11e118c-de59-4b79-a629-8bd0d52d9161



## Test plan
- [x] Create an agent through chat
- [x] Verify "Agent Created Successfully" prompt appears
- [x] Click "Run with example values" - verify chat message is sent
- [x] Click "Run with my inputs" - verify RunAgentModal opens
- [x] Fill inputs and run - verify chat message with execution ID is
sent
- [x] Fill inputs and schedule - verify chat message with schedule
details is sent

---------

Co-authored-by: Otto <otto@agpt.co>
2026-02-05 15:37:31 +01:00
Swifty
e40233a3ac fix(backend/chat): Guide find_agent users toward action with CTAs (#11976)
When users search for agents, guide them toward creating a custom agent, both when no results are found and after results are shown. This improves user engagement by offering a clear next step.

### Changes 🏗️

- Updated `agent_search.py` to add CTAs in search responses
- Added messaging to inform users they can create custom agents based on
their needs
- Applied to both "no results found" and "agents found" scenarios

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for agents in marketplace with matching results
  - [x] Search for agents in marketplace with no results
  - [x] Search for agents in library with matching results  
  - [x] Search for agents in library with no results
  - [x] Verify CTA message appears in all cases

---------

Co-authored-by: Otto <otto@agpt.co>
2026-02-05 15:36:55 +01:00
122 changed files with 4801 additions and 8838 deletions

View File

@@ -152,6 +152,7 @@ REPLICATE_API_KEY=
REVID_API_KEY=
SCREENSHOTONE_API_KEY=
UNREAL_SPEECH_API_KEY=
ELEVENLABS_API_KEY=
# Data & Search Services
E2B_API_KEY=

View File

@@ -19,3 +19,6 @@ load-tests/*.json
load-tests/*.log
load-tests/node_modules/*
migrations/*/rollback*.sql
# Workspace files
workspaces/

View File

@@ -62,10 +62,12 @@ ENV POETRY_HOME=/opt/poetry \
DEBIAN_FRONTEND=noninteractive
ENV PATH=/opt/poetry/bin:$PATH
# Install Python without upgrading system-managed packages
# Install Python, FFmpeg, and ImageMagick (required for video processing blocks)
RUN apt-get update && apt-get install -y \
python3.13 \
python3-pip \
ffmpeg \
imagemagick \
&& rm -rf /var/lib/apt/lists/*
# Copy only necessary files from builder

View File

@@ -11,7 +11,7 @@ class ChatConfig(BaseSettings):
# OpenAI API Configuration
model: str = Field(
default="anthropic/claude-opus-4.5", description="Default model to use"
default="anthropic/claude-opus-4.6", description="Default model to use"
)
title_model: str = Field(
default="openai/gpt-4o-mini",
@@ -27,12 +27,20 @@ class ChatConfig(BaseSettings):
session_ttl: int = Field(default=43200, description="Session TTL in seconds")
# Streaming Configuration
# Note: When using Claude Agent SDK, context management is handled automatically
# via the SDK's built-in compaction. This is mainly used for the fallback path.
max_context_messages: int = Field(
default=50, ge=1, le=200, description="Maximum context messages"
default=100,
ge=1,
le=500,
description="Max context messages (SDK handles compaction automatically)",
)
stream_timeout: int = Field(default=300, description="Stream timeout in seconds")
max_retries: int = Field(default=3, description="Maximum number of retries")
max_retries: int = Field(
default=3,
description="Max retries for fallback path (SDK handles retries internally)",
)
max_agent_runs: int = Field(default=30, description="Maximum number of agent runs")
max_agent_schedules: int = Field(
default=30, description="Maximum number of agent schedules"
@@ -93,6 +101,12 @@ class ChatConfig(BaseSettings):
description="Name of the prompt in Langfuse to fetch",
)
# Claude Agent SDK Configuration
use_claude_agent_sdk: bool = Field(
default=True,
description="Use Claude Agent SDK for chat completions",
)
@field_validator("api_key", mode="before")
@classmethod
def get_api_key(cls, v):
@@ -132,6 +146,17 @@ class ChatConfig(BaseSettings):
v = os.getenv("CHAT_INTERNAL_API_KEY")
return v
@field_validator("use_claude_agent_sdk", mode="before")
@classmethod
def get_use_claude_agent_sdk(cls, v):
"""Get use_claude_agent_sdk from environment if not provided."""
# Check environment variable - default to True if not set
env_val = os.getenv("CHAT_USE_CLAUDE_AGENT_SDK", "").lower()
if env_val:
return env_val in ("true", "1", "yes", "on")
# Default to True (SDK enabled by default)
return True if v is None else v
# Prompt paths for different contexts
PROMPT_PATHS: dict[str, str] = {
"default": "prompts/chat_system.md",

View File

@@ -273,9 +273,8 @@ async def _get_session_from_cache(session_id: str) -> ChatSession | None:
try:
session = ChatSession.model_validate_json(raw_session)
logger.info(
f"Loading session {session_id} from cache: "
f"message_count={len(session.messages)}, "
f"roles={[m.role for m in session.messages]}"
f"[CACHE] Loaded session {session_id}: {len(session.messages)} messages, "
f"last_roles={[m.role for m in session.messages[-3:]]}" # Last 3 roles
)
return session
except Exception as e:
@@ -317,11 +316,9 @@ async def _get_session_from_db(session_id: str) -> ChatSession | None:
return None
messages = prisma_session.Messages
logger.info(
f"Loading session {session_id} from DB: "
f"has_messages={messages is not None}, "
f"message_count={len(messages) if messages else 0}, "
f"roles={[m.role for m in messages] if messages else []}"
logger.debug(
f"[DB] Loaded session {session_id}: {len(messages) if messages else 0} messages, "
f"roles={[m.role for m in messages[-3:]] if messages else []}" # Last 3 roles
)
return ChatSession.from_db(prisma_session, messages)
@@ -372,10 +369,9 @@ async def _save_session_to_db(
"function_call": msg.function_call,
}
)
logger.info(
f"Saving {len(new_messages)} new messages to DB for session {session.session_id}: "
f"roles={[m['role'] for m in messages_data]}, "
f"start_sequence={existing_message_count}"
logger.debug(
f"[DB] Saving {len(new_messages)} messages to session {session.session_id}, "
f"roles={[m['role'] for m in messages_data]}"
)
await chat_db.add_chat_messages_batch(
session_id=session.session_id,
@@ -415,7 +411,7 @@ async def get_chat_session(
logger.warning(f"Unexpected cache error for session {session_id}: {e}")
# Fall back to database
logger.info(f"Session {session_id} not in cache, checking database")
logger.debug(f"Session {session_id} not in cache, checking database")
session = await _get_session_from_db(session_id)
if session is None:
@@ -432,7 +428,6 @@ async def get_chat_session(
# Cache the session from DB
try:
await _cache_session(session)
logger.info(f"Cached session {session_id} from database")
except Exception as e:
logger.warning(f"Failed to cache session {session_id}: {e}")
@@ -603,13 +598,19 @@ async def update_session_title(session_id: str, title: str) -> bool:
logger.warning(f"Session {session_id} not found for title update")
return False
# Invalidate cache so next fetch gets updated title
# Update title in cache if it exists (instead of invalidating).
# This prevents race conditions where cache invalidation causes
# the frontend to see stale DB data while streaming is still in progress.
try:
redis_key = _get_session_cache_key(session_id)
async_redis = await get_redis_async()
await async_redis.delete(redis_key)
cached = await _get_session_from_cache(session_id)
if cached:
cached.title = title
await _cache_session(cached)
except Exception as e:
logger.warning(f"Failed to invalidate cache for session {session_id}: {e}")
# Not critical - title will be correct on next full cache refresh
logger.warning(
f"Failed to update title in cache for session {session_id}: {e}"
)
return True
except Exception as e:

View File

@@ -18,10 +18,6 @@ class ResponseType(str, Enum):
START = "start"
FINISH = "finish"
# Step lifecycle (one LLM API call within a message)
START_STEP = "start-step"
FINISH_STEP = "finish-step"
# Text streaming
TEXT_START = "text-start"
TEXT_DELTA = "text-delta"
@@ -61,16 +57,6 @@ class StreamStart(StreamBaseResponse):
description="Task ID for SSE reconnection. Clients can reconnect using GET /tasks/{taskId}/stream",
)
def to_sse(self) -> str:
"""Convert to SSE format, excluding non-protocol fields like taskId."""
import json
data: dict[str, Any] = {
"type": self.type.value,
"messageId": self.messageId,
}
return f"data: {json.dumps(data)}\n\n"
class StreamFinish(StreamBaseResponse):
"""End of message/stream."""
@@ -78,26 +64,6 @@ class StreamFinish(StreamBaseResponse):
type: ResponseType = ResponseType.FINISH
class StreamStartStep(StreamBaseResponse):
"""Start of a step (one LLM API call within a message).
The AI SDK uses this to add a step-start boundary to message.parts,
enabling visual separation between multiple LLM calls in a single message.
"""
type: ResponseType = ResponseType.START_STEP
class StreamFinishStep(StreamBaseResponse):
"""End of a step (one LLM API call within a message).
The AI SDK uses this to reset activeTextParts and activeReasoningParts,
so the next LLM call in a tool-call continuation starts with clean state.
"""
type: ResponseType = ResponseType.FINISH_STEP
# ========== Text Streaming ==========
@@ -151,7 +117,7 @@ class StreamToolOutputAvailable(StreamBaseResponse):
type: ResponseType = ResponseType.TOOL_OUTPUT_AVAILABLE
toolCallId: str = Field(..., description="Tool call ID this responds to")
output: str | dict[str, Any] = Field(..., description="Tool execution output")
# Keep these for internal backend use
# Additional fields for internal use (not part of AI SDK spec but useful)
toolName: str | None = Field(
default=None, description="Name of the tool that was executed"
)
@@ -159,17 +125,6 @@ class StreamToolOutputAvailable(StreamBaseResponse):
default=True, description="Whether the tool execution succeeded"
)
def to_sse(self) -> str:
"""Convert to SSE format, excluding non-spec fields."""
import json
data = {
"type": self.type.value,
"toolCallId": self.toolCallId,
"output": self.output,
}
return f"data: {json.dumps(data)}\n\n"
# ========== Other ==========

View File

@@ -1,12 +1,13 @@
"""Chat API routes for chat session management and streaming via SSE."""
import asyncio
import logging
import uuid as uuid_module
from collections.abc import AsyncGenerator
from typing import Annotated
from autogpt_libs import auth
from fastapi import APIRouter, Depends, Header, HTTPException, Query, Response, Security
from fastapi import APIRouter, Depends, Header, HTTPException, Query, Security
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
@@ -16,30 +17,17 @@ from . import service as chat_service
from . import stream_registry
from .completion_handler import process_operation_failure, process_operation_success
from .config import ChatConfig
from .model import ChatSession, create_chat_session, get_chat_session, get_user_sessions
from .response_model import StreamFinish, StreamHeartbeat
from .tools.models import (
AgentDetailsResponse,
AgentOutputResponse,
AgentPreviewResponse,
AgentSavedResponse,
AgentsFoundResponse,
BlockListResponse,
BlockOutputResponse,
ClarificationNeededResponse,
DocPageResponse,
DocSearchResultsResponse,
ErrorResponse,
ExecutionStartedResponse,
InputValidationErrorResponse,
NeedLoginResponse,
NoResultsResponse,
OperationInProgressResponse,
OperationPendingResponse,
OperationStartedResponse,
SetupRequirementsResponse,
UnderstandingUpdatedResponse,
from .model import (
ChatMessage,
ChatSession,
create_chat_session,
get_chat_session,
get_user_sessions,
upsert_chat_session,
)
from .response_model import StreamFinish, StreamHeartbeat, StreamStart
from .sdk import service as sdk_service
from .tracking import track_user_message
config = ChatConfig()
@@ -231,6 +219,10 @@ async def get_session(
active_task, last_message_id = await stream_registry.get_active_task_for_session(
session_id, user_id
)
logger.info(
f"[GET_SESSION] session={session_id}, active_task={active_task is not None}, "
f"msg_count={len(messages)}, last_role={messages[-1].get('role') if messages else 'none'}"
)
if active_task:
# Filter out the in-progress assistant message from the session response.
# The client will receive the complete assistant response through the SSE
@@ -287,10 +279,30 @@ async def stream_chat_post(
containing the task_id for reconnection.
"""
import asyncio
session = await _validate_and_get_session(session_id, user_id)
# Add user message to session BEFORE creating task to avoid race condition
# where GET_SESSION sees the task as "running" but the message isn't saved yet
if request.message:
session.messages.append(
ChatMessage(
role="user" if request.is_user_message else "assistant",
content=request.message,
)
)
if request.is_user_message:
track_user_message(
user_id=user_id,
session_id=session_id,
message_length=len(request.message),
)
logger.info(
f"[STREAM] Saving user message to session {session_id}, "
f"msg_count={len(session.messages)}"
)
session = await upsert_chat_session(session)
logger.info(f"[STREAM] User message saved for session {session_id}")
# Create a task in the stream registry for reconnection support
task_id = str(uuid_module.uuid4())
operation_id = str(uuid_module.uuid4())
@@ -305,21 +317,38 @@ async def stream_chat_post(
# Background task that runs the AI generation independently of SSE connection
async def run_ai_generation():
chunk_count = 0
try:
async for chunk in chat_service.stream_chat_completion(
# Emit a start event with task_id for reconnection
start_chunk = StreamStart(messageId=task_id, taskId=task_id)
await stream_registry.publish_chunk(task_id, start_chunk)
# Choose service based on configuration
use_sdk = config.use_claude_agent_sdk
stream_fn = (
sdk_service.stream_chat_completion_sdk
if use_sdk
else chat_service.stream_chat_completion
)
# Pass message=None since we already added it to the session above
async for chunk in stream_fn(
session_id,
request.message,
None, # Message already in session
is_user_message=request.is_user_message,
user_id=user_id,
session=session, # Pass pre-fetched session to avoid double-fetch
session=session, # Pass session with message already added
context=request.context,
_task_id=task_id, # Pass task_id so service emits start with taskId for reconnection
):
chunk_count += 1
# Write to Redis (subscribers will receive via XREAD)
await stream_registry.publish_chunk(task_id, chunk)
# Mark task as completed
await stream_registry.mark_task_completed(task_id, "completed")
logger.info(
f"[BG_TASK] AI generation completed for session {session_id}: {chunk_count} chunks, marking task {task_id} as completed"
)
# Mark task as completed (also publishes StreamFinish)
completed = await stream_registry.mark_task_completed(task_id, "completed")
logger.info(f"[BG_TASK] mark_task_completed returned: {completed}")
except Exception as e:
logger.error(
f"Error in background AI generation for session {session_id}: {e}"
@@ -334,7 +363,7 @@ async def stream_chat_post(
async def event_generator() -> AsyncGenerator[str, None]:
subscriber_queue = None
try:
# Subscribe to the task stream (this replays existing messages + live updates)
# Subscribe to the task stream (replays + live updates)
subscriber_queue = await stream_registry.subscribe_to_task(
task_id=task_id,
user_id=user_id,
@@ -342,6 +371,7 @@ async def stream_chat_post(
)
if subscriber_queue is None:
logger.warning(f"Failed to subscribe to task {task_id}")
yield StreamFinish().to_sse()
yield "data: [DONE]\n\n"
return
@@ -360,11 +390,11 @@ async def stream_chat_post(
yield StreamHeartbeat().to_sse()
except GeneratorExit:
pass # Client disconnected - background task continues
pass # Client disconnected - normal behavior
except Exception as e:
logger.error(f"Error in SSE stream for task {task_id}: {e}")
finally:
# Unsubscribe when client disconnects or stream ends to prevent resource leak
# Unsubscribe when client disconnects or stream ends
if subscriber_queue is not None:
try:
await stream_registry.unsubscribe_from_task(
@@ -393,73 +423,49 @@ async def stream_chat_post(
@router.get(
"/sessions/{session_id}/stream",
)
async def resume_session_stream(
async def stream_chat_get(
session_id: str,
message: Annotated[str, Query(min_length=1, max_length=10000)],
user_id: str | None = Depends(auth.get_user_id),
is_user_message: bool = Query(default=True),
):
"""
Resume an active stream for a session.
Stream chat responses for a session (GET - legacy endpoint).
Called by the AI SDK's ``useChat(resume: true)`` on page load.
Checks for an active (in-progress) task on the session and either replays
the full SSE stream or returns 204 No Content if nothing is running.
Streams the AI/completion responses in real time over Server-Sent Events (SSE), including:
- Text fragments as they are generated
- Tool call UI elements (if invoked)
- Tool execution results
Args:
session_id: The chat session identifier.
session_id: The chat session identifier to associate with the streamed messages.
message: The user's new message to process.
user_id: Optional authenticated user ID.
is_user_message: Whether the message is a user message.
Returns:
StreamingResponse (SSE) when an active stream exists,
or 204 No Content when there is nothing to resume.
StreamingResponse: SSE-formatted response chunks.
"""
import asyncio
active_task, _last_id = await stream_registry.get_active_task_for_session(
session_id, user_id
)
if not active_task:
return Response(status_code=204)
subscriber_queue = await stream_registry.subscribe_to_task(
task_id=active_task.task_id,
user_id=user_id,
last_message_id="0-0", # Full replay so useChat rebuilds the message
)
if subscriber_queue is None:
return Response(status_code=204)
session = await _validate_and_get_session(session_id, user_id)
async def event_generator() -> AsyncGenerator[str, None]:
try:
while True:
try:
chunk = await asyncio.wait_for(
subscriber_queue.get(), timeout=30.0
)
yield chunk.to_sse()
if isinstance(chunk, StreamFinish):
break
except asyncio.TimeoutError:
yield StreamHeartbeat().to_sse()
except GeneratorExit:
pass
except Exception as e:
logger.error(
f"Error in resume stream for session {session_id}: {e}"
)
finally:
try:
await stream_registry.unsubscribe_from_task(
active_task.task_id, subscriber_queue
)
except Exception as unsub_err:
logger.error(
f"Error unsubscribing from task {active_task.task_id}: {unsub_err}",
exc_info=True,
)
yield "data: [DONE]\n\n"
# Choose service based on configuration
use_sdk = config.use_claude_agent_sdk
stream_fn = (
sdk_service.stream_chat_completion_sdk
if use_sdk
else chat_service.stream_chat_completion
)
async for chunk in stream_fn(
session_id,
message,
is_user_message=is_user_message,
user_id=user_id,
session=session, # Pass pre-fetched session to avoid double-fetch
):
yield chunk.to_sse()
# AI SDK protocol termination
yield "data: [DONE]\n\n"
return StreamingResponse(
event_generator(),
@@ -467,8 +473,8 @@ async def resume_session_stream(
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no",
"x-vercel-ai-ui-message-stream": "v1",
"X-Accel-Buffering": "no", # Disable nginx buffering
"x-vercel-ai-ui-message-stream": "v1", # AI SDK protocol header
},
)
@@ -579,8 +585,6 @@ async def stream_task(
)
async def event_generator() -> AsyncGenerator[str, None]:
import asyncio
heartbeat_interval = 15.0 # Send heartbeat every 15 seconds
try:
while True:
@@ -780,42 +784,3 @@ async def health_check() -> dict:
"service": "chat",
"version": "0.1.0",
}
# ========== Schema Export (for OpenAPI / Orval codegen) ==========
ToolResponseUnion = (
AgentsFoundResponse
| NoResultsResponse
| AgentDetailsResponse
| SetupRequirementsResponse
| ExecutionStartedResponse
| NeedLoginResponse
| ErrorResponse
| InputValidationErrorResponse
| AgentOutputResponse
| UnderstandingUpdatedResponse
| AgentPreviewResponse
| AgentSavedResponse
| ClarificationNeededResponse
| BlockListResponse
| BlockOutputResponse
| DocSearchResultsResponse
| DocPageResponse
| OperationStartedResponse
| OperationPendingResponse
| OperationInProgressResponse
)
@router.get(
"/schema/tool-responses",
response_model=ToolResponseUnion,
include_in_schema=True,
summary="[Dummy] Tool response type export for codegen",
description="This endpoint is not meant to be called. It exists solely to "
"expose tool response models in the OpenAPI schema for frontend codegen.",
)
async def _tool_response_schema() -> ToolResponseUnion: # type: ignore[return]
"""Never called at runtime. Exists only so Orval generates TS types."""
raise HTTPException(status_code=501, detail="Schema-only endpoint")

View File

@@ -0,0 +1,14 @@
"""Claude Agent SDK integration for CoPilot.
This module provides the integration layer between the Claude Agent SDK
and the existing CoPilot tool system, enabling drop-in replacement of
the current LLM orchestration with the battle-tested Claude Agent SDK.
"""
from .service import stream_chat_completion_sdk
from .tool_adapter import create_copilot_mcp_server
__all__ = [
"stream_chat_completion_sdk",
"create_copilot_mcp_server",
]

View File

@@ -0,0 +1,348 @@
"""Anthropic SDK fallback implementation.
This module provides the fallback streaming implementation using the Anthropic SDK
directly when the Claude Agent SDK is not available.
"""
import json
import logging
import os
import uuid
from collections.abc import AsyncGenerator
from typing import Any, cast
from ..model import ChatMessage, ChatSession
from ..response_model import (
StreamBaseResponse,
StreamError,
StreamFinish,
StreamTextDelta,
StreamTextEnd,
StreamTextStart,
StreamToolInputAvailable,
StreamToolInputStart,
StreamToolOutputAvailable,
StreamUsage,
)
from .tool_adapter import get_tool_definitions, get_tool_handlers
logger = logging.getLogger(__name__)
async def stream_with_anthropic(
session: ChatSession,
system_prompt: str,
text_block_id: str,
) -> AsyncGenerator[StreamBaseResponse, None]:
"""Stream using Anthropic SDK directly with tool calling support.
This function accumulates messages into the session for persistence.
The caller should NOT yield an additional StreamFinish - this function handles it.
"""
import anthropic
# Only use ANTHROPIC_API_KEY - don't fall back to OpenRouter keys
api_key = os.getenv("ANTHROPIC_API_KEY")
if not api_key:
yield StreamError(
errorText="ANTHROPIC_API_KEY not configured for fallback",
code="config_error",
)
yield StreamFinish()
return
client = anthropic.AsyncAnthropic(api_key=api_key)
tool_definitions = get_tool_definitions()
tool_handlers = get_tool_handlers()
anthropic_tools = [
{
"name": t["name"],
"description": t["description"],
"input_schema": t["inputSchema"],
}
for t in tool_definitions
]
anthropic_messages = _convert_session_to_anthropic(session)
if not anthropic_messages or anthropic_messages[-1]["role"] != "user":
anthropic_messages.append(
{"role": "user", "content": "Continue with the task."}
)
has_started_text = False
max_iterations = 10
accumulated_text = ""
accumulated_tool_calls: list[dict[str, Any]] = []
for _ in range(max_iterations):
try:
async with client.messages.stream(
model="claude-sonnet-4-20250514",
max_tokens=4096,
system=system_prompt,
messages=cast(Any, anthropic_messages),
tools=cast(Any, anthropic_tools) if anthropic_tools else [],
) as stream:
async for event in stream:
if event.type == "content_block_start":
block = event.content_block
if hasattr(block, "type"):
if block.type == "text" and not has_started_text:
yield StreamTextStart(id=text_block_id)
has_started_text = True
elif block.type == "tool_use":
yield StreamToolInputStart(
toolCallId=block.id, toolName=block.name
)
elif event.type == "content_block_delta":
delta = event.delta
if hasattr(delta, "type") and delta.type == "text_delta":
accumulated_text += delta.text
yield StreamTextDelta(id=text_block_id, delta=delta.text)
final_message = await stream.get_final_message()
if final_message.stop_reason == "tool_use":
if has_started_text:
yield StreamTextEnd(id=text_block_id)
has_started_text = False
text_block_id = str(uuid.uuid4())
tool_results = []
assistant_content: list[dict[str, Any]] = []
for block in final_message.content:
if block.type == "text":
assistant_content.append(
{"type": "text", "text": block.text}
)
elif block.type == "tool_use":
assistant_content.append(
{
"type": "tool_use",
"id": block.id,
"name": block.name,
"input": block.input,
}
)
# Track tool call for session persistence
accumulated_tool_calls.append(
{
"id": block.id,
"type": "function",
"function": {
"name": block.name,
"arguments": json.dumps(
block.input
if isinstance(block.input, dict)
else {}
),
},
}
)
yield StreamToolInputAvailable(
toolCallId=block.id,
toolName=block.name,
input=(
block.input if isinstance(block.input, dict) else {}
),
)
output, is_error = await _execute_tool(
block.name, block.input, tool_handlers
)
yield StreamToolOutputAvailable(
toolCallId=block.id,
toolName=block.name,
output=output,
success=not is_error,
)
# Save tool result to session
session.messages.append(
ChatMessage(
role="tool",
content=output,
tool_call_id=block.id,
)
)
tool_results.append(
{
"type": "tool_result",
"tool_use_id": block.id,
"content": output,
"is_error": is_error,
}
)
# Save assistant message with tool calls to session
session.messages.append(
ChatMessage(
role="assistant",
content=accumulated_text or None,
tool_calls=(
accumulated_tool_calls
if accumulated_tool_calls
else None
),
)
)
# Reset for next iteration
accumulated_text = ""
accumulated_tool_calls = []
anthropic_messages.append(
{"role": "assistant", "content": assistant_content}
)
anthropic_messages.append({"role": "user", "content": tool_results})
continue
else:
if has_started_text:
yield StreamTextEnd(id=text_block_id)
# Save final assistant response to session
if accumulated_text:
session.messages.append(
ChatMessage(role="assistant", content=accumulated_text)
)
yield StreamUsage(
promptTokens=final_message.usage.input_tokens,
completionTokens=final_message.usage.output_tokens,
totalTokens=final_message.usage.input_tokens
+ final_message.usage.output_tokens,
)
yield StreamFinish()
return
except Exception as e:
logger.error(f"[Anthropic Fallback] Error: {e}", exc_info=True)
yield StreamError(
errorText="An error occurred. Please try again.",
code="anthropic_error",
)
yield StreamFinish()
return
yield StreamError(errorText="Max tool iterations reached", code="max_iterations")
yield StreamFinish()
def _convert_session_to_anthropic(session: ChatSession) -> list[dict[str, Any]]:
"""Convert session messages to Anthropic format.
Handles merging consecutive same-role messages (Anthropic requires alternating roles).
"""
messages: list[dict[str, Any]] = []
for msg in session.messages:
if msg.role == "user":
new_msg = {"role": "user", "content": msg.content or ""}
elif msg.role == "assistant":
content: list[dict[str, Any]] = []
if msg.content:
content.append({"type": "text", "text": msg.content})
if msg.tool_calls:
for tc in msg.tool_calls:
func = tc.get("function", {})
args = func.get("arguments", {})
if isinstance(args, str):
try:
args = json.loads(args)
except json.JSONDecodeError:
args = {}
content.append(
{
"type": "tool_use",
"id": tc.get("id", str(uuid.uuid4())),
"name": func.get("name", ""),
"input": args,
}
)
if content:
new_msg = {"role": "assistant", "content": content}
else:
continue # Skip empty assistant messages
elif msg.role == "tool":
new_msg = {
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": msg.tool_call_id or "",
"content": msg.content or "",
}
],
}
else:
continue
messages.append(new_msg)
# Merge consecutive same-role messages (Anthropic requires alternating roles)
return _merge_consecutive_roles(messages)
def _merge_consecutive_roles(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
"""Merge consecutive messages with the same role.
Anthropic API requires alternating user/assistant roles.
"""
if not messages:
return []
merged: list[dict[str, Any]] = []
for msg in messages:
if merged and merged[-1]["role"] == msg["role"]:
# Merge with previous message
prev_content = merged[-1]["content"]
new_content = msg["content"]
# Normalize both to list-of-blocks form
if isinstance(prev_content, str):
prev_content = [{"type": "text", "text": prev_content}]
if isinstance(new_content, str):
new_content = [{"type": "text", "text": new_content}]
# Ensure both are lists
if not isinstance(prev_content, list):
prev_content = [prev_content]
if not isinstance(new_content, list):
new_content = [new_content]
merged[-1]["content"] = prev_content + new_content
else:
merged.append(msg)
return merged
async def _execute_tool(
tool_name: str, tool_input: Any, handlers: dict[str, Any]
) -> tuple[str, bool]:
"""Execute a tool and return (output, is_error)."""
handler = handlers.get(tool_name)
if not handler:
return f"Unknown tool: {tool_name}", True
try:
result = await handler(tool_input)
# Safely extract output - handle empty or missing content
content = result.get("content") or []
if content and isinstance(content, list) and len(content) > 0:
first_item = content[0]
output = first_item.get("text", "") if isinstance(first_item, dict) else ""
else:
output = ""
is_error = result.get("isError", False)
return output, is_error
except Exception as e:
return f"Error: {str(e)}", True

View File

@@ -0,0 +1,300 @@
"""Response adapter for converting Claude Agent SDK messages to Vercel AI SDK format.
This module provides the adapter layer that converts streaming messages from
the Claude Agent SDK into the Vercel AI SDK UI Stream Protocol format that
the frontend expects.
"""
import json
import logging
import uuid
from typing import Any, AsyncGenerator
from backend.api.features.chat.response_model import (
StreamBaseResponse,
StreamError,
StreamFinish,
StreamHeartbeat,
StreamStart,
StreamTextDelta,
StreamTextEnd,
StreamTextStart,
StreamToolInputAvailable,
StreamToolInputStart,
StreamToolOutputAvailable,
StreamUsage,
)
logger = logging.getLogger(__name__)
class SDKResponseAdapter:
"""Adapter for converting Claude Agent SDK messages to Vercel AI SDK format.
This class maintains state during a streaming session to properly track
text blocks, tool calls, and message lifecycle.
"""
def __init__(self, message_id: str | None = None):
"""Initialize the adapter.
Args:
message_id: Optional message ID. If not provided, one will be generated.
"""
self.message_id = message_id or str(uuid.uuid4())
self.text_block_id = str(uuid.uuid4())
self.has_started_text = False
self.has_ended_text = False
self.current_tool_calls: dict[str, dict[str, Any]] = {}
self.task_id: str | None = None
def set_task_id(self, task_id: str) -> None:
"""Set the task ID for reconnection support."""
self.task_id = task_id
def convert_message(self, sdk_message: Any) -> list[StreamBaseResponse]:
"""Convert a single SDK message to Vercel AI SDK format.
Args:
sdk_message: A message from the Claude Agent SDK.
Returns:
List of StreamBaseResponse objects (may be empty or multiple).
"""
responses: list[StreamBaseResponse] = []
# Handle different SDK message types - use class name since SDK uses dataclasses
class_name = type(sdk_message).__name__
msg_subtype = getattr(sdk_message, "subtype", None)
if class_name == "SystemMessage":
if msg_subtype == "init":
# Session initialization - emit start
responses.append(
StreamStart(
messageId=self.message_id,
taskId=self.task_id,
)
)
elif class_name == "AssistantMessage":
# Assistant message with content blocks
content = getattr(sdk_message, "content", [])
for block in content:
# Check block type by class name (SDK uses dataclasses) or dict type
block_class = type(block).__name__
block_type = block.get("type") if isinstance(block, dict) else None
if block_class == "TextBlock" or block_type == "text":
# Text content
text = getattr(block, "text", None) or (
block.get("text") if isinstance(block, dict) else ""
)
if text:
# Start text block if needed (or restart after tool calls)
if not self.has_started_text or self.has_ended_text:
# Generate new text block ID for text after tools
if self.has_ended_text:
self.text_block_id = str(uuid.uuid4())
self.has_ended_text = False
responses.append(StreamTextStart(id=self.text_block_id))
self.has_started_text = True
# Emit text delta
responses.append(
StreamTextDelta(
id=self.text_block_id,
delta=text,
)
)
elif block_class == "ToolUseBlock" or block_type == "tool_use":
# Tool call
tool_id_raw = getattr(block, "id", None) or (
block.get("id") if isinstance(block, dict) else None
)
tool_id: str = (
str(tool_id_raw) if tool_id_raw else str(uuid.uuid4())
)
tool_name_raw = getattr(block, "name", None) or (
block.get("name") if isinstance(block, dict) else None
)
tool_name: str = str(tool_name_raw) if tool_name_raw else "unknown"
tool_input = getattr(block, "input", None) or (
block.get("input") if isinstance(block, dict) else {}
)
# End text block if we were streaming text
if self.has_started_text and not self.has_ended_text:
responses.append(StreamTextEnd(id=self.text_block_id))
self.has_ended_text = True
# Emit tool input start
responses.append(
StreamToolInputStart(
toolCallId=tool_id,
toolName=tool_name,
)
)
# Emit tool input available with full input
responses.append(
StreamToolInputAvailable(
toolCallId=tool_id,
toolName=tool_name,
input=tool_input if isinstance(tool_input, dict) else {},
)
)
# Track the tool call
self.current_tool_calls[tool_id] = {
"name": tool_name,
"input": tool_input,
}
elif class_name in ("ToolResultMessage", "UserMessage"):
# Tool result - check for tool_result content
content = getattr(sdk_message, "content", [])
for block in content:
block_class = type(block).__name__
block_type = block.get("type") if isinstance(block, dict) else None
if block_class == "ToolResultBlock" or block_type == "tool_result":
tool_use_id = getattr(block, "tool_use_id", None) or (
block.get("tool_use_id") if isinstance(block, dict) else None
)
result_content = getattr(block, "content", None) or (
block.get("content") if isinstance(block, dict) else ""
)
is_error = getattr(block, "is_error", False) or (
block.get("is_error", False)
if isinstance(block, dict)
else False
)
if tool_use_id:
tool_info = self.current_tool_calls.get(tool_use_id, {})
tool_name = tool_info.get("name", "unknown")
# Format the output
if isinstance(result_content, list):
# Extract text from content blocks
output_text = ""
for item in result_content:
if (
isinstance(item, dict)
and item.get("type") == "text"
):
output_text += item.get("text", "")
elif hasattr(item, "text"):
output_text += getattr(item, "text", "")
output = output_text
elif isinstance(result_content, str):
output = result_content
else:
output = json.dumps(result_content)
responses.append(
StreamToolOutputAvailable(
toolCallId=tool_use_id,
toolName=tool_name,
output=output,
success=not is_error,
)
)
elif class_name == "ResultMessage":
# Final result
if msg_subtype == "success":
# End text block if still open
if self.has_started_text and not self.has_ended_text:
responses.append(StreamTextEnd(id=self.text_block_id))
self.has_ended_text = True
# Emit finish
responses.append(StreamFinish())
elif msg_subtype in ("error", "error_during_execution"):
error_msg = getattr(sdk_message, "error", "Unknown error")
responses.append(
StreamError(
errorText=str(error_msg),
code="sdk_error",
)
)
responses.append(StreamFinish())
elif class_name == "ErrorMessage":
# Error message
error_msg = getattr(sdk_message, "message", None) or getattr(
sdk_message, "error", "Unknown error"
)
responses.append(
StreamError(
errorText=str(error_msg),
code="sdk_error",
)
)
responses.append(StreamFinish())
return responses
def create_heartbeat(self, tool_call_id: str | None = None) -> StreamHeartbeat:
"""Create a heartbeat response."""
return StreamHeartbeat(toolCallId=tool_call_id)
def create_usage(
self,
prompt_tokens: int,
completion_tokens: int,
) -> StreamUsage:
"""Create a usage statistics response."""
return StreamUsage(
promptTokens=prompt_tokens,
completionTokens=completion_tokens,
totalTokens=prompt_tokens + completion_tokens,
)
async def adapt_sdk_stream(
sdk_stream: AsyncGenerator[Any, None],
message_id: str | None = None,
task_id: str | None = None,
) -> AsyncGenerator[StreamBaseResponse, None]:
"""Adapt a Claude Agent SDK stream to Vercel AI SDK format.
Args:
sdk_stream: The async generator from the Claude Agent SDK.
message_id: Optional message ID for the response.
task_id: Optional task ID for reconnection support.
Yields:
StreamBaseResponse objects in Vercel AI SDK format.
"""
adapter = SDKResponseAdapter(message_id=message_id)
if task_id:
adapter.set_task_id(task_id)
# Emit start immediately
yield StreamStart(messageId=adapter.message_id, taskId=task_id)
try:
async for sdk_message in sdk_stream:
responses = adapter.convert_message(sdk_message)
for response in responses:
# Skip duplicate start messages
if isinstance(response, StreamStart):
continue
yield response
except Exception as e:
logger.error(f"Error in SDK stream: {e}", exc_info=True)
yield StreamError(
errorText=f"Stream error: {str(e)}",
code="stream_error",
)
yield StreamFinish()

View File

@@ -0,0 +1,278 @@
"""Security hooks for Claude Agent SDK integration.
This module provides security hooks that validate tool calls before execution,
ensuring multi-user isolation and preventing unauthorized operations.
"""
import logging
import re
from typing import Any, cast
logger = logging.getLogger(__name__)
# Tools that are blocked entirely (CLI/system access)
BLOCKED_TOOLS = {
"Bash",
"bash",
"shell",
"exec",
"terminal",
"command",
"Read", # Block raw file read - use workspace tools instead
"Write", # Block raw file write - use workspace tools instead
"Edit", # Block raw file edit - use workspace tools instead
"Glob", # Block raw file glob - use workspace tools instead
"Grep", # Block raw file grep - use workspace tools instead
}
# Dangerous patterns in tool inputs
DANGEROUS_PATTERNS = [
r"sudo",
r"rm\s+-rf",
r"dd\s+if=",
r"/etc/passwd",
r"/etc/shadow",
r"chmod\s+777",
r"curl\s+.*\|.*sh",
r"wget\s+.*\|.*sh",
r"eval\s*\(",
r"exec\s*\(",
r"__import__",
r"os\.system",
r"subprocess",
]
def _validate_tool_access(tool_name: str, tool_input: dict[str, Any]) -> dict[str, Any]:
"""Validate that a tool call is allowed.
Returns:
Empty dict to allow, or dict with hookSpecificOutput to deny
"""
# Block forbidden tools
if tool_name in BLOCKED_TOOLS:
logger.warning(f"Blocked tool access attempt: {tool_name}")
return {
"hookSpecificOutput": {
"hookEventName": "PreToolUse",
"permissionDecision": "deny",
"permissionDecisionReason": (
f"Tool '{tool_name}' is not available. "
"Use the CoPilot-specific tools instead."
),
}
}
# Check for dangerous patterns in tool input
input_str = str(tool_input)
for pattern in DANGEROUS_PATTERNS:
if re.search(pattern, input_str, re.IGNORECASE):
logger.warning(
f"Blocked dangerous pattern in tool input: {pattern} in {tool_name}"
)
return {
"hookSpecificOutput": {
"hookEventName": "PreToolUse",
"permissionDecision": "deny",
"permissionDecisionReason": "Input contains blocked pattern",
}
}
return {}
def _validate_user_isolation(
tool_name: str, tool_input: dict[str, Any], user_id: str | None
) -> dict[str, Any]:
"""Validate that tool calls respect user isolation."""
# For workspace file tools, ensure path doesn't escape
if "workspace" in tool_name.lower():
path = tool_input.get("path", "") or tool_input.get("file_path", "")
if path:
# Check for path traversal
if ".." in path or path.startswith("/"):
logger.warning(
f"Blocked path traversal attempt: {path} by user {user_id}"
)
return {
"hookSpecificOutput": {
"hookEventName": "PreToolUse",
"permissionDecision": "deny",
"permissionDecisionReason": "Path traversal not allowed",
}
}
return {}
def create_security_hooks(user_id: str | None) -> dict[str, Any]:
"""Create the security hooks configuration for Claude Agent SDK.
Includes security validation and observability hooks:
- PreToolUse: Security validation before tool execution
- PostToolUse: Log successful tool executions
- PostToolUseFailure: Log and handle failed tool executions
- PreCompact: Log context compaction events (SDK handles compaction automatically)
Args:
user_id: Current user ID for isolation validation
Returns:
Hooks configuration dict for ClaudeAgentOptions
"""
try:
from claude_agent_sdk import HookMatcher
from claude_agent_sdk.types import HookContext, HookInput, SyncHookJSONOutput
async def pre_tool_use_hook(
input_data: HookInput,
tool_use_id: str | None,
context: HookContext,
) -> SyncHookJSONOutput:
"""Combined pre-tool-use validation hook."""
_ = context # unused but required by signature
tool_name = cast(str, input_data.get("tool_name", ""))
tool_input = cast(dict[str, Any], input_data.get("tool_input", {}))
# Validate basic tool access
result = _validate_tool_access(tool_name, tool_input)
if result:
return cast(SyncHookJSONOutput, result)
# Validate user isolation
result = _validate_user_isolation(tool_name, tool_input, user_id)
if result:
return cast(SyncHookJSONOutput, result)
logger.debug(f"[SDK] Tool start: {tool_name}, user={user_id}")
return cast(SyncHookJSONOutput, {})
async def post_tool_use_hook(
input_data: HookInput,
tool_use_id: str | None,
context: HookContext,
) -> SyncHookJSONOutput:
"""Log successful tool executions for observability."""
_ = context
tool_name = cast(str, input_data.get("tool_name", ""))
logger.debug(f"[SDK] Tool success: {tool_name}, tool_use_id={tool_use_id}")
return cast(SyncHookJSONOutput, {})
async def post_tool_failure_hook(
input_data: HookInput,
tool_use_id: str | None,
context: HookContext,
) -> SyncHookJSONOutput:
"""Log failed tool executions for debugging."""
_ = context
tool_name = cast(str, input_data.get("tool_name", ""))
error = input_data.get("error", "Unknown error")
logger.warning(
f"[SDK] Tool failed: {tool_name}, error={error}, "
f"user={user_id}, tool_use_id={tool_use_id}"
)
return cast(SyncHookJSONOutput, {})
async def pre_compact_hook(
input_data: HookInput,
tool_use_id: str | None,
context: HookContext,
) -> SyncHookJSONOutput:
"""Log when SDK triggers context compaction.
The SDK automatically compacts conversation history when it grows too large.
This hook provides visibility into when compaction happens.
"""
_ = context, tool_use_id
trigger = input_data.get("trigger", "auto")
logger.info(
f"[SDK] Context compaction triggered: {trigger}, user={user_id}"
)
return cast(SyncHookJSONOutput, {})
return {
"PreToolUse": [HookMatcher(matcher="*", hooks=[pre_tool_use_hook])],
"PostToolUse": [HookMatcher(matcher="*", hooks=[post_tool_use_hook])],
"PostToolUseFailure": [
HookMatcher(matcher="*", hooks=[post_tool_failure_hook])
],
"PreCompact": [HookMatcher(matcher="*", hooks=[pre_compact_hook])],
}
except ImportError:
# Fallback for when SDK isn't available - return empty hooks
return {}
def create_strict_security_hooks(
user_id: str | None,
allowed_tools: list[str] | None = None,
) -> dict[str, Any]:
"""Create strict security hooks that only allow specific tools.
Args:
user_id: Current user ID
allowed_tools: List of allowed tool names (defaults to CoPilot tools)
Returns:
Hooks configuration dict
"""
try:
from claude_agent_sdk import HookMatcher
from claude_agent_sdk.types import HookContext, HookInput, SyncHookJSONOutput
from .tool_adapter import RAW_TOOL_NAMES
tools_list = allowed_tools if allowed_tools is not None else RAW_TOOL_NAMES
allowed_set = set(tools_list)
async def strict_pre_tool_use(
input_data: HookInput,
tool_use_id: str | None,
context: HookContext,
) -> SyncHookJSONOutput:
"""Strict validation that only allows whitelisted tools."""
_ = context # unused but required by signature
tool_name = cast(str, input_data.get("tool_name", ""))
tool_input = cast(dict[str, Any], input_data.get("tool_input", {}))
# Remove MCP prefix if present
clean_name = tool_name.removeprefix("mcp__copilot__")
if clean_name not in allowed_set:
logger.warning(f"Blocked non-whitelisted tool: {tool_name}")
return cast(
SyncHookJSONOutput,
{
"hookSpecificOutput": {
"hookEventName": "PreToolUse",
"permissionDecision": "deny",
"permissionDecisionReason": (
f"Tool '{tool_name}' is not in the allowed list"
),
}
},
)
# Run standard validations
result = _validate_tool_access(tool_name, tool_input)
if result:
return cast(SyncHookJSONOutput, result)
result = _validate_user_isolation(tool_name, tool_input, user_id)
if result:
return cast(SyncHookJSONOutput, result)
logger.debug(
f"[SDK Audit] Tool call: tool={tool_name}, "
f"user={user_id}, tool_use_id={tool_use_id}"
)
return cast(SyncHookJSONOutput, {})
return {
"PreToolUse": [
HookMatcher(matcher="*", hooks=[strict_pre_tool_use]),
],
}
except ImportError:
return {}
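A minimal sketch (illustrative, not part of the diff) of how the deny path behaves; the tool and user names here are made up for the example:

    # Path traversal on a workspace tool is denied before execution:
    decision = _validate_user_isolation(
        "workspace_write_file", {"path": "../secrets.env"}, "user-123"
    )
    assert decision["hookSpecificOutput"]["permissionDecision"] == "deny"

    # A safe relative path passes (an empty dict means "allow"):
    assert _validate_user_isolation(
        "workspace_read_file", {"path": "notes.txt"}, "user-123"
    ) == {}

    # The full hooks dict plugs straight into ClaudeAgentOptions(hooks=...):
    hooks = create_security_hooks(user_id="user-123")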

View File

@@ -0,0 +1,471 @@
"""Claude Agent SDK service layer for CoPilot chat completions."""
import asyncio
import json
import logging
import uuid
from collections.abc import AsyncGenerator
from typing import Any
import openai
from backend.data.understanding import (
format_understanding_for_prompt,
get_business_understanding,
)
from backend.util.exceptions import NotFoundError
from ..config import ChatConfig
from ..model import (
ChatMessage,
ChatSession,
get_chat_session,
update_session_title,
upsert_chat_session,
)
from ..response_model import (
StreamBaseResponse,
StreamError,
StreamFinish,
StreamStart,
StreamTextDelta,
StreamToolInputAvailable,
StreamToolOutputAvailable,
)
from ..tracking import track_user_message
from .anthropic_fallback import stream_with_anthropic
from .response_adapter import SDKResponseAdapter
from .security_hooks import create_security_hooks
from .tool_adapter import (
COPILOT_TOOL_NAMES,
create_copilot_mcp_server,
set_execution_context,
)
logger = logging.getLogger(__name__)
config = ChatConfig()
# Set to hold background tasks to prevent garbage collection
_background_tasks: set[asyncio.Task[Any]] = set()
DEFAULT_SYSTEM_PROMPT = """You are **Otto**, an AI Co-Pilot for AutoGPT and a Forward-Deployed Automation Engineer serving small business owners. Your mission is to help users automate business tasks with AI by delivering tangible value through working automations—not through documentation or lengthy explanations.
Here is everything you know about the current user from previous interactions:
<users_information>
{users_information}
</users_information>
## YOUR CORE MANDATE
You are action-oriented. Your success is measured by:
- **Value Delivery**: Does the user think "wow, that was amazing" or "what was the point"?
- **Demonstrable Proof**: Show working automations, not descriptions of what's possible
- **Time Saved**: Focus on tangible efficiency gains
- **Quality Output**: Deliver results that meet or exceed expectations
## YOUR WORKFLOW
Adapt flexibly to the conversation context. Not every interaction requires all stages:
1. **Explore & Understand**: Learn about the user's business, tasks, and goals. Use `add_understanding` to capture important context that will improve future conversations.
2. **Assess Automation Potential**: Help the user understand whether and how AI can automate their task.
3. **Prepare for AI**: Provide brief, actionable guidance on prerequisites (data, access, etc.).
4. **Discover or Create Agents**:
- **Always check the user's library first** with `find_library_agent` (these may be customized to their needs)
- Search the marketplace with `find_agent` for pre-built automations
- Find reusable components with `find_block`
- Create custom solutions with `create_agent` if nothing suitable exists
- Modify existing library agents with `edit_agent`
5. **Execute**: Run automations immediately, schedule them, or set up webhooks using `run_agent`. Test specific components with `run_block`.
6. **Show Results**: Display outputs using `agent_output`.
## BEHAVIORAL GUIDELINES
**Be Concise:**
- Target 2-5 short lines maximum
- Make every word count—no repetition or filler
- Use lightweight structure for scannability (bullets, numbered lists, short prompts)
- Avoid jargon (blocks, slugs, cron) unless the user asks
**Be Proactive:**
- Suggest next steps before being asked
- Anticipate needs based on conversation context and user information
- Look for opportunities to expand scope when relevant
- Reveal capabilities through action, not explanation
**Use Tools Effectively:**
- Select the right tool for each task
- **Always check `find_library_agent` before searching the marketplace**
- Use `add_understanding` to capture valuable business context
- When tool calls fail, try alternative approaches
## CRITICAL REMINDER
You are NOT a chatbot. You are NOT documentation. You are a partner who helps busy business owners get value quickly by showing proof through working automations. Bias toward action over explanation."""
async def _build_system_prompt(
user_id: str | None, has_conversation_history: bool = False
) -> tuple[str, Any]:
"""Build the system prompt with user's business understanding context.
Args:
user_id: The user ID to fetch understanding for.
has_conversation_history: Whether there's existing conversation history.
If True, we don't tell the model to greet/introduce (since they're
already in a conversation).
"""
understanding = None
if user_id:
try:
understanding = await get_business_understanding(user_id)
except Exception as e:
logger.warning(f"Failed to fetch business understanding: {e}")
if understanding:
context = format_understanding_for_prompt(understanding)
elif has_conversation_history:
# Don't tell model to greet if there's conversation history
context = "No prior understanding saved yet. Continue the existing conversation naturally."
else:
context = "This is the first time you are meeting the user. Greet them and introduce them to the platform"
return DEFAULT_SYSTEM_PROMPT.format(users_information=context), understanding
def _format_conversation_history(session: ChatSession) -> str:
"""Format conversation history as a prompt context.
The SDK handles context compaction automatically, but we apply
max_context_messages as a safety guard to limit initial prompt size.
"""
if not session.messages:
return ""
# Get all messages except the last user message (which will be the prompt)
messages = session.messages[:-1] if session.messages else []
if not messages:
return ""
# Apply max_context_messages limit as a safety guard
# (SDK handles compaction, but this prevents excessively large initial prompts)
max_messages = config.max_context_messages
if len(messages) > max_messages:
messages = messages[-max_messages:]
history_parts = ["<conversation_history>"]
for msg in messages:
if msg.role == "user":
history_parts.append(f"User: {msg.content or ''}")
elif msg.role == "assistant":
# Pass full content - SDK handles compaction automatically
history_parts.append(f"Assistant: {msg.content or ''}")
if msg.tool_calls:
for tc in msg.tool_calls:
func = tc.get("function", {})
history_parts.append(
f" [Called tool: {func.get('name', 'unknown')}]"
)
elif msg.role == "tool":
# Pass full tool results - SDK handles compaction
history_parts.append(f" [Tool result: {msg.content or ''}]")
history_parts.append("</conversation_history>")
history_parts.append("")
history_parts.append(
"Continue this conversation. Respond to the user's latest message:"
)
history_parts.append("")
return "\n".join(history_parts)
async def _generate_session_title(
message: str,
user_id: str | None = None,
session_id: str | None = None,
) -> str | None:
"""Generate a concise title for a chat session."""
from backend.util.settings import Settings
settings = Settings()
try:
# Build extra_body for OpenRouter tracing
extra_body: dict[str, Any] = {
"posthogProperties": {"environment": settings.config.app_env.value},
}
if user_id:
extra_body["user"] = user_id[:128]
extra_body["posthogDistinctId"] = user_id
if session_id:
extra_body["session_id"] = session_id[:128]
client = openai.AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
response = await client.chat.completions.create(
model=config.title_model,
messages=[
{
"role": "system",
"content": "Generate a very short title (3-6 words) for a chat conversation based on the user's first message. Return ONLY the title, no quotes or punctuation.",
},
{"role": "user", "content": message[:500]},
],
max_tokens=20,
extra_body=extra_body,
)
title = response.choices[0].message.content
if title:
title = title.strip().strip("\"'")
return title[:47] + "..." if len(title) > 50 else title
return None
except Exception as e:
logger.warning(f"Failed to generate session title: {e}")
return None
async def stream_chat_completion_sdk(
session_id: str,
message: str | None = None,
tool_call_response: str | None = None, # noqa: ARG001
is_user_message: bool = True,
user_id: str | None = None,
retry_count: int = 0, # noqa: ARG001
session: ChatSession | None = None,
context: dict[str, str] | None = None, # noqa: ARG001
) -> AsyncGenerator[StreamBaseResponse, None]:
"""Stream chat completion using Claude Agent SDK.
Drop-in replacement for stream_chat_completion with improved reliability.
"""
if session is None:
session = await get_chat_session(session_id, user_id)
if not session:
raise NotFoundError(
f"Session {session_id} not found. Please create a new session first."
)
if message:
session.messages.append(
ChatMessage(
role="user" if is_user_message else "assistant", content=message
)
)
if is_user_message:
track_user_message(
user_id=user_id, session_id=session_id, message_length=len(message)
)
session = await upsert_chat_session(session)
# Generate title for new sessions (first user message)
if is_user_message and not session.title:
user_messages = [m for m in session.messages if m.role == "user"]
if len(user_messages) == 1:
first_message = user_messages[0].content or message or ""
if first_message:
task = asyncio.create_task(
_update_title_async(session_id, first_message, user_id)
)
# Store reference to prevent garbage collection
_background_tasks.add(task)
task.add_done_callback(_background_tasks.discard)
# Check if there's conversation history (more than just the current message)
has_history = len(session.messages) > 1
system_prompt, _ = await _build_system_prompt(
user_id, has_conversation_history=has_history
)
set_execution_context(user_id, session, None)
message_id = str(uuid.uuid4())
text_block_id = str(uuid.uuid4())
task_id = str(uuid.uuid4())
yield StreamStart(messageId=message_id, taskId=task_id)
# Track whether the stream completed normally via ResultMessage
stream_completed = False
try:
try:
from claude_agent_sdk import ClaudeAgentOptions, ClaudeSDKClient
# Create MCP server with CoPilot tools
mcp_server = create_copilot_mcp_server()
options = ClaudeAgentOptions(
system_prompt=system_prompt,
mcp_servers={"copilot": mcp_server}, # type: ignore[arg-type]
allowed_tools=COPILOT_TOOL_NAMES,
hooks=create_security_hooks(user_id), # type: ignore[arg-type]
continue_conversation=True, # Enable conversation continuation
)
adapter = SDKResponseAdapter(message_id=message_id)
adapter.set_task_id(task_id)
async with ClaudeSDKClient(options=options) as client:
# Build prompt with conversation history for context
# The SDK doesn't support replaying full conversation history,
# so we include it as context in the prompt
current_message = message or ""
if not current_message and session.messages:
last_user = [m for m in session.messages if m.role == "user"]
if last_user:
current_message = last_user[-1].content or ""
# Include conversation history if there are prior messages
if len(session.messages) > 1:
history_context = _format_conversation_history(session)
prompt = f"{history_context}{current_message}"
else:
prompt = current_message
# Guard against empty prompts
if not prompt.strip():
yield StreamError(
errorText="Message cannot be empty.",
code="empty_prompt",
)
yield StreamFinish()
return
await client.query(prompt, session_id=session_id)
# Track assistant response to save to session
# We may need multiple assistant messages if text comes after tool results
assistant_response = ChatMessage(role="assistant", content="")
accumulated_tool_calls: list[dict[str, Any]] = []
has_appended_assistant = False
has_tool_results = False # Track if we've received tool results
# Receive messages from the SDK
async for sdk_msg in client.receive_messages():
for response in adapter.convert_message(sdk_msg):
if isinstance(response, StreamStart):
continue
yield response
# Accumulate text deltas into assistant response
if isinstance(response, StreamTextDelta):
delta = response.delta or ""
# After tool results, create new assistant message for post-tool text
if has_tool_results and has_appended_assistant:
assistant_response = ChatMessage(
role="assistant", content=delta
)
accumulated_tool_calls = [] # Reset for new message
session.messages.append(assistant_response)
has_tool_results = False
else:
assistant_response.content = (
assistant_response.content or ""
) + delta
if not has_appended_assistant:
session.messages.append(assistant_response)
has_appended_assistant = True
# Track tool calls on the assistant message
elif isinstance(response, StreamToolInputAvailable):
accumulated_tool_calls.append(
{
"id": response.toolCallId,
"type": "function",
"function": {
"name": response.toolName,
"arguments": json.dumps(response.input or {}),
},
}
)
# Update assistant message with tool calls
assistant_response.tool_calls = accumulated_tool_calls
# Append assistant message if not already (tool-only response)
if not has_appended_assistant:
session.messages.append(assistant_response)
has_appended_assistant = True
elif isinstance(response, StreamToolOutputAvailable):
session.messages.append(
ChatMessage(
role="tool",
content=(
response.output
if isinstance(response.output, str)
else str(response.output)
),
tool_call_id=response.toolCallId,
)
)
has_tool_results = True
elif isinstance(response, StreamFinish):
stream_completed = True
# Break out of the message loop if we received finish signal
if stream_completed:
break
# Ensure assistant response is saved even if no text deltas
# (e.g., only tool calls were made)
if (
assistant_response.content or assistant_response.tool_calls
) and not has_appended_assistant:
session.messages.append(assistant_response)
except ImportError:
logger.warning(
"[SDK] claude-agent-sdk not available, using Anthropic fallback"
)
async for response in stream_with_anthropic(
session, system_prompt, text_block_id
):
yield response
# Save the session with accumulated messages
await upsert_chat_session(session)
logger.debug(
f"[SDK] Session {session_id} saved with {len(session.messages)} messages"
)
# Always yield StreamFinish to signal completion to the caller
# The adapter yields StreamFinish for the SSE stream, but we need to
# yield it here so the background task in routes.py knows to call mark_task_completed
yield StreamFinish()
except Exception as e:
logger.error(f"[SDK] Error: {e}", exc_info=True)
# Save session even on error to preserve any partial response
try:
await upsert_chat_session(session)
except Exception as save_err:
logger.error(f"[SDK] Failed to save session on error: {save_err}")
# Sanitize error message to avoid exposing internal details
yield StreamError(
errorText="An error occurred. Please try again.",
code="sdk_error",
)
yield StreamFinish()
async def _update_title_async(
session_id: str, message: str, user_id: str | None = None
) -> None:
"""Background task to update session title."""
try:
title = await _generate_session_title(
message, user_id=user_id, session_id=session_id
)
if title:
await update_session_title(session_id, title)
logger.debug(f"[SDK] Generated title for {session_id}: {title}")
except Exception as e:
logger.warning(f"[SDK] Failed to update session title: {e}")

View File

@@ -0,0 +1,213 @@
"""Tool adapter for wrapping existing CoPilot tools as Claude Agent SDK MCP tools.
This module provides the adapter layer that converts existing BaseTool implementations
into in-process MCP tools that can be used with the Claude Agent SDK.
"""
import json
import logging
from contextvars import ContextVar
from typing import Any
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.tools import TOOL_REGISTRY
from backend.api.features.chat.tools.base import BaseTool
logger = logging.getLogger(__name__)
# Context variables to pass user/session info to tool execution
_current_user_id: ContextVar[str | None] = ContextVar("current_user_id", default=None)
_current_session: ContextVar[ChatSession | None] = ContextVar(
"current_session", default=None
)
_current_tool_call_id: ContextVar[str | None] = ContextVar(
"current_tool_call_id", default=None
)
def set_execution_context(
user_id: str | None,
session: ChatSession,
tool_call_id: str | None = None,
) -> None:
"""Set the execution context for tool calls.
This must be called before streaming begins to ensure tools have access
to user_id and session information.
"""
_current_user_id.set(user_id)
_current_session.set(session)
_current_tool_call_id.set(tool_call_id)
def get_execution_context() -> tuple[str | None, ChatSession | None, str | None]:
"""Get the current execution context."""
return (
_current_user_id.get(),
_current_session.get(),
_current_tool_call_id.get(),
)
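# Illustrative usage (not part of the diff): the streaming entrypoint calls
# set_execution_context() once before the SDK starts dispatching tool calls,
# so every handler created below recovers the same user/session pair:
#
#   set_execution_context("user-123", session)        # session: ChatSession
#   user_id, sess, call_id = get_execution_context()  # ("user-123", session, None)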
def create_tool_handler(base_tool: BaseTool):
"""Create an async handler function for a BaseTool.
This wraps the existing BaseTool._execute method to be compatible
with the Claude Agent SDK MCP tool format.
"""
async def tool_handler(args: dict[str, Any]) -> dict[str, Any]:
"""Execute the wrapped tool and return MCP-formatted response."""
user_id, session, tool_call_id = get_execution_context()
if session is None:
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"error": "No session context available",
"type": "error",
}
),
}
],
"isError": True,
}
try:
# Call the existing tool's execute method
result = await base_tool.execute(
user_id=user_id,
session=session,
tool_call_id=tool_call_id or "sdk-call",
**args,
)
# The result is a StreamToolOutputAvailable, extract the output
return {
"content": [
{
"type": "text",
"text": (
result.output
if isinstance(result.output, str)
else json.dumps(result.output)
),
}
],
"isError": not result.success,
}
except Exception as e:
logger.error(f"Error executing tool {base_tool.name}: {e}", exc_info=True)
return {
"content": [
{
"type": "text",
"text": json.dumps(
{
"error": str(e),
"type": "error",
"message": f"Failed to execute {base_tool.name}",
}
),
}
],
"isError": True,
}
return tool_handler
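# Illustrative: a successful handler result follows the MCP tool-result shape
#   {"content": [{"type": "text", "text": "..."}], "isError": False}
# while the session-missing and exception paths return the same shape with
# isError=True and a JSON error payload in the text field.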
def get_tool_definitions() -> list[dict[str, Any]]:
"""Get all tool definitions in MCP format.
Returns a list of tool definitions that can be used with
create_sdk_mcp_server or as raw tool definitions.
"""
tool_definitions = []
for tool_name, base_tool in TOOL_REGISTRY.items():
tool_def = {
"name": tool_name,
"description": base_tool.description,
"inputSchema": {
"type": "object",
"properties": base_tool.parameters.get("properties", {}),
"required": base_tool.parameters.get("required", []),
},
}
tool_definitions.append(tool_def)
return tool_definitions
def get_tool_handlers() -> dict[str, Any]:
"""Get all tool handlers mapped by name.
Returns a dictionary mapping tool names to their handler functions.
"""
handlers = {}
for tool_name, base_tool in TOOL_REGISTRY.items():
handlers[tool_name] = create_tool_handler(base_tool)
return handlers
# Create the MCP server configuration
def create_copilot_mcp_server():
"""Create an in-process MCP server configuration for CoPilot tools.
This can be passed to ClaudeAgentOptions.mcp_servers.
Note: The actual SDK MCP server creation depends on the claude-agent-sdk
package being available. This function returns the configuration that
can be used with the SDK.
"""
try:
from claude_agent_sdk import create_sdk_mcp_server, tool
# Create decorated tool functions
sdk_tools = []
for tool_name, base_tool in TOOL_REGISTRY.items():
# Get the handler
handler = create_tool_handler(base_tool)
# Create the decorated tool
# The @tool decorator expects (name, description, schema)
decorated = tool(
tool_name,
base_tool.description,
base_tool.parameters.get("properties", {}),
)(handler)
sdk_tools.append(decorated)
# Create the MCP server
server = create_sdk_mcp_server(
name="copilot",
version="1.0.0",
tools=sdk_tools,
)
return server
except ImportError:
logger.warning(
"claude-agent-sdk not available, returning tool definitions only"
)
return {
"tools": get_tool_definitions(),
"handlers": get_tool_handlers(),
}
# List of tool names for allowed_tools configuration
COPILOT_TOOL_NAMES = [f"mcp__copilot__{name}" for name in TOOL_REGISTRY.keys()]
# Also export the raw tool names for flexibility
RAW_TOOL_NAMES = list(TOOL_REGISTRY.keys())
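These two lists exist because the SDK addresses in-process MCP tools as `mcp__{server}__{tool}`, while the security hooks whitelist bare names. A small illustration, assuming a registry containing `find_agent` and `run_agent`:

    # With TOOL_REGISTRY keys ["find_agent", "run_agent"]:
    #   COPILOT_TOOL_NAMES == ["mcp__copilot__find_agent", "mcp__copilot__run_agent"]
    #   RAW_TOOL_NAMES     == ["find_agent", "run_agent"]
    # create_strict_security_hooks() strips the prefix back off before checking:
    #   "mcp__copilot__find_agent".removeprefix("mcp__copilot__")  # -> "find_agent"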

View File

@@ -52,10 +52,8 @@ from .response_model import (
StreamBaseResponse,
StreamError,
StreamFinish,
StreamFinishStep,
StreamHeartbeat,
StreamStart,
StreamStartStep,
StreamTextDelta,
StreamTextEnd,
StreamTextStart,
@@ -353,10 +351,6 @@ async def stream_chat_completion(
retry_count: int = 0,
session: ChatSession | None = None,
context: dict[str, str] | None = None, # {url: str, content: str}
_continuation_message_id: (
str | None
) = None, # Internal: reuse message ID for tool call continuations
_task_id: str | None = None, # Internal: task ID for SSE reconnection support
) -> AsyncGenerator[StreamBaseResponse, None]:
"""Main entry point for streaming chat completions with database handling.
@@ -485,17 +479,11 @@ async def stream_chat_completion(
# Generate unique IDs for AI SDK protocol
import uuid as uuid_module
is_continuation = _continuation_message_id is not None
message_id = _continuation_message_id or str(uuid_module.uuid4())
message_id = str(uuid_module.uuid4())
text_block_id = str(uuid_module.uuid4())
# Only yield message start for the initial call, not for continuations.
# This is the single place where StreamStart is emitted (removed from routes.py).
if not is_continuation:
yield StreamStart(messageId=message_id, taskId=_task_id)
# Emit start-step before each LLM call (AI SDK uses this to add step boundaries)
yield StreamStartStep()
# Yield message start
yield StreamStart(messageId=message_id)
try:
async for chunk in _stream_chat_chunks(
@@ -597,10 +585,6 @@ async def stream_chat_completion(
)
yield chunk
elif isinstance(chunk, StreamFinish):
if has_done_tool_call:
# Tool calls happened — close the step but don't send message-level finish.
# The continuation will open a new step, and finish will come at the end.
yield StreamFinishStep()
if not has_done_tool_call:
# Emit text-end before finish if we received text but haven't closed it
if has_received_text and not text_streaming_ended:
@@ -632,8 +616,6 @@ async def stream_chat_completion(
has_saved_assistant_message = True
has_yielded_end = True
# Emit finish-step before finish (resets AI SDK text/reasoning state)
yield StreamFinishStep()
yield chunk
elif isinstance(chunk, StreamError):
has_yielded_error = True
@@ -718,7 +700,6 @@ async def stream_chat_completion(
error_response = StreamError(errorText=error_message)
yield error_response
if not has_yielded_end:
yield StreamFinishStep()
yield StreamFinish()
return
@@ -733,8 +714,6 @@ async def stream_chat_completion(
retry_count=retry_count + 1,
session=session,
context=context,
_continuation_message_id=message_id, # Reuse message ID since start was already sent
_task_id=_task_id,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
@@ -804,8 +783,6 @@ async def stream_chat_completion(
session=session, # Pass session object to avoid Redis refetch
context=context,
tool_call_response=str(tool_response_messages),
_continuation_message_id=message_id, # Reuse message ID to avoid duplicates
_task_id=_task_id,
):
yield chunk
@@ -1588,7 +1565,6 @@ async def _execute_long_running_tool_with_streaming(
task_id,
StreamError(errorText=str(e)),
)
await stream_registry.publish_chunk(task_id, StreamFinishStep())
await stream_registry.publish_chunk(task_id, StreamFinish())
await _update_pending_operation(
@@ -1846,7 +1822,6 @@ async def _generate_llm_continuation_with_streaming(
# Publish start event
await stream_registry.publish_chunk(task_id, StreamStart(messageId=message_id))
await stream_registry.publish_chunk(task_id, StreamStartStep())
await stream_registry.publish_chunk(task_id, StreamTextStart(id=text_block_id))
# Stream the response
@@ -1870,7 +1845,6 @@ async def _generate_llm_continuation_with_streaming(
# Publish end events
await stream_registry.publish_chunk(task_id, StreamTextEnd(id=text_block_id))
await stream_registry.publish_chunk(task_id, StreamFinishStep())
if assistant_content:
# Reload session from DB to avoid race condition with user messages
@@ -1912,5 +1886,4 @@ async def _generate_llm_continuation_with_streaming(
task_id,
StreamError(errorText=f"Failed to generate response: {e}"),
)
await stream_registry.publish_chunk(task_id, StreamFinishStep())
await stream_registry.publish_chunk(task_id, StreamFinish())

View File

@@ -555,6 +555,10 @@ async def get_active_task_for_session(
if task_user_id and user_id != task_user_id:
continue
logger.info(
f"[TASK_LOOKUP] Found running task {task_id[:8]}... for session {session_id[:8]}..."
)
# Get the last message ID from Redis Stream
stream_key = _get_task_stream_key(task_id)
last_id = "0-0"
@@ -598,10 +602,8 @@ def _reconstruct_chunk(chunk_data: dict) -> StreamBaseResponse | None:
ResponseType,
StreamError,
StreamFinish,
StreamFinishStep,
StreamHeartbeat,
StreamStart,
StreamStartStep,
StreamTextDelta,
StreamTextEnd,
StreamTextStart,
@@ -615,8 +617,6 @@ def _reconstruct_chunk(chunk_data: dict) -> StreamBaseResponse | None:
type_to_class: dict[str, type[StreamBaseResponse]] = {
ResponseType.START.value: StreamStart,
ResponseType.FINISH.value: StreamFinish,
ResponseType.START_STEP.value: StreamStartStep,
ResponseType.FINISH_STEP.value: StreamFinishStep,
ResponseType.TEXT_START.value: StreamTextStart,
ResponseType.TEXT_DELTA.value: StreamTextDelta,
ResponseType.TEXT_END.value: StreamTextEnd,

View File

@@ -7,15 +7,7 @@ from typing import Any, NotRequired, TypedDict
from backend.api.features.library import db as library_db
from backend.api.features.store import db as store_db
from backend.data.graph import (
Graph,
Link,
Node,
create_graph,
get_graph,
get_graph_all_versions,
get_store_listed_graphs,
)
from backend.data.graph import Graph, Link, Node, get_graph, get_store_listed_graphs
from backend.util.exceptions import DatabaseError, NotFoundError
from .service import (
@@ -28,8 +20,6 @@ from .service import (
logger = logging.getLogger(__name__)
AGENT_EXECUTOR_BLOCK_ID = "e189baac-8c20-45a1-94a7-55177ea42565"
class ExecutionSummary(TypedDict):
"""Summary of a single execution for quality assessment."""
@@ -669,45 +659,6 @@ def json_to_graph(agent_json: dict[str, Any]) -> Graph:
)
def _reassign_node_ids(graph: Graph) -> None:
"""Reassign all node and link IDs to new UUIDs.
This is needed when creating a new version to avoid unique constraint violations.
"""
id_map = {node.id: str(uuid.uuid4()) for node in graph.nodes}
for node in graph.nodes:
node.id = id_map[node.id]
for link in graph.links:
link.id = str(uuid.uuid4())
if link.source_id in id_map:
link.source_id = id_map[link.source_id]
if link.sink_id in id_map:
link.sink_id = id_map[link.sink_id]
def _populate_agent_executor_user_ids(agent_json: dict[str, Any], user_id: str) -> None:
"""Populate user_id in AgentExecutorBlock nodes.
The external agent generator creates AgentExecutorBlock nodes with empty user_id.
This function fills in the actual user_id so sub-agents run with correct permissions.
Args:
agent_json: Agent JSON dict (modified in place)
user_id: User ID to set
"""
for node in agent_json.get("nodes", []):
if node.get("block_id") == AGENT_EXECUTOR_BLOCK_ID:
input_default = node.get("input_default") or {}
if not input_default.get("user_id"):
input_default["user_id"] = user_id
node["input_default"] = input_default
logger.debug(
f"Set user_id for AgentExecutorBlock node {node.get('id')}"
)
async def save_agent_to_library(
agent_json: dict[str, Any], user_id: str, is_update: bool = False
) -> tuple[Graph, Any]:
@@ -721,35 +672,10 @@ async def save_agent_to_library(
Returns:
Tuple of (created Graph, LibraryAgent)
"""
# Populate user_id in AgentExecutorBlock nodes before conversion
_populate_agent_executor_user_ids(agent_json, user_id)
graph = json_to_graph(agent_json)
if is_update:
if graph.id:
existing_versions = await get_graph_all_versions(graph.id, user_id)
if existing_versions:
latest_version = max(v.version for v in existing_versions)
graph.version = latest_version + 1
_reassign_node_ids(graph)
logger.info(f"Updating agent {graph.id} to version {graph.version}")
else:
graph.id = str(uuid.uuid4())
graph.version = 1
_reassign_node_ids(graph)
logger.info(f"Creating new agent with ID {graph.id}")
created_graph = await create_graph(graph, user_id)
library_agents = await library_db.create_library_agent(
graph=created_graph,
user_id=user_id,
sensitive_action_safe_mode=True,
create_library_agents_for_sub_graphs=False,
)
return created_graph, library_agents[0]
return await library_db.update_graph_in_library(graph, user_id)
return await library_db.create_graph_in_library(graph, user_id)
def graph_to_json(graph: Graph) -> dict[str, Any]:

View File

@@ -206,9 +206,9 @@ async def search_agents(
]
)
no_results_msg = (
f"No agents found matching '{query}'. Try different keywords or browse the marketplace."
f"No agents found matching '{query}'. Let the user know they can try different keywords or browse the marketplace. Also let them know you can create a custom agent for them based on their needs."
if source == "marketplace"
else f"No agents matching '{query}' found in your library."
else f"No agents matching '{query}' found in your library. Let the user know you can create a custom agent for them based on their needs."
)
return NoResultsResponse(
message=no_results_msg, session_id=session_id, suggestions=suggestions
@@ -224,10 +224,10 @@ async def search_agents(
message = (
"Now you have found some options for the user to choose from. "
"You can add a link to a recommended agent at: /marketplace/agent/agent_id "
"Please ask the user if they would like to use any of these agents."
"Please ask the user if they would like to use any of these agents. Let the user know we can create a custom agent for them based on their needs."
if source == "marketplace"
else "Found agents in the user's library. You can provide a link to view an agent at: "
"/library/agents/{agent_id}. Use agent_output to get execution results, or run_agent to execute."
"/library/agents/{agent_id}. Use agent_output to get execution results, or run_agent to execute. Let the user know we can create a custom agent for them based on their needs."
)
return AgentsFoundResponse(

View File

@@ -19,7 +19,10 @@ from backend.data.graph import GraphSettings
from backend.data.includes import AGENT_PRESET_INCLUDE, library_agent_include
from backend.data.model import CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.webhooks.graph_lifecycle_hooks import on_graph_activate
from backend.integrations.webhooks.graph_lifecycle_hooks import (
on_graph_activate,
on_graph_deactivate,
)
from backend.util.clients import get_scheduler_client
from backend.util.exceptions import DatabaseError, InvalidInputError, NotFoundError
from backend.util.json import SafeJson
@@ -537,6 +540,92 @@ async def update_agent_version_in_library(
return library_model.LibraryAgent.from_db(lib)
async def create_graph_in_library(
graph: graph_db.Graph,
user_id: str,
) -> tuple[graph_db.GraphModel, library_model.LibraryAgent]:
"""Create a new graph and add it to the user's library."""
graph.version = 1
graph_model = graph_db.make_graph_model(graph, user_id)
graph_model.reassign_ids(user_id=user_id, reassign_graph_id=True)
created_graph = await graph_db.create_graph(graph_model, user_id)
library_agents = await create_library_agent(
graph=created_graph,
user_id=user_id,
sensitive_action_safe_mode=True,
create_library_agents_for_sub_graphs=False,
)
if created_graph.is_active:
created_graph = await on_graph_activate(created_graph, user_id=user_id)
return created_graph, library_agents[0]
async def update_graph_in_library(
graph: graph_db.Graph,
user_id: str,
) -> tuple[graph_db.GraphModel, library_model.LibraryAgent]:
"""Create a new version of an existing graph and update the library entry."""
existing_versions = await graph_db.get_graph_all_versions(graph.id, user_id)
current_active_version = (
next((v for v in existing_versions if v.is_active), None)
if existing_versions
else None
)
graph.version = (
max(v.version for v in existing_versions) + 1 if existing_versions else 1
)
graph_model = graph_db.make_graph_model(graph, user_id)
graph_model.reassign_ids(user_id=user_id, reassign_graph_id=False)
created_graph = await graph_db.create_graph(graph_model, user_id)
library_agent = await get_library_agent_by_graph_id(user_id, created_graph.id)
if not library_agent:
raise NotFoundError(f"Library agent not found for graph {created_graph.id}")
library_agent = await update_library_agent_version_and_settings(
user_id, created_graph
)
if created_graph.is_active:
created_graph = await on_graph_activate(created_graph, user_id=user_id)
await graph_db.set_graph_active_version(
graph_id=created_graph.id,
version=created_graph.version,
user_id=user_id,
)
if current_active_version:
await on_graph_deactivate(current_active_version, user_id=user_id)
return created_graph, library_agent
async def update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_model.LibraryAgent:
"""Update library agent to point to new graph version and sync settings."""
library = await update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
updated_settings = GraphSettings.from_graph(
graph=agent_graph,
hitl_safe_mode=library.settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=library.settings.sensitive_action_safe_mode,
)
if updated_settings != library.settings:
library = await update_library_agent(
library_agent_id=library.id,
user_id=user_id,
settings=updated_settings,
)
return library
async def update_library_agent(
library_agent_id: str,
user_id: str,

View File

@@ -101,7 +101,6 @@ from backend.util.timezone_utils import (
from backend.util.virus_scanner import scan_content_safe
from .library import db as library_db
from .library import model as library_model
from .store.model import StoreAgentDetails
@@ -823,18 +822,16 @@ async def update_graph(
graph: graph_db.Graph,
user_id: Annotated[str, Security(get_user_id)],
) -> graph_db.GraphModel:
# Sanity check
if graph.id and graph.id != graph_id:
raise HTTPException(400, detail="Graph ID does not match ID in URI")
# Determine new version
existing_versions = await graph_db.get_graph_all_versions(graph_id, user_id=user_id)
if not existing_versions:
raise HTTPException(404, detail=f"Graph #{graph_id} not found")
latest_version_number = max(g.version for g in existing_versions)
graph.version = latest_version_number + 1
graph.version = max(g.version for g in existing_versions) + 1
current_active_version = next((v for v in existing_versions if v.is_active), None)
graph = graph_db.make_graph_model(graph, user_id)
graph.reassign_ids(user_id=user_id, reassign_graph_id=False)
graph.validate_graph(for_run=False)
@@ -842,27 +839,23 @@ async def update_graph(
new_graph_version = await graph_db.create_graph(graph, user_id=user_id)
if new_graph_version.is_active:
# Keep the library agent up to date with the new active version
await _update_library_agent_version_and_settings(user_id, new_graph_version)
# Handle activation of the new graph first to ensure continuity
await library_db.update_library_agent_version_and_settings(
user_id, new_graph_version
)
new_graph_version = await on_graph_activate(new_graph_version, user_id=user_id)
# Ensure new version is the only active version
await graph_db.set_graph_active_version(
graph_id=graph_id, version=new_graph_version.version, user_id=user_id
)
if current_active_version:
# Handle deactivation of the previously active version
await on_graph_deactivate(current_active_version, user_id=user_id)
# Fetch new graph version *with sub-graphs* (needed for credentials input schema)
new_graph_version_with_subgraphs = await graph_db.get_graph(
graph_id,
new_graph_version.version,
user_id=user_id,
include_subgraphs=True,
)
assert new_graph_version_with_subgraphs # make type checker happy
assert new_graph_version_with_subgraphs
return new_graph_version_with_subgraphs
@@ -900,33 +893,15 @@ async def set_graph_active_version(
)
# Keep the library agent up to date with the new active version
await _update_library_agent_version_and_settings(user_id, new_active_graph)
await library_db.update_library_agent_version_and_settings(
user_id, new_active_graph
)
if current_active_graph and current_active_graph.version != new_active_version:
# Handle deactivation of the previously active version
await on_graph_deactivate(current_active_graph, user_id=user_id)
async def _update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_model.LibraryAgent:
library = await library_db.update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
updated_settings = GraphSettings.from_graph(
graph=agent_graph,
hitl_safe_mode=library.settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=library.settings.sensitive_action_safe_mode,
)
if updated_settings != library.settings:
library = await library_db.update_library_agent(
library_agent_id=library.id,
user_id=user_id,
settings=updated_settings,
)
return library
@v1_router.patch(
path="/graphs/{graph_id}/settings",
summary="Update graph settings",

View File

@@ -0,0 +1,28 @@
"""ElevenLabs integration blocks - test credentials and shared utilities."""
from typing import Literal
from pydantic import SecretStr
from backend.data.model import APIKeyCredentials, CredentialsMetaInput
from backend.integrations.providers import ProviderName
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="elevenlabs",
api_key=SecretStr("mock-elevenlabs-api-key"),
title="Mock ElevenLabs API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
ElevenLabsCredentials = APIKeyCredentials
ElevenLabsCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.ELEVENLABS], Literal["api_key"]
]
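A sketch of how a block would typically reference these shared fixtures in its test configuration; the keyword names (`test_input`, `test_credentials`) follow the platform's existing block-test convention rather than anything shown in this diff:

    # Inside an ElevenLabs block's __init__ (illustrative):
    #   test_input={
    #       "credentials": TEST_CREDENTIALS_INPUT,
    #       "text": "Hello world",
    #   },
    #   test_credentials=TEST_CREDENTIALS,
    # The mock SecretStr key keeps block tests hermetic; no real API key needed.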

View File

@@ -0,0 +1,77 @@
"""Text encoding block for converting special characters to escape sequences."""
import codecs
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
class TextEncoderBlock(Block):
"""
Encodes a string by converting special characters into escape sequences.
This block is the inverse of TextDecoderBlock. It takes text containing
special characters (like newlines, tabs, etc.) and converts them into
their escape sequence representations (e.g., newline becomes \\n).
"""
class Input(BlockSchemaInput):
"""Input schema for TextEncoderBlock."""
text: str = SchemaField(
description="A string containing special characters to be encoded",
placeholder="Your text with newlines and quotes to encode",
)
class Output(BlockSchemaOutput):
"""Output schema for TextEncoderBlock."""
encoded_text: str = SchemaField(
description="The encoded text with special characters converted to escape sequences"
)
error: str = SchemaField(description="Error message if encoding fails")
def __init__(self):
super().__init__(
id="5185f32e-4b65-4ecf-8fbb-873f003f09d6",
description="Encodes a string by converting special characters into escape sequences",
categories={BlockCategory.TEXT},
input_schema=TextEncoderBlock.Input,
output_schema=TextEncoderBlock.Output,
test_input={
"text": """Hello
World!
This is a "quoted" string."""
},
test_output=[
(
"encoded_text",
"""Hello\\nWorld!\\nThis is a "quoted" string.""",
)
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
"""
Encode the input text by converting special characters to escape sequences.
Args:
input_data: The input containing the text to encode.
**kwargs: Additional keyword arguments (unused).
Yields:
The encoded text with escape sequences, or an error message if encoding fails.
"""
try:
encoded_text = codecs.encode(input_data.text, "unicode_escape").decode(
"utf-8"
)
yield "encoded_text", encoded_text
except Exception as e:
yield "error", f"Encoding error: {str(e)}"

View File

@@ -115,6 +115,7 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
CLAUDE_4_5_OPUS = "claude-opus-4-5-20251101"
CLAUDE_4_5_SONNET = "claude-sonnet-4-5-20250929"
CLAUDE_4_5_HAIKU = "claude-haiku-4-5-20251001"
CLAUDE_4_6_OPUS = "claude-opus-4-6"
CLAUDE_3_HAIKU = "claude-3-haiku-20240307"
# AI/ML API models
AIML_API_QWEN2_5_72B = "Qwen/Qwen2.5-72B-Instruct-Turbo"
@@ -270,6 +271,9 @@ MODEL_METADATA = {
LlmModel.CLAUDE_4_SONNET: ModelMetadata(
"anthropic", 200000, 64000, "Claude Sonnet 4", "Anthropic", "Anthropic", 2
), # claude-4-sonnet-20250514
LlmModel.CLAUDE_4_6_OPUS: ModelMetadata(
"anthropic", 200000, 128000, "Claude Opus 4.6", "Anthropic", "Anthropic", 3
), # claude-opus-4-6
LlmModel.CLAUDE_4_5_OPUS: ModelMetadata(
"anthropic", 200000, 64000, "Claude Opus 4.5", "Anthropic", "Anthropic", 3
), # claude-opus-4-5-20251101

View File

@@ -1,246 +0,0 @@
import os
import tempfile
from typing import Optional
from moviepy.audio.io.AudioFileClip import AudioFileClip
from moviepy.video.fx.Loop import Loop
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class MediaDurationBlock(Block):
class Input(BlockSchemaInput):
media_in: MediaFileType = SchemaField(
description="Media input (URL, data URI, or local path)."
)
is_video: bool = SchemaField(
description="Whether the media is a video (True) or audio (False).",
default=True,
)
class Output(BlockSchemaOutput):
duration: float = SchemaField(
description="Duration of the media file (in seconds)."
)
def __init__(self):
super().__init__(
id="d8b91fd4-da26-42d4-8ecb-8b196c6d84b6",
description="Block to get the duration of a media file.",
categories={BlockCategory.MULTIMEDIA},
input_schema=MediaDurationBlock.Input,
output_schema=MediaDurationBlock.Output,
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
# 1) Store the input media locally
local_media_path = await store_media_file(
file=input_data.media_in,
execution_context=execution_context,
return_format="for_local_processing",
)
assert execution_context.graph_exec_id is not None
media_abspath = get_exec_file_path(
execution_context.graph_exec_id, local_media_path
)
# 2) Load the clip
if input_data.is_video:
clip = VideoFileClip(media_abspath)
else:
clip = AudioFileClip(media_abspath)
yield "duration", clip.duration
class LoopVideoBlock(Block):
"""
Block for looping (repeating) a video clip until a given duration or number of loops.
"""
class Input(BlockSchemaInput):
video_in: MediaFileType = SchemaField(
description="The input video (can be a URL, data URI, or local path)."
)
# Provide EITHER a `duration` or `n_loops` or both. We'll demonstrate `duration`.
duration: Optional[float] = SchemaField(
description="Target duration (in seconds) to loop the video to. If omitted, defaults to no looping.",
default=None,
ge=0.0,
)
n_loops: Optional[int] = SchemaField(
description="Number of times to repeat the video. If omitted, defaults to 1 (no repeat).",
default=None,
ge=1,
)
class Output(BlockSchemaOutput):
video_out: str = SchemaField(
description="Looped video returned either as a relative path or a data URI."
)
def __init__(self):
super().__init__(
id="8bf9eef6-5451-4213-b265-25306446e94b",
description="Block to loop a video to a given duration or number of repeats.",
categories={BlockCategory.MULTIMEDIA},
input_schema=LoopVideoBlock.Input,
output_schema=LoopVideoBlock.Output,
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
assert execution_context.graph_exec_id is not None
assert execution_context.node_exec_id is not None
graph_exec_id = execution_context.graph_exec_id
node_exec_id = execution_context.node_exec_id
# 1) Store the input video locally
local_video_path = await store_media_file(
file=input_data.video_in,
execution_context=execution_context,
return_format="for_local_processing",
)
input_abspath = get_exec_file_path(graph_exec_id, local_video_path)
# 2) Load the clip
clip = VideoFileClip(input_abspath)
# 3) Apply the loop effect
looped_clip = clip
if input_data.duration:
# Loop until we reach the specified duration
looped_clip = looped_clip.with_effects([Loop(duration=input_data.duration)])
elif input_data.n_loops:
looped_clip = looped_clip.with_effects([Loop(n=input_data.n_loops)])
else:
raise ValueError("Either 'duration' or 'n_loops' must be provided.")
assert isinstance(looped_clip, VideoFileClip)
# 4) Save the looped output
output_filename = MediaFileType(
f"{node_exec_id}_looped_{os.path.basename(local_video_path)}"
)
output_abspath = get_exec_file_path(graph_exec_id, output_filename)
looped_clip = looped_clip.with_audio(clip.audio)
looped_clip.write_videofile(output_abspath, codec="libx264", audio_codec="aac")
# Return output - for_block_output returns workspace:// if available, else data URI
video_out = await store_media_file(
file=output_filename,
execution_context=execution_context,
return_format="for_block_output",
)
yield "video_out", video_out
class AddAudioToVideoBlock(Block):
"""
Block that adds (attaches) an audio track to an existing video.
Optionally scale the volume of the new track.
"""
class Input(BlockSchemaInput):
video_in: MediaFileType = SchemaField(
description="Video input (URL, data URI, or local path)."
)
audio_in: MediaFileType = SchemaField(
description="Audio input (URL, data URI, or local path)."
)
volume: float = SchemaField(
description="Volume scale for the newly attached audio track (1.0 = original).",
default=1.0,
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Final video (with attached audio), as a path or data URI."
)
def __init__(self):
super().__init__(
id="3503748d-62b6-4425-91d6-725b064af509",
description="Block to attach an audio file to a video file using moviepy.",
categories={BlockCategory.MULTIMEDIA},
input_schema=AddAudioToVideoBlock.Input,
output_schema=AddAudioToVideoBlock.Output,
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
assert execution_context.graph_exec_id is not None
assert execution_context.node_exec_id is not None
graph_exec_id = execution_context.graph_exec_id
node_exec_id = execution_context.node_exec_id
# 1) Store the inputs locally
local_video_path = await store_media_file(
file=input_data.video_in,
execution_context=execution_context,
return_format="for_local_processing",
)
local_audio_path = await store_media_file(
file=input_data.audio_in,
execution_context=execution_context,
return_format="for_local_processing",
)
abs_temp_dir = os.path.join(tempfile.gettempdir(), "exec_file", graph_exec_id)
video_abspath = os.path.join(abs_temp_dir, local_video_path)
audio_abspath = os.path.join(abs_temp_dir, local_audio_path)
# 2) Load video + audio with moviepy
video_clip = VideoFileClip(video_abspath)
audio_clip = AudioFileClip(audio_abspath)
# Optionally scale volume
if input_data.volume != 1.0:
audio_clip = audio_clip.with_volume_scaled(input_data.volume)
# 3) Attach the new audio track
final_clip = video_clip.with_audio(audio_clip)
# 4) Write to output file
output_filename = MediaFileType(
f"{node_exec_id}_audio_attached_{os.path.basename(local_video_path)}"
)
output_abspath = os.path.join(abs_temp_dir, output_filename)
final_clip.write_videofile(output_abspath, codec="libx264", audio_codec="aac")
# 5) Return output - for_block_output returns workspace:// if available, else data URI
video_out = await store_media_file(
file=output_filename,
execution_context=execution_context,
return_format="for_block_output",
)
yield "video_out", video_out

View File

@@ -0,0 +1,77 @@
import pytest
from backend.blocks.encoder_block import TextEncoderBlock
@pytest.mark.asyncio
async def test_text_encoder_basic():
"""Test basic encoding of newlines and special characters."""
block = TextEncoderBlock()
result = []
async for output in block.run(TextEncoderBlock.Input(text="Hello\nWorld")):
result.append(output)
assert len(result) == 1
assert result[0][0] == "encoded_text"
assert result[0][1] == "Hello\\nWorld"
@pytest.mark.asyncio
async def test_text_encoder_multiple_escapes():
"""Test encoding of multiple escape sequences."""
block = TextEncoderBlock()
result = []
async for output in block.run(
TextEncoderBlock.Input(text="Line1\nLine2\tTabbed\rCarriage")
):
result.append(output)
assert len(result) == 1
assert result[0][0] == "encoded_text"
assert "\\n" in result[0][1]
assert "\\t" in result[0][1]
assert "\\r" in result[0][1]
@pytest.mark.asyncio
async def test_text_encoder_unicode():
"""Test that unicode characters are handled correctly."""
block = TextEncoderBlock()
result = []
async for output in block.run(TextEncoderBlock.Input(text="Hello 世界\n")):
result.append(output)
assert len(result) == 1
assert result[0][0] == "encoded_text"
# Unicode characters are escaped as \uXXXX sequences by unicode_escape
assert "\\u4e16" in result[0][1]
assert "\\n" in result[0][1]
@pytest.mark.asyncio
async def test_text_encoder_empty_string():
"""Test encoding of an empty string."""
block = TextEncoderBlock()
result = []
async for output in block.run(TextEncoderBlock.Input(text="")):
result.append(output)
assert len(result) == 1
assert result[0][0] == "encoded_text"
assert result[0][1] == ""
@pytest.mark.asyncio
async def test_text_encoder_error_handling():
"""Test that encoding errors are handled gracefully."""
from unittest.mock import patch
block = TextEncoderBlock()
result = []
with patch("codecs.encode", side_effect=Exception("Mocked encoding error")):
async for output in block.run(TextEncoderBlock.Input(text="test")):
result.append(output)
assert len(result) == 1
assert result[0][0] == "error"
assert "Mocked encoding error" in result[0][1]

View File

@@ -0,0 +1,37 @@
"""Video editing blocks for AutoGPT Platform.
This module provides blocks for:
- Downloading videos from URLs (YouTube, Vimeo, news sites, direct links)
- Clipping/trimming video segments
- Concatenating multiple videos
- Adding text overlays
- Adding AI-generated narration
- Getting media duration
- Looping videos
- Adding audio to videos
Dependencies:
- yt-dlp: For video downloading
- moviepy: For video editing operations
- elevenlabs: For AI narration (optional)
"""
from backend.blocks.video.add_audio import AddAudioToVideoBlock
from backend.blocks.video.clip import VideoClipBlock
from backend.blocks.video.concat import VideoConcatBlock
from backend.blocks.video.download import VideoDownloadBlock
from backend.blocks.video.duration import MediaDurationBlock
from backend.blocks.video.loop import LoopVideoBlock
from backend.blocks.video.narration import VideoNarrationBlock
from backend.blocks.video.text_overlay import VideoTextOverlayBlock
__all__ = [
"AddAudioToVideoBlock",
"LoopVideoBlock",
"MediaDurationBlock",
"VideoClipBlock",
"VideoConcatBlock",
"VideoDownloadBlock",
"VideoNarrationBlock",
"VideoTextOverlayBlock",
]

View File

@@ -0,0 +1,131 @@
"""Shared utilities for video blocks."""
from __future__ import annotations
import logging
import os
import re
import subprocess
from pathlib import Path
logger = logging.getLogger(__name__)
# Known operation tags added by video blocks
_VIDEO_OPS = (
r"(?:clip|overlay|narrated|looped|concat|audio_attached|with_audio|narration)"
)
# Matches: {node_exec_id}_{operation}_ where node_exec_id contains a UUID
_BLOCK_PREFIX_RE = re.compile(
r"^[a-zA-Z0-9_-]*"
r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
r"[a-zA-Z0-9_-]*"
r"_" + _VIDEO_OPS + r"_"
)
# Matches: a lone {node_exec_id}_ prefix (no operation keyword, e.g. download output)
_UUID_PREFIX_RE = re.compile(
r"^[a-zA-Z0-9_-]*"
r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
r"[a-zA-Z0-9_-]*_"
)
def extract_source_name(input_path: str, max_length: int = 50) -> str:
"""Extract the original source filename by stripping block-generated prefixes.
Iteratively removes {node_exec_id}_{operation}_ prefixes that accumulate
when chaining video blocks, recovering the original human-readable name.
Safe for plain filenames (no UUID -> no stripping).
Falls back to "video" if everything is stripped.
"""
stem = Path(input_path).stem
# Pass 1: strip {node_exec_id}_{operation}_ prefixes iteratively
while _BLOCK_PREFIX_RE.match(stem):
stem = _BLOCK_PREFIX_RE.sub("", stem, count=1)
# Pass 2: strip a lone {node_exec_id}_ prefix (e.g. from download block)
if _UUID_PREFIX_RE.match(stem):
stem = _UUID_PREFIX_RE.sub("", stem, count=1)
if not stem:
return "video"
return stem[:max_length]
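# Illustrative (UUIDs made up for the example):
#   extract_source_name(
#       "0b1c2d3e-4f50-6789-abcd-ef0123456789_clip_holiday_recap.mp4"
#   )  # -> "holiday_recap"
#   extract_source_name("holiday_recap.mp4")  # -> "holiday_recap" (no UUID, untouched)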
def get_video_codecs(output_path: str) -> tuple[str, str]:
"""Get appropriate video and audio codecs based on output file extension.
Args:
output_path: Path to the output file (used to determine extension)
Returns:
Tuple of (video_codec, audio_codec)
Codec mappings:
- .mp4: H.264 + AAC (universal compatibility)
- .webm: VP8 + Vorbis (web streaming)
- .mkv: H.264 + AAC (container supports many codecs)
- .mov: H.264 + AAC (Apple QuickTime, widely compatible)
- .m4v: H.264 + AAC (Apple iTunes/devices)
- .avi: MPEG-4 + MP3 (legacy Windows)
"""
ext = os.path.splitext(output_path)[1].lower()
codec_map: dict[str, tuple[str, str]] = {
".mp4": ("libx264", "aac"),
".webm": ("libvpx", "libvorbis"),
".mkv": ("libx264", "aac"),
".mov": ("libx264", "aac"),
".m4v": ("libx264", "aac"),
".avi": ("mpeg4", "libmp3lame"),
}
return codec_map.get(ext, ("libx264", "aac"))
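# For instance:
#   get_video_codecs("out/final.webm")  # -> ("libvpx", "libvorbis")
#   get_video_codecs("clip.MOV")        # -> ("libx264", "aac"); extension lowercased
#   get_video_codecs("render.ogv")      # -> ("libx264", "aac"); unknown ext falls back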
def strip_chapters_inplace(video_path: str) -> None:
"""Strip chapter metadata from a media file in-place using ffmpeg.
MoviePy 2.x crashes with IndexError when parsing files with embedded
chapter metadata (https://github.com/Zulko/moviepy/issues/2419).
This strips chapters without re-encoding.
Args:
video_path: Absolute path to the media file to strip chapters from.
"""
base, ext = os.path.splitext(video_path)
tmp_path = base + ".tmp" + ext
try:
result = subprocess.run(
[
"ffmpeg",
"-y",
"-i",
video_path,
"-map_chapters",
"-1",
"-codec",
"copy",
tmp_path,
],
capture_output=True,
text=True,
timeout=300,
)
if result.returncode != 0:
logger.warning(
"ffmpeg chapter strip failed (rc=%d): %s",
result.returncode,
result.stderr,
)
return
os.replace(tmp_path, video_path)
except FileNotFoundError:
logger.warning("ffmpeg not found; skipping chapter strip")
finally:
if os.path.exists(tmp_path):
os.unlink(tmp_path)
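For reference, the subprocess call above is the programmatic form of:

    ffmpeg -y -i input.mp4 -map_chapters -1 -codec copy input.tmp.mp4

followed by an atomic os.replace of the temp file over the original.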

View File

@@ -0,0 +1,113 @@
"""AddAudioToVideoBlock - Attach an audio track to a video file."""
from moviepy.audio.io.AudioFileClip import AudioFileClip
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.video._utils import extract_source_name, strip_chapters_inplace
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class AddAudioToVideoBlock(Block):
"""Add (attach) an audio track to an existing video."""
class Input(BlockSchemaInput):
video_in: MediaFileType = SchemaField(
description="Video input (URL, data URI, or local path)."
)
audio_in: MediaFileType = SchemaField(
description="Audio input (URL, data URI, or local path)."
)
volume: float = SchemaField(
description="Volume scale for the newly attached audio track (1.0 = original).",
default=1.0,
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Final video (with attached audio), as a path or data URI."
)
def __init__(self):
super().__init__(
id="3503748d-62b6-4425-91d6-725b064af509",
description="Block to attach an audio file to a video file using moviepy.",
categories={BlockCategory.MULTIMEDIA},
input_schema=AddAudioToVideoBlock.Input,
output_schema=AddAudioToVideoBlock.Output,
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
assert execution_context.graph_exec_id is not None
assert execution_context.node_exec_id is not None
graph_exec_id = execution_context.graph_exec_id
node_exec_id = execution_context.node_exec_id
# 1) Store the inputs locally
local_video_path = await store_media_file(
file=input_data.video_in,
execution_context=execution_context,
return_format="for_local_processing",
)
local_audio_path = await store_media_file(
file=input_data.audio_in,
execution_context=execution_context,
return_format="for_local_processing",
)
video_abspath = get_exec_file_path(graph_exec_id, local_video_path)
audio_abspath = get_exec_file_path(graph_exec_id, local_audio_path)
# 2) Load video + audio with moviepy
strip_chapters_inplace(video_abspath)
strip_chapters_inplace(audio_abspath)
video_clip = None
audio_clip = None
final_clip = None
try:
video_clip = VideoFileClip(video_abspath)
audio_clip = AudioFileClip(audio_abspath)
# Optionally scale volume
if input_data.volume != 1.0:
audio_clip = audio_clip.with_volume_scaled(input_data.volume)
# 3) Attach the new audio track
final_clip = video_clip.with_audio(audio_clip)
# 4) Write to output file
source = extract_source_name(local_video_path)
output_filename = MediaFileType(f"{node_exec_id}_with_audio_{source}.mp4")
output_abspath = get_exec_file_path(graph_exec_id, output_filename)
final_clip.write_videofile(
output_abspath, codec="libx264", audio_codec="aac"
)
finally:
if final_clip:
final_clip.close()
if audio_clip:
audio_clip.close()
if video_clip:
video_clip.close()
# 5) Return output - for_block_output returns workspace:// if available, else data URI
video_out = await store_media_file(
file=output_filename,
execution_context=execution_context,
return_format="for_block_output",
)
yield "video_out", video_out

View File

@@ -0,0 +1,167 @@
"""VideoClipBlock - Extract a segment from a video file."""
from typing import Literal
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.video._utils import (
extract_source_name,
get_video_codecs,
strip_chapters_inplace,
)
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.exceptions import BlockExecutionError
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class VideoClipBlock(Block):
"""Extract a time segment from a video."""
class Input(BlockSchemaInput):
video_in: MediaFileType = SchemaField(
description="Input video (URL, data URI, or local path)"
)
start_time: float = SchemaField(description="Start time in seconds", ge=0.0)
end_time: float = SchemaField(description="End time in seconds", ge=0.0)
output_format: Literal["mp4", "webm", "mkv", "mov"] = SchemaField(
description="Output format", default="mp4", advanced=True
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Clipped video file (path or data URI)"
)
duration: float = SchemaField(description="Clip duration in seconds")
def __init__(self):
super().__init__(
id="8f539119-e580-4d86-ad41-86fbcb22abb1",
description="Extract a time segment from a video",
categories={BlockCategory.MULTIMEDIA},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"video_in": "/tmp/test.mp4",
"start_time": 0.0,
"end_time": 10.0,
},
test_output=[("video_out", str), ("duration", float)],
test_mock={
"_clip_video": lambda *args: 10.0,
"_store_input_video": lambda *args, **kwargs: "test.mp4",
"_store_output_video": lambda *args, **kwargs: "clip_test.mp4",
},
)
async def _store_input_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store input video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_local_processing",
)
async def _store_output_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store output video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_block_output",
)
def _clip_video(
self,
video_abspath: str,
output_abspath: str,
start_time: float,
end_time: float,
) -> float:
"""Extract a clip from a video. Extracted for testability."""
clip = None
subclip = None
try:
strip_chapters_inplace(video_abspath)
clip = VideoFileClip(video_abspath)
subclip = clip.subclipped(start_time, end_time)
video_codec, audio_codec = get_video_codecs(output_abspath)
subclip.write_videofile(
output_abspath, codec=video_codec, audio_codec=audio_codec
)
return subclip.duration
finally:
if subclip:
subclip.close()
if clip:
clip.close()
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
node_exec_id: str,
**kwargs,
) -> BlockOutput:
# Validate time range
if input_data.end_time <= input_data.start_time:
raise BlockExecutionError(
message=f"end_time ({input_data.end_time}) must be greater than start_time ({input_data.start_time})",
block_name=self.name,
block_id=str(self.id),
)
try:
assert execution_context.graph_exec_id is not None
# Store the input video locally
local_video_path = await self._store_input_video(
execution_context, input_data.video_in
)
video_abspath = get_exec_file_path(
execution_context.graph_exec_id, local_video_path
)
# Build output path
source = extract_source_name(local_video_path)
output_filename = MediaFileType(
f"{node_exec_id}_clip_{source}.{input_data.output_format}"
)
output_abspath = get_exec_file_path(
execution_context.graph_exec_id, output_filename
)
duration = self._clip_video(
video_abspath,
output_abspath,
input_data.start_time,
input_data.end_time,
)
# Return as workspace path or data URI based on context
video_out = await self._store_output_video(
execution_context, output_filename
)
yield "video_out", video_out
yield "duration", duration
except BlockExecutionError:
raise
except Exception as e:
raise BlockExecutionError(
message=f"Failed to clip video: {e}",
block_name=self.name,
block_id=str(self.id),
) from e

View File

@@ -0,0 +1,227 @@
"""VideoConcatBlock - Concatenate multiple video clips into one."""
from typing import Literal
from moviepy import concatenate_videoclips
from moviepy.video.fx import CrossFadeIn, CrossFadeOut, FadeIn, FadeOut
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.video._utils import (
extract_source_name,
get_video_codecs,
strip_chapters_inplace,
)
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.exceptions import BlockExecutionError
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class VideoConcatBlock(Block):
"""Merge multiple video clips into one continuous video."""
class Input(BlockSchemaInput):
videos: list[MediaFileType] = SchemaField(
description="List of video files to concatenate (in order)"
)
transition: Literal["none", "crossfade", "fade_black"] = SchemaField(
description="Transition between clips", default="none"
)
transition_duration: int = SchemaField(
description="Transition duration in seconds",
default=1,
ge=0,
advanced=True,
)
output_format: Literal["mp4", "webm", "mkv", "mov"] = SchemaField(
description="Output format", default="mp4", advanced=True
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Concatenated video file (path or data URI)"
)
total_duration: float = SchemaField(description="Total duration in seconds")
def __init__(self):
super().__init__(
id="9b0f531a-1118-487f-aeec-3fa63ea8900a",
description="Merge multiple video clips into one continuous video",
categories={BlockCategory.MULTIMEDIA},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"videos": ["/tmp/a.mp4", "/tmp/b.mp4"],
},
test_output=[
("video_out", str),
("total_duration", float),
],
test_mock={
"_concat_videos": lambda *args: 20.0,
"_store_input_video": lambda *args, **kwargs: "test.mp4",
"_store_output_video": lambda *args, **kwargs: "concat_test.mp4",
},
)
async def _store_input_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store input video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_local_processing",
)
async def _store_output_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store output video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_block_output",
)
def _concat_videos(
self,
video_abspaths: list[str],
output_abspath: str,
transition: str,
transition_duration: int,
) -> float:
"""Concatenate videos. Extracted for testability.
Returns:
Total duration of the concatenated video.
"""
clips = []
faded_clips = []
final = None
try:
# Load clips
for v in video_abspaths:
strip_chapters_inplace(v)
clips.append(VideoFileClip(v))
# Validate transition_duration against shortest clip
if transition in {"crossfade", "fade_black"} and transition_duration > 0:
min_duration = min(c.duration for c in clips)
if transition_duration >= min_duration:
raise BlockExecutionError(
message=(
f"transition_duration ({transition_duration}s) must be "
f"shorter than the shortest clip ({min_duration:.2f}s)"
),
block_name=self.name,
block_id=str(self.id),
)
if transition == "crossfade":
for i, clip in enumerate(clips):
effects = []
if i > 0:
effects.append(CrossFadeIn(transition_duration))
if i < len(clips) - 1:
effects.append(CrossFadeOut(transition_duration))
if effects:
clip = clip.with_effects(effects)
faded_clips.append(clip)
final = concatenate_videoclips(
faded_clips,
method="compose",
padding=-transition_duration,
)
elif transition == "fade_black":
for clip in clips:
faded = clip.with_effects(
[FadeIn(transition_duration), FadeOut(transition_duration)]
)
faded_clips.append(faded)
final = concatenate_videoclips(faded_clips)
else:
final = concatenate_videoclips(clips)
video_codec, audio_codec = get_video_codecs(output_abspath)
final.write_videofile(
output_abspath, codec=video_codec, audio_codec=audio_codec
)
return final.duration
finally:
if final:
final.close()
for clip in faded_clips:
clip.close()
for clip in clips:
clip.close()
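# Crossfade sketch: with method="compose" and padding=-d, each clip starts d
# seconds before the previous one ends, so the CrossFadeIn/CrossFadeOut effects
# overlap. Total duration is therefore sum(durations) - d * (n - 1); e.g. three
# 10 s clips with a 1 s crossfade run 28 s, not 30 s.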
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
node_exec_id: str,
**kwargs,
) -> BlockOutput:
# Validate minimum clips
if len(input_data.videos) < 2:
raise BlockExecutionError(
message="At least 2 videos are required for concatenation",
block_name=self.name,
block_id=str(self.id),
)
try:
assert execution_context.graph_exec_id is not None
# Store all input videos locally
video_abspaths = []
for video in input_data.videos:
local_path = await self._store_input_video(execution_context, video)
video_abspaths.append(
get_exec_file_path(execution_context.graph_exec_id, local_path)
)
# Build output path
source = (
extract_source_name(video_abspaths[0]) if video_abspaths else "video"
)
output_filename = MediaFileType(
f"{node_exec_id}_concat_{source}.{input_data.output_format}"
)
output_abspath = get_exec_file_path(
execution_context.graph_exec_id, output_filename
)
total_duration = self._concat_videos(
video_abspaths,
output_abspath,
input_data.transition,
input_data.transition_duration,
)
# Return as workspace path or data URI based on context
video_out = await self._store_output_video(
execution_context, output_filename
)
yield "video_out", video_out
yield "total_duration", total_duration
except BlockExecutionError:
raise
except Exception as e:
raise BlockExecutionError(
message=f"Failed to concatenate videos: {e}",
block_name=self.name,
block_id=str(self.id),
) from e

View File

@@ -0,0 +1,172 @@
"""VideoDownloadBlock - Download video from URL (YouTube, Vimeo, news sites, direct links)."""
import os
import typing
from typing import Literal
import yt_dlp
if typing.TYPE_CHECKING:
from yt_dlp import _Params
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.exceptions import BlockExecutionError
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class VideoDownloadBlock(Block):
"""Download video from URL using yt-dlp."""
class Input(BlockSchemaInput):
url: str = SchemaField(
description="URL of the video to download (YouTube, Vimeo, direct link, etc.)",
placeholder="https://www.youtube.com/watch?v=...",
)
quality: Literal["best", "1080p", "720p", "480p", "audio_only"] = SchemaField(
description="Video quality preference", default="720p"
)
output_format: Literal["mp4", "webm", "mkv"] = SchemaField(
description="Output video format", default="mp4", advanced=True
)
class Output(BlockSchemaOutput):
video_file: MediaFileType = SchemaField(
description="Downloaded video (path or data URI)"
)
duration: float = SchemaField(description="Video duration in seconds")
title: str = SchemaField(description="Video title from source")
source_url: str = SchemaField(description="Original source URL")
def __init__(self):
super().__init__(
id="c35daabb-cd60-493b-b9ad-51f1fe4b50c4",
description="Download video from URL (YouTube, Vimeo, news sites, direct links)",
categories={BlockCategory.MULTIMEDIA},
input_schema=self.Input,
output_schema=self.Output,
disabled=True, # Disable until we can sandbox yt-dlp and handle security implications
test_input={
"url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
"quality": "480p",
},
test_output=[
("video_file", str),
("duration", float),
("title", str),
("source_url", str),
],
test_mock={
"_download_video": lambda *args: (
"video.mp4",
212.0,
"Test Video",
),
"_store_output_video": lambda *args, **kwargs: "video.mp4",
},
)
async def _store_output_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store output video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_block_output",
)
def _get_format_string(self, quality: str) -> str:
formats = {
"best": "bestvideo+bestaudio/best",
"1080p": "bestvideo[height<=1080]+bestaudio/best[height<=1080]",
"720p": "bestvideo[height<=720]+bestaudio/best[height<=720]",
"480p": "bestvideo[height<=480]+bestaudio/best[height<=480]",
"audio_only": "bestaudio/best",
}
return formats.get(quality, formats["720p"])
def _download_video(
self,
url: str,
quality: str,
output_format: str,
output_dir: str,
node_exec_id: str,
) -> tuple[str, float, str]:
"""Download video. Extracted for testability."""
output_template = os.path.join(
output_dir, f"{node_exec_id}_%(title).50s.%(ext)s"
)
ydl_opts: "_Params" = {
"format": f"{self._get_format_string(quality)}/best",
"outtmpl": output_template,
"merge_output_format": output_format,
"quiet": True,
"no_warnings": True,
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
info = ydl.extract_info(url, download=True)
video_path = ydl.prepare_filename(info)
# prepare_filename() reports the pre-merge extension; when merge_output_format
# converts the container, point the path at the final file instead
if not video_path.endswith(f".{output_format}"):
video_path = video_path.rsplit(".", 1)[0] + f".{output_format}"
# Return just the filename, not the full path
filename = os.path.basename(video_path)
return (
filename,
info.get("duration") or 0.0,
info.get("title") or "Unknown",
)
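# Standalone sketch of the same yt-dlp flow (hypothetical URL; requires network
# access and an installed ffmpeg for the merge step):
#
#   import yt_dlp
#
#   opts = {
#       "format": "bestvideo[height<=720]+bestaudio/best[height<=720]/best",
#       "outtmpl": "/tmp/%(title).50s.%(ext)s",
#       "merge_output_format": "mp4",
#       "quiet": True,
#       "no_warnings": True,
#   }
#   with yt_dlp.YoutubeDL(opts) as ydl:
#       info = ydl.extract_info("https://example.com/v", download=True)
#       print(ydl.prepare_filename(info), info.get("duration"), info.get("title"))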
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
node_exec_id: str,
**kwargs,
) -> BlockOutput:
try:
assert execution_context.graph_exec_id is not None
# Get the exec file directory
output_dir = get_exec_file_path(execution_context.graph_exec_id, "")
os.makedirs(output_dir, exist_ok=True)
filename, duration, title = self._download_video(
input_data.url,
input_data.quality,
input_data.output_format,
output_dir,
node_exec_id,
)
# Return as workspace path or data URI based on context
video_out = await self._store_output_video(
execution_context, MediaFileType(filename)
)
yield "video_file", video_out
yield "duration", duration
yield "title", title
yield "source_url", input_data.url
except Exception as e:
raise BlockExecutionError(
message=f"Failed to download video: {e}",
block_name=self.name,
block_id=str(self.id),
) from e

View File

@@ -0,0 +1,77 @@
"""MediaDurationBlock - Get the duration of a media file."""
from moviepy.audio.io.AudioFileClip import AudioFileClip
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.video._utils import strip_chapters_inplace
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class MediaDurationBlock(Block):
"""Get the duration of a media file (video or audio)."""
class Input(BlockSchemaInput):
media_in: MediaFileType = SchemaField(
description="Media input (URL, data URI, or local path)."
)
is_video: bool = SchemaField(
description="Whether the media is a video (True) or audio (False).",
default=True,
)
class Output(BlockSchemaOutput):
duration: float = SchemaField(
description="Duration of the media file (in seconds)."
)
def __init__(self):
super().__init__(
id="d8b91fd4-da26-42d4-8ecb-8b196c6d84b6",
description="Block to get the duration of a media file.",
categories={BlockCategory.MULTIMEDIA},
input_schema=MediaDurationBlock.Input,
output_schema=MediaDurationBlock.Output,
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
# 1) Store the input media locally
local_media_path = await store_media_file(
file=input_data.media_in,
execution_context=execution_context,
return_format="for_local_processing",
)
assert execution_context.graph_exec_id is not None
media_abspath = get_exec_file_path(
execution_context.graph_exec_id, local_media_path
)
# 2) Strip chapters to avoid MoviePy crash, then load the clip
strip_chapters_inplace(media_abspath)
clip = None
try:
if input_data.is_video:
clip = VideoFileClip(media_abspath)
else:
clip = AudioFileClip(media_abspath)
duration = clip.duration
finally:
if clip:
clip.close()
yield "duration", duration

View File

@@ -0,0 +1,115 @@
"""LoopVideoBlock - Loop a video to a given duration or number of repeats."""
from typing import Optional
from moviepy.video.fx.Loop import Loop
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.video._utils import extract_source_name, strip_chapters_inplace
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class LoopVideoBlock(Block):
"""Loop (repeat) a video clip until a given duration or number of loops."""
class Input(BlockSchemaInput):
video_in: MediaFileType = SchemaField(
description="The input video (can be a URL, data URI, or local path)."
)
duration: Optional[float] = SchemaField(
description="Target duration (in seconds) to loop the video to. Either duration or n_loops must be provided.",
default=None,
ge=0.0,
le=3600.0, # Max 1 hour to prevent disk exhaustion
)
n_loops: Optional[int] = SchemaField(
description="Number of times to repeat the video. Either n_loops or duration must be provided.",
default=None,
ge=1,
le=10, # Max 10 loops to prevent disk exhaustion
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Looped video returned either as a relative path or a data URI."
)
def __init__(self):
super().__init__(
id="8bf9eef6-5451-4213-b265-25306446e94b",
description="Block to loop a video to a given duration or number of repeats.",
categories={BlockCategory.MULTIMEDIA},
input_schema=LoopVideoBlock.Input,
output_schema=LoopVideoBlock.Output,
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
assert execution_context.graph_exec_id is not None
assert execution_context.node_exec_id is not None
graph_exec_id = execution_context.graph_exec_id
node_exec_id = execution_context.node_exec_id
# 1) Store the input video locally
local_video_path = await store_media_file(
file=input_data.video_in,
execution_context=execution_context,
return_format="for_local_processing",
)
input_abspath = get_exec_file_path(graph_exec_id, local_video_path)
# 2) Load the clip
strip_chapters_inplace(input_abspath)
clip = None
looped_clip = None
try:
clip = VideoFileClip(input_abspath)
# 3) Apply the loop effect
if input_data.duration:
# Loop until we reach the specified duration
looped_clip = clip.with_effects([Loop(duration=input_data.duration)])
elif input_data.n_loops:
looped_clip = clip.with_effects([Loop(n=input_data.n_loops)])
else:
raise ValueError("Either 'duration' or 'n_loops' must be provided.")
assert isinstance(looped_clip, VideoFileClip)
# 4) Save the looped output
source = extract_source_name(local_video_path)
output_filename = MediaFileType(f"{node_exec_id}_looped_{source}.mp4")
output_abspath = get_exec_file_path(graph_exec_id, output_filename)
looped_clip = looped_clip.with_audio(clip.audio)
looped_clip.write_videofile(
output_abspath, codec="libx264", audio_codec="aac"
)
finally:
if looped_clip:
looped_clip.close()
if clip:
clip.close()
# Return output - for_block_output returns workspace:// if available, else data URI
video_out = await store_media_file(
file=output_filename,
execution_context=execution_context,
return_format="for_block_output",
)
yield "video_out", video_out

View File

@@ -0,0 +1,267 @@
"""VideoNarrationBlock - Generate AI voice narration and add to video."""
import os
from typing import Literal
from elevenlabs import ElevenLabs
from moviepy import CompositeAudioClip
from moviepy.audio.io.AudioFileClip import AudioFileClip
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.elevenlabs._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
ElevenLabsCredentials,
ElevenLabsCredentialsInput,
)
from backend.blocks.video._utils import (
extract_source_name,
get_video_codecs,
strip_chapters_inplace,
)
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import CredentialsField, SchemaField
from backend.util.exceptions import BlockExecutionError
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class VideoNarrationBlock(Block):
"""Generate AI narration and add to video."""
class Input(BlockSchemaInput):
credentials: ElevenLabsCredentialsInput = CredentialsField(
description="ElevenLabs API key for voice synthesis"
)
video_in: MediaFileType = SchemaField(
description="Input video (URL, data URI, or local path)"
)
script: str = SchemaField(description="Narration script text")
voice_id: str = SchemaField(
description="ElevenLabs voice ID", default="21m00Tcm4TlvDq8ikWAM" # Rachel
)
model_id: Literal[
"eleven_multilingual_v2",
"eleven_flash_v2_5",
"eleven_turbo_v2_5",
"eleven_turbo_v2",
] = SchemaField(
description="ElevenLabs TTS model",
default="eleven_multilingual_v2",
)
mix_mode: Literal["replace", "mix", "ducking"] = SchemaField(
description="How to combine with original audio. 'ducking' applies stronger attenuation than 'mix'.",
default="ducking",
)
narration_volume: float = SchemaField(
description="Narration volume (0.0 to 2.0)",
default=1.0,
ge=0.0,
le=2.0,
advanced=True,
)
original_volume: float = SchemaField(
description="Original audio volume when mixing (0.0 to 1.0)",
default=0.3,
ge=0.0,
le=1.0,
advanced=True,
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Video with narration (path or data URI)"
)
audio_file: MediaFileType = SchemaField(
description="Generated audio file (path or data URI)"
)
def __init__(self):
super().__init__(
id="3d036b53-859c-4b17-9826-ca340f736e0e",
description="Generate AI narration and add to video",
categories={BlockCategory.MULTIMEDIA, BlockCategory.AI},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"video_in": "/tmp/test.mp4",
"script": "Hello world",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("video_out", str), ("audio_file", str)],
test_mock={
"_generate_narration_audio": lambda *args: b"mock audio content",
"_add_narration_to_video": lambda *args: None,
"_store_input_video": lambda *args, **kwargs: "test.mp4",
"_store_output_video": lambda *args, **kwargs: "narrated_test.mp4",
},
)
async def _store_input_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store input video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_local_processing",
)
async def _store_output_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store output video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_block_output",
)
def _generate_narration_audio(
self, api_key: str, script: str, voice_id: str, model_id: str
) -> bytes:
"""Generate narration audio via ElevenLabs API."""
client = ElevenLabs(api_key=api_key)
audio_generator = client.text_to_speech.convert(
voice_id=voice_id,
text=script,
model_id=model_id,
)
# The SDK returns a generator, collect all chunks
return b"".join(audio_generator)
def _add_narration_to_video(
self,
video_abspath: str,
audio_abspath: str,
output_abspath: str,
mix_mode: str,
narration_volume: float,
original_volume: float,
) -> None:
"""Add narration audio to video. Extracted for testability."""
video = None
final = None
narration_original = None
narration_scaled = None
original = None
try:
strip_chapters_inplace(video_abspath)
video = VideoFileClip(video_abspath)
narration_original = AudioFileClip(audio_abspath)
narration_scaled = narration_original.with_volume_scaled(narration_volume)
narration = narration_scaled
if mix_mode == "replace":
final_audio = narration
elif mix_mode == "mix":
if video.audio:
original = video.audio.with_volume_scaled(original_volume)
final_audio = CompositeAudioClip([original, narration])
else:
final_audio = narration
else: # ducking - apply stronger attenuation
if video.audio:
# Ducking uses a much lower volume for original audio
ducking_volume = original_volume * 0.3
original = video.audio.with_volume_scaled(ducking_volume)
final_audio = CompositeAudioClip([original, narration])
else:
final_audio = narration
final = video.with_audio(final_audio)
video_codec, audio_codec = get_video_codecs(output_abspath)
final.write_videofile(
output_abspath, codec=video_codec, audio_codec=audio_codec
)
finally:
if original:
original.close()
if narration_scaled:
narration_scaled.close()
if narration_original:
narration_original.close()
if final:
final.close()
if video:
video.close()
async def run(
self,
input_data: Input,
*,
credentials: ElevenLabsCredentials,
execution_context: ExecutionContext,
node_exec_id: str,
**kwargs,
) -> BlockOutput:
try:
assert execution_context.graph_exec_id is not None
# Store the input video locally
local_video_path = await self._store_input_video(
execution_context, input_data.video_in
)
video_abspath = get_exec_file_path(
execution_context.graph_exec_id, local_video_path
)
# Generate narration audio via ElevenLabs
audio_content = self._generate_narration_audio(
credentials.api_key.get_secret_value(),
input_data.script,
input_data.voice_id,
input_data.model_id,
)
# Save audio to exec file path
audio_filename = MediaFileType(f"{node_exec_id}_narration.mp3")
audio_abspath = get_exec_file_path(
execution_context.graph_exec_id, audio_filename
)
os.makedirs(os.path.dirname(audio_abspath), exist_ok=True)
with open(audio_abspath, "wb") as f:
f.write(audio_content)
# Add narration to video
source = extract_source_name(local_video_path)
output_filename = MediaFileType(f"{node_exec_id}_narrated_{source}.mp4")
output_abspath = get_exec_file_path(
execution_context.graph_exec_id, output_filename
)
self._add_narration_to_video(
video_abspath,
audio_abspath,
output_abspath,
input_data.mix_mode,
input_data.narration_volume,
input_data.original_volume,
)
# Return as workspace path or data URI based on context
video_out = await self._store_output_video(
execution_context, output_filename
)
audio_out = await self._store_output_video(
execution_context, audio_filename
)
yield "video_out", video_out
yield "audio_file", audio_out
except Exception as e:
raise BlockExecutionError(
message=f"Failed to add narration: {e}",
block_name=self.name,
block_id=str(self.id),
) from e

View File

@@ -0,0 +1,231 @@
"""VideoTextOverlayBlock - Add text overlay to video."""
from typing import Literal
from moviepy import CompositeVideoClip, TextClip
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.video._utils import (
extract_source_name,
get_video_codecs,
strip_chapters_inplace,
)
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.exceptions import BlockExecutionError
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class VideoTextOverlayBlock(Block):
"""Add text overlay/caption to video."""
class Input(BlockSchemaInput):
video_in: MediaFileType = SchemaField(
description="Input video (URL, data URI, or local path)"
)
text: str = SchemaField(description="Text to overlay on video")
position: Literal[
"top",
"center",
"bottom",
"top-left",
"top-right",
"bottom-left",
"bottom-right",
] = SchemaField(description="Position of text on screen", default="bottom")
start_time: float | None = SchemaField(
description="When to show text (seconds). None = entire video",
default=None,
advanced=True,
)
end_time: float | None = SchemaField(
description="When to hide text (seconds). None = until end",
default=None,
advanced=True,
)
font_size: int = SchemaField(
description="Font size", default=48, ge=12, le=200, advanced=True
)
font_color: str = SchemaField(
description="Font color (hex or name)", default="white", advanced=True
)
bg_color: str | None = SchemaField(
description="Background color behind text (None for transparent)",
default=None,
advanced=True,
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Video with text overlay (path or data URI)"
)
def __init__(self):
super().__init__(
id="8ef14de6-cc90-430a-8cfa-3a003be92454",
description="Add text overlay/caption to video",
categories={BlockCategory.MULTIMEDIA},
input_schema=self.Input,
output_schema=self.Output,
disabled=True,  # Disable until we can lock down the ImageMagick security policy
test_input={"video_in": "/tmp/test.mp4", "text": "Hello World"},
test_output=[("video_out", str)],
test_mock={
"_add_text_overlay": lambda *args: None,
"_store_input_video": lambda *args, **kwargs: "test.mp4",
"_store_output_video": lambda *args, **kwargs: "overlay_test.mp4",
},
)
async def _store_input_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store input video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_local_processing",
)
async def _store_output_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store output video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_block_output",
)
def _add_text_overlay(
self,
video_abspath: str,
output_abspath: str,
text: str,
position: str,
start_time: float | None,
end_time: float | None,
font_size: int,
font_color: str,
bg_color: str | None,
) -> None:
"""Add text overlay to video. Extracted for testability."""
video = None
final = None
txt_clip = None
try:
strip_chapters_inplace(video_abspath)
video = VideoFileClip(video_abspath)
txt_clip = TextClip(
text=text,
font_size=font_size,
color=font_color,
bg_color=bg_color,
)
# Position mapping
pos_map = {
"top": ("center", "top"),
"center": ("center", "center"),
"bottom": ("center", "bottom"),
"top-left": ("left", "top"),
"top-right": ("right", "top"),
"bottom-left": ("left", "bottom"),
"bottom-right": ("right", "bottom"),
}
txt_clip = txt_clip.with_position(pos_map[position])
# Set timing
start = start_time or 0
end = end_time or video.duration
duration = max(0, end - start)
txt_clip = txt_clip.with_start(start).with_end(end).with_duration(duration)
final = CompositeVideoClip([video, txt_clip])
video_codec, audio_codec = get_video_codecs(output_abspath)
final.write_videofile(
output_abspath, codec=video_codec, audio_codec=audio_codec
)
finally:
if txt_clip:
txt_clip.close()
if final:
final.close()
if video:
video.close()
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
node_exec_id: str,
**kwargs,
) -> BlockOutput:
# Validate time range if both are provided
if (
input_data.start_time is not None
and input_data.end_time is not None
and input_data.end_time <= input_data.start_time
):
raise BlockExecutionError(
message=f"end_time ({input_data.end_time}) must be greater than start_time ({input_data.start_time})",
block_name=self.name,
block_id=str(self.id),
)
try:
assert execution_context.graph_exec_id is not None
# Store the input video locally
local_video_path = await self._store_input_video(
execution_context, input_data.video_in
)
video_abspath = get_exec_file_path(
execution_context.graph_exec_id, local_video_path
)
# Build output path
source = extract_source_name(local_video_path)
output_filename = MediaFileType(f"{node_exec_id}_overlay_{source}.mp4")
output_abspath = get_exec_file_path(
execution_context.graph_exec_id, output_filename
)
self._add_text_overlay(
video_abspath,
output_abspath,
input_data.text,
input_data.position,
input_data.start_time,
input_data.end_time,
input_data.font_size,
input_data.font_color,
input_data.bg_color,
)
# Return as workspace path or data URI based on context
video_out = await self._store_output_video(
execution_context, output_filename
)
yield "video_out", video_out
except BlockExecutionError:
raise
except Exception as e:
raise BlockExecutionError(
message=f"Failed to add text overlay: {e}",
block_name=self.name,
block_id=str(self.id),
) from e

View File

@@ -36,12 +36,14 @@ from backend.blocks.replicate.replicate_block import ReplicateModelBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.blocks.talking_head import CreateTalkingAvatarVideoBlock
from backend.blocks.text_to_speech_block import UnrealTextToSpeechBlock
from backend.blocks.video.narration import VideoNarrationBlock
from backend.data.block import Block, BlockCost, BlockCostType
from backend.integrations.credentials_store import (
aiml_api_credentials,
anthropic_credentials,
apollo_credentials,
did_credentials,
elevenlabs_credentials,
enrichlayer_credentials,
groq_credentials,
ideogram_credentials,
@@ -78,6 +80,7 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.CLAUDE_4_1_OPUS: 21,
LlmModel.CLAUDE_4_OPUS: 21,
LlmModel.CLAUDE_4_SONNET: 5,
LlmModel.CLAUDE_4_6_OPUS: 14,
LlmModel.CLAUDE_4_5_HAIKU: 4,
LlmModel.CLAUDE_4_5_OPUS: 14,
LlmModel.CLAUDE_4_5_SONNET: 9,
@@ -639,4 +642,16 @@ BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = {
},
),
],
VideoNarrationBlock: [
BlockCost(
cost_amount=5, # ElevenLabs TTS cost
cost_filter={
"credentials": {
"id": elevenlabs_credentials.id,
"provider": elevenlabs_credentials.provider,
"type": elevenlabs_credentials.type,
}
},
)
],
}

View File

@@ -224,6 +224,14 @@ openweathermap_credentials = APIKeyCredentials(
expires_at=None,
)
elevenlabs_credentials = APIKeyCredentials(
id="f4a8b6c2-3d1e-4f5a-9b8c-7d6e5f4a3b2c",
provider="elevenlabs",
api_key=SecretStr(settings.secrets.elevenlabs_api_key),
title="Use Credits for ElevenLabs",
expires_at=None,
)
DEFAULT_CREDENTIALS = [
ollama_credentials,
revid_credentials,
@@ -252,6 +260,7 @@ DEFAULT_CREDENTIALS = [
v0_credentials,
webshare_proxy_credentials,
openweathermap_credentials,
elevenlabs_credentials,
]
SYSTEM_CREDENTIAL_IDS = {cred.id for cred in DEFAULT_CREDENTIALS}
@@ -366,6 +375,8 @@ class IntegrationCredentialsStore:
all_credentials.append(webshare_proxy_credentials)
if settings.secrets.openweathermap_api_key:
all_credentials.append(openweathermap_credentials)
if settings.secrets.elevenlabs_api_key:
all_credentials.append(elevenlabs_credentials)
return all_credentials
async def get_creds_by_id(

View File

@@ -18,6 +18,7 @@ class ProviderName(str, Enum):
DISCORD = "discord"
D_ID = "d_id"
E2B = "e2b"
ELEVENLABS = "elevenlabs"
FAL = "fal"
GITHUB = "github"
GOOGLE = "google"

View File

@@ -8,6 +8,8 @@ from pathlib import Path
from typing import TYPE_CHECKING, Literal
from urllib.parse import urlparse
from pydantic import BaseModel
from backend.util.cloud_storage import get_cloud_storage_handler
from backend.util.request import Requests
from backend.util.settings import Config
@@ -17,6 +19,35 @@ from backend.util.virus_scanner import scan_content_safe
if TYPE_CHECKING:
from backend.data.execution import ExecutionContext
class WorkspaceUri(BaseModel):
"""Parsed workspace:// URI."""
file_ref: str # File ID or path (e.g. "abc123" or "/path/to/file.txt")
mime_type: str | None = None # MIME type from fragment (e.g. "video/mp4")
is_path: bool = False # True if file_ref is a path (starts with "/")
def parse_workspace_uri(uri: str) -> WorkspaceUri:
"""Parse a workspace:// URI into its components.
Examples:
"workspace://abc123" → WorkspaceUri(file_ref="abc123", mime_type=None, is_path=False)
"workspace://abc123#video/mp4" → WorkspaceUri(file_ref="abc123", mime_type="video/mp4", is_path=False)
"workspace:///path/to/file.txt" → WorkspaceUri(file_ref="/path/to/file.txt", mime_type=None, is_path=True)
"""
raw = uri.removeprefix("workspace://")
mime_type: str | None = None
if "#" in raw:
raw, fragment = raw.split("#", 1)
mime_type = fragment or None
return WorkspaceUri(
file_ref=raw,
mime_type=mime_type,
is_path=raw.startswith("/"),
)
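# Doctest-style sketch of the fragment handling (IDs below are made up):
#   >>> ws = parse_workspace_uri("workspace://abc123#video/mp4")
#   >>> (ws.file_ref, ws.mime_type, ws.is_path)
#   ('abc123', 'video/mp4', False)
#   >>> parse_workspace_uri("workspace://abc123#").mime_type is None  # empty fragment
#   True
#   >>> parse_workspace_uri("workspace:///videos/a.mp4").is_path      # path-style ref
#   True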
# Return format options for store_media_file
# - "for_local_processing": Returns local file path - use with ffmpeg, MoviePy, PIL, etc.
# - "for_external_api": Returns data URI (base64) - use when sending content to external APIs
@@ -183,22 +214,20 @@ async def store_media_file(
"This file type is only available in CoPilot sessions."
)
# Parse workspace reference
# workspace://abc123 - by file ID
# workspace:///path/to/file.txt - by virtual path
file_ref = file[12:] # Remove "workspace://"
# Parse workspace reference (strips #mimeType fragment from file ID)
ws = parse_workspace_uri(file)
if file_ref.startswith("/"):
# Path reference
workspace_content = await workspace_manager.read_file(file_ref)
file_info = await workspace_manager.get_file_info_by_path(file_ref)
if ws.is_path:
# Path reference: workspace:///path/to/file.txt
workspace_content = await workspace_manager.read_file(ws.file_ref)
file_info = await workspace_manager.get_file_info_by_path(ws.file_ref)
filename = sanitize_filename(
file_info.name if file_info else f"{uuid.uuid4()}.bin"
)
else:
# ID reference
workspace_content = await workspace_manager.read_file_by_id(file_ref)
file_info = await workspace_manager.get_file_info(file_ref)
# ID reference: workspace://abc123 or workspace://abc123#video/mp4
workspace_content = await workspace_manager.read_file_by_id(ws.file_ref)
file_info = await workspace_manager.get_file_info(ws.file_ref)
filename = sanitize_filename(
file_info.name if file_info else f"{uuid.uuid4()}.bin"
)
@@ -334,7 +363,21 @@ async def store_media_file(
# Don't re-save if input was already from workspace
if is_from_workspace:
# Return original workspace reference
# Return original workspace reference, ensuring MIME type fragment
ws = parse_workspace_uri(file)
if not ws.mime_type:
# Add MIME type fragment if missing (older refs without it)
try:
if ws.is_path:
info = await workspace_manager.get_file_info_by_path(
ws.file_ref
)
else:
info = await workspace_manager.get_file_info(ws.file_ref)
if info:
return MediaFileType(f"{file}#{info.mimeType}")
except Exception:
pass
return MediaFileType(file)
# Save new content to workspace
@@ -346,7 +389,7 @@ async def store_media_file(
filename=filename,
overwrite=True,
)
return MediaFileType(f"workspace://{file_record.id}")
return MediaFileType(f"workspace://{file_record.id}#{file_record.mimeType}")
else:
raise ValueError(f"Invalid return_format: {return_format}")

View File

@@ -656,6 +656,7 @@ class Secrets(UpdateTrackingModel["Secrets"], BaseSettings):
e2b_api_key: str = Field(default="", description="E2B API key")
nvidia_api_key: str = Field(default="", description="Nvidia API key")
mem0_api_key: str = Field(default="", description="Mem0 API key")
elevenlabs_api_key: str = Field(default="", description="ElevenLabs API key")
linear_client_id: str = Field(default="", description="Linear client ID")
linear_client_secret: str = Field(default="", description="Linear client secret")

View File

@@ -22,6 +22,7 @@ from backend.data.workspace import (
soft_delete_workspace_file,
)
from backend.util.settings import Config
from backend.util.virus_scanner import scan_content_safe
from backend.util.workspace_storage import compute_file_checksum, get_workspace_storage
logger = logging.getLogger(__name__)
@@ -187,6 +188,9 @@ class WorkspaceManager:
f"{Config().max_file_size_mb}MB limit"
)
# Virus scan content before persisting (defense in depth)
await scan_content_safe(content, filename=filename)
# Determine path with session scoping
if path is None:
path = f"/{filename}"

View File

@@ -825,6 +825,29 @@ files = [
{file = "charset_normalizer-3.4.2.tar.gz", hash = "sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63"},
]
[[package]]
name = "claude-agent-sdk"
version = "0.1.31"
description = "Python SDK for Claude Code"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "claude_agent_sdk-0.1.31-py3-none-macosx_11_0_arm64.whl", hash = "sha256:801bacfe4192782a7cc7b61b0d23a57f061c069993dd3dfa8109aa2e7050a530"},
{file = "claude_agent_sdk-0.1.31-py3-none-manylinux_2_17_aarch64.whl", hash = "sha256:0b608e0cbfcedcb827427e6d16a73fe573d58e7f93e15f95435066feacbe6511"},
{file = "claude_agent_sdk-0.1.31-py3-none-manylinux_2_17_x86_64.whl", hash = "sha256:d0cb30e026a22246e84d9237d23bb4df20be5146913a04d2802ddd37d4f8b8c9"},
{file = "claude_agent_sdk-0.1.31-py3-none-win_amd64.whl", hash = "sha256:8ceca675c2770ad739bd1208362059a830e91c74efcf128045b5a7af14d36f2b"},
{file = "claude_agent_sdk-0.1.31.tar.gz", hash = "sha256:b68c681083d7cc985dd3e48f73aabf459f056c1a7e1c5b9c47033c6af94da1a1"},
]
[package.dependencies]
anyio = ">=4.0.0"
mcp = ">=0.1.0"
typing-extensions = {version = ">=4.0.0", markers = "python_version < \"3.11\""}
[package.extras]
dev = ["anyio[trio] (>=4.0.0)", "mypy (>=1.0.0)", "pytest (>=7.0.0)", "pytest-asyncio (>=0.20.0)", "pytest-cov (>=4.0.0)", "ruff (>=0.1.0)"]
[[package]]
name = "cleo"
version = "2.1.0"
@@ -1169,6 +1192,29 @@ attrs = ">=21.3.0"
e2b = ">=1.5.4,<2.0.0"
httpx = ">=0.20.0,<1.0.0"
[[package]]
name = "elevenlabs"
version = "1.59.0"
description = ""
optional = false
python-versions = "<4.0,>=3.8"
groups = ["main"]
files = [
{file = "elevenlabs-1.59.0-py3-none-any.whl", hash = "sha256:468145db81a0bc867708b4a8619699f75583e9481b395ec1339d0b443da771ed"},
{file = "elevenlabs-1.59.0.tar.gz", hash = "sha256:16e735bd594e86d415dd445d249c8cc28b09996cfd627fbc10102c0a84698859"},
]
[package.dependencies]
httpx = ">=0.21.2"
pydantic = ">=1.9.2"
pydantic-core = ">=2.18.2,<3.0.0"
requests = ">=2.20"
typing_extensions = ">=4.0.0"
websockets = ">=11.0"
[package.extras]
pyaudio = ["pyaudio (>=0.2.14)"]
[[package]]
name = "email-validator"
version = "2.2.0"
@@ -2320,6 +2366,18 @@ http2 = ["h2 (>=3,<5)"]
socks = ["socksio (==1.*)"]
zstd = ["zstandard (>=0.18.0)"]
[[package]]
name = "httpx-sse"
version = "0.4.3"
description = "Consume Server-Sent Event (SSE) messages with HTTPX."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "httpx_sse-0.4.3-py3-none-any.whl", hash = "sha256:0ac1c9fe3c0afad2e0ebb25a934a59f4c7823b60792691f779fad2c5568830fc"},
{file = "httpx_sse-0.4.3.tar.gz", hash = "sha256:9b1ed0127459a66014aec3c56bebd93da3c1bc8bb6618c8082039a44889a755d"},
]
[[package]]
name = "huggingface-hub"
version = "0.34.4"
@@ -2981,6 +3039,39 @@ files = [
{file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"},
]
[[package]]
name = "mcp"
version = "1.26.0"
description = "Model Context Protocol SDK"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "mcp-1.26.0-py3-none-any.whl", hash = "sha256:904a21c33c25aa98ddbeb47273033c435e595bbacfdb177f4bd87f6dceebe1ca"},
{file = "mcp-1.26.0.tar.gz", hash = "sha256:db6e2ef491eecc1a0d93711a76f28dec2e05999f93afd48795da1c1137142c66"},
]
[package.dependencies]
anyio = ">=4.5"
httpx = ">=0.27.1"
httpx-sse = ">=0.4"
jsonschema = ">=4.20.0"
pydantic = ">=2.11.0,<3.0.0"
pydantic-settings = ">=2.5.2"
pyjwt = {version = ">=2.10.1", extras = ["crypto"]}
python-multipart = ">=0.0.9"
pywin32 = {version = ">=310", markers = "sys_platform == \"win32\""}
sse-starlette = ">=1.6.1"
starlette = ">=0.27"
typing-extensions = ">=4.9.0"
typing-inspection = ">=0.4.1"
uvicorn = {version = ">=0.31.1", markers = "sys_platform != \"emscripten\""}
[package.extras]
cli = ["python-dotenv (>=1.0.0)", "typer (>=0.16.0)"]
rich = ["rich (>=13.9.4)"]
ws = ["websockets (>=15.0.1)"]
[[package]]
name = "mdurl"
version = "0.1.2"
@@ -5210,7 +5301,7 @@ description = "Python for Window Extensions"
optional = false
python-versions = "*"
groups = ["main"]
markers = "platform_system == \"Windows\""
markers = "sys_platform == \"win32\" or platform_system == \"Windows\""
files = [
{file = "pywin32-311-cp310-cp310-win32.whl", hash = "sha256:d03ff496d2a0cd4a5893504789d4a15399133fe82517455e78bad62efbb7f0a3"},
{file = "pywin32-311-cp310-cp310-win_amd64.whl", hash = "sha256:797c2772017851984b97180b0bebe4b620bb86328e8a884bb626156295a63b3b"},
@@ -6195,6 +6286,27 @@ postgresql-psycopgbinary = ["psycopg[binary] (>=3.0.7)"]
pymysql = ["pymysql"]
sqlcipher = ["sqlcipher3_binary"]
[[package]]
name = "sse-starlette"
version = "3.0.3"
description = "SSE plugin for Starlette"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "sse_starlette-3.0.3-py3-none-any.whl", hash = "sha256:af5bf5a6f3933df1d9c7f8539633dc8444ca6a97ab2e2a7cd3b6e431ac03a431"},
{file = "sse_starlette-3.0.3.tar.gz", hash = "sha256:88cfb08747e16200ea990c8ca876b03910a23b547ab3bd764c0d8eb81019b971"},
]
[package.dependencies]
anyio = ">=4.7.0"
[package.extras]
daphne = ["daphne (>=4.2.0)"]
examples = ["aiosqlite (>=0.21.0)", "fastapi (>=0.115.12)", "sqlalchemy[asyncio] (>=2.0.41)", "starlette (>=0.49.1)", "uvicorn (>=0.34.0)"]
granian = ["granian (>=2.3.1)"]
uvicorn = ["uvicorn (>=0.34.0)"]
[[package]]
name = "stagehand"
version = "0.5.1"
@@ -7361,6 +7473,28 @@ files = [
defusedxml = ">=0.7.1,<0.8.0"
requests = "*"
[[package]]
name = "yt-dlp"
version = "2025.12.8"
description = "A feature-rich command-line audio/video downloader"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "yt_dlp-2025.12.8-py3-none-any.whl", hash = "sha256:36e2584342e409cfbfa0b5e61448a1c5189e345cf4564294456ee509e7d3e065"},
{file = "yt_dlp-2025.12.8.tar.gz", hash = "sha256:b773c81bb6b71cb2c111cfb859f453c7a71cf2ef44eff234ff155877184c3e4f"},
]
[package.extras]
build = ["build", "hatchling (>=1.27.0)", "pip", "setuptools (>=71.0.2)", "wheel"]
curl-cffi = ["curl-cffi (>=0.5.10,<0.6.dev0 || >=0.10.dev0,<0.14) ; implementation_name == \"cpython\""]
default = ["brotli ; implementation_name == \"cpython\"", "brotlicffi ; implementation_name != \"cpython\"", "certifi", "mutagen", "pycryptodomex", "requests (>=2.32.2,<3)", "urllib3 (>=2.0.2,<3)", "websockets (>=13.0)", "yt-dlp-ejs (==0.3.2)"]
dev = ["autopep8 (>=2.0,<3.0)", "pre-commit", "pytest (>=8.1,<9.0)", "pytest-rerunfailures (>=14.0,<15.0)", "ruff (>=0.14.0,<0.15.0)"]
pyinstaller = ["pyinstaller (>=6.17.0)"]
secretstorage = ["cffi", "secretstorage"]
static-analysis = ["autopep8 (>=2.0,<3.0)", "ruff (>=0.14.0,<0.15.0)"]
test = ["pytest (>=8.1,<9.0)", "pytest-rerunfailures (>=14.0,<15.0)"]
[[package]]
name = "zerobouncesdk"
version = "1.1.2"
@@ -7512,4 +7646,4 @@ cffi = ["cffi (>=1.11)"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<3.14"
content-hash = "ee5742dc1a9df50dfc06d4b26a1682cbb2b25cab6b79ce5625ec272f93e4f4bf"
content-hash = "f79a5f01baf459195d6fd06be2515b83c60cf2aef11a16530842b47febb98a23"

View File

@@ -13,6 +13,7 @@ aio-pika = "^9.5.5"
aiohttp = "^3.10.0"
aiodns = "^3.5.0"
anthropic = "^0.59.0"
claude-agent-sdk = "^0.1.0"
apscheduler = "^3.11.1"
autogpt-libs = { path = "../autogpt_libs", develop = true }
bleach = { extras = ["css"], version = "^6.2.0" }
@@ -20,6 +21,7 @@ click = "^8.2.0"
cryptography = "^45.0"
discord-py = "^2.5.2"
e2b-code-interpreter = "^1.5.2"
elevenlabs = "^1.50.0"
fastapi = "^0.116.1"
feedparser = "^6.0.11"
flake8 = "^7.3.0"
@@ -71,6 +73,7 @@ tweepy = "^4.16.0"
uvicorn = { extras = ["standard"], version = "^0.35.0" }
websockets = "^15.0"
youtube-transcript-api = "^1.2.1"
yt-dlp = "2025.12.08"
zerobouncesdk = "^1.1.2"
# NOTE: please insert new dependencies in their alphabetical location
pytest-snapshot = "^0.9.0"

View File

@@ -30,7 +30,6 @@
"defaults"
],
"dependencies": {
"@ai-sdk/react": "3.0.61",
"@faker-js/faker": "10.0.0",
"@hookform/resolvers": "5.2.2",
"@next/third-parties": "15.4.6",
@@ -61,10 +60,6 @@
"@rjsf/utils": "6.1.2",
"@rjsf/validator-ajv8": "6.1.2",
"@sentry/nextjs": "10.27.0",
"@streamdown/cjk": "1.0.1",
"@streamdown/code": "1.0.1",
"@streamdown/math": "1.0.1",
"@streamdown/mermaid": "1.0.1",
"@supabase/ssr": "0.7.0",
"@supabase/supabase-js": "2.78.0",
"@tanstack/react-query": "5.90.6",
@@ -73,7 +68,6 @@
"@vercel/analytics": "1.5.0",
"@vercel/speed-insights": "1.2.0",
"@xyflow/react": "12.9.2",
"ai": "6.0.59",
"boring-avatars": "1.11.2",
"class-variance-authority": "0.7.1",
"clsx": "2.1.1",
@@ -118,11 +112,9 @@
"remark-math": "6.0.0",
"shepherd.js": "14.5.1",
"sonner": "2.0.7",
"streamdown": "2.1.0",
"tailwind-merge": "2.6.0",
"tailwind-scrollbar": "3.1.0",
"tailwindcss-animate": "1.0.7",
"use-stick-to-bottom": "1.1.2",
"uuid": "11.1.0",
"vaul": "1.1.2",
"zod": "3.25.76",

File diff suppressed because it is too large

View File

@@ -1,6 +1,6 @@
import { beautifyString } from "@/lib/utils";
import { Clipboard, Maximize2 } from "lucide-react";
import React, { useState } from "react";
import React, { useMemo, useState } from "react";
import { Button } from "../../../../../components/__legacy__/ui/button";
import { ContentRenderer } from "../../../../../components/__legacy__/ui/render";
import {
@@ -11,6 +11,12 @@ import {
TableHeader,
TableRow,
} from "../../../../../components/__legacy__/ui/table";
import type { OutputMetadata } from "@/components/contextual/OutputRenderers";
import {
globalRegistry,
OutputItem,
} from "@/components/contextual/OutputRenderers";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { useToast } from "../../../../../components/molecules/Toast/use-toast";
import ExpandableOutputDialog from "./ExpandableOutputDialog";
@@ -26,6 +32,9 @@ export default function DataTable({
data,
}: DataTableProps) {
const { toast } = useToast();
const enableEnhancedOutputHandling = useGetFlag(
Flag.ENABLE_ENHANCED_OUTPUT_HANDLING,
);
const [expandedDialog, setExpandedDialog] = useState<{
isOpen: boolean;
execId: string;
@@ -33,6 +42,15 @@ export default function DataTable({
data: any[];
} | null>(null);
// Prepare renderers for each item when enhanced mode is enabled
const getItemRenderer = useMemo(() => {
if (!enableEnhancedOutputHandling) return null;
return (item: unknown) => {
const metadata: OutputMetadata = {};
return globalRegistry.getRenderer(item, metadata);
};
}, [enableEnhancedOutputHandling]);
const copyData = (pin: string, data: string) => {
navigator.clipboard.writeText(data).then(() => {
toast({
@@ -102,15 +120,31 @@ export default function DataTable({
<Clipboard size={18} />
</Button>
</div>
{value.map((item, index) => (
<React.Fragment key={index}>
<ContentRenderer
value={item}
truncateLongData={truncateLongData}
/>
{index < value.length - 1 && ", "}
</React.Fragment>
))}
{value.map((item, index) => {
const renderer = getItemRenderer?.(item);
if (enableEnhancedOutputHandling && renderer) {
const metadata: OutputMetadata = {};
return (
<React.Fragment key={index}>
<OutputItem
value={item}
metadata={metadata}
renderer={renderer}
/>
{index < value.length - 1 && ", "}
</React.Fragment>
);
}
return (
<React.Fragment key={index}>
<ContentRenderer
value={item}
truncateLongData={truncateLongData}
/>
{index < value.length - 1 && ", "}
</React.Fragment>
);
})}
</div>
</TableCell>
</TableRow>

View File

@@ -1,8 +1,14 @@
import React, { useContext, useState } from "react";
import React, { useContext, useMemo, useState } from "react";
import { Button } from "@/components/__legacy__/ui/button";
import { Maximize2 } from "lucide-react";
import * as Separator from "@radix-ui/react-separator";
import { ContentRenderer } from "@/components/__legacy__/ui/render";
import type { OutputMetadata } from "@/components/contextual/OutputRenderers";
import {
globalRegistry,
OutputItem,
} from "@/components/contextual/OutputRenderers";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { beautifyString } from "@/lib/utils";
@@ -21,6 +27,9 @@ export default function NodeOutputs({
data,
}: NodeOutputsProps) {
const builderContext = useContext(BuilderContext);
const enableEnhancedOutputHandling = useGetFlag(
Flag.ENABLE_ENHANCED_OUTPUT_HANDLING,
);
const [expandedDialog, setExpandedDialog] = useState<{
isOpen: boolean;
@@ -37,6 +46,15 @@ export default function NodeOutputs({
const { getNodeTitle } = builderContext;
// Prepare renderers for each item when enhanced mode is enabled
const getItemRenderer = useMemo(() => {
if (!enableEnhancedOutputHandling) return null;
return (item: unknown) => {
const metadata: OutputMetadata = {};
return globalRegistry.getRenderer(item, metadata);
};
}, [enableEnhancedOutputHandling]);
const getBeautifiedPinName = (pin: string) => {
if (!pin.startsWith("tools_^_")) {
return beautifyString(pin);
@@ -87,15 +105,31 @@ export default function NodeOutputs({
<div className="mt-2">
<strong className="mr-2">Data:</strong>
<div className="mt-1">
{dataArray.slice(0, 10).map((item, index) => (
<React.Fragment key={index}>
<ContentRenderer
value={item}
truncateLongData={truncateLongData}
/>
{index < Math.min(dataArray.length, 10) - 1 && ", "}
</React.Fragment>
))}
{dataArray.slice(0, 10).map((item, index) => {
const renderer = getItemRenderer?.(item);
if (enableEnhancedOutputHandling && renderer) {
const metadata: OutputMetadata = {};
return (
<React.Fragment key={index}>
<OutputItem
value={item}
metadata={metadata}
renderer={renderer}
/>
{index < Math.min(dataArray.length, 10) - 1 && ", "}
</React.Fragment>
);
}
return (
<React.Fragment key={index}>
<ContentRenderer
value={item}
truncateLongData={truncateLongData}
/>
{index < Math.min(dataArray.length, 10) - 1 && ", "}
</React.Fragment>
);
})}
{dataArray.length > 10 && (
<span style={{ color: "#888" }}>
<br />

View File

@@ -1,71 +0,0 @@
"use client";
import { ChatInput } from "@/components/contextual/Chat/components/ChatInput/ChatInput";
import { UIDataTypes, UIMessage, UITools } from "ai";
import { LayoutGroup, motion } from "framer-motion";
import { ChatMessagesContainer } from "../ChatMessagesContainer/ChatMessagesContainer";
import { CopilotChatActionsProvider } from "../CopilotChatActionsProvider/CopilotChatActionsProvider";
import { EmptySession } from "../EmptySession/EmptySession";
export interface ChatContainerProps {
messages: UIMessage<unknown, UIDataTypes, UITools>[];
status: string;
error: Error | undefined;
sessionId: string | null;
isLoadingSession: boolean;
isCreatingSession: boolean;
onCreateSession: () => void | Promise<string>;
onSend: (message: string) => void | Promise<void>;
}
export const ChatContainer = ({
messages,
status,
error,
sessionId,
isLoadingSession,
isCreatingSession,
onCreateSession,
onSend,
}: ChatContainerProps) => {
const inputLayoutId = "copilot-2-chat-input";
return (
<CopilotChatActionsProvider onSend={onSend}>
<LayoutGroup id="copilot-2-chat-layout">
<div className="flex h-full min-h-0 w-full flex-col bg-[#f8f8f9] px-2 lg:px-0">
{sessionId ? (
<div className="mx-auto flex h-full min-h-0 w-full max-w-3xl flex-col">
<ChatMessagesContainer
messages={messages}
status={status}
error={error}
isLoading={isLoadingSession}
/>
<motion.div
layoutId={inputLayoutId}
transition={{ type: "spring", bounce: 0.2, duration: 0.65 }}
className="relative px-3 pb-2 pt-2"
>
<div className="pointer-events-none absolute left-0 right-0 top-[-18px] z-10 h-6 bg-gradient-to-b from-transparent to-[#f8f8f9]" />
<ChatInput
inputId="chat-input-session"
onSend={onSend}
disabled={status === "streaming"}
isStreaming={status === "streaming"}
onStop={() => {}}
placeholder="What else can I help with?"
/>
</motion.div>
</div>
) : (
<EmptySession
inputLayoutId={inputLayoutId}
isCreatingSession={isCreatingSession}
onCreateSession={onCreateSession}
onSend={onSend}
/>
)}
</div>
</LayoutGroup>
</CopilotChatActionsProvider>
);
};
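Both branches of this component mount ChatInput inside a motion.div carrying the same layoutId within one LayoutGroup, which is what makes the input glide from the hero position to the docked position when sessionId appears. A self-contained sketch of that shared-element pattern (names are illustrative):

import { LayoutGroup, motion } from "framer-motion";
import { useState } from "react";

export function SharedElementDemo() {
  const [docked, setDocked] = useState(false);
  return (
    <LayoutGroup id="demo-layout">
      <button onClick={() => setDocked((v) => !v)}>toggle</button>
      {docked ? (
        // Same layoutId as the other branch, so framer-motion animates
        // the element between the two layout positions on toggle.
        <motion.div layoutId="demo-input">input docked at bottom</motion.div>
      ) : (
        <motion.div layoutId="demo-input">input centered in hero</motion.div>
      )}
    </LayoutGroup>
  );
}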

View File

@@ -1,144 +0,0 @@
import {
Conversation,
ConversationContent,
ConversationScrollButton,
} from "@/components/ai-elements/conversation";
import {
Message,
MessageContent,
MessageResponse,
} from "@/components/ai-elements/message";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { UIDataTypes, UIMessage, UITools, ToolUIPart } from "ai";
import { FindBlocksTool } from "../../tools/FindBlocks/FindBlocks";
import { FindAgentsTool } from "../../tools/FindAgents/FindAgents";
import { SearchDocsTool } from "../../tools/SearchDocs/SearchDocs";
import { RunBlockTool } from "../../tools/RunBlock/RunBlock";
import { RunAgentTool } from "../../tools/RunAgent/RunAgent";
import { ViewAgentOutputTool } from "../../tools/ViewAgentOutput/ViewAgentOutput";
import { CreateAgentTool } from "../../tools/CreateAgent/CreateAgent";
import { EditAgentTool } from "../../tools/EditAgent/EditAgent";
interface ChatMessagesContainerProps {
messages: UIMessage<unknown, UIDataTypes, UITools>[];
status: string;
error: Error | undefined;
isLoading: boolean;
}
export const ChatMessagesContainer = ({
messages,
status,
error,
isLoading,
}: ChatMessagesContainerProps) => {
return (
<Conversation className="min-h-0 flex-1">
<ConversationContent className="gap-6 px-3 py-6">
{isLoading && messages.length === 0 && (
<div className="flex flex-1 items-center justify-center">
<LoadingSpinner size="large" className="text-neutral-400" />
</div>
)}
{messages.map((message) => (
<Message from={message.role} key={message.id}>
<MessageContent
className={
"text-[1rem] leading-relaxed " +
"group-[.is-user]:rounded-xl group-[.is-user]:bg-purple-100 group-[.is-user]:px-3 group-[.is-user]:py-2.5 group-[.is-user]:text-slate-900 group-[.is-user]:[border-bottom-right-radius:0] " +
"group-[.is-assistant]:bg-transparent group-[.is-assistant]:text-slate-900"
}
>
{message.parts.map((part, i) => {
switch (part.type) {
case "text":
return (
<MessageResponse key={`${message.id}-${i}`}>
{part.text}
</MessageResponse>
);
case "tool-find_block":
return (
<FindBlocksTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-find_agent":
case "tool-find_library_agent":
return (
<FindAgentsTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-search_docs":
case "tool-get_doc_page":
return (
<SearchDocsTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-run_block":
return (
<RunBlockTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-run_agent":
case "tool-schedule_agent":
return (
<RunAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-create_agent":
return (
<CreateAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-edit_agent":
return (
<EditAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-view_agent_output":
return (
<ViewAgentOutputTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
default:
return null;
}
})}
</MessageContent>
</Message>
))}
{status === "submitted" && (
<Message from="assistant">
<MessageContent className="text-[1rem] leading-relaxed">
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
Thinking...
</span>
</MessageContent>
</Message>
)}
{error && (
<div className="rounded-lg bg-red-50 p-3 text-red-600">
Error: {error.message}
</div>
)}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
);
};
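The switch routes part.type to a tool component, with several types sharing one component (find_agent/find_library_agent, search_docs/get_doc_page, run_agent/schedule_agent). The same routing can be written as a lookup table, which keeps the render body flat; a sketch assuming every tool component takes a { part: ToolUIPart } prop, as they do here:

import type { ComponentType } from "react";
import type { ToolUIPart } from "ai";

type ToolComponent = ComponentType<{ part: ToolUIPart }>;

// Stand-ins for the components imported at the top of this file.
declare const FindBlocksTool: ToolComponent;
declare const FindAgentsTool: ToolComponent;
declare const SearchDocsTool: ToolComponent;

const TOOL_COMPONENTS: Record<string, ToolComponent> = {
  "tool-find_block": FindBlocksTool,
  "tool-find_agent": FindAgentsTool,
  "tool-find_library_agent": FindAgentsTool,
  "tool-search_docs": SearchDocsTool,
  "tool-get_doc_page": SearchDocsTool,
};

function renderToolPart(part: ToolUIPart, key: string) {
  const ToolComponentForPart = TOOL_COMPONENTS[part.type];
  return ToolComponentForPart ? <ToolComponentForPart key={key} part={part} /> : null;
}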

View File

@@ -1,191 +0,0 @@
"use client";
import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import {
Sidebar,
SidebarContent,
SidebarFooter,
SidebarHeader,
SidebarTrigger,
useSidebar,
} from "@/components/ui/sidebar";
import { cn } from "@/lib/utils";
import {
PlusCircleIcon,
PlusIcon,
SpinnerGapIcon,
} from "@phosphor-icons/react";
import { motion } from "framer-motion";
import { parseAsString, useQueryState } from "nuqs";
export function ChatSidebar() {
const { state } = useSidebar();
const isCollapsed = state === "collapsed";
const [sessionId, setSessionId] = useQueryState("sessionId", parseAsString);
const { data: sessionsResponse, isLoading: isLoadingSessions } =
useGetV2ListSessions({ limit: 50 });
const sessions =
sessionsResponse?.status === 200 ? sessionsResponse.data.sessions : [];
function handleNewChat() {
setSessionId(null);
}
function handleSelectSession(id: string) {
setSessionId(id);
}
function formatDate(dateString: string) {
const date = new Date(dateString);
const now = new Date();
const diffMs = now.getTime() - date.getTime();
const diffDays = Math.floor(diffMs / (1000 * 60 * 60 * 24));
if (diffDays === 0) return "Today";
if (diffDays === 1) return "Yesterday";
if (diffDays < 7) return `${diffDays} days ago`;
const day = date.getDate();
const ordinal =
day % 10 === 1 && day !== 11
? "st"
: day % 10 === 2 && day !== 12
? "nd"
: day % 10 === 3 && day !== 13
? "rd"
: "th";
const month = date.toLocaleDateString("en-US", { month: "short" });
const year = date.getFullYear();
return `${day}${ordinal} ${month} ${year}`;
}
return (
<Sidebar
variant="inset"
collapsible="icon"
className="!top-[50px] !h-[calc(100vh-50px)] border-r border-zinc-100 px-0"
>
{isCollapsed && (
<SidebarHeader
className={cn(
"flex",
isCollapsed
? "flex-row items-center justify-between gap-y-4 md:flex-col md:items-start md:justify-start"
: "flex-row items-center justify-between",
)}
>
<motion.div
key={isCollapsed ? "header-collapsed" : "header-expanded"}
className="flex flex-col items-center gap-3 pt-4"
initial={{ opacity: 0, filter: "blur(3px)" }}
animate={{ opacity: 1, filter: "blur(0px)" }}
transition={{ type: "spring", bounce: 0.2 }}
>
<div className="flex flex-col items-center gap-2">
<SidebarTrigger />
<Button
variant="ghost"
onClick={handleNewChat}
style={{ minWidth: "auto", width: "auto" }}
>
<PlusCircleIcon className="!size-5" />
<span className="sr-only">New Chat</span>
</Button>
</div>
</motion.div>
</SidebarHeader>
)}
<SidebarContent className="gap-4 overflow-y-auto px-4 py-4 [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden">
{!isCollapsed && (
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.2, delay: 0.1 }}
className="flex items-center justify-between px-3"
>
<Text variant="h3" size="body-medium">
Your chats
</Text>
<div className="relative left-6">
<SidebarTrigger />
</div>
</motion.div>
)}
{!isCollapsed && (
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.2, delay: 0.15 }}
className="mt-4 flex flex-col gap-1"
>
{isLoadingSessions ? (
<div className="flex items-center justify-center py-4">
<SpinnerGapIcon className="h-5 w-5 animate-spin text-neutral-400" />
</div>
) : sessions.length === 0 ? (
<p className="py-4 text-center text-sm text-neutral-500">
No conversations yet
</p>
) : (
sessions.map((session) => (
<button
key={session.id}
onClick={() => handleSelectSession(session.id)}
className={cn(
"w-full rounded-lg px-3 py-2.5 text-left transition-colors",
session.id === sessionId
? "bg-zinc-100"
: "hover:bg-zinc-50",
)}
>
<div className="flex min-w-0 max-w-full flex-col overflow-hidden">
<div className="min-w-0 max-w-full">
<Text
variant="body"
className={cn(
"truncate font-normal",
session.id === sessionId
? "text-zinc-600"
: "text-zinc-800",
)}
>
{session.title || `Untitled chat`}
</Text>
</div>
<Text variant="small" className="text-neutral-400">
{formatDate(session.updated_at)}
</Text>
</div>
</button>
))
)}
</motion.div>
)}
</SidebarContent>
{!isCollapsed && sessionId && (
<SidebarFooter className="shrink-0 bg-zinc-50 p-3 pb-1 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.2, delay: 0.2 }}
>
<Button
variant="primary"
size="small"
onClick={handleNewChat}
className="w-full"
leftIcon={<PlusIcon className="h-4 w-4" weight="bold" />}
>
New Chat
</Button>
</motion.div>
</SidebarFooter>
)}
</Sidebar>
);
}
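For reference, formatDate buckets anything under a week into relative labels and otherwise renders an ordinal date. With "now" at 6 Feb 2026, the module-private helper would produce (calls shown directly for illustration):

formatDate("2026-02-06T09:00:00Z"); // "Today"
formatDate("2026-02-05T09:00:00Z"); // "Yesterday"
formatDate("2026-02-03T09:00:00Z"); // "3 days ago"
formatDate("2026-01-22T09:00:00Z"); // "22nd Jan 2026"
formatDate("2026-01-01T09:00:00Z"); // "1st Jan 2026"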

View File

@@ -1,16 +0,0 @@
"use client";
import { CopilotChatActionsContext } from "./useCopilotChatActions";
interface Props {
onSend: (message: string) => void | Promise<void>;
children: React.ReactNode;
}
export function CopilotChatActionsProvider({ onSend, children }: Props) {
return (
<CopilotChatActionsContext.Provider value={{ onSend }}>
{children}
</CopilotChatActionsContext.Provider>
);
}

View File

@@ -1,23 +0,0 @@
"use client";
import { createContext, useContext } from "react";
interface CopilotChatActions {
onSend: (message: string) => void | Promise<void>;
}
const CopilotChatActionsContext = createContext<CopilotChatActions | null>(
null,
);
export function useCopilotChatActions(): CopilotChatActions {
const ctx = useContext(CopilotChatActionsContext);
if (!ctx) {
throw new Error(
"useCopilotChatActions must be used within CopilotChatActionsProvider",
);
}
return ctx;
}
export { CopilotChatActionsContext };
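The provider/hook pair is the standard non-null context pattern: the hook throws outside the provider, so consumers can rely on onSend being defined. A minimal consumer (hypothetical component):

import { useCopilotChatActions } from "./useCopilotChatActions";

// Must render inside CopilotChatActionsProvider, or the hook throws.
export function QuickReplyButton({ text }: { text: string }) {
  const { onSend } = useCopilotChatActions();
  return (
    <button type="button" onClick={() => void onSend(text)}>
      {text}
    </button>
  );
}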

View File

@@ -1,111 +0,0 @@
"use client";
import {
getGreetingName,
getInputPlaceholder,
getQuickActions,
} from "@/app/(platform)/copilot/helpers";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { ChatInput } from "@/components/contextual/Chat/components/ChatInput/ChatInput";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { SpinnerGapIcon } from "@phosphor-icons/react";
import { motion } from "framer-motion";
import { useEffect, useState } from "react";
interface Props {
inputLayoutId: string;
isCreatingSession: boolean;
onCreateSession: () => void | Promise<string>;
onSend: (message: string) => void | Promise<void>;
}
export function EmptySession({
inputLayoutId,
isCreatingSession,
onSend,
}: Props) {
const { user } = useSupabase();
const greetingName = getGreetingName(user);
const quickActions = getQuickActions();
const [loadingAction, setLoadingAction] = useState<string | null>(null);
const [inputPlaceholder, setInputPlaceholder] = useState(
getInputPlaceholder(),
);
useEffect(() => {
// Runs once after mount; `window` is unavailable during SSR, and
// `window.innerWidth` is not a valid reactive dependency.
setInputPlaceholder(getInputPlaceholder(window.innerWidth));
}, []);
async function handleQuickActionClick(action: string) {
if (isCreatingSession || loadingAction) return;
setLoadingAction(action);
try {
await onSend(action);
} finally {
setLoadingAction(null);
}
}
return (
<div className="flex h-full flex-1 items-center justify-center overflow-y-auto bg-[#f8f8f9] px-0 py-5 md:px-6 md:py-10">
<motion.div
className="w-full max-w-3xl text-center"
initial={{ opacity: 0, y: 14, filter: "blur(6px)" }}
animate={{ opacity: 1, y: 0, filter: "blur(0px)" }}
transition={{ type: "spring", bounce: 0.2, duration: 0.7 }}
>
<div className="mx-auto max-w-3xl">
<Text variant="h3" className="mb-1 !text-[1.375rem] text-zinc-700">
Hey, <span className="text-violet-600">{greetingName}</span>
</Text>
<Text variant="h3" className="mb-8 !font-normal">
Tell me about your work, I&apos;ll find what to automate.
</Text>
<div className="mb-6">
<motion.div
layoutId={inputLayoutId}
transition={{ type: "spring", bounce: 0.2, duration: 0.65 }}
className="w-full px-2"
>
<ChatInput
inputId="chat-input-empty"
onSend={onSend}
disabled={isCreatingSession}
placeholder={inputPlaceholder}
className="w-full"
/>
</motion.div>
</div>
</div>
<div className="flex flex-wrap items-center justify-center gap-3 overflow-x-auto [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden">
{quickActions.map((action) => (
<Button
key={action}
type="button"
variant="outline"
size="small"
onClick={() => void handleQuickActionClick(action)}
disabled={isCreatingSession || loadingAction !== null}
aria-busy={loadingAction === action}
leftIcon={
loadingAction === action ? (
<SpinnerGapIcon
className="h-4 w-4 animate-spin"
weight="bold"
/>
) : null
}
className="h-auto shrink-0 border-zinc-300 px-3 py-2 text-[.9rem] text-zinc-600"
>
{action}
</Button>
))}
</div>
</motion.div>
</div>
);
}

View File

@@ -1,11 +0,0 @@
export function getInputPlaceholder(width?: number) {
if (!width) return "What's your role and what eats up most of your day?";
if (width < 500) {
return "I'm a chef and I hate...";
}
if (width <= 1080) {
return "What's your role and what eats up most of your day?";
}
return "What's your role and what eats up most of your day? e.g. 'I'm a recruiter and I hate...'";
}
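The breakpoints in practice, including the boundary values (width 500 and 1080 both take the mid-length branch; only 1081 and above gets the long form):

import { getInputPlaceholder } from "@/app/(platform)/copilot/helpers";

getInputPlaceholder();     // undefined width (e.g. SSR): default placeholder
getInputPlaceholder(375);  // "I'm a chef and I hate..."
getInputPlaceholder(500);  // "What's your role and what eats up most of your day?"
getInputPlaceholder(1080); // same mid-length placeholder
getInputPlaceholder(1440); // long form with the "I'm a recruiter..." example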

View File

@@ -1,140 +0,0 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { PlusIcon, SpinnerGapIcon, X } from "@phosphor-icons/react";
import { Drawer } from "vaul";
interface Props {
isOpen: boolean;
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
onSelectSession: (sessionId: string) => void;
onNewChat: () => void;
onClose: () => void;
onOpenChange: (open: boolean) => void;
}
function formatDate(dateString: string) {
const date = new Date(dateString);
const now = new Date();
const diffMs = now.getTime() - date.getTime();
const diffDays = Math.floor(diffMs / (1000 * 60 * 60 * 24));
if (diffDays === 0) return "Today";
if (diffDays === 1) return "Yesterday";
if (diffDays < 7) return `${diffDays} days ago`;
const day = date.getDate();
const ordinal =
day % 10 === 1 && day !== 11
? "st"
: day % 10 === 2 && day !== 12
? "nd"
: day % 10 === 3 && day !== 13
? "rd"
: "th";
const month = date.toLocaleDateString("en-US", { month: "short" });
const year = date.getFullYear();
return `${day}${ordinal} ${month} ${year}`;
}
export function MobileDrawer({
isOpen,
sessions,
currentSessionId,
isLoading,
onSelectSession,
onNewChat,
onClose,
onOpenChange,
}: Props) {
return (
<Drawer.Root open={isOpen} onOpenChange={onOpenChange} direction="left">
<Drawer.Portal>
<Drawer.Overlay className="fixed inset-0 z-[60] bg-black/10 backdrop-blur-sm" />
<Drawer.Content className="fixed left-0 top-0 z-[70] flex h-full w-80 flex-col border-r border-zinc-200 bg-zinc-50">
<div className="shrink-0 border-b border-zinc-200 px-4 py-2">
<div className="flex items-center justify-between">
<Drawer.Title className="text-lg font-semibold text-zinc-800">
Your chats
</Drawer.Title>
<Button
variant="icon"
size="icon"
aria-label="Close sessions"
onClick={onClose}
>
<X width="1rem" height="1rem" />
</Button>
</div>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col gap-1 overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
{isLoading ? (
<div className="flex items-center justify-center py-4">
<SpinnerGapIcon className="h-5 w-5 animate-spin text-neutral-400" />
</div>
) : sessions.length === 0 ? (
<p className="py-4 text-center text-sm text-neutral-500">
No conversations yet
</p>
) : (
sessions.map((session) => (
<button
key={session.id}
onClick={() => onSelectSession(session.id)}
className={cn(
"w-full rounded-lg px-3 py-2.5 text-left transition-colors",
session.id === currentSessionId
? "bg-zinc-100"
: "hover:bg-zinc-50",
)}
>
<div className="flex min-w-0 max-w-full flex-col overflow-hidden">
<div className="min-w-0 max-w-full">
<Text
variant="body"
className={cn(
"truncate font-normal",
session.id === currentSessionId
? "text-zinc-600"
: "text-zinc-800",
)}
>
{session.title || "Untitled chat"}
</Text>
</div>
<Text variant="small" className="text-neutral-400">
{formatDate(session.updated_at)}
</Text>
</div>
</button>
))
)}
</div>
{currentSessionId && (
<div className="shrink-0 bg-white p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<PlusIcon width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
)}
</Drawer.Content>
</Drawer.Portal>
</Drawer.Root>
);
}
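This formatDate is byte-for-byte the same as the copy in ChatSidebar; both files are deleted in this PR, but had they stayed, hoisting the helper would remove the duplication. A sketch of that extraction (module path hypothetical):

// lib/chat-dates.ts (hypothetical shared location)
export function formatRelativeDate(dateString: string): string {
  const date = new Date(dateString);
  // 86_400_000 ms per day, same arithmetic as the inline copies.
  const diffDays = Math.floor((Date.now() - date.getTime()) / 86_400_000);
  if (diffDays === 0) return "Today";
  if (diffDays === 1) return "Yesterday";
  if (diffDays < 7) return `${diffDays} days ago`;
  const day = date.getDate();
  const ordinal =
    day % 10 === 1 && day !== 11
      ? "st"
      : day % 10 === 2 && day !== 12
        ? "nd"
        : day % 10 === 3 && day !== 13
          ? "rd"
          : "th";
  const month = date.toLocaleDateString("en-US", { month: "short" });
  return `${day}${ordinal} ${month} ${date.getFullYear()}`;
}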

View File

@@ -1,22 +0,0 @@
import { Button } from "@/components/atoms/Button/Button";
import { NAVBAR_HEIGHT_PX } from "@/lib/constants";
import { ListIcon } from "@phosphor-icons/react";
interface Props {
onOpenDrawer: () => void;
}
export function MobileHeader({ onOpenDrawer }: Props) {
return (
<Button
variant="icon"
size="icon"
aria-label="Open sessions"
onClick={onOpenDrawer}
className="fixed z-50 bg-white shadow-md"
style={{ left: "1rem", top: `${NAVBAR_HEIGHT_PX + 20}px` }}
>
<ListIcon width="1.25rem" height="1.25rem" />
</Button>
);
}

View File

@@ -1,54 +0,0 @@
import { cn } from "@/lib/utils";
import { AnimatePresence, motion } from "framer-motion";
interface Props {
text: string;
className?: string;
}
export function MorphingTextAnimation({ text, className }: Props) {
const letters = text.split("");
return (
<div className={cn(className)}>
<AnimatePresence mode="popLayout" initial={false}>
<motion.div key={text} className="whitespace-nowrap">
<motion.span className="inline-flex overflow-hidden">
{letters.map((char, index) => (
<motion.span
key={`${text}-${index}`}
initial={{
opacity: 0,
y: 8,
rotateX: "80deg",
filter: "blur(6px)",
}}
animate={{
opacity: 1,
y: 0,
rotateX: "0deg",
filter: "blur(0px)",
}}
exit={{
opacity: 0,
y: -8,
rotateX: "-80deg",
filter: "blur(6px)",
}}
style={{ willChange: "transform" }}
transition={{
delay: 0.015 * index,
type: "spring",
bounce: 0.5,
}}
className="inline-block"
>
{char === " " ? "\u00A0" : char}
</motion.span>
))}
</motion.span>
</motion.div>
</AnimatePresence>
</div>
);
}
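Each character animates independently with a 15 ms stagger (delay: 0.015 * index), keyed on the full string so AnimatePresence in popLayout mode can animate the outgoing text while the new text enters; spaces become non-breaking spaces so the inline-block spans keep their width. Usage is just:

// Changing `text` on re-render morphs the old string out, letter by letter.
<MorphingTextAnimation text="Searching for blocks" className="text-sm" />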

View File

@@ -1,92 +0,0 @@
"use client";
import { cn } from "@/lib/utils";
import { CaretDownIcon } from "@phosphor-icons/react";
import { AnimatePresence, motion, useReducedMotion } from "framer-motion";
import { useId } from "react";
import { useToolAccordion } from "./useToolAccordion";
interface Props {
badgeText: string;
title: React.ReactNode;
description?: React.ReactNode;
children: React.ReactNode;
className?: string;
defaultExpanded?: boolean;
expanded?: boolean;
onExpandedChange?: (expanded: boolean) => void;
}
export function ToolAccordion({
badgeText,
title,
description,
children,
className,
defaultExpanded,
expanded,
onExpandedChange,
}: Props) {
const shouldReduceMotion = useReducedMotion();
const contentId = useId();
const { isExpanded, toggle } = useToolAccordion({
expanded,
defaultExpanded,
onExpandedChange,
});
return (
<div className={cn("mt-2 w-full rounded-lg border px-3 py-2", className)}>
<button
type="button"
aria-expanded={isExpanded}
aria-controls={contentId}
onClick={toggle}
className="flex w-full items-center justify-between gap-3 py-1 text-left"
>
<div className="flex min-w-0 items-center gap-2">
<span className="px-2 py-0.5 text-[11px] font-medium text-muted-foreground">
{badgeText}
</span>
<div className="min-w-0">
<p className="truncate text-sm font-medium text-foreground">
{title}
</p>
{description && (
<p className="truncate text-xs text-muted-foreground">
{description}
</p>
)}
</div>
</div>
<CaretDownIcon
className={cn(
"h-4 w-4 shrink-0 text-muted-foreground transition-transform",
isExpanded && "rotate-180",
)}
weight="bold"
/>
</button>
<AnimatePresence initial={false}>
{isExpanded && (
<motion.div
id={contentId}
initial={{ height: 0, opacity: 0, filter: "blur(10px)" }}
animate={{ height: "auto", opacity: 1, filter: "blur(0px)" }}
exit={{ height: 0, opacity: 0, filter: "blur(10px)" }}
transition={
shouldReduceMotion
? { duration: 0 }
: { type: "spring", bounce: 0.35, duration: 0.55 }
}
className="overflow-hidden"
style={{ willChange: "height, opacity, filter" }}
>
<div className="pb-2 pt-3">{children}</div>
</motion.div>
)}
</AnimatePresence>
</div>
);
}

View File

@@ -1,32 +0,0 @@
import { useState } from "react";
interface UseToolAccordionOptions {
expanded?: boolean;
defaultExpanded?: boolean;
onExpandedChange?: (expanded: boolean) => void;
}
interface UseToolAccordionResult {
isExpanded: boolean;
toggle: () => void;
}
export function useToolAccordion({
expanded,
defaultExpanded = false,
onExpandedChange,
}: UseToolAccordionOptions): UseToolAccordionResult {
const [uncontrolledExpanded, setUncontrolledExpanded] =
useState(defaultExpanded);
const isControlled = typeof expanded === "boolean";
const isExpanded = isControlled ? expanded : uncontrolledExpanded;
function toggle() {
const next = !isExpanded;
if (!isControlled) setUncontrolledExpanded(next);
onExpandedChange?.(next);
}
return { isExpanded, toggle };
}
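useToolAccordion implements the controlled/uncontrolled convention: a boolean expanded prop makes the parent the source of truth, omitting it lets the hook own the state, and onExpandedChange fires in both modes. Both usages of ToolAccordion, sketched:

import { useState } from "react";
import { ToolAccordion } from "./ToolAccordion"; // path illustrative

export function AccordionModesDemo() {
  // Controlled mode: the parent owns the expanded state.
  const [open, setOpen] = useState(false);
  return (
    <>
      {/* Uncontrolled: the hook keeps internal state, starting expanded. */}
      <ToolAccordion badgeText="Demo" title="Uncontrolled" defaultExpanded>
        <p>body</p>
      </ToolAccordion>
      {/* Controlled: `expanded` wins; onExpandedChange must write it back. */}
      <ToolAccordion
        badgeText="Demo"
        title="Controlled"
        expanded={open}
        onExpandedChange={setOpen}
      >
        <p>body</p>
      </ToolAccordion>
    </>
  );
}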

View File

@@ -1,128 +0,0 @@
import type { UIMessage, UIDataTypes, UITools } from "ai";
interface SessionChatMessage {
role: string;
content: string | null;
tool_call_id: string | null;
tool_calls: unknown[] | null;
}
function coerceSessionChatMessages(
rawMessages: unknown[],
): SessionChatMessage[] {
return rawMessages
.map((m) => {
if (!m || typeof m !== "object") return null;
const msg = m as Record<string, unknown>;
const role = typeof msg.role === "string" ? msg.role : null;
if (!role) return null;
return {
role,
content:
typeof msg.content === "string"
? msg.content
: msg.content == null
? null
: String(msg.content),
tool_call_id:
typeof msg.tool_call_id === "string"
? msg.tool_call_id
: msg.tool_call_id == null
? null
: String(msg.tool_call_id),
tool_calls: Array.isArray(msg.tool_calls) ? msg.tool_calls : null,
};
})
.filter((m): m is SessionChatMessage => m !== null);
}
function safeJsonParse(value: string): unknown {
try {
return JSON.parse(value) as unknown;
} catch {
return value;
}
}
function toToolInput(rawArguments: unknown): unknown {
if (typeof rawArguments === "string") {
const trimmed = rawArguments.trim();
return trimmed ? safeJsonParse(trimmed) : {};
}
if (rawArguments && typeof rawArguments === "object") return rawArguments;
return {};
}
export function convertChatSessionMessagesToUiMessages(
sessionId: string,
rawMessages: unknown[],
): UIMessage<unknown, UIDataTypes, UITools>[] {
const messages = coerceSessionChatMessages(rawMessages);
const toolOutputsByCallId = new Map<string, unknown>();
for (const msg of messages) {
if (msg.role !== "tool") continue;
if (!msg.tool_call_id) continue;
if (msg.content == null) continue;
toolOutputsByCallId.set(msg.tool_call_id, msg.content);
}
const uiMessages: UIMessage<unknown, UIDataTypes, UITools>[] = [];
messages.forEach((msg, index) => {
if (msg.role === "tool") return;
if (msg.role !== "user" && msg.role !== "assistant") return;
const parts: UIMessage<unknown, UIDataTypes, UITools>["parts"] = [];
if (typeof msg.content === "string" && msg.content.trim()) {
parts.push({ type: "text", text: msg.content, state: "done" });
}
if (msg.role === "assistant" && Array.isArray(msg.tool_calls)) {
for (const rawToolCall of msg.tool_calls) {
if (!rawToolCall || typeof rawToolCall !== "object") continue;
const toolCall = rawToolCall as {
id?: unknown;
function?: { name?: unknown; arguments?: unknown };
};
const toolCallId = String(toolCall.id ?? "").trim();
const toolName = String(toolCall.function?.name ?? "").trim();
if (!toolCallId || !toolName) continue;
const input = toToolInput(toolCall.function?.arguments);
const output = toolOutputsByCallId.get(toolCallId);
if (output !== undefined) {
parts.push({
type: `tool-${toolName}`,
toolCallId,
state: "output-available",
input,
output: typeof output === "string" ? safeJsonParse(output) : output,
});
} else {
parts.push({
type: `tool-${toolName}`,
toolCallId,
state: "input-available",
input,
});
}
}
}
if (parts.length === 0) return;
uiMessages.push({
id: `${sessionId}-${index}`,
role: msg.role,
parts,
});
});
return uiMessages;
}
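Concretely, the converter pairs assistant tool_calls with tool-role messages sharing the same tool_call_id, so a three-message backend transcript collapses into two UI messages. An illustrative round trip (module path hypothetical):

import { convertChatSessionMessagesToUiMessages } from "./messageConversion";

const raw = [
  { role: "user", content: "Find me a block", tool_call_id: null, tool_calls: null },
  {
    role: "assistant",
    content: null,
    tool_call_id: null,
    tool_calls: [
      { id: "call_1", function: { name: "find_block", arguments: '{"query":"email"}' } },
    ],
  },
  { role: "tool", content: '{"count":2}', tool_call_id: "call_1", tool_calls: null },
];

convertChatSessionMessagesToUiMessages("sess_1", raw);
// => [
//   { id: "sess_1-0", role: "user",
//     parts: [{ type: "text", text: "Find me a block", state: "done" }] },
//   { id: "sess_1-1", role: "assistant",
//     parts: [{ type: "tool-find_block", toolCallId: "call_1",
//               state: "output-available", input: { query: "email" },
//               output: { count: 2 } }] },
// ]
// The tool-role message itself is dropped; its content became `output` above.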

View File

@@ -1,67 +0,0 @@
"use client";
import { SidebarProvider } from "@/components/ui/sidebar";
import { ChatContainer } from "./components/ChatContainer/ChatContainer";
import { ChatSidebar } from "./components/ChatSidebar/ChatSidebar";
import { MobileDrawer } from "./components/MobileDrawer/MobileDrawer";
import { MobileHeader } from "./components/MobileHeader/MobileHeader";
import { useCopilotPage } from "./useCopilotPage";
export default function Page() {
const {
sessionId,
messages,
status,
error,
isLoadingSession,
isCreatingSession,
createSession,
onSend,
// Mobile drawer
isMobile,
isDrawerOpen,
sessions,
isLoadingSessions,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleSelectSession,
handleNewChat,
} = useCopilotPage();
return (
<SidebarProvider
defaultOpen={true}
className="h-[calc(100vh-72px)] min-h-0"
>
{!isMobile && <ChatSidebar />}
<div className="relative flex h-full w-full flex-col overflow-hidden bg-[#f8f8f9] px-0">
{isMobile && <MobileHeader onOpenDrawer={handleOpenDrawer} />}
<div className="flex-1 overflow-hidden">
<ChatContainer
messages={messages}
status={status}
error={error}
sessionId={sessionId}
isLoadingSession={isLoadingSession}
isCreatingSession={isCreatingSession}
onCreateSession={createSession}
onSend={onSend}
/>
</div>
</div>
{isMobile && (
<MobileDrawer
isOpen={isDrawerOpen}
sessions={sessions}
currentSessionId={sessionId}
isLoading={isLoadingSessions}
onSelectSession={handleSelectSession}
onNewChat={handleNewChat}
onClose={handleCloseDrawer}
onOpenChange={handleDrawerOpenChange}
/>
)}
</SidebarProvider>
);
}

View File

@@ -1,218 +0,0 @@
"use client";
import type { ToolUIPart } from "ai";
import Link from "next/link";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import {
ClarificationQuestionsWidget,
type ClarifyingQuestion as WidgetClarifyingQuestion,
} from "@/components/contextual/Chat/components/ClarificationQuestionsWidget/ClarificationQuestionsWidget";
import {
formatMaybeJson,
getAnimationText,
getCreateAgentToolOutput,
isAgentPreviewOutput,
isAgentSavedOutput,
isClarificationNeededOutput,
isErrorOutput,
isOperationInProgressOutput,
isOperationPendingOutput,
isOperationStartedOutput,
ToolIcon,
truncateText,
type CreateAgentToolOutput,
} from "./helpers";
export interface CreateAgentToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: CreateAgentToolPart;
}
function getAccordionMeta(output: CreateAgentToolOutput): {
badgeText: string;
title: string;
description?: string;
} {
if (isAgentSavedOutput(output)) {
return { badgeText: "Create agent", title: output.agent_name };
}
if (isAgentPreviewOutput(output)) {
return {
badgeText: "Create agent",
title: output.agent_name,
description: `${output.node_count} block${output.node_count === 1 ? "" : "s"}`,
};
}
if (isClarificationNeededOutput(output)) {
const questions = output.questions ?? [];
return {
badgeText: "Create agent",
title: "Needs clarification",
description: `${questions.length} question${questions.length === 1 ? "" : "s"}`,
};
}
if (
isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output)
) {
return { badgeText: "Create agent", title: "Creating agent" };
}
return { badgeText: "Create agent", title: "Error" };
}
export function CreateAgentTool({ part }: Props) {
const text = getAnimationText(part);
const { onSend } = useCopilotChatActions();
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getCreateAgentToolOutput(part);
const isError =
part.state === "output-error" || (!!output && isErrorOutput(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output) ||
isAgentPreviewOutput(output) ||
isAgentSavedOutput(output) ||
isClarificationNeededOutput(output) ||
isErrorOutput(output));
function handleClarificationAnswers(answers: Record<string, string>) {
const contextMessage = Object.entries(answers)
.map(([keyword, answer]) => `${keyword}: ${answer}`)
.join("\n");
onSend(
`I have the answers to your questions:\n\n${contextMessage}\n\nPlease proceed with creating the agent.`,
);
}
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion
{...getAccordionMeta(output)}
defaultExpanded={isClarificationNeededOutput(output)}
>
{(isOperationStartedOutput(output) ||
isOperationPendingOutput(output)) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
<p className="text-xs text-muted-foreground">
Operation: {output.operation_id}
</p>
<p className="text-xs italic text-muted-foreground">
Check your library in a few minutes.
</p>
</div>
)}
{isOperationInProgressOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
<p className="text-xs italic text-muted-foreground">
Please wait for the current operation to finish.
</p>
</div>
)}
{isAgentSavedOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
<div className="flex flex-wrap gap-2">
<Link
href={output.library_agent_link}
className="text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open in library
</Link>
<Link
href={output.agent_page_link}
className="text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open in builder
</Link>
</div>
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{truncateText(
formatMaybeJson({ agent_id: output.agent_id }),
800,
)}
</pre>
</div>
)}
{isAgentPreviewOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.description?.trim() && (
<p className="text-xs text-muted-foreground">
{output.description}
</p>
)}
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{truncateText(formatMaybeJson(output.agent_json), 1600)}
</pre>
</div>
)}
{isClarificationNeededOutput(output) && (
<ClarificationQuestionsWidget
questions={(output.questions ?? []).map((q) => {
const item: WidgetClarifyingQuestion = {
question: q.question,
keyword: q.keyword,
};
const example =
typeof q.example === "string" && q.example.trim()
? q.example.trim()
: null;
if (example) item.example = example;
return item;
})}
message={output.message}
onSubmitAnswers={handleClarificationAnswers}
/>
)}
{isErrorOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.error && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.error)}
</pre>
)}
{output.details && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.details)}
</pre>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,181 +0,0 @@
import type { ToolUIPart } from "ai";
import { PlusIcon } from "@phosphor-icons/react";
import type { AgentPreviewResponse } from "@/app/api/__generated__/models/agentPreviewResponse";
import type { AgentSavedResponse } from "@/app/api/__generated__/models/agentSavedResponse";
import type { ClarificationNeededResponse } from "@/app/api/__generated__/models/clarificationNeededResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { OperationInProgressResponse } from "@/app/api/__generated__/models/operationInProgressResponse";
import type { OperationPendingResponse } from "@/app/api/__generated__/models/operationPendingResponse";
import type { OperationStartedResponse } from "@/app/api/__generated__/models/operationStartedResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
export type CreateAgentToolOutput =
| OperationStartedResponse
| OperationPendingResponse
| OperationInProgressResponse
| AgentPreviewResponse
| AgentSavedResponse
| ClarificationNeededResponse
| ErrorResponse;
function parseOutput(output: unknown): CreateAgentToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.operation_started ||
type === ResponseType.operation_pending ||
type === ResponseType.operation_in_progress ||
type === ResponseType.agent_preview ||
type === ResponseType.agent_saved ||
type === ResponseType.clarification_needed ||
type === ResponseType.error
) {
return output as CreateAgentToolOutput;
}
if ("operation_id" in output && "tool_name" in output)
return output as OperationStartedResponse | OperationPendingResponse;
if ("tool_call_id" in output) return output as OperationInProgressResponse;
if ("agent_json" in output && "agent_name" in output)
return output as AgentPreviewResponse;
if ("agent_id" in output && "library_agent_id" in output)
return output as AgentSavedResponse;
if ("questions" in output) return output as ClarificationNeededResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getCreateAgentToolOutput(
part: unknown,
): CreateAgentToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function isOperationStartedOutput(
output: CreateAgentToolOutput,
): output is OperationStartedResponse {
return (
output.type === ResponseType.operation_started ||
("operation_id" in output && "tool_name" in output)
);
}
export function isOperationPendingOutput(
output: CreateAgentToolOutput,
): output is OperationPendingResponse {
return output.type === ResponseType.operation_pending;
}
export function isOperationInProgressOutput(
output: CreateAgentToolOutput,
): output is OperationInProgressResponse {
return (
output.type === ResponseType.operation_in_progress ||
"tool_call_id" in output
);
}
export function isAgentPreviewOutput(
output: CreateAgentToolOutput,
): output is AgentPreviewResponse {
return output.type === ResponseType.agent_preview || "agent_json" in output;
}
export function isAgentSavedOutput(
output: CreateAgentToolOutput,
): output is AgentSavedResponse {
return (
output.type === ResponseType.agent_saved || "agent_page_link" in output
);
}
export function isClarificationNeededOutput(
output: CreateAgentToolOutput,
): output is ClarificationNeededResponse {
return (
output.type === ResponseType.clarification_needed || "questions" in output
);
}
export function isErrorOutput(
output: CreateAgentToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
switch (part.state) {
case "input-streaming":
case "input-available":
return "Creating a new agent";
case "output-available": {
const output = parseOutput(part.output);
if (!output) return "Creating a new agent";
if (isOperationStartedOutput(output)) return "Agent creation started";
if (isOperationPendingOutput(output)) return "Agent creation in progress";
if (isOperationInProgressOutput(output))
return "Agent creation already in progress";
if (isAgentSavedOutput(output)) return `Saved "${output.agent_name}"`;
if (isAgentPreviewOutput(output)) return `Preview "${output.agent_name}"`;
if (isClarificationNeededOutput(output)) return "Needs clarification";
return "Error creating agent";
}
case "output-error":
return "Error creating agent";
default:
return "Creating a new agent";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<PlusIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}
export function truncateText(text: string, maxChars: number): string {
const trimmed = text.trim();
if (trimmed.length <= maxChars) return trimmed;
return `${trimmed.slice(0, maxChars).trimEnd()}…`;
}
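The is*Output guards double as a tolerant decoder: each accepts either the explicit type discriminant or a structural fingerprint, which covers outputs that arrive as JSON strings without a type field. Chained, they narrow the union:

import {
  getCreateAgentToolOutput,
  isAgentSavedOutput,
  isErrorOutput,
} from "./helpers"; // path illustrative

declare const part: unknown; // a tool UI part carrying an `output` field

const output = getCreateAgentToolOutput(part); // handles object or JSON string
if (output) {
  if (isAgentSavedOutput(output)) {
    // Narrowed to AgentSavedResponse: the link fields are now typed.
    console.log(output.agent_page_link);
  } else if (isErrorOutput(output)) {
    console.log(output.error);
  }
}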

View File

@@ -1,218 +0,0 @@
"use client";
import type { ToolUIPart } from "ai";
import Link from "next/link";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import {
ClarificationQuestionsWidget,
type ClarifyingQuestion as WidgetClarifyingQuestion,
} from "@/components/contextual/Chat/components/ClarificationQuestionsWidget/ClarificationQuestionsWidget";
import {
formatMaybeJson,
getAnimationText,
getEditAgentToolOutput,
isAgentPreviewOutput,
isAgentSavedOutput,
isClarificationNeededOutput,
isErrorOutput,
isOperationInProgressOutput,
isOperationPendingOutput,
isOperationStartedOutput,
ToolIcon,
truncateText,
type EditAgentToolOutput,
} from "./helpers";
export interface EditAgentToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: EditAgentToolPart;
}
function getAccordionMeta(output: EditAgentToolOutput): {
badgeText: string;
title: string;
description?: string;
} {
if (isAgentSavedOutput(output)) {
return { badgeText: "Edit agent", title: output.agent_name };
}
if (isAgentPreviewOutput(output)) {
return {
badgeText: "Edit agent",
title: output.agent_name,
description: `${output.node_count} block${output.node_count === 1 ? "" : "s"}`,
};
}
if (isClarificationNeededOutput(output)) {
const questions = output.questions ?? [];
return {
badgeText: "Edit agent",
title: "Needs clarification",
description: `${questions.length} question${questions.length === 1 ? "" : "s"}`,
};
}
if (
isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output)
) {
return { badgeText: "Edit agent", title: "Editing agent" };
}
return { badgeText: "Edit agent", title: "Error" };
}
export function EditAgentTool({ part }: Props) {
const text = getAnimationText(part);
const { onSend } = useCopilotChatActions();
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getEditAgentToolOutput(part);
const isError =
part.state === "output-error" || (!!output && isErrorOutput(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output) ||
isAgentPreviewOutput(output) ||
isAgentSavedOutput(output) ||
isClarificationNeededOutput(output) ||
isErrorOutput(output));
function handleClarificationAnswers(answers: Record<string, string>) {
const contextMessage = Object.entries(answers)
.map(([keyword, answer]) => `${keyword}: ${answer}`)
.join("\n");
onSend(
`I have the answers to your questions:\n\n${contextMessage}\n\nPlease proceed with editing the agent.`,
);
}
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion
{...getAccordionMeta(output)}
defaultExpanded={isClarificationNeededOutput(output)}
>
{(isOperationStartedOutput(output) ||
isOperationPendingOutput(output)) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
<p className="text-xs text-muted-foreground">
Operation: {output.operation_id}
</p>
<p className="text-xs italic text-muted-foreground">
Check your library in a few minutes.
</p>
</div>
)}
{isOperationInProgressOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
<p className="text-xs italic text-muted-foreground">
Please wait for the current operation to finish.
</p>
</div>
)}
{isAgentSavedOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
<div className="flex flex-wrap gap-2">
<Link
href={output.library_agent_link}
className="text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open in library
</Link>
<Link
href={output.agent_page_link}
className="text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open in builder
</Link>
</div>
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{truncateText(
formatMaybeJson({ agent_id: output.agent_id }),
800,
)}
</pre>
</div>
)}
{isAgentPreviewOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.description?.trim() && (
<p className="text-xs text-muted-foreground">
{output.description}
</p>
)}
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{truncateText(formatMaybeJson(output.agent_json), 1600)}
</pre>
</div>
)}
{isClarificationNeededOutput(output) && (
<ClarificationQuestionsWidget
questions={(output.questions ?? []).map((q) => {
const item: WidgetClarifyingQuestion = {
question: q.question,
keyword: q.keyword,
};
const example =
typeof q.example === "string" && q.example.trim()
? q.example.trim()
: null;
if (example) item.example = example;
return item;
})}
message={output.message}
onSubmitAnswers={handleClarificationAnswers}
/>
)}
{isErrorOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.error && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.error)}
</pre>
)}
{output.details && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.details)}
</pre>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,181 +0,0 @@
import type { ToolUIPart } from "ai";
import { PencilLineIcon } from "@phosphor-icons/react";
import type { AgentPreviewResponse } from "@/app/api/__generated__/models/agentPreviewResponse";
import type { AgentSavedResponse } from "@/app/api/__generated__/models/agentSavedResponse";
import type { ClarificationNeededResponse } from "@/app/api/__generated__/models/clarificationNeededResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { OperationInProgressResponse } from "@/app/api/__generated__/models/operationInProgressResponse";
import type { OperationPendingResponse } from "@/app/api/__generated__/models/operationPendingResponse";
import type { OperationStartedResponse } from "@/app/api/__generated__/models/operationStartedResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
export type EditAgentToolOutput =
| OperationStartedResponse
| OperationPendingResponse
| OperationInProgressResponse
| AgentPreviewResponse
| AgentSavedResponse
| ClarificationNeededResponse
| ErrorResponse;
function parseOutput(output: unknown): EditAgentToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.operation_started ||
type === ResponseType.operation_pending ||
type === ResponseType.operation_in_progress ||
type === ResponseType.agent_preview ||
type === ResponseType.agent_saved ||
type === ResponseType.clarification_needed ||
type === ResponseType.error
) {
return output as EditAgentToolOutput;
}
if ("operation_id" in output && "tool_name" in output)
return output as OperationStartedResponse | OperationPendingResponse;
if ("tool_call_id" in output) return output as OperationInProgressResponse;
if ("agent_json" in output && "agent_name" in output)
return output as AgentPreviewResponse;
if ("agent_id" in output && "library_agent_id" in output)
return output as AgentSavedResponse;
if ("questions" in output) return output as ClarificationNeededResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getEditAgentToolOutput(
part: unknown,
): EditAgentToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function isOperationStartedOutput(
output: EditAgentToolOutput,
): output is OperationStartedResponse {
return (
output.type === ResponseType.operation_started ||
("operation_id" in output && "tool_name" in output)
);
}
export function isOperationPendingOutput(
output: EditAgentToolOutput,
): output is OperationPendingResponse {
return output.type === ResponseType.operation_pending;
}
export function isOperationInProgressOutput(
output: EditAgentToolOutput,
): output is OperationInProgressResponse {
return (
output.type === ResponseType.operation_in_progress ||
"tool_call_id" in output
);
}
export function isAgentPreviewOutput(
output: EditAgentToolOutput,
): output is AgentPreviewResponse {
return output.type === ResponseType.agent_preview || "agent_json" in output;
}
export function isAgentSavedOutput(
output: EditAgentToolOutput,
): output is AgentSavedResponse {
return (
output.type === ResponseType.agent_saved || "agent_page_link" in output
);
}
export function isClarificationNeededOutput(
output: EditAgentToolOutput,
): output is ClarificationNeededResponse {
return (
output.type === ResponseType.clarification_needed || "questions" in output
);
}
export function isErrorOutput(
output: EditAgentToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
switch (part.state) {
case "input-streaming":
case "input-available":
return "Editing the agent";
case "output-available": {
const output = parseOutput(part.output);
if (!output) return "Editing the agent";
if (isOperationStartedOutput(output)) return "Agent update started";
if (isOperationPendingOutput(output)) return "Agent update in progress";
if (isOperationInProgressOutput(output))
return "Agent update already in progress";
if (isAgentSavedOutput(output)) return `Saved "${output.agent_name}"`;
if (isAgentPreviewOutput(output)) return `Preview "${output.agent_name}"`;
if (isClarificationNeededOutput(output)) return "Needs clarification";
return "Error editing agent";
}
case "output-error":
return "Error editing agent";
default:
return "Editing the agent";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<PencilLineIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}
export function truncateText(text: string, maxChars: number): string {
const trimmed = text.trim();
if (trimmed.length <= maxChars) return trimmed;
return `${trimmed.slice(0, maxChars).trimEnd()}…`;
}

View File

@@ -1,131 +0,0 @@
"use client";
import { ToolUIPart } from "ai";
import Link from "next/link";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import {
getAgentHref,
getAnimationText,
getFindAgentsOutput,
getSourceLabelFromToolType,
isAgentsFoundOutput,
isErrorOutput,
ToolIcon,
} from "./helpers";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
export interface FindAgentsToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: FindAgentsToolPart;
}
export function FindAgentsTool({ part }: Props) {
const text = getAnimationText(part);
const output = getFindAgentsOutput(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const isError =
part.state === "output-error" || (!!output && isErrorOutput(output));
const query =
typeof part.input === "object" && part.input !== null
? String((part.input as { query?: unknown }).query ?? "").trim()
: "";
const agentsFoundOutput =
part.state === "output-available" && output && isAgentsFoundOutput(output)
? output
: null;
const hasAgents =
!!agentsFoundOutput &&
agentsFoundOutput.agents.length > 0 &&
(typeof agentsFoundOutput.count !== "number" ||
agentsFoundOutput.count > 0);
const totalCount = agentsFoundOutput ? agentsFoundOutput.count : 0;
const { label: sourceLabel, source } = getSourceLabelFromToolType(part.type);
const scopeText =
source === "library"
? "in your library"
: source === "marketplace"
? "in marketplace"
: "";
const accordionDescription = `Found ${totalCount}${scopeText ? ` ${scopeText}` : ""}${
query ? ` for "${query}"` : ""
}`;
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon
toolType={part.type}
isStreaming={isStreaming}
isError={isError}
/>
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasAgents && agentsFoundOutput && (
<ToolAccordion
badgeText={sourceLabel}
title="Agent results"
description={accordionDescription}
>
<div className="grid gap-2 sm:grid-cols-2">
{agentsFoundOutput.agents.map((agent) => {
const href = getAgentHref(agent);
const agentSource =
agent.source === "library"
? "Library"
: agent.source === "marketplace"
? "Marketplace"
: null;
return (
<div
key={agent.id}
className="rounded-2xl border bg-background p-3"
>
<div className="flex items-start justify-between gap-2">
<div className="min-w-0">
<div className="flex items-center gap-2">
<p className="truncate text-sm font-medium text-foreground">
{agent.name}
</p>
{agentSource && (
<span className="shrink-0 rounded-full border bg-muted px-2 py-0.5 text-[11px] text-muted-foreground">
{agentSource}
</span>
)}
</div>
<p className="mt-1 line-clamp-2 text-xs text-muted-foreground">
{agent.description}
</p>
</div>
{href && (
<Link
href={href}
className="shrink-0 text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open
</Link>
)}
</div>
</div>
);
})}
</div>
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,176 +0,0 @@
import { ToolUIPart } from "ai";
import { MagnifyingGlassIcon, SquaresFourIcon } from "@phosphor-icons/react";
import type { AgentInfo } from "@/app/api/__generated__/models/agentInfo";
import type { AgentsFoundResponse } from "@/app/api/__generated__/models/agentsFoundResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { NoResultsResponse } from "@/app/api/__generated__/models/noResultsResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
export interface FindAgentInput {
query: string;
}
export type FindAgentsOutput =
| AgentsFoundResponse
| NoResultsResponse
| ErrorResponse;
export type FindAgentsToolType =
| "tool-find_agent"
| "tool-find_library_agent"
| (string & {});
function parseOutput(output: unknown): FindAgentsOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.agents_found ||
type === ResponseType.no_results ||
type === ResponseType.error
) {
return output as FindAgentsOutput;
}
if ("agents" in output && "count" in output)
return output as AgentsFoundResponse;
if ("suggestions" in output && !("error" in output))
return output as NoResultsResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getFindAgentsOutput(part: unknown): FindAgentsOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function isAgentsFoundOutput(
output: FindAgentsOutput,
): output is AgentsFoundResponse {
return output.type === ResponseType.agents_found || "agents" in output;
}
export function isNoResultsOutput(
output: FindAgentsOutput,
): output is NoResultsResponse {
return (
output.type === ResponseType.no_results ||
("suggestions" in output && !("error" in output))
);
}
export function isErrorOutput(
output: FindAgentsOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getSourceLabelFromToolType(toolType?: FindAgentsToolType): {
source: "marketplace" | "library" | "unknown";
label: string;
} {
if (toolType === "tool-find_library_agent") {
return { source: "library", label: "Library" };
}
if (toolType === "tool-find_agent") {
return { source: "marketplace", label: "Marketplace" };
}
return { source: "unknown", label: "Agents" };
}
export function getAnimationText(part: {
type?: FindAgentsToolType;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
const { source } = getSourceLabelFromToolType(part.type);
const query = (part.input as FindAgentInput | undefined)?.query?.trim();
// Action phrase matching legacy ToolCallMessage
const actionPhrase =
source === "library"
? "Looking for library agents"
: "Looking for agents in the marketplace";
const queryText = query ? ` matching "${query}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `${actionPhrase}${queryText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) {
return `${actionPhrase}${queryText}`;
}
if (isNoResultsOutput(output)) {
return `No agents found${queryText}`;
}
if (isAgentsFoundOutput(output)) {
const count = output.count ?? output.agents?.length ?? 0;
return `Found ${count} agent${count === 1 ? "" : "s"}${queryText}`;
}
if (isErrorOutput(output)) {
return `Error finding agents${queryText}`;
}
return `${actionPhrase}${queryText}`;
}
case "output-error":
return `Error finding agents${queryText}`;
default:
return actionPhrase;
}
}
export function getAgentHref(agent: AgentInfo): string | null {
if (agent.source === "library") {
return `/library/agents/${encodeURIComponent(agent.id)}`;
}
const [creator, slug, ...rest] = agent.id.split("/");
if (!creator || !slug || rest.length > 0) return null;
return `/marketplace/agent/${encodeURIComponent(creator)}/${encodeURIComponent(slug)}`;
}
export function ToolIcon({
toolType,
isStreaming,
isError,
}: {
toolType?: FindAgentsToolType;
isStreaming?: boolean;
isError?: boolean;
}) {
const { source } = getSourceLabelFromToolType(toolType);
const IconComponent =
source === "library" ? MagnifyingGlassIcon : SquaresFourIcon;
return (
<IconComponent
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
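
Since the stream can deliver tool output either as a structured object or as a JSON-encoded string, parseOutput recurses once after JSON.parse. A minimal round-trip sketch; the payload and the ResponseType string value are invented for illustration:

// Hypothetical tool part as the stream might deliver it (payload invented).
const part = {
  type: "tool-find_agent",
  state: "output-available" as const,
  output: JSON.stringify({ type: "agents_found", agents: [], count: 0 }),
};

const parsed = getFindAgentsOutput(part);
if (parsed && isAgentsFoundOutput(parsed)) {
  console.log(parsed.count); // 0; the string output was parsed, then type-narrowed
}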

View File

@@ -1,41 +0,0 @@
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import type { BlockListResponse } from "@/app/api/__generated__/models/blockListResponse";
import { ToolUIPart } from "ai";
import { getAnimationText, ToolIcon } from "./helpers";
export interface FindBlockInput {
query: string;
}
export type FindBlockOutput = BlockListResponse;
export interface FindBlockToolPart {
type: string;
toolName?: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: FindBlockInput | unknown;
output?: string | FindBlockOutput | unknown;
title?: string;
}
interface Props {
part: FindBlockToolPart;
}
export function FindBlocksTool({ part }: Props) {
const text = getAnimationText(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const isError = part.state === "output-error";
return (
<div className="flex items-center gap-2 py-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
);
}

View File

@@ -1,72 +0,0 @@
import { ToolUIPart } from "ai";
import type { BlockListResponse } from "@/app/api/__generated__/models/blockListResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import { FindBlockInput, FindBlockToolPart } from "./FindBlocks";
import { PackageIcon } from "@phosphor-icons/react";
function parseOutput(output: unknown): BlockListResponse | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (type === ResponseType.block_list || "blocks" in output) {
return output as BlockListResponse;
}
}
return null;
}
export function getAnimationText(part: FindBlockToolPart): string {
const query = (part.input as FindBlockInput | undefined)?.query?.trim();
const queryText = query ? ` matching "${query}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Searching for blocks${queryText}`;
case "output-available": {
const parsed = parseOutput(part.output);
if (parsed) {
return `Found ${parsed.count} block${parsed.count === 1 ? "" : "s"}${queryText}`;
}
return `Searching for blocks${queryText}`;
}
case "output-error":
return `Error finding blocks${queryText}`;
default:
return "Searching for blocks";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<PackageIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}

View File

@@ -1,377 +0,0 @@
"use client";
import type { ToolUIPart } from "ai";
import Link from "next/link";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import {
ChatCredentialsSetup,
type CredentialInfo,
} from "@/components/contextual/Chat/components/ChatCredentialsSetup/ChatCredentialsSetup";
import {
formatMaybeJson,
getAnimationText,
getRunAgentToolOutput,
isRunAgentAgentDetailsOutput,
isRunAgentErrorOutput,
isRunAgentExecutionStartedOutput,
isRunAgentNeedLoginOutput,
isRunAgentSetupRequirementsOutput,
ToolIcon,
type RunAgentToolOutput,
} from "./helpers";
export interface RunAgentToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: RunAgentToolPart;
}
function getAccordionMeta(output: RunAgentToolOutput): {
badgeText: string;
title: string;
description?: string;
} {
if (isRunAgentExecutionStartedOutput(output)) {
const statusText =
typeof output.status === "string" && output.status.trim()
? output.status.trim()
: "started";
return {
badgeText: "Run agent",
title: output.graph_name,
description: `Status: ${statusText}`,
};
}
if (isRunAgentAgentDetailsOutput(output)) {
return {
badgeText: "Run agent",
title: output.agent.name,
description: "Inputs required",
};
}
if (isRunAgentSetupRequirementsOutput(output)) {
const missingCredsCount = Object.keys(
(output.setup_info.user_readiness?.missing_credentials ?? {}) as Record<
string,
unknown
>,
).length;
return {
badgeText: "Run agent",
title: output.setup_info.agent_name,
description:
missingCredsCount > 0
? `Missing ${missingCredsCount} credential${missingCredsCount === 1 ? "" : "s"}`
: output.message,
};
}
if (isRunAgentNeedLoginOutput(output)) {
return { badgeText: "Run agent", title: "Sign in required" };
}
return { badgeText: "Run agent", title: "Error" };
}
function coerceMissingCredentials(
rawMissingCredentials: unknown,
): CredentialInfo[] {
const missing =
rawMissingCredentials && typeof rawMissingCredentials === "object"
? (rawMissingCredentials as Record<string, unknown>)
: {};
const validTypes = new Set([
"api_key",
"oauth2",
"user_password",
"host_scoped",
]);
const results: CredentialInfo[] = [];
Object.values(missing).forEach((value) => {
if (!value || typeof value !== "object") return;
const cred = value as Record<string, unknown>;
const provider =
typeof cred.provider === "string" ? cred.provider.trim() : "";
if (!provider) return;
const providerName =
typeof cred.provider_name === "string" && cred.provider_name.trim()
? cred.provider_name.trim()
: provider.replace(/_/g, " ");
const title =
typeof cred.title === "string" && cred.title.trim()
? cred.title.trim()
: providerName;
const types =
Array.isArray(cred.types) && cred.types.length > 0
? cred.types
: typeof cred.type === "string"
? [cred.type]
: [];
const credentialTypes = types
.map((t) => (typeof t === "string" ? t.trim() : ""))
.filter(
(t): t is "api_key" | "oauth2" | "user_password" | "host_scoped" =>
validTypes.has(t),
);
if (credentialTypes.length === 0) return;
const scopes = Array.isArray(cred.scopes)
? cred.scopes.filter((s): s is string => typeof s === "string")
: undefined;
const item: CredentialInfo = {
provider,
providerName,
credentialTypes,
title,
};
if (scopes && scopes.length > 0) {
item.scopes = scopes;
}
results.push(item);
});
return results;
}
function coerceExpectedInputs(rawInputs: unknown): Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}> {
if (!Array.isArray(rawInputs)) return [];
const results: Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}> = [];
rawInputs.forEach((value, index) => {
if (!value || typeof value !== "object") return;
const input = value as Record<string, unknown>;
const name =
typeof input.name === "string" && input.name.trim()
? input.name.trim()
: `input-${index}`;
const title =
typeof input.title === "string" && input.title.trim()
? input.title.trim()
: name;
const type = typeof input.type === "string" ? input.type : "unknown";
const description =
typeof input.description === "string" && input.description.trim()
? input.description.trim()
: undefined;
const required = Boolean(input.required);
const item: {
name: string;
title: string;
type: string;
description?: string;
required: boolean;
} = { name, title, type, required };
if (description) item.description = description;
results.push(item);
});
return results;
}
export function RunAgentTool({ part }: Props) {
const text = getAnimationText(part);
const { onSend } = useCopilotChatActions();
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getRunAgentToolOutput(part);
const isError =
part.state === "output-error" ||
(!!output && isRunAgentErrorOutput(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isRunAgentExecutionStartedOutput(output) ||
isRunAgentAgentDetailsOutput(output) ||
isRunAgentSetupRequirementsOutput(output) ||
isRunAgentNeedLoginOutput(output) ||
isRunAgentErrorOutput(output));
function handleAllCredentialsComplete() {
onSend(
"I've configured the required credentials. Please check if everything is ready and proceed with running the agent.",
);
}
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion
{...getAccordionMeta(output)}
defaultExpanded={
isRunAgentSetupRequirementsOutput(output) ||
isRunAgentAgentDetailsOutput(output)
}
>
{isRunAgentExecutionStartedOutput(output) && (
<div className="grid gap-2">
<div className="rounded-2xl border bg-background p-3">
<div className="flex items-start justify-between gap-3">
<div className="min-w-0">
<p className="text-sm font-medium text-foreground">
Execution started
</p>
<p className="mt-0.5 truncate text-xs text-muted-foreground">
{output.execution_id}
</p>
<p className="mt-2 text-xs text-muted-foreground">
{output.message}
</p>
</div>
{output.library_agent_link && (
<Link
href={output.library_agent_link}
className="shrink-0 text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open
</Link>
)}
</div>
</div>
</div>
)}
{isRunAgentAgentDetailsOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.agent.description?.trim() && (
<p className="text-xs text-muted-foreground">
{output.agent.description}
</p>
)}
<div className="rounded-2xl border bg-background p-3">
<p className="text-xs font-medium text-foreground">Inputs</p>
<p className="mt-1 text-xs text-muted-foreground">
Provide required inputs in chat, or ask to run with defaults.
</p>
<pre className="mt-2 whitespace-pre-wrap text-xs text-muted-foreground">
{formatMaybeJson(output.agent.inputs)}
</pre>
</div>
</div>
)}
{isRunAgentSetupRequirementsOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{coerceMissingCredentials(
output.setup_info.user_readiness?.missing_credentials,
).length > 0 && (
<ChatCredentialsSetup
credentials={coerceMissingCredentials(
output.setup_info.user_readiness?.missing_credentials,
)}
agentName={output.setup_info.agent_name}
message={output.message}
onAllCredentialsComplete={handleAllCredentialsComplete}
onCancel={() => {}}
/>
)}
{coerceExpectedInputs(
(output.setup_info.requirements as Record<string, unknown>)
?.inputs,
).length > 0 && (
<div className="rounded-2xl border bg-background p-3">
<p className="text-xs font-medium text-foreground">
Expected inputs
</p>
<div className="mt-2 grid gap-2">
{coerceExpectedInputs(
(
output.setup_info.requirements as Record<
string,
unknown
>
)?.inputs,
).map((input) => (
<div key={input.name} className="rounded-xl border p-2">
<div className="flex items-center justify-between gap-2">
<p className="truncate text-xs font-medium text-foreground">
{input.title}
</p>
<span className="shrink-0 rounded-full border bg-muted px-2 py-0.5 text-[11px] text-muted-foreground">
{input.required ? "Required" : "Optional"}
</span>
</div>
<p className="mt-1 text-xs text-muted-foreground">
                          {input.name} · {input.type}
                          {input.description ? ` · ${input.description}` : ""}
</p>
</div>
))}
</div>
</div>
)}
</div>
)}
{isRunAgentNeedLoginOutput(output) && (
<p className="text-sm text-foreground">{output.message}</p>
)}
{isRunAgentErrorOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.error && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.error)}
</pre>
)}
{output.details && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.details)}
</pre>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}
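
coerceMissingCredentials is deliberately defensive: entries with no provider or no recognized credential type are dropped rather than rendered half-formed. A sketch of the normalization with an invented payload (the function is module-private, so this is illustrative only):

coerceMissingCredentials({
  github: { provider: "github", type: "api_key", title: "GitHub API key" },
  broken: { provider: "", types: ["oauth2"] }, // dropped: empty provider
  exotic: { provider: "notion", types: ["magic"] }, // dropped: no valid credential type
});
// → [{ provider: "github", providerName: "github",
//      credentialTypes: ["api_key"], title: "GitHub API key" }]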

View File

@@ -1,191 +0,0 @@
import type { ToolUIPart } from "ai";
import { PlayIcon } from "@phosphor-icons/react";
import type { AgentDetailsResponse } from "@/app/api/__generated__/models/agentDetailsResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { ExecutionStartedResponse } from "@/app/api/__generated__/models/executionStartedResponse";
import type { NeedLoginResponse } from "@/app/api/__generated__/models/needLoginResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import type { SetupRequirementsResponse } from "@/app/api/__generated__/models/setupRequirementsResponse";
export interface RunAgentInput {
username_agent_slug?: string;
library_agent_id?: string;
inputs?: Record<string, unknown>;
use_defaults?: boolean;
schedule_name?: string;
cron?: string;
timezone?: string;
}
export type RunAgentToolOutput =
| SetupRequirementsResponse
| ExecutionStartedResponse
| AgentDetailsResponse
| NeedLoginResponse
| ErrorResponse;
const RUN_AGENT_OUTPUT_TYPES = new Set<string>([
ResponseType.setup_requirements,
ResponseType.execution_started,
ResponseType.agent_details,
ResponseType.need_login,
ResponseType.error,
]);
export function isRunAgentSetupRequirementsOutput(
output: RunAgentToolOutput,
): output is SetupRequirementsResponse {
return (
output.type === ResponseType.setup_requirements ||
("setup_info" in output && typeof output.setup_info === "object")
);
}
export function isRunAgentExecutionStartedOutput(
output: RunAgentToolOutput,
): output is ExecutionStartedResponse {
return (
output.type === ResponseType.execution_started || "execution_id" in output
);
}
export function isRunAgentAgentDetailsOutput(
output: RunAgentToolOutput,
): output is AgentDetailsResponse {
return output.type === ResponseType.agent_details || "agent" in output;
}
export function isRunAgentNeedLoginOutput(
output: RunAgentToolOutput,
): output is NeedLoginResponse {
return output.type === ResponseType.need_login;
}
export function isRunAgentErrorOutput(
output: RunAgentToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
function parseOutput(output: unknown): RunAgentToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (typeof type === "string" && RUN_AGENT_OUTPUT_TYPES.has(type)) {
return output as RunAgentToolOutput;
}
if ("execution_id" in output) return output as ExecutionStartedResponse;
if ("setup_info" in output) return output as SetupRequirementsResponse;
if ("agent" in output) return output as AgentDetailsResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
if (type === ResponseType.need_login) return output as NeedLoginResponse;
}
return null;
}
export function getRunAgentToolOutput(
part: unknown,
): RunAgentToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
function getAgentIdentifierText(
input: RunAgentInput | undefined,
): string | null {
if (!input) return null;
const slug = input.username_agent_slug?.trim();
if (slug) return slug;
const libraryId = input.library_agent_id?.trim();
if (libraryId) return `Library agent ${libraryId}`;
return null;
}
function getExecutionModeText(input: RunAgentInput | undefined): string | null {
if (!input) return null;
const isSchedule = Boolean(input.schedule_name?.trim() || input.cron?.trim());
return isSchedule ? "Scheduled run" : "Run";
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
const input = part.input as RunAgentInput | undefined;
const agentIdentifier = getAgentIdentifierText(input);
const isSchedule = Boolean(
input?.schedule_name?.trim() || input?.cron?.trim(),
);
const actionPhrase = isSchedule
? "Scheduling the agent to run"
: "Running the agent";
const identifierText = agentIdentifier ? ` "${agentIdentifier}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `${actionPhrase}${identifierText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `${actionPhrase}${identifierText}`;
if (isRunAgentExecutionStartedOutput(output)) {
return `Started "${output.graph_name}"`;
}
if (isRunAgentAgentDetailsOutput(output)) {
return `Agent inputs needed for "${output.agent.name}"`;
}
if (isRunAgentSetupRequirementsOutput(output)) {
return `Setup needed for "${output.setup_info.agent_name}"`;
}
if (isRunAgentNeedLoginOutput(output))
return "Sign in required to run agent";
return "Error running agent";
}
case "output-error":
return "Error running agent";
default:
return actionPhrase;
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<PlayIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}
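
Because getAnimationText derives its label purely from the part's state plus whatever output has arrived, the text naturally morphs as the stream progresses. A sketch with invented inputs (the execution_started type string is an assumption; the "execution_id" in output fallback would match it regardless):

const base = { input: { username_agent_slug: "alice/daily-digest" } };

getAnimationText({ ...base, state: "input-streaming" });
// → 'Running the agent "alice/daily-digest"'

getAnimationText({
  ...base,
  state: "output-available",
  output: {
    type: "execution_started",
    execution_id: "exec-123",
    graph_name: "Daily Digest",
  },
});
// → 'Started "Daily Digest"'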

View File

@@ -1,325 +0,0 @@
"use client";
import type { ToolUIPart } from "ai";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import {
ChatCredentialsSetup,
type CredentialInfo,
} from "@/components/contextual/Chat/components/ChatCredentialsSetup/ChatCredentialsSetup";
import {
formatMaybeJson,
getAnimationText,
getRunBlockToolOutput,
isRunBlockBlockOutput,
isRunBlockErrorOutput,
isRunBlockSetupRequirementsOutput,
ToolIcon,
type RunBlockToolOutput,
} from "./helpers";
export interface RunBlockToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: RunBlockToolPart;
}
function getAccordionMeta(output: RunBlockToolOutput): {
badgeText: string;
title: string;
description?: string;
} {
if (isRunBlockBlockOutput(output)) {
const keys = Object.keys(output.outputs ?? {});
return {
badgeText: "Run block",
title: output.block_name,
description:
keys.length > 0
? `${keys.length} output key${keys.length === 1 ? "" : "s"}`
: output.message,
};
}
if (isRunBlockSetupRequirementsOutput(output)) {
const missingCredsCount = Object.keys(
(output.setup_info.user_readiness?.missing_credentials ?? {}) as Record<
string,
unknown
>,
).length;
return {
badgeText: "Run block",
title: output.setup_info.agent_name,
description:
missingCredsCount > 0
? `Missing ${missingCredsCount} credential${missingCredsCount === 1 ? "" : "s"}`
: output.message,
};
}
return { badgeText: "Run block", title: "Error" };
}
function coerceMissingCredentials(
rawMissingCredentials: unknown,
): CredentialInfo[] {
const missing =
rawMissingCredentials && typeof rawMissingCredentials === "object"
? (rawMissingCredentials as Record<string, unknown>)
: {};
const validTypes = new Set([
"api_key",
"oauth2",
"user_password",
"host_scoped",
]);
const results: CredentialInfo[] = [];
Object.values(missing).forEach((value) => {
if (!value || typeof value !== "object") return;
const cred = value as Record<string, unknown>;
const provider =
typeof cred.provider === "string" ? cred.provider.trim() : "";
if (!provider) return;
const providerName =
typeof cred.provider_name === "string" && cred.provider_name.trim()
? cred.provider_name.trim()
: provider.replace(/_/g, " ");
const title =
typeof cred.title === "string" && cred.title.trim()
? cred.title.trim()
: providerName;
const types =
Array.isArray(cred.types) && cred.types.length > 0
? cred.types
: typeof cred.type === "string"
? [cred.type]
: [];
const credentialTypes = types
.map((t) => (typeof t === "string" ? t.trim() : ""))
.filter(
(t): t is "api_key" | "oauth2" | "user_password" | "host_scoped" =>
validTypes.has(t),
);
if (credentialTypes.length === 0) return;
const scopes = Array.isArray(cred.scopes)
? cred.scopes.filter((s): s is string => typeof s === "string")
: undefined;
const item: CredentialInfo = {
provider,
providerName,
credentialTypes,
title,
};
if (scopes && scopes.length > 0) {
item.scopes = scopes;
}
results.push(item);
});
return results;
}
function coerceExpectedInputs(rawInputs: unknown): Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}> {
if (!Array.isArray(rawInputs)) return [];
const results: Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}> = [];
rawInputs.forEach((value, index) => {
if (!value || typeof value !== "object") return;
const input = value as Record<string, unknown>;
const name =
typeof input.name === "string" && input.name.trim()
? input.name.trim()
: `input-${index}`;
const title =
typeof input.title === "string" && input.title.trim()
? input.title.trim()
: name;
const type = typeof input.type === "string" ? input.type : "unknown";
const description =
typeof input.description === "string" && input.description.trim()
? input.description.trim()
: undefined;
const required = Boolean(input.required);
const item: {
name: string;
title: string;
type: string;
description?: string;
required: boolean;
} = { name, title, type, required };
if (description) item.description = description;
results.push(item);
});
return results;
}
export function RunBlockTool({ part }: Props) {
const text = getAnimationText(part);
const { onSend } = useCopilotChatActions();
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getRunBlockToolOutput(part);
const isError =
part.state === "output-error" ||
(!!output && isRunBlockErrorOutput(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isRunBlockBlockOutput(output) ||
isRunBlockSetupRequirementsOutput(output) ||
isRunBlockErrorOutput(output));
function handleAllCredentialsComplete() {
onSend(
"I've configured the required credentials. Please re-run the block now.",
);
}
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion
{...getAccordionMeta(output)}
defaultExpanded={isRunBlockSetupRequirementsOutput(output)}
>
{isRunBlockBlockOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{Object.entries(output.outputs ?? {}).map(([key, items]) => (
<div key={key} className="rounded-2xl border bg-background p-3">
<div className="flex items-center justify-between gap-2">
<p className="truncate text-xs font-medium text-foreground">
{key}
</p>
<span className="shrink-0 rounded-full border bg-muted px-2 py-0.5 text-[11px] text-muted-foreground">
{items.length} item{items.length === 1 ? "" : "s"}
</span>
</div>
<pre className="mt-2 whitespace-pre-wrap text-xs text-muted-foreground">
{formatMaybeJson(items.slice(0, 3))}
</pre>
</div>
))}
</div>
)}
{isRunBlockSetupRequirementsOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{coerceMissingCredentials(
output.setup_info.user_readiness?.missing_credentials,
).length > 0 && (
<ChatCredentialsSetup
credentials={coerceMissingCredentials(
output.setup_info.user_readiness?.missing_credentials,
)}
agentName={output.setup_info.agent_name}
message={output.message}
onAllCredentialsComplete={handleAllCredentialsComplete}
onCancel={() => {}}
/>
)}
{coerceExpectedInputs(
(output.setup_info.requirements as Record<string, unknown>)
?.inputs,
).length > 0 && (
<div className="rounded-2xl border bg-background p-3">
<p className="text-xs font-medium text-foreground">
Expected inputs
</p>
<div className="mt-2 grid gap-2">
{coerceExpectedInputs(
(
output.setup_info.requirements as Record<
string,
unknown
>
)?.inputs,
).map((input) => (
<div key={input.name} className="rounded-xl border p-2">
<div className="flex items-center justify-between gap-2">
<p className="truncate text-xs font-medium text-foreground">
{input.title}
</p>
<span className="shrink-0 rounded-full border bg-muted px-2 py-0.5 text-[11px] text-muted-foreground">
{input.required ? "Required" : "Optional"}
</span>
</div>
<p className="mt-1 text-xs text-muted-foreground">
                          {input.name} · {input.type}
                          {input.description ? ` · ${input.description}` : ""}
</p>
</div>
))}
</div>
</div>
)}
</div>
)}
{isRunBlockErrorOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.error && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.error)}
</pre>
)}
{output.details && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.details)}
</pre>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,140 +0,0 @@
import type { ToolUIPart } from "ai";
import { PlayIcon } from "@phosphor-icons/react";
import type { BlockOutputResponse } from "@/app/api/__generated__/models/blockOutputResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import type { SetupRequirementsResponse } from "@/app/api/__generated__/models/setupRequirementsResponse";
export interface RunBlockInput {
block_id?: string;
input_data?: Record<string, unknown>;
}
export type RunBlockToolOutput =
| SetupRequirementsResponse
| BlockOutputResponse
| ErrorResponse;
const RUN_BLOCK_OUTPUT_TYPES = new Set<string>([
ResponseType.setup_requirements,
ResponseType.block_output,
ResponseType.error,
]);
export function isRunBlockSetupRequirementsOutput(
output: RunBlockToolOutput,
): output is SetupRequirementsResponse {
return (
output.type === ResponseType.setup_requirements ||
("setup_info" in output && typeof output.setup_info === "object")
);
}
export function isRunBlockBlockOutput(
output: RunBlockToolOutput,
): output is BlockOutputResponse {
return output.type === ResponseType.block_output || "block_id" in output;
}
export function isRunBlockErrorOutput(
output: RunBlockToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
function parseOutput(output: unknown): RunBlockToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (typeof type === "string" && RUN_BLOCK_OUTPUT_TYPES.has(type)) {
return output as RunBlockToolOutput;
}
if ("block_id" in output) return output as BlockOutputResponse;
if ("setup_info" in output) return output as SetupRequirementsResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getRunBlockToolOutput(
part: unknown,
): RunBlockToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
function getBlockLabel(input: RunBlockInput | undefined): string | null {
const blockId = input?.block_id?.trim();
if (!blockId) return null;
return `Block ${blockId.slice(0, 8)}`;
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
const input = part.input as RunBlockInput | undefined;
const blockId = input?.block_id?.trim();
const blockText = blockId ? ` "${blockId}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Running the block${blockText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `Running the block${blockText}`;
if (isRunBlockBlockOutput(output)) return `Ran "${output.block_name}"`;
if (isRunBlockSetupRequirementsOutput(output)) {
return `Setup needed for "${output.setup_info.agent_name}"`;
}
return "Error running block";
}
case "output-error":
return "Error running block";
default:
return "Running the block";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<PlayIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}

View File

@@ -1,197 +0,0 @@
"use client";
import type { ToolUIPart } from "ai";
import Link from "next/link";
import { useMemo } from "react";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import {
getDocsToolOutput,
getDocsToolTitle,
getToolLabel,
getAnimationText,
isDocPageOutput,
isDocSearchResultsOutput,
isErrorOutput,
isNoResultsOutput,
ToolIcon,
toDocsUrl,
type DocsToolType,
} from "./helpers";
export interface DocsToolPart {
type: DocsToolType;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: DocsToolPart;
}
function truncate(text: string, maxChars: number): string {
const trimmed = text.trim();
if (trimmed.length <= maxChars) return trimmed;
  return `${trimmed.slice(0, maxChars).trimEnd()}…`;
}
export function SearchDocsTool({ part }: Props) {
const output = getDocsToolOutput(part);
const text = getAnimationText(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const isError =
part.state === "output-error" || (!!output && isErrorOutput(output));
const normalized = useMemo(() => {
if (!output) return null;
const title = getDocsToolTitle(part.type, output);
const label = getToolLabel(part.type);
return { title, label };
}, [output, part.type]);
const isOutputAvailable = part.state === "output-available" && !!output;
const docSearchOutput =
isOutputAvailable && output && isDocSearchResultsOutput(output)
? output
: null;
const docPageOutput =
isOutputAvailable && output && isDocPageOutput(output) ? output : null;
const noResultsOutput =
isOutputAvailable && output && isNoResultsOutput(output) ? output : null;
const errorOutput =
isOutputAvailable && output && isErrorOutput(output) ? output : null;
const hasExpandableContent =
isOutputAvailable &&
((!!docSearchOutput && docSearchOutput.count > 0) ||
!!docPageOutput ||
!!noResultsOutput ||
!!errorOutput);
const accordionDescription =
hasExpandableContent && docSearchOutput
? `Found ${docSearchOutput.count} result${docSearchOutput.count === 1 ? "" : "s"} for "${docSearchOutput.query}"`
: hasExpandableContent && docPageOutput
? docPageOutput.path
: hasExpandableContent && (noResultsOutput || errorOutput)
? ((noResultsOutput ?? errorOutput)?.message ?? null)
: null;
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon
toolType={part.type}
isStreaming={isStreaming}
isError={isError}
/>
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && normalized && (
<ToolAccordion
badgeText={normalized.label}
title={normalized.title}
description={accordionDescription}
>
{docSearchOutput && (
<div className="grid gap-2">
{docSearchOutput.results.map((r) => {
const href = r.doc_url ?? toDocsUrl(r.path);
return (
<div
key={r.path}
className="rounded-2xl border bg-background p-3"
>
<div className="flex items-start justify-between gap-2">
<div className="min-w-0">
<p className="truncate text-sm font-medium text-foreground">
{r.title}
</p>
<p className="mt-0.5 truncate text-xs text-muted-foreground">
{r.path}
                      {r.section ? ` · ${r.section}` : ""}
</p>
<p className="mt-2 text-xs text-muted-foreground">
{truncate(r.snippet, 240)}
</p>
</div>
<Link
href={href}
target="_blank"
rel="noreferrer"
className="shrink-0 text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open
</Link>
</div>
</div>
);
})}
</div>
)}
{docPageOutput && (
<div>
<div className="flex items-start justify-between gap-2">
<div className="min-w-0">
<p className="truncate text-sm font-medium text-foreground">
{docPageOutput.title}
</p>
<p className="mt-0.5 truncate text-xs text-muted-foreground">
{docPageOutput.path}
</p>
</div>
<Link
href={docPageOutput.doc_url ?? toDocsUrl(docPageOutput.path)}
target="_blank"
rel="noreferrer"
className="shrink-0 text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open
</Link>
</div>
<p className="mt-2 whitespace-pre-wrap text-xs text-muted-foreground">
{truncate(docPageOutput.content, 800)}
</p>
</div>
)}
{noResultsOutput && (
<div>
<p className="text-sm text-foreground">
{noResultsOutput.message}
</p>
{noResultsOutput.suggestions &&
noResultsOutput.suggestions.length > 0 && (
<ul className="mt-2 list-disc space-y-1 pl-5 text-xs text-muted-foreground">
{noResultsOutput.suggestions.slice(0, 5).map((s) => (
<li key={s}>{s}</li>
))}
</ul>
)}
</div>
)}
{errorOutput && (
<div>
<p className="text-sm text-foreground">{errorOutput.message}</p>
{errorOutput.error && (
<p className="mt-2 text-xs text-muted-foreground">
{errorOutput.error}
</p>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,205 +0,0 @@
import { ToolUIPart } from "ai";
import { FileMagnifyingGlassIcon, FileTextIcon } from "@phosphor-icons/react";
import type { DocPageResponse } from "@/app/api/__generated__/models/docPageResponse";
import type { DocSearchResultsResponse } from "@/app/api/__generated__/models/docSearchResultsResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { NoResultsResponse } from "@/app/api/__generated__/models/noResultsResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
export interface SearchDocsInput {
query: string;
}
export interface GetDocPageInput {
path: string;
}
export type DocsToolOutput =
| DocSearchResultsResponse
| DocPageResponse
| NoResultsResponse
| ErrorResponse;
export type DocsToolType =
  | "tool-search_docs"
  | "tool-get_doc_page"
  | (string & {});
export function getToolLabel(toolType: DocsToolType): string {
switch (toolType) {
case "tool-search_docs":
return "Docs";
case "tool-get_doc_page":
return "Docs page";
default:
return "Docs";
}
}
function parseOutput(output: unknown): DocsToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.doc_search_results ||
type === ResponseType.doc_page ||
type === ResponseType.no_results ||
type === ResponseType.error
) {
return output as DocsToolOutput;
}
if ("results" in output && "query" in output)
return output as DocSearchResultsResponse;
if ("content" in output && "path" in output)
return output as DocPageResponse;
if ("suggestions" in output && !("error" in output))
return output as NoResultsResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getDocsToolOutput(part: unknown): DocsToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function isDocSearchResultsOutput(
output: DocsToolOutput,
): output is DocSearchResultsResponse {
return output.type === ResponseType.doc_search_results || "results" in output;
}
export function isDocPageOutput(
output: DocsToolOutput,
): output is DocPageResponse {
return output.type === ResponseType.doc_page || "content" in output;
}
export function isNoResultsOutput(
output: DocsToolOutput,
): output is NoResultsResponse {
return (
output.type === ResponseType.no_results ||
("suggestions" in output && !("error" in output))
);
}
export function isErrorOutput(output: DocsToolOutput): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getDocsToolTitle(
toolType: DocsToolType,
output: DocsToolOutput,
): string {
if (toolType === "tool-search_docs") {
if (isDocSearchResultsOutput(output)) return "Documentation results";
if (isNoResultsOutput(output)) return "No documentation found";
return "Documentation search error";
}
if (isDocPageOutput(output)) return "Documentation page";
if (isNoResultsOutput(output)) return "No documentation found";
return "Documentation page error";
}
export function getAnimationText(part: {
type: DocsToolType;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
switch (part.type) {
case "tool-search_docs": {
const query = (part.input as SearchDocsInput | undefined)?.query?.trim();
const queryText = query ? ` for "${query}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Searching documentation${queryText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `Searching documentation${queryText}`;
if (isDocSearchResultsOutput(output)) {
const count = output.count ?? output.results.length;
return `Found ${count} result${count === 1 ? "" : "s"}${queryText}`;
}
if (isNoResultsOutput(output)) {
return `No results found${queryText}`;
}
return `Error searching documentation${queryText}`;
}
case "output-error":
return `Error searching documentation${queryText}`;
default:
return "Searching documentation";
}
}
case "tool-get_doc_page": {
const path = (part.input as GetDocPageInput | undefined)?.path?.trim();
const pathText = path ? ` "${path}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Loading documentation page${pathText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `Loading documentation page${pathText}`;
if (isDocPageOutput(output)) return `Loaded "${output.title}"`;
if (isNoResultsOutput(output)) return "Documentation page not found";
return "Error loading documentation page";
}
case "output-error":
return "Error loading documentation page";
default:
return "Loading documentation page";
}
}
}
return "Processing";
}
export function ToolIcon({
toolType,
isStreaming,
isError,
}: {
toolType: DocsToolType;
isStreaming?: boolean;
isError?: boolean;
}) {
const IconComponent =
toolType === "tool-get_doc_page" ? FileTextIcon : FileMagnifyingGlassIcon;
return (
<IconComponent
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function toDocsUrl(path: string): string {
const urlPath = path.includes(".")
? path.slice(0, path.lastIndexOf("."))
: path;
return `https://docs.agpt.co/${urlPath}`;
}
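
toDocsUrl strips a trailing file extension before building the public URL; paths without a dot pass through unchanged (example paths invented):

toDocsUrl("platform/blocks/basic.md"); // → "https://docs.agpt.co/platform/blocks/basic"
toDocsUrl("platform/getting-started"); // → "https://docs.agpt.co/platform/getting-started"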

View File

@@ -1,181 +0,0 @@
"use client";
import type { ToolUIPart } from "ai";
import Link from "next/link";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import {
formatMaybeJson,
getAnimationText,
getViewAgentOutputToolOutput,
isAgentOutputResponse,
isErrorResponse,
isNoResultsResponse,
ToolIcon,
type ViewAgentOutputToolOutput,
} from "./helpers";
export interface ViewAgentOutputToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: ViewAgentOutputToolPart;
}
function getAccordionMeta(output: ViewAgentOutputToolOutput): {
badgeText: string;
title: string;
description?: string;
} {
if (isAgentOutputResponse(output)) {
const status = output.execution?.status;
return {
badgeText: "Agent output",
title: output.agent_name,
description: status ? `Status: ${status}` : output.message,
};
}
if (isNoResultsResponse(output)) {
return { badgeText: "Agent output", title: "No results" };
}
return { badgeText: "Agent output", title: "Error" };
}
export function ViewAgentOutputTool({ part }: Props) {
const text = getAnimationText(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getViewAgentOutputToolOutput(part);
const isError =
part.state === "output-error" || (!!output && isErrorResponse(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isAgentOutputResponse(output) ||
isNoResultsResponse(output) ||
isErrorResponse(output));
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion {...getAccordionMeta(output)}>
{isAgentOutputResponse(output) && (
<div className="grid gap-2">
<div className="flex items-start justify-between gap-3">
<p className="text-sm text-foreground">{output.message}</p>
{output.library_agent_link && (
<Link
href={output.library_agent_link}
className="shrink-0 text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open
</Link>
)}
</div>
{output.execution ? (
<div className="grid gap-2">
<div className="rounded-2xl border bg-background p-3">
<p className="text-xs font-medium text-foreground">
Execution
</p>
<p className="mt-1 truncate text-xs text-muted-foreground">
{output.execution.execution_id}
</p>
<p className="mt-1 text-xs text-muted-foreground">
Status: {output.execution.status}
</p>
</div>
{output.execution.inputs_summary && (
<div className="rounded-2xl border bg-background p-3">
<p className="text-xs font-medium text-foreground">
Inputs summary
</p>
<pre className="mt-2 whitespace-pre-wrap text-xs text-muted-foreground">
{formatMaybeJson(output.execution.inputs_summary)}
</pre>
</div>
)}
{Object.entries(output.execution.outputs ?? {}).map(
([key, items]) => (
<div
key={key}
className="rounded-2xl border bg-background p-3"
>
<div className="flex items-center justify-between gap-2">
<p className="truncate text-xs font-medium text-foreground">
{key}
</p>
<span className="shrink-0 rounded-full border bg-muted px-2 py-0.5 text-[11px] text-muted-foreground">
{items.length} item{items.length === 1 ? "" : "s"}
</span>
</div>
<pre className="mt-2 whitespace-pre-wrap text-xs text-muted-foreground">
{formatMaybeJson(items.slice(0, 3))}
</pre>
</div>
),
)}
</div>
) : (
<div className="rounded-2xl border bg-background p-3">
<p className="text-sm text-foreground">
No execution selected.
</p>
<p className="mt-1 text-xs text-muted-foreground">
Try asking for a specific run or execution_id.
</p>
</div>
)}
</div>
)}
{isNoResultsResponse(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.suggestions && output.suggestions.length > 0 && (
<ul className="mt-1 list-disc space-y-1 pl-5 text-xs text-muted-foreground">
{output.suggestions.slice(0, 5).map((s) => (
<li key={s}>{s}</li>
))}
</ul>
)}
</div>
)}
{isErrorResponse(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.error && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.error)}
</pre>
)}
{output.details && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.details)}
</pre>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,154 +0,0 @@
import type { ToolUIPart } from "ai";
import { EyeIcon } from "@phosphor-icons/react";
import type { AgentOutputResponse } from "@/app/api/__generated__/models/agentOutputResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { NoResultsResponse } from "@/app/api/__generated__/models/noResultsResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
export interface ViewAgentOutputInput {
agent_name?: string;
library_agent_id?: string;
store_slug?: string;
execution_id?: string;
run_time?: string;
}
export type ViewAgentOutputToolOutput =
| AgentOutputResponse
| NoResultsResponse
| ErrorResponse;
function parseOutput(output: unknown): ViewAgentOutputToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.agent_output ||
type === ResponseType.no_results ||
type === ResponseType.error
) {
return output as ViewAgentOutputToolOutput;
}
if ("agent_id" in output && "agent_name" in output) {
return output as AgentOutputResponse;
}
if ("suggestions" in output && !("error" in output)) {
return output as NoResultsResponse;
}
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function isAgentOutputResponse(
output: ViewAgentOutputToolOutput,
): output is AgentOutputResponse {
return output.type === ResponseType.agent_output || "agent_id" in output;
}
export function isNoResultsResponse(
output: ViewAgentOutputToolOutput,
): output is NoResultsResponse {
return (
output.type === ResponseType.no_results ||
("suggestions" in output && !("error" in output))
);
}
export function isErrorResponse(
output: ViewAgentOutputToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getViewAgentOutputToolOutput(
part: unknown,
): ViewAgentOutputToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
function getAgentIdentifierText(
input: ViewAgentOutputInput | undefined,
): string | null {
if (!input) return null;
const libraryId = input.library_agent_id?.trim();
if (libraryId) return `Library agent ${libraryId}`;
const slug = input.store_slug?.trim();
if (slug) return slug;
const name = input.agent_name?.trim();
if (name) return name;
return null;
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
const input = part.input as ViewAgentOutputInput | undefined;
const agent = getAgentIdentifierText(input);
const agentText = agent ? ` "${agent}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Retrieving agent output${agentText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `Retrieving agent output${agentText}`;
if (isAgentOutputResponse(output)) {
if (output.execution)
return `Retrieved output (${output.execution.status})`;
return "Retrieved agent output";
}
if (isNoResultsResponse(output)) return "No outputs found";
return "Error loading agent output";
}
case "output-error":
return "Error loading agent output";
default:
return "Retrieving agent output";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<EyeIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}

View File

@@ -1,64 +0,0 @@
import {
getGetV2ListSessionsQueryKey,
useGetV2GetSession,
usePostV2CreateSession,
} from "@/app/api/__generated__/endpoints/chat/chat";
import { useQueryClient } from "@tanstack/react-query";
import { parseAsString, useQueryState } from "nuqs";
import { useMemo } from "react";
import { convertChatSessionMessagesToUiMessages } from "./helpers/convertChatSessionToUiMessages";
export function useChatSession() {
const [sessionId, setSessionId] = useQueryState("sessionId", parseAsString);
const queryClient = useQueryClient();
const sessionQuery = useGetV2GetSession(sessionId ?? "", {
query: {
staleTime: Infinity,
refetchOnWindowFocus: false,
refetchOnReconnect: false,
},
});
// Memoize so the effect in useCopilotPage doesn't infinite-loop on a new
// array reference every render. Re-derives only when query data changes.
const hydratedMessages = useMemo(() => {
if (sessionQuery.data?.status !== 200 || !sessionId) return undefined;
return convertChatSessionMessagesToUiMessages(
sessionId,
sessionQuery.data.data.messages ?? [],
);
}, [sessionQuery.data, sessionId]);
const { mutateAsync: createSessionMutation, isPending: isCreatingSession } =
usePostV2CreateSession({
mutation: {
onSuccess: (response) => {
if (response.status === 200 && response.data?.id) {
setSessionId(response.data.id);
queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
}
},
},
});
async function createSession() {
if (sessionId) return sessionId;
const response = await createSessionMutation();
if (response.status !== 200 || !response.data?.id) {
throw new Error("Failed to create session");
}
return response.data.id;
}
return {
sessionId,
setSessionId,
hydratedMessages,
isLoadingSession: sessionQuery.isLoading,
createSession,
isCreatingSession,
};
}
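
createSession is safe to call unconditionally: it short-circuits to the current id when a session already exists, and the onSuccess handler is what writes a newly created id into the ?sessionId query param and refreshes the session list. A sketch of the calling pattern inside a consuming component (handler name invented):

async function ensureSessionAndLog() {
  const id = await createSession(); // returns the existing id if one is active
  console.log(`streaming endpoint: /api/chat/sessions/${id}/stream`);
}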

View File

@@ -1,137 +0,0 @@
import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat";
import { useBreakpoint } from "@/lib/hooks/useBreakpoint";
import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";
import { useCallback, useEffect, useState } from "react";
import { useChatSession } from "./useChatSession";
export function useCopilotPage() {
const [isDrawerOpen, setIsDrawerOpen] = useState(false);
const [pendingMessage, setPendingMessage] = useState<string | null>(null);
const {
sessionId,
setSessionId,
hydratedMessages,
isLoadingSession,
createSession,
isCreatingSession,
} = useChatSession();
const breakpoint = useBreakpoint();
const isMobile =
breakpoint === "base" || breakpoint === "sm" || breakpoint === "md";
const transport = sessionId
? new DefaultChatTransport({
api: `/api/chat/sessions/${sessionId}/stream`,
prepareSendMessagesRequest: ({ messages }) => {
const last = messages[messages.length - 1];
return {
body: {
message: last.parts
?.map((p) => (p.type === "text" ? p.text : ""))
.join(""),
is_user_message: last.role === "user",
context: null,
},
};
},
// Resume uses GET on the same endpoint (no message param → backend resumes)
prepareReconnectToStreamRequest: () => ({
api: `/api/chat/sessions/${sessionId}/stream`,
}),
})
: null;
const { messages, sendMessage, status, error, setMessages } = useChat({
id: sessionId ?? undefined,
transport: transport ?? undefined,
resume: !!sessionId,
});
useEffect(() => {
if (!hydratedMessages || hydratedMessages.length === 0) return;
setMessages((prev) => {
if (prev.length > hydratedMessages.length) return prev;
return hydratedMessages;
});
}, [hydratedMessages, setMessages]);
// Clear messages when session is null
useEffect(() => {
if (!sessionId) setMessages([]);
}, [sessionId, setMessages]);
useEffect(() => {
if (!sessionId || !pendingMessage) return;
const msg = pendingMessage;
setPendingMessage(null);
sendMessage({ text: msg });
}, [sessionId, pendingMessage, sendMessage]);
async function onSend(message: string) {
const trimmed = message.trim();
if (!trimmed) return;
if (sessionId) {
sendMessage({ text: trimmed });
return;
}
setPendingMessage(trimmed);
await createSession();
}
const { data: sessionsResponse, isLoading: isLoadingSessions } =
useGetV2ListSessions({ limit: 50 });
const sessions =
sessionsResponse?.status === 200 ? sessionsResponse.data.sessions : [];
const handleOpenDrawer = useCallback(() => {
setIsDrawerOpen(true);
}, []);
const handleCloseDrawer = useCallback(() => {
setIsDrawerOpen(false);
}, []);
const handleDrawerOpenChange = useCallback((open: boolean) => {
setIsDrawerOpen(open);
}, []);
const handleSelectSession = useCallback(
(id: string) => {
setSessionId(id);
if (isMobile) setIsDrawerOpen(false);
},
[setSessionId, isMobile],
);
const handleNewChat = useCallback(() => {
setSessionId(null);
if (isMobile) setIsDrawerOpen(false);
}, [setSessionId, isMobile]);
return {
sessionId,
messages,
status,
error,
isLoadingSession,
isCreatingSession,
createSession,
onSend,
// Mobile drawer
isMobile,
isDrawerOpen,
sessions,
isLoadingSessions,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleSelectSession,
handleNewChat,
};
}
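
A sketch of how a page component might consume this hook, using the Conversation primitives defined later in this diff (component structure invented; the real page wiring may differ):

function CopilotPage() {
  const { messages, status, onSend } = useCopilotPage();
  return (
    <Conversation>
      <ConversationContent>
        {messages.map((m) => (
          <div key={m.id}>{/* render m.parts via the tool components above */}</div>
        ))}
      </ConversationContent>
      <button disabled={status === "streaming"} onClick={() => onSend("Hello")}>
        Send
      </button>
    </Conversation>
  );
}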

View File

@@ -88,27 +88,39 @@ export async function POST(
}
/**
* Resume an active stream for a session.
*
* Called by the AI SDK's `useChat(resume: true)` on page load.
* Proxies to the backend which checks for an active stream and either
* replays it (200 + SSE) or returns 204 No Content.
* Legacy GET endpoint for backward compatibility
*/
export async function GET(
_request: NextRequest,
request: NextRequest,
{ params }: { params: Promise<{ sessionId: string }> },
) {
const { sessionId } = await params;
const searchParams = request.nextUrl.searchParams;
const message = searchParams.get("message");
const isUserMessage = searchParams.get("is_user_message");
if (!message) {
return new Response("Missing message parameter", { status: 400 });
}
try {
// Get auth token from server-side session
const token = await getServerAuthToken();
// Build backend URL
const backendUrl = environment.getAGPTServerBaseUrl();
const streamUrl = new URL(
`/api/chat/sessions/${sessionId}/stream`,
backendUrl,
);
streamUrl.searchParams.set("message", message);
// Pass is_user_message parameter if provided
if (isUserMessage !== null) {
streamUrl.searchParams.set("is_user_message", isUserMessage);
}
// Forward request to backend with auth header
const headers: Record<string, string> = {
Accept: "text/event-stream",
"Cache-Control": "no-cache",
@@ -124,11 +136,6 @@ export async function GET(
headers,
});
// 204 = no active stream to resume
if (response.status === 204) {
return new Response(null, { status: 204 });
}
if (!response.ok) {
const error = await response.text();
return new Response(error, {
@@ -137,17 +144,17 @@ export async function GET(
});
}
// Return the SSE stream directly
return new Response(response.body, {
headers: {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache, no-transform",
Connection: "keep-alive",
"X-Accel-Buffering": "no",
"x-vercel-ai-ui-message-stream": "v1",
},
});
} catch (error) {
console.error("Resume stream proxy error:", error);
console.error("SSE proxy error:", error);
return new Response(
JSON.stringify({
error: "Failed to connect to chat service",

File diff suppressed because it is too large

View File

@@ -1,7 +1,6 @@
@tailwind base;
@tailwind components;
@tailwind utilities;
@source "../node_modules/streamdown/dist/*.js";
@layer base {
:root {
@@ -30,14 +29,6 @@
--chart-3: 197 37% 24%;
--chart-4: 43 74% 66%;
--chart-5: 27 87% 67%;
--sidebar-background: 0 0% 98%;
--sidebar-foreground: 240 5.3% 26.1%;
--sidebar-primary: 240 5.9% 10%;
--sidebar-primary-foreground: 0 0% 98%;
--sidebar-accent: 240 4.8% 95.9%;
--sidebar-accent-foreground: 240 5.9% 10%;
--sidebar-border: 220 13% 91%;
--sidebar-ring: 217.2 91.2% 59.8%;
}
.dark {
@@ -65,14 +56,6 @@
--chart-3: 30 80% 55%;
--chart-4: 280 65% 60%;
--chart-5: 340 75% 55%;
--sidebar-background: 240 5.9% 10%;
--sidebar-foreground: 240 4.8% 95.9%;
--sidebar-primary: 224.3 76.3% 48%;
--sidebar-primary-foreground: 0 0% 100%;
--sidebar-accent: 240 3.7% 15.9%;
--sidebar-accent-foreground: 240 4.8% 95.9%;
--sidebar-border: 240 3.7% 15.9%;
--sidebar-ring: 217.2 91.2% 59.8%;
}
* {

View File

@@ -22,7 +22,7 @@ const isValidVideoUrl = (url: string): boolean => {
if (url.startsWith("data:video")) {
return true;
}
const videoExtensions = /\.(mp4|webm|ogg)$/i;
const videoExtensions = /\.(mp4|webm|ogg|mov|avi|mkv|m4v)$/i;
const youtubeRegex = /^(https?:\/\/)?(www\.)?(youtube\.com|youtu\.?be)\/.+$/;
const cleanedUrl = url.split("?")[0];
return (
@@ -44,11 +44,29 @@ const isValidAudioUrl = (url: string): boolean => {
if (url.startsWith("data:audio")) {
return true;
}
const audioExtensions = /\.(mp3|wav)$/i;
const audioExtensions = /\.(mp3|wav|ogg|m4a|aac|flac)$/i;
const cleanedUrl = url.split("?")[0];
return isValidMediaUri(url) && audioExtensions.test(cleanedUrl);
};
const getVideoMimeType = (url: string): string => {
if (url.startsWith("data:video/")) {
const match = url.match(/^data:(video\/[^;]+)/);
return match?.[1] || "video/mp4";
}
const extension = url.split("?")[0].split(".").pop()?.toLowerCase();
const mimeMap: Record<string, string> = {
mp4: "video/mp4",
webm: "video/webm",
ogg: "video/ogg",
mov: "video/quicktime",
avi: "video/x-msvideo",
mkv: "video/x-matroska",
m4v: "video/mp4",
};
return mimeMap[extension || ""] || "video/mp4";
};
const VideoRenderer: React.FC<{ videoUrl: string }> = ({ videoUrl }) => {
const videoId = getYouTubeVideoId(videoUrl);
return (
@@ -63,7 +81,7 @@ const VideoRenderer: React.FC<{ videoUrl: string }> = ({ videoUrl }) => {
></iframe>
) : (
<video controls width="100%" height="315">
<source src={videoUrl} type="video/mp4" />
<source src={videoUrl} type={getVideoMimeType(videoUrl)} />
Your browser does not support the video tag.
</video>
)}
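The extension-to-MIME lookup above drives the `<source type>` attribute. Expected outputs for a few representative inputs (the URLs are illustrative):

```ts
// Query strings are stripped before the extension lookup, and data: URIs
// carry their own MIME type. Unknown extensions fall back to "video/mp4".
getVideoMimeType("https://cdn.example.com/clip.webm");        // "video/webm"
getVideoMimeType("https://cdn.example.com/clip.mov?sig=abc"); // "video/quicktime"
getVideoMimeType("data:video/ogg;base64,AAAA");               // "video/ogg"
getVideoMimeType("https://cdn.example.com/render.bin");       // "video/mp4" (fallback)
```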

View File

@@ -1,109 +0,0 @@
"use client";
import { Button } from "@/components/ui/button";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { ArrowDownIcon } from "lucide-react";
import type { ComponentProps } from "react";
import { useCallback } from "react";
import { StickToBottom, useStickToBottomContext } from "use-stick-to-bottom";
export type ConversationProps = ComponentProps<typeof StickToBottom>;
export const Conversation = ({ className, ...props }: ConversationProps) => (
<StickToBottom
className={cn(
"relative flex-1 overflow-y-hidden",
scrollbarStyles,
className,
)}
initial="smooth"
resize="smooth"
role="log"
{...props}
/>
);
export type ConversationContentProps = ComponentProps<
typeof StickToBottom.Content
>;
export const ConversationContent = ({
className,
...props
}: ConversationContentProps) => (
<StickToBottom.Content
className={cn("flex flex-col gap-8 p-4", className)}
{...props}
/>
);
export type ConversationEmptyStateProps = ComponentProps<"div"> & {
title?: string;
description?: string;
icon?: React.ReactNode;
};
export const ConversationEmptyState = ({
className,
title = "No messages yet",
description = "Start a conversation to see messages here",
icon,
children,
...props
}: ConversationEmptyStateProps) => (
<div
className={cn(
"flex size-full flex-col items-center justify-center gap-3 p-8 text-center",
className,
)}
{...props}
>
{children ?? (
<>
{icon && (
<div className="text-neutral-500 dark:text-neutral-400">{icon}</div>
)}
<div className="space-y-1">
<h3 className="text-sm font-medium">{title}</h3>
{description && (
<p className="text-sm text-neutral-500 dark:text-neutral-400">
{description}
</p>
)}
</div>
</>
)}
</div>
);
export type ConversationScrollButtonProps = ComponentProps<typeof Button>;
export const ConversationScrollButton = ({
className,
...props
}: ConversationScrollButtonProps) => {
const { isAtBottom, scrollToBottom } = useStickToBottomContext();
const handleScrollToBottom = useCallback(() => {
scrollToBottom();
}, [scrollToBottom]);
return (
!isAtBottom && (
<Button
className={cn(
"absolute bottom-4 left-[50%] translate-x-[-50%] rounded-full dark:bg-white dark:dark:bg-neutral-950 dark:dark:hover:bg-neutral-800 dark:hover:bg-neutral-100",
className,
)}
onClick={handleScrollToBottom}
size="icon"
type="button"
variant="outline"
{...props}
>
<ArrowDownIcon className="size-4" />
</Button>
)
);
};
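This deleted file wrapped `use-stick-to-bottom` into chat-log primitives. A usage sketch of how they composed; the import path and message shape are assumptions:

```tsx
// Hedged usage sketch of the removed Conversation primitives.
import {
  Conversation,
  ConversationContent,
  ConversationEmptyState,
  ConversationScrollButton,
} from "./conversation";

function ChatLog({ messages }: { messages: { id: string; text: string }[] }) {
  return (
    <Conversation className="h-full">
      <ConversationContent>
        {messages.length === 0 ? (
          <ConversationEmptyState description="Ask anything to get started" />
        ) : (
          messages.map((m) => <p key={m.id}>{m.text}</p>)
        )}
      </ConversationContent>
      {/* Rendered only while the user is scrolled away from the bottom */}
      <ConversationScrollButton />
    </Conversation>
  );
}
```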

View File

@@ -1,338 +0,0 @@
"use client";
import { Button } from "@/components/ui/button";
import { ButtonGroup, ButtonGroupText } from "@/components/ui/button-group";
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/ui/tooltip";
import { cn } from "@/lib/utils";
import { cjk } from "@streamdown/cjk";
import { code } from "@streamdown/code";
import { math } from "@streamdown/math";
import { mermaid } from "@streamdown/mermaid";
import type { UIMessage } from "ai";
import { ChevronLeftIcon, ChevronRightIcon } from "lucide-react";
import type { ComponentProps, HTMLAttributes, ReactElement } from "react";
import { createContext, memo, useContext, useEffect, useState } from "react";
import { Streamdown } from "streamdown";
export type MessageProps = HTMLAttributes<HTMLDivElement> & {
from: UIMessage["role"];
};
export const Message = ({ className, from, ...props }: MessageProps) => (
<div
className={cn(
"group flex w-full max-w-[95%] flex-col gap-2",
from === "user" ? "is-user ml-auto justify-end" : "is-assistant",
className,
)}
{...props}
/>
);
export type MessageContentProps = HTMLAttributes<HTMLDivElement>;
export const MessageContent = ({
children,
className,
...props
}: MessageContentProps) => (
<div
className={cn(
"is-user:dark flex w-full min-w-0 max-w-full flex-col gap-2 overflow-hidden text-sm",
"group-[.is-user]:w-fit",
"group-[.is-user]:ml-auto group-[.is-user]:rounded-lg group-[.is-user]:bg-neutral-100 group-[.is-user]:px-4 group-[.is-user]:py-3 group-[.is-user]:text-neutral-950 dark:group-[.is-user]:bg-neutral-800 dark:group-[.is-user]:text-neutral-50",
"group-[.is-assistant]:text-neutral-950 dark:group-[.is-assistant]:text-neutral-50",
className,
)}
{...props}
>
{children}
</div>
);
export type MessageActionsProps = ComponentProps<"div">;
export const MessageActions = ({
className,
children,
...props
}: MessageActionsProps) => (
<div className={cn("flex items-center gap-1", className)} {...props}>
{children}
</div>
);
export type MessageActionProps = ComponentProps<typeof Button> & {
tooltip?: string;
label?: string;
};
export const MessageAction = ({
tooltip,
children,
label,
variant = "ghost",
size = "icon-sm",
...props
}: MessageActionProps) => {
const button = (
<Button size={size} type="button" variant={variant} {...props}>
{children}
<span className="sr-only">{label || tooltip}</span>
</Button>
);
if (tooltip) {
return (
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>{button}</TooltipTrigger>
<TooltipContent>
<p>{tooltip}</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>
);
}
return button;
};
interface MessageBranchContextType {
currentBranch: number;
totalBranches: number;
goToPrevious: () => void;
goToNext: () => void;
branches: ReactElement[];
setBranches: (branches: ReactElement[]) => void;
}
const MessageBranchContext = createContext<MessageBranchContextType | null>(
null,
);
const useMessageBranch = () => {
const context = useContext(MessageBranchContext);
if (!context) {
throw new Error("MessageBranch components must be used within MessageBranch");
}
return context;
};
export type MessageBranchProps = HTMLAttributes<HTMLDivElement> & {
defaultBranch?: number;
onBranchChange?: (branchIndex: number) => void;
};
export const MessageBranch = ({
defaultBranch = 0,
onBranchChange,
className,
...props
}: MessageBranchProps) => {
const [currentBranch, setCurrentBranch] = useState(defaultBranch);
const [branches, setBranches] = useState<ReactElement[]>([]);
const handleBranchChange = (newBranch: number) => {
setCurrentBranch(newBranch);
onBranchChange?.(newBranch);
};
const goToPrevious = () => {
const newBranch =
currentBranch > 0 ? currentBranch - 1 : branches.length - 1;
handleBranchChange(newBranch);
};
const goToNext = () => {
const newBranch =
currentBranch < branches.length - 1 ? currentBranch + 1 : 0;
handleBranchChange(newBranch);
};
const contextValue: MessageBranchContextType = {
currentBranch,
totalBranches: branches.length,
goToPrevious,
goToNext,
branches,
setBranches,
};
return (
<MessageBranchContext.Provider value={contextValue}>
<div
className={cn("grid w-full gap-2 [&>div]:pb-0", className)}
{...props}
/>
</MessageBranchContext.Provider>
);
};
export type MessageBranchContentProps = HTMLAttributes<HTMLDivElement>;
export const MessageBranchContent = ({
children,
...props
}: MessageBranchContentProps) => {
const { currentBranch, setBranches, branches } = useMessageBranch();
const childrenArray = Array.isArray(children) ? children : [children];
// Use useEffect to update branches when they change
useEffect(() => {
if (branches.length !== childrenArray.length) {
setBranches(childrenArray);
}
}, [childrenArray, branches, setBranches]);
return childrenArray.map((branch, index) => (
<div
className={cn(
"grid gap-2 overflow-hidden [&>div]:pb-0",
index === currentBranch ? "block" : "hidden",
)}
key={branch.key}
{...props}
>
{branch}
</div>
));
};
export type MessageBranchSelectorProps = HTMLAttributes<HTMLDivElement> & {
from: UIMessage["role"];
};
export const MessageBranchSelector = ({
className,
from: _from,
...props
}: MessageBranchSelectorProps) => {
const { totalBranches } = useMessageBranch();
// Don't render if there's only one branch
if (totalBranches <= 1) {
return null;
}
return (
<ButtonGroup
className={cn(
"[&>*:not(:first-child)]:rounded-l-md [&>*:not(:last-child)]:rounded-r-md",
className,
)}
orientation="horizontal"
{...props}
/>
);
};
export type MessageBranchPreviousProps = ComponentProps<typeof Button>;
export const MessageBranchPrevious = ({
children,
...props
}: MessageBranchPreviousProps) => {
const { goToPrevious, totalBranches } = useMessageBranch();
return (
<Button
aria-label="Previous branch"
disabled={totalBranches <= 1}
onClick={goToPrevious}
size="icon-sm"
type="button"
variant="ghost"
{...props}
>
{children ?? <ChevronLeftIcon size={14} />}
</Button>
);
};
export type MessageBranchNextProps = ComponentProps<typeof Button>;
export const MessageBranchNext = ({
children,
...props
}: MessageBranchNextProps) => {
const { goToNext, totalBranches } = useMessageBranch();
return (
<Button
aria-label="Next branch"
disabled={totalBranches <= 1}
onClick={goToNext}
size="icon-sm"
type="button"
variant="ghost"
{...props}
>
{children ?? <ChevronRightIcon size={14} />}
</Button>
);
};
export type MessageBranchPageProps = HTMLAttributes<HTMLSpanElement>;
export const MessageBranchPage = ({
className,
...props
}: MessageBranchPageProps) => {
const { currentBranch, totalBranches } = useMessageBranch();
return (
<ButtonGroupText
className={cn(
"border-none bg-transparent text-neutral-500 shadow-none dark:text-neutral-400",
className,
)}
{...props}
>
{currentBranch + 1} of {totalBranches}
</ButtonGroupText>
);
};
export type MessageResponseProps = ComponentProps<typeof Streamdown>;
export const MessageResponse = memo(
({ className, ...props }: MessageResponseProps) => (
<Streamdown
className={cn(
"size-full [&>*:first-child]:mt-0 [&>*:last-child]:mb-0",
className,
)}
plugins={{ code, mermaid, math, cjk }}
{...props}
/>
),
(prevProps, nextProps) => prevProps.children === nextProps.children,
);
MessageResponse.displayName = "MessageResponse";
export type MessageToolbarProps = ComponentProps<"div">;
export const MessageToolbar = ({
className,
children,
...props
}: MessageToolbarProps) => (
<div
className={cn(
"mt-4 flex w-full items-center justify-between gap-4",
className,
)}
{...props}
>
{children}
</div>
);
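For context on the deleted branch-navigation components: each child of `MessageBranchContent` is one alternative response, and the selector pages between them. A usage sketch under that assumption (import path assumed):

```tsx
// Hypothetical composition of the removed branch-navigation components.
import {
  Message,
  MessageBranch,
  MessageBranchContent,
  MessageBranchNext,
  MessageBranchPage,
  MessageBranchPrevious,
  MessageBranchSelector,
  MessageContent,
  MessageResponse,
} from "./message";

function AssistantAlternatives({ drafts }: { drafts: string[] }) {
  return (
    <MessageBranch defaultBranch={0}>
      <MessageBranchContent>
        {drafts.map((text, i) => (
          <Message from="assistant" key={i}>
            <MessageContent>
              <MessageResponse>{text}</MessageResponse>
            </MessageContent>
          </Message>
        ))}
      </MessageBranchContent>
      {/* Shows "1 of N" with prev/next controls; hidden when N <= 1 */}
      <MessageBranchSelector from="assistant">
        <MessageBranchPrevious />
        <MessageBranchPage />
        <MessageBranchNext />
      </MessageBranchSelector>
    </MessageBranch>
  );
}
```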

View File

@@ -77,7 +77,7 @@ export function OverflowText(props: Props) {
"block min-w-0 overflow-hidden text-ellipsis whitespace-nowrap",
)}
>
<Text variant={variant} as="span" className={className} {...restProps}>
<Text variant={variant} className={className} {...restProps}>
{value}
</Text>
</span>

View File

@@ -6,19 +6,17 @@ import {
MicrophoneIcon,
StopIcon,
} from "@phosphor-icons/react";
import { ChangeEvent, useCallback } from "react";
import { RecordingIndicator } from "./components/RecordingIndicator";
import { useChatInput } from "./useChatInput";
import { useVoiceRecording } from "./useVoiceRecording";
export interface Props {
onSend: (message: string) => void | Promise<void>;
onSend: (message: string) => void;
disabled?: boolean;
isStreaming?: boolean;
onStop?: () => void;
placeholder?: string;
className?: string;
inputId?: string;
}
export function ChatInput({
@@ -28,14 +26,14 @@ export function ChatInput({
onStop,
placeholder = "Type your message...",
className,
inputId = "chat-input",
}: Props) {
const inputId = "chat-input";
const {
value,
setValue,
handleKeyDown: baseHandleKeyDown,
handleSubmit,
handleChange: baseHandleChange,
handleChange,
hasMultipleLines,
} = useChatInput({
onSend,
@@ -62,15 +60,6 @@ export function ChatInput({
inputId,
});
// Block text changes when recording
const handleChange = useCallback(
(e: ChangeEvent<HTMLTextAreaElement>) => {
if (isRecording) return;
baseHandleChange(e);
},
[isRecording, baseHandleChange],
);
return (
<form onSubmit={handleSubmit} className={cn("relative flex-1", className)}>
<div className="relative">

View File

@@ -21,7 +21,6 @@ export function useChatInput({
}: Args) {
const [value, setValue] = useState("");
const [hasMultipleLines, setHasMultipleLines] = useState(false);
const [isSending, setIsSending] = useState(false);
useEffect(
function focusOnMount() {
@@ -101,40 +100,34 @@ export function useChatInput({
}
}, [value, maxRows, inputId]);
async function handleSend() {
if (disabled || isSending || !value.trim()) return;
setIsSending(true);
try {
await onSend(value.trim());
setValue("");
setHasMultipleLines(false);
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
const wrapper = document.getElementById(
`${inputId}-wrapper`,
) as HTMLDivElement;
if (textarea) {
textarea.style.height = "auto";
}
if (wrapper) {
wrapper.style.height = "";
wrapper.style.maxHeight = "";
}
} finally {
setIsSending(false);
const handleSend = () => {
if (disabled || !value.trim()) return;
onSend(value.trim());
setValue("");
setHasMultipleLines(false);
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
const wrapper = document.getElementById(
`${inputId}-wrapper`,
) as HTMLDivElement;
if (textarea) {
textarea.style.height = "auto";
}
}
if (wrapper) {
wrapper.style.height = "";
wrapper.style.maxHeight = "";
}
};
function handleKeyDown(event: KeyboardEvent<HTMLTextAreaElement>) {
if (event.key === "Enter" && !event.shiftKey) {
event.preventDefault();
void handleSend();
handleSend();
}
}
function handleSubmit(e: FormEvent<HTMLFormElement>) {
e.preventDefault();
void handleSend();
handleSend();
}
function handleChange(e: ChangeEvent<HTMLTextAreaElement>) {
@@ -149,6 +142,5 @@ export function useChatInput({
handleSubmit,
handleChange,
hasMultipleLines,
isSending,
};
}
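The two interleaved `handleSend` variants differ in whether the send is awaited: the async version keeps an `isSending` flag so re-entrant submits are ignored and the input is only cleared after `onSend` settles. A standalone sketch of that guard (the hook name is hypothetical):

```ts
// Minimal sketch of the double-submit guard the async variant implements:
// ignore re-entrant sends until the in-flight onSend settles.
import { useState } from "react";

export function useGuardedSend(
  onSend: (message: string) => void | Promise<void>,
) {
  const [isSending, setIsSending] = useState(false);

  async function send(message: string) {
    if (isSending || !message.trim()) return;
    setIsSending(true);
    try {
      await onSend(message.trim()); // clear UI state only after this resolves
    } finally {
      setIsSending(false);
    }
  }

  return { send, isSending };
}
```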

View File

@@ -38,13 +38,9 @@ export function useVoiceRecording({
const streamRef = useRef<MediaStream | null>(null);
const isRecordingRef = useRef(false);
const [isSupported, setIsSupported] = useState(false);
useEffect(() => {
setIsSupported(
!!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia),
);
}, []);
const isSupported =
typeof window !== "undefined" &&
!!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia);
const clearTimer = useCallback(() => {
if (timerRef.current) {
@@ -218,33 +214,17 @@ export function useVoiceRecording({
const handleKeyDown = useCallback(
(event: KeyboardEvent<HTMLTextAreaElement>) => {
// Allow space to toggle recording (start when empty, stop when recording)
if (event.key === " " && !isTranscribing) {
if (isRecordingRef.current) {
// Stop recording on space
event.preventDefault();
stopRecording();
return;
} else if (!value.trim()) {
// Start recording on space when input is empty
event.preventDefault();
void startRecording();
return;
}
}
// Block all key events when recording (except space handled above)
if (isRecordingRef.current) {
if (event.key === " " && !value.trim() && !isTranscribing) {
event.preventDefault();
toggleRecording();
return;
}
baseHandleKeyDown(event);
},
[value, isTranscribing, stopRecording, startRecording, baseHandleKeyDown],
[value, isTranscribing, toggleRecording, baseHandleKeyDown],
);
const showMicButton = isSupported;
// Don't include isRecording in disabled state - we need key events to work
// Text input is blocked via handleKeyDown instead
const isInputDisabled = disabled || isStreaming || isTranscribing;
// Cleanup on unmount
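One of the interleaved `isSupported` variants computes support during render behind a `typeof window` guard; the other defers to an effect. The effect-based form avoids server/client hydration mismatches, because both the server render and the first client render see `false`. A minimal sketch of that variant:

```ts
// Hydration-safe feature detection: the server renders with
// supported === false, and the real check runs only after mount.
import { useEffect, useState } from "react";

export function useMediaRecordingSupported(): boolean {
  const [supported, setSupported] = useState(false);
  useEffect(() => {
    // Runs only in the browser, after hydration.
    setSupported(!!navigator.mediaDevices?.getUserMedia);
  }, []);
  return supported;
}
```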

View File

@@ -346,6 +346,7 @@ export function ChatMessage({
toolId={message.toolId}
toolName={message.toolName}
result={message.result}
onSendMessage={onSendMessage}
/>
</div>
);

View File

@@ -3,7 +3,7 @@
import { getGetWorkspaceDownloadFileByIdUrl } from "@/app/api/__generated__/endpoints/workspace/workspace";
import { cn } from "@/lib/utils";
import { EyeSlash } from "@phosphor-icons/react";
import React from "react";
import React, { useState } from "react";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
@@ -48,7 +48,9 @@ interface InputProps extends React.InputHTMLAttributes<HTMLInputElement> {
*/
function resolveWorkspaceUrl(src: string): string {
if (src.startsWith("workspace://")) {
const fileId = src.replace("workspace://", "");
// Strip MIME type fragment if present (e.g., workspace://abc123#video/mp4 → abc123)
const withoutPrefix = src.replace("workspace://", "");
const fileId = withoutPrefix.split("#")[0];
// Use the generated API URL helper to get the correct path
const apiPath = getGetWorkspaceDownloadFileByIdUrl(fileId);
// Route through the Next.js proxy (same pattern as customMutator for client-side)
@@ -65,13 +67,49 @@ function isWorkspaceImage(src: string | undefined): boolean {
return src?.includes("/workspace/files/") ?? false;
}
/**
* Renders a workspace video with controls and an optional "AI cannot see" badge.
*/
function WorkspaceVideo({
src,
aiCannotSee,
}: {
src: string;
aiCannotSee: boolean;
}) {
return (
<span className="relative my-2 inline-block">
<video
controls
className="h-auto max-w-full rounded-md border border-zinc-200"
preload="metadata"
>
<source src={src} />
Your browser does not support the video tag.
</video>
{aiCannotSee && (
<span
className="absolute bottom-2 right-2 flex items-center gap-1 rounded bg-black/70 px-2 py-1 text-xs text-white"
title="The AI cannot see this video"
>
<EyeSlash size={14} />
<span>AI cannot see this video</span>
</span>
)}
</span>
);
}
/**
* Custom image component that shows an indicator when the AI cannot see the image.
* Also handles the "video:" alt-text prefix convention to render <video> elements.
* For workspace files with unknown types, falls back to <video> if <img> fails.
* Note: src is already transformed by urlTransform, so workspace:// is now /api/workspace/...
*/
function MarkdownImage(props: Record<string, unknown>) {
const src = props.src as string | undefined;
const alt = props.alt as string | undefined;
const [imgFailed, setImgFailed] = useState(false);
const aiCannotSee = isWorkspaceImage(src);
@@ -84,6 +122,18 @@ function MarkdownImage(props: Record<string, unknown>) {
);
}
// Detect video: prefix in alt text (set by formatOutputValue in helpers.ts)
if (alt?.startsWith("video:")) {
return <WorkspaceVideo src={src} aiCannotSee={aiCannotSee} />;
}
// If the <img> failed to load and this is a workspace file, try as video.
// This handles generic output keys like "file_out" where the MIME type
// isn't known from the key name alone.
if (imgFailed && aiCannotSee) {
return <WorkspaceVideo src={src} aiCannotSee={aiCannotSee} />;
}
return (
<span className="relative my-2 inline-block">
{/* eslint-disable-next-line @next/next/no-img-element */}
@@ -92,6 +142,9 @@ function MarkdownImage(props: Record<string, unknown>) {
alt={alt || "Image"}
className="h-auto max-w-full rounded-md border border-zinc-200"
loading="lazy"
onError={() => {
if (aiCannotSee) setImgFailed(true);
}}
/>
{aiCannotSee && (
<span
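Illustrative behavior of the fragment-stripping `resolveWorkspaceUrl` above; the exact download path comes from the generated `getGetWorkspaceDownloadFileByIdUrl` helper, so the resolved shapes described in the comments are assumptions:

```ts
// Illustrative calls; resolved paths are assumptions, not verbatim output.
resolveWorkspaceUrl("workspace://abc123#video/mp4");
// → proxy URL for file "abc123"; the "#video/mp4" fragment is stripped first

resolveWorkspaceUrl("workspace://abc123");
// → the same proxy URL; refs without a fragment resolve identically

resolveWorkspaceUrl("https://example.com/image.png");
// → presumably returned unchanged, since only workspace:// refs are rewritten
```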

View File

@@ -73,6 +73,7 @@ export function MessageList({
key={index}
message={message}
prevMessage={messages[index - 1]}
onSendMessage={onSendMessage}
/>
);
}

View File

@@ -5,11 +5,13 @@ import { shouldSkipAgentOutput } from "../../helpers";
export interface LastToolResponseProps {
message: ChatMessageData;
prevMessage: ChatMessageData | undefined;
onSendMessage?: (content: string) => void;
}
export function LastToolResponse({
message,
prevMessage,
onSendMessage,
}: LastToolResponseProps) {
if (message.type !== "tool_response") return null;
@@ -21,6 +23,7 @@ export function LastToolResponse({
toolId={message.toolId}
toolName={message.toolName}
result={message.result}
onSendMessage={onSendMessage}
/>
</div>
);

View File

@@ -1,6 +1,8 @@
import { Progress } from "@/components/atoms/Progress/Progress";
import { cn } from "@/lib/utils";
import { useEffect, useRef, useState } from "react";
import { AIChatBubble } from "../AIChatBubble/AIChatBubble";
import { useAsymptoticProgress } from "../ToolCallMessage/useAsymptoticProgress";
export interface ThinkingMessageProps {
className?: string;
@@ -11,18 +13,19 @@ export function ThinkingMessage({ className }: ThinkingMessageProps) {
const [showCoffeeMessage, setShowCoffeeMessage] = useState(false);
const timerRef = useRef<NodeJS.Timeout | null>(null);
const coffeeTimerRef = useRef<NodeJS.Timeout | null>(null);
const progress = useAsymptoticProgress(showCoffeeMessage);
useEffect(() => {
if (timerRef.current === null) {
timerRef.current = setTimeout(() => {
setShowSlowLoader(true);
}, 8000);
}, 3000);
}
if (coffeeTimerRef.current === null) {
coffeeTimerRef.current = setTimeout(() => {
setShowCoffeeMessage(true);
}, 10000);
}, 8000);
}
return () => {
@@ -49,9 +52,18 @@ export function ThinkingMessage({ className }: ThinkingMessageProps) {
<AIChatBubble>
<div className="transition-all duration-500 ease-in-out">
{showCoffeeMessage ? (
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
This could take a few minutes, grab a coffee
</span>
<div className="flex flex-col items-center gap-3">
<div className="flex w-full max-w-[280px] flex-col gap-1.5">
<div className="flex items-center justify-between text-xs text-neutral-500">
<span>Working on it...</span>
<span>{Math.round(progress)}%</span>
</div>
<Progress value={progress} className="h-2 w-full" />
</div>
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
This could take a few minutes, grab a coffee
</span>
</div>
) : showSlowLoader ? (
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
Taking a bit more time...

View File

@@ -0,0 +1,50 @@
import { useEffect, useRef, useState } from "react";
/**
* Hook that returns a progress value that starts fast and slows down,
* asymptotically approaching but never reaching the max value.
*
* Uses a half-life formula: progress = max * (1 - 0.5^(time/halfLife))
* This creates the "game loading bar" effect where:
* - 50% is reached at halfLifeSeconds
* - 75% is reached at 2 * halfLifeSeconds
* - 87.5% is reached at 3 * halfLifeSeconds
* - and so on...
*
* @param isActive - Whether the progress should be animating
* @param halfLifeSeconds - Time in seconds to reach 50% progress (default: 30)
* @param maxProgress - Maximum progress value to approach (default: 100)
* @param intervalMs - Update interval in milliseconds (default: 100)
* @returns Current progress value (0-maxProgress)
*/
export function useAsymptoticProgress(
isActive: boolean,
halfLifeSeconds = 30,
maxProgress = 100,
intervalMs = 100,
) {
const [progress, setProgress] = useState(0);
const elapsedTimeRef = useRef(0);
useEffect(() => {
if (!isActive) {
setProgress(0);
elapsedTimeRef.current = 0;
return;
}
const interval = setInterval(() => {
elapsedTimeRef.current += intervalMs / 1000;
// Half-life approach: progress = max * (1 - 0.5^(time/halfLife))
// At t=halfLife: 50%, at t=2*halfLife: 75%, at t=3*halfLife: 87.5%, etc.
const newProgress =
maxProgress *
(1 - Math.pow(0.5, elapsedTimeRef.current / halfLifeSeconds));
setProgress(newProgress);
}, intervalMs);
return () => clearInterval(interval);
}, [isActive, halfLifeSeconds, maxProgress, intervalMs]);
return progress;
}
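A quick spot-check of the half-life curve with the default 30-second half-life confirms the values in the docstring:

```ts
// Spot-check of progress(t) = max * (1 - 0.5 ** (t / halfLife)) with the
// defaults (halfLife = 30 s, max = 100):
const progress = (t: number, halfLife = 30, max = 100) =>
  max * (1 - Math.pow(0.5, t / halfLife));

console.log(progress(30));  // 50
console.log(progress(60));  // 75
console.log(progress(90));  // 87.5
console.log(progress(300)); // ~99.9; approaches but never reaches 100
```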

View File

@@ -0,0 +1,128 @@
"use client";
import { useGetV2GetLibraryAgent } from "@/app/api/__generated__/endpoints/library/library";
import { GraphExecutionJobInfo } from "@/app/api/__generated__/models/graphExecutionJobInfo";
import { GraphExecutionMeta } from "@/app/api/__generated__/models/graphExecutionMeta";
import { RunAgentModal } from "@/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/RunAgentModal";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import {
CheckCircleIcon,
PencilLineIcon,
PlayIcon,
} from "@phosphor-icons/react";
import { AIChatBubble } from "../AIChatBubble/AIChatBubble";
interface Props {
agentName: string;
libraryAgentId: string;
onSendMessage?: (content: string) => void;
}
export function AgentCreatedPrompt({
agentName,
libraryAgentId,
onSendMessage,
}: Props) {
// Fetch library agent eagerly so modal is ready when user clicks
const { data: libraryAgentResponse, isLoading } = useGetV2GetLibraryAgent(
libraryAgentId,
{
query: {
enabled: !!libraryAgentId,
},
},
);
const libraryAgent =
libraryAgentResponse?.status === 200 ? libraryAgentResponse.data : null;
function handleRunWithPlaceholders() {
onSendMessage?.(
`Run the agent "${agentName}" with placeholder/example values so I can test it.`,
);
}
function handleRunCreated(execution: GraphExecutionMeta) {
onSendMessage?.(
`I've started the agent "${agentName}". The execution ID is ${execution.id}. Please monitor its progress and let me know when it completes.`,
);
}
function handleScheduleCreated(schedule: GraphExecutionJobInfo) {
const scheduleInfo = schedule.cron
? `with cron schedule "${schedule.cron}"`
: "to run on the specified schedule";
onSendMessage?.(
`I've scheduled the agent "${agentName}" ${scheduleInfo}. The schedule ID is ${schedule.id}.`,
);
}
return (
<AIChatBubble>
<div className="flex flex-col gap-4">
<div className="flex items-center gap-2">
<div className="flex h-8 w-8 items-center justify-center rounded-full bg-green-100">
<CheckCircleIcon
size={18}
weight="fill"
className="text-green-600"
/>
</div>
<div>
<Text variant="body-medium" className="text-neutral-900">
Agent Created Successfully
</Text>
<Text variant="small" className="text-neutral-500">
&quot;{agentName}&quot; is ready to test
</Text>
</div>
</div>
<div className="flex flex-col gap-2">
<Text variant="small-medium" className="text-neutral-700">
Ready to test?
</Text>
<div className="flex flex-wrap gap-2">
<Button
variant="outline"
size="small"
onClick={handleRunWithPlaceholders}
className="gap-2"
>
<PlayIcon size={16} />
Run with example values
</Button>
{libraryAgent ? (
<RunAgentModal
triggerSlot={
<Button variant="outline" size="small" className="gap-2">
<PencilLineIcon size={16} />
Run with my inputs
</Button>
}
agent={libraryAgent}
onRunCreated={handleRunCreated}
onScheduleCreated={handleScheduleCreated}
/>
) : (
<Button
variant="outline"
size="small"
loading={isLoading}
disabled
className="gap-2"
>
<PencilLineIcon size={16} />
Run with my inputs
</Button>
)}
</div>
<Text variant="small" className="text-neutral-500">
or just ask me
</Text>
</div>
</div>
</AIChatBubble>
);
}

View File

@@ -2,11 +2,13 @@ import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import type { ToolResult } from "@/types/chat";
import { WarningCircleIcon } from "@phosphor-icons/react";
import { AgentCreatedPrompt } from "./AgentCreatedPrompt";
import { AIChatBubble } from "../AIChatBubble/AIChatBubble";
import { MarkdownContent } from "../MarkdownContent/MarkdownContent";
import {
formatToolResponse,
getErrorMessage,
isAgentSavedResponse,
isErrorResponse,
} from "./helpers";
@@ -16,6 +18,7 @@ export interface ToolResponseMessageProps {
result?: ToolResult;
success?: boolean;
className?: string;
onSendMessage?: (content: string) => void;
}
export function ToolResponseMessage({
@@ -24,6 +27,7 @@ export function ToolResponseMessage({
result,
success: _success,
className,
onSendMessage,
}: ToolResponseMessageProps) {
if (isErrorResponse(result)) {
const errorMessage = getErrorMessage(result);
@@ -43,6 +47,18 @@ export function ToolResponseMessage({
);
}
// Check for agent_saved response - show special prompt
const agentSavedData = isAgentSavedResponse(result);
if (agentSavedData.isSaved) {
return (
<AgentCreatedPrompt
agentName={agentSavedData.agentName}
libraryAgentId={agentSavedData.libraryAgentId}
onSendMessage={onSendMessage}
/>
);
}
const formattedText = formatToolResponse(result, toolName);
return (
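The `agent_saved` branch above hinges on `isAgentSavedResponse`, shown in the next file. Expected results for two illustrative inputs:

```ts
// A matching tool result yields the populated struct; anything else yields
// the all-empty "not saved" struct. Field values here are illustrative.
isAgentSavedResponse({
  type: "agent_saved",
  agent_name: "Daily Digest",
  library_agent_id: "la_123",
});
// → { isSaved: true, agentName: "Daily Digest", agentId: "",
//     libraryAgentId: "la_123", libraryAgentLink: "" }

isAgentSavedResponse("plain text result");
// → { isSaved: false, agentName: "", agentId: "",
//     libraryAgentId: "", libraryAgentLink: "" }
```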

View File

@@ -6,6 +6,43 @@ function stripInternalReasoning(content: string): string {
.trim();
}
export interface AgentSavedData {
isSaved: boolean;
agentName: string;
agentId: string;
libraryAgentId: string;
libraryAgentLink: string;
}
export function isAgentSavedResponse(result: unknown): AgentSavedData {
if (typeof result !== "object" || result === null) {
return {
isSaved: false,
agentName: "",
agentId: "",
libraryAgentId: "",
libraryAgentLink: "",
};
}
const response = result as Record<string, unknown>;
if (response.type === "agent_saved") {
return {
isSaved: true,
agentName: (response.agent_name as string) || "Agent",
agentId: (response.agent_id as string) || "",
libraryAgentId: (response.library_agent_id as string) || "",
libraryAgentLink: (response.library_agent_link as string) || "",
};
}
return {
isSaved: false,
agentName: "",
agentId: "",
libraryAgentId: "",
libraryAgentLink: "",
};
}
export function isErrorResponse(result: unknown): boolean {
if (typeof result === "string") {
const lower = result.toLowerCase();
@@ -39,69 +76,101 @@ export function getErrorMessage(result: unknown): string {
/**
* Check if a value is a workspace file reference.
* Format: workspace://{fileId} or workspace://{fileId}#{mimeType}
*/
function isWorkspaceRef(value: unknown): value is string {
return typeof value === "string" && value.startsWith("workspace://");
}
/**
* Check if a workspace reference appears to be an image based on common patterns.
* Since workspace refs don't have extensions, we check the context or assume image
* for certain block types.
*
* TODO: Replace keyword matching with MIME type encoded in workspace ref.
* e.g., workspace://abc123#image/png or workspace://abc123#video/mp4
* This would let frontend render correctly without fragile keyword matching.
* Extract MIME type from a workspace reference fragment.
* e.g., "workspace://abc123#video/mp4" → "video/mp4"
* Returns undefined if no fragment is present.
*/
function isLikelyImageRef(value: string, outputKey?: string): boolean {
if (!isWorkspaceRef(value)) return false;
// Check output key name for video-related hints (these are NOT images)
const videoKeywords = ["video", "mp4", "mov", "avi", "webm", "movie", "clip"];
if (outputKey) {
const lowerKey = outputKey.toLowerCase();
if (videoKeywords.some((kw) => lowerKey.includes(kw))) {
return false;
}
}
// Check output key name for image-related hints
const imageKeywords = [
"image",
"img",
"photo",
"picture",
"thumbnail",
"avatar",
"icon",
"screenshot",
];
if (outputKey) {
const lowerKey = outputKey.toLowerCase();
if (imageKeywords.some((kw) => lowerKey.includes(kw))) {
return true;
}
}
// Default to treating workspace refs as potential images
// since that's the most common case for generated content
return true;
function getWorkspaceMimeType(value: string): string | undefined {
const hashIndex = value.indexOf("#");
if (hashIndex === -1) return undefined;
return value.slice(hashIndex + 1) || undefined;
}
/**
* Format a single output value, converting workspace refs to markdown images.
* Determine the media category of a workspace ref or data URI.
* Uses the MIME type fragment on workspace refs when available,
* falls back to output key keyword matching for older refs without it.
*/
function formatOutputValue(value: unknown, outputKey?: string): string {
if (isWorkspaceRef(value) && isLikelyImageRef(value, outputKey)) {
// Format as markdown image
return `![${outputKey || "Generated image"}](${value})`;
function getMediaCategory(
value: string,
outputKey?: string,
): "video" | "image" | "audio" | "unknown" {
// Data URIs carry their own MIME type
if (value.startsWith("data:video/")) return "video";
if (value.startsWith("data:image/")) return "image";
if (value.startsWith("data:audio/")) return "audio";
// Workspace refs: prefer MIME type fragment
if (isWorkspaceRef(value)) {
const mime = getWorkspaceMimeType(value);
if (mime) {
if (mime.startsWith("video/")) return "video";
if (mime.startsWith("image/")) return "image";
if (mime.startsWith("audio/")) return "audio";
return "unknown";
}
// Fallback: keyword matching on output key for older refs without fragment
if (outputKey) {
const lowerKey = outputKey.toLowerCase();
const videoKeywords = [
"video",
"mp4",
"mov",
"avi",
"webm",
"movie",
"clip",
];
if (videoKeywords.some((kw) => lowerKey.includes(kw))) return "video";
const imageKeywords = [
"image",
"img",
"photo",
"picture",
"thumbnail",
"avatar",
"icon",
"screenshot",
];
if (imageKeywords.some((kw) => lowerKey.includes(kw))) return "image";
}
// Default to image for backward compatibility
return "image";
}
return "unknown";
}
/**
* Format a single output value, converting workspace refs to markdown images/videos.
* Videos use a "video:" alt-text prefix so the MarkdownContent renderer can
* distinguish them from images and render a <video> element.
*/
function formatOutputValue(value: unknown, outputKey?: string): string {
if (typeof value === "string") {
// Check for data URIs (images)
if (value.startsWith("data:image/")) {
const category = getMediaCategory(value, outputKey);
if (category === "video") {
// Format with "video:" prefix so MarkdownContent renders <video>
return `![video:${outputKey || "Video"}](${value})`;
}
if (category === "image") {
return `![${outputKey || "Generated image"}](${value})`;
}
// Audio, unknown categories, and other non-renderable values are returned as-is
return value;
}
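Expected classifications from `getMediaCategory`, covering the MIME-fragment fast path, the keyword fallback, and the legacy default (inputs are illustrative):

```ts
// Data URIs and MIME fragments are authoritative; keyword matching on the
// output key only applies to workspace refs without a fragment.
getMediaCategory("workspace://abc#video/mp4");          // "video" (fragment wins)
getMediaCategory("workspace://abc#application/pdf");    // "unknown" (fragment present, not media)
getMediaCategory("workspace://abc", "intro_video_url"); // "video" (keyword fallback)
getMediaCategory("workspace://abc", "result");          // "image" (legacy default)
getMediaCategory("data:audio/mpeg;base64,AAAA");        // "audio"
getMediaCategory("https://example.com/file.bin");       // "unknown" (not a workspace ref)
```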

View File

@@ -26,6 +26,7 @@ export const providerIcons: Partial<
nvidia: fallbackIcon,
discord: FaDiscord,
d_id: fallbackIcon,
elevenlabs: fallbackIcon,
google_maps: FaGoogle,
jina: fallbackIcon,
ideogram: fallbackIcon,

View File

@@ -47,7 +47,7 @@ export function Navbar() {
const actualLoggedInLinks = [
{ name: "Home", href: homeHref },
...(isChatEnabled === true ? [{ name: "Tasks", href: "/library" }] : []),
...(isChatEnabled === true ? [{ name: "Agents", href: "/library" }] : []),
...loggedInLinks,
];
@@ -62,7 +62,7 @@ export function Navbar() {
<PreviewBanner branchName={previewBranchName} />
) : null}
<nav
className="inline-flex w-full items-center border border-none bg-[#FAFAFA] p-3 backdrop-blur-[26px]"
className="border-zinc-[#EFEFF0] inline-flex w-full items-center border border-[#EFEFF0] bg-[#F3F4F6]/20 p-3 backdrop-blur-[26px]"
style={{ height: NAVBAR_HEIGHT_PX }}
>
{/* Left section */}

View File

@@ -1,83 +0,0 @@
import { Slot } from "@radix-ui/react-slot";
import { cva, type VariantProps } from "class-variance-authority";
import { cn } from "@/lib/utils";
import { Separator } from "@/components/ui/separator";
const buttonGroupVariants = cva(
"flex w-fit items-stretch has-[>[data-slot=button-group]]:gap-2 [&>*]:focus-visible:relative [&>*]:focus-visible:z-10 has-[select[aria-hidden=true]:last-child]:[&>[data-slot=select-trigger]:last-of-type]:rounded-r-md [&>[data-slot=select-trigger]:not([class*='w-'])]:w-fit [&>input]:flex-1",
{
variants: {
orientation: {
horizontal:
"[&>*:not(:first-child)]:rounded-l-none [&>*:not(:first-child)]:border-l-0 [&>*:not(:last-child)]:rounded-r-none",
vertical:
"flex-col [&>*:not(:first-child)]:rounded-t-none [&>*:not(:first-child)]:border-t-0 [&>*:not(:last-child)]:rounded-b-none",
},
},
defaultVariants: {
orientation: "horizontal",
},
},
);
function ButtonGroup({
className,
orientation,
...props
}: React.ComponentProps<"div"> & VariantProps<typeof buttonGroupVariants>) {
return (
<div
role="group"
data-slot="button-group"
data-orientation={orientation}
className={cn(buttonGroupVariants({ orientation }), className)}
{...props}
/>
);
}
function ButtonGroupText({
className,
asChild = false,
...props
}: React.ComponentProps<"div"> & {
asChild?: boolean;
}) {
const Comp = asChild ? Slot : "div";
return (
<Comp
className={cn(
"shadow-xs flex items-center gap-2 rounded-md border border-neutral-200 bg-neutral-100 px-4 text-sm font-medium dark:border-neutral-800 dark:bg-neutral-800 [&_svg:not([class*='size-'])]:size-4 [&_svg]:pointer-events-none",
className,
)}
{...props}
/>
);
}
function ButtonGroupSeparator({
className,
orientation = "vertical",
...props
}: React.ComponentProps<typeof Separator>) {
return (
<Separator
data-slot="button-group-separator"
orientation={orientation}
className={cn(
"relative !m-0 self-stretch bg-neutral-200 data-[orientation=vertical]:h-auto dark:bg-neutral-800",
className,
)}
{...props}
/>
);
}
export {
ButtonGroup,
ButtonGroupSeparator,
ButtonGroupText,
buttonGroupVariants,
};
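A usage sketch of the removed `ButtonGroup` primitives; the pager component is hypothetical:

```tsx
// Horizontal group whose middle segment is plain text rather than a button.
import { Button } from "@/components/ui/button";
import {
  ButtonGroup,
  ButtonGroupSeparator,
  ButtonGroupText,
} from "@/components/ui/button-group";

function Pager({ page, total }: { page: number; total: number }) {
  return (
    <ButtonGroup orientation="horizontal">
      <Button variant="outline">Prev</Button>
      <ButtonGroupSeparator />
      {/* Text segment inherits the group's joined borders and rounding */}
      <ButtonGroupText>
        {page} of {total}
      </ButtonGroupText>
      <ButtonGroupSeparator />
      <Button variant="outline">Next</Button>
    </ButtonGroup>
  );
}
```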

Some files were not shown because too many files have changed in this diff.