Compare commits

..

10 Commits

Author SHA1 Message Date
Toran Bruce Richards
d12a929472 Merge branch 'dev' into fix/secrt-1822-clarification-format 2026-02-05 23:46:14 +01:00
Nicholas Tindle
85b6520710 feat(blocks): Add video editing blocks (#11796)
This PR adds general-purpose video editing blocks for the AutoGPT
Platform, enabling automated video production workflows like documentary
creation, marketing videos, tutorial assembly, and content repurposing.

### Changes 🏗️


**New blocks added in `backend/blocks/video/`:**
- `VideoDownloadBlock` - Download videos from URLs (YouTube, Vimeo, news
sites, direct links) using yt-dlp
- `VideoClipBlock` - Extract time segments from videos with start/end
time validation
- `VideoConcatBlock` - Merge multiple video clips with optional
transitions (none, crossfade, fade_black)
- `VideoTextOverlayBlock` - Add text overlays/captions with positioning
and timing options
- `VideoNarrationBlock` - Generate AI narration via ElevenLabs and mix
with video audio (replace, mix, or ducking modes)

**Dependencies required:**
- `yt-dlp` - For video downloading
- `moviepy` - For video editing operations
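
For orientation, here is a minimal, hypothetical sketch of a yt-dlp download; the options shown are standard yt-dlp options, but the block's actual configuration may differ:

```python
import yt_dlp


def download_video(url: str, out_dir: str) -> str:
    """Download a video with yt-dlp and return the local file path (sketch only)."""
    opts = {
        "outtmpl": f"{out_dir}/%(title)s.%(ext)s",  # output filename template
        "noplaylist": True,  # download a single video, not a playlist
    }
    with yt_dlp.YoutubeDL(opts) as ydl:
        info = ydl.extract_info(url, download=True)
        return ydl.prepare_filename(info)
```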

**Implementation details:**
- All blocks follow the SDK pattern with proper error handling and
exception chaining
- Proper resource cleanup in `finally` blocks to prevent memory leaks
- Input validation (e.g., end_time > start_time)
- Test mocks included for CI

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Blocks follow the SDK pattern with `BlockSchemaInput`/`BlockSchemaOutput`
  - [x] Resource cleanup is implemented in `finally` blocks
  - [x] Exception chaining is properly implemented
  - [x] Input validation is in place
  - [x] Test mocks are provided for CI environments

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

Configuration changes: `ELEVENLABS_API_KEY` added to `.env.default`; `ffmpeg` and `imagemagick` added to the backend Docker image.


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Medium Risk**
> Adds new multimedia blocks that invoke ffmpeg/MoviePy and introduces
new external dependencies (plus container packages), which can impact
runtime stability and resource usage; download/overlay blocks are
present but disabled due to sandbox/policy concerns.
> 
> **Overview**
> Adds a new `backend.blocks.video` module with general-purpose video
workflow blocks (download, clip, concat w/ transitions, loop, add-audio,
text overlay, and ElevenLabs-powered narration), including shared
utilities for codec selection, filename cleanup, and an ffmpeg-based
chapter-strip workaround for MoviePy.
> 
> Extends credentials/config to support ElevenLabs
(`ELEVENLABS_API_KEY`, provider enum, system credentials, and cost
config) and adds new dependencies (`elevenlabs`, `yt-dlp`) plus Docker
runtime packages (`ffmpeg`, `imagemagick`).
> 
> Improves file/reference handling end-to-end by embedding MIME types in
`workspace://...#mime` outputs and updating frontend rendering to detect
video vs image from MIME fragments (and broaden supported audio/video
extensions), with optional enhanced output rendering behind a feature
flag in the legacy builder UI.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
da7a44d794. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
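
As a rough illustration of the `workspace://...#mime` convention mentioned in the note above (a sketch of the idea, not the platform's actual helper):

```python
def split_workspace_ref(ref: str) -> tuple[str, str | None]:
    """Sketch: split 'workspace://path/to/file#mime' into (path, mime_type)."""
    path, _, mime = ref.partition("#")
    return path, (mime or None)


# split_workspace_ref("workspace://clips/out.mp4#video/mp4")
# -> ("workspace://clips/out.mp4", "video/mp4")
```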

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-05 22:22:33 +00:00
Otto
5c0a30e728 fix(frontend): Improve clarification answer message formatting
Improves the auto-generated message format when users submit clarification
answers in the agent generator:

Before:
  I have the answers to your questions:
  keyword_1: User answer 1
  keyword_2: User answer 2
  Please proceed with creating the agent.

After:
  **Here are my answers:**
  > What is the primary purpose?
  User answer 1
  > What is the target audience?
  User answer 2
  Please proceed with creating the agent.

Changes:
- Use human-readable question text instead of machine-readable keywords
- Use blockquote format for questions (natural "quote and reply" pattern)
- Use double newlines for proper Markdown paragraph breaks
- Iterate over questions array to preserve original question order
- Move handler inside conditional for proper TypeScript type narrowing
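
Roughly, the assembly logic looks like this (sketched in Python for brevity; the real change is in the frontend TypeScript, and the `question`/`keyword` field names are assumptions):

```python
def build_answer_message(questions: list[dict[str, str]], answers: dict[str, str]) -> str:
    """Sketch: blockquote each question, follow with its answer, join with paragraph breaks."""
    parts = ["**Here are my answers:**"]
    for q in questions:  # iterate over the questions array to preserve original order
        parts.append(f"> {q['question']}\n\n{answers.get(q['keyword'], '')}")
    parts.append("Please proceed with creating the agent.")
    return "\n\n".join(parts)  # double newlines give proper Markdown paragraph breaks
```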

Fixes SECRT-1822
2026-02-05 20:01:47 +00:00
Bently
bfa942e032 feat(platform): Add Claude Opus 4.6 model support (#11983)
## Summary
Adds support for Anthropic's newly released Claude Opus 4.6 model.

## Changes
- Added `claude-opus-4-6` to the `LlmModel` enum
- Added model metadata: 200K context window (1M beta), **128K max output
tokens**
- Added block cost config (same pricing tier as Opus 4.5: $5/MTok input,
$25/MTok output)
- Updated chat config default model to Claude Opus 4.6

## Model Details
From [Anthropic's
docs](https://docs.anthropic.com/en/docs/about-claude/models):
- **API ID:** `claude-opus-4-6`
- **Context window:** 200K tokens (1M beta)
- **Max output:** 128K tokens (up from 64K on Opus 4.5)
- **Extended thinking:** Yes
- **Adaptive thinking:** Yes (new, Opus 4.6 exclusive)
- **Knowledge cutoff:** May 2025 (reliable), Aug 2025 (training)
- **Pricing:** $5/MTok input, $25/MTok output (same as Opus 4.5)

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2026-02-05 19:19:51 +00:00
Otto
11256076d8 fix(frontend): Rename "Tasks" tab to "Agents" in navbar (#11982)
## Summary
Renames the "Tasks" tab in the navbar to "Agents" per the Figma design.

## Changes
- `Navbar.tsx`: Changed label from "Tasks" to "Agents"

<img width="1069" height="153" alt="image"
src="https://github.com/user-attachments/assets/3869d2a2-9bd9-4346-b650-15dabbdb46c4"
/>


## Why
- "Tasks" was incorrectly named and confusing for users trying to find
their agent builds
- Matches the Figma design

## Linear Ticket
Fixes [SECRT-1894](https://linear.app/autogpt/issue/SECRT-1894)

## Related
- [SECRT-1865](https://linear.app/autogpt/issue/SECRT-1865) - Find and
Manage Existing/Unpublished or Recent Agent Builds Is Unintuitive
2026-02-05 17:54:39 +00:00
Bently
3ca2387631 feat(blocks): Implement Text Encode block (#11857)
## Summary
Implements a `TextEncoderBlock` that encodes plain text into escape
sequences (the reverse of `TextDecoderBlock`).

## Changes

### Block Implementation
- Added `encoder_block.py` with `TextEncoderBlock` in
`autogpt_platform/backend/backend/blocks/`
- Uses `codecs.encode(text, "unicode_escape").decode("utf-8")` for
encoding
- Mirrors the structure and patterns of the existing `TextDecoderBlock`
- Categorised as `BlockCategory.TEXT`
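
For reference, the encoding call behaves like this (standalone example, not the block itself):

```python
import codecs

raw = 'Hello\nWorld! This is a "quoted" string.'
encoded = codecs.encode(raw, "unicode_escape").decode("utf-8")
print(encoded)  # Hello\nWorld! This is a "quoted" string.  (with a literal backslash-n)

# TextDecoderBlock performs the inverse operation:
assert codecs.decode(encoded, "unicode_escape") == raw
```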

### Documentation
- Added Text Encoder section to
`docs/integrations/block-integrations/text.md` (the auto-generated docs
file for TEXT category blocks)
- Expanded "How it works" with technical details on the encoding method,
validation, and edge cases
- Added 3 structured use cases per docs guidelines: JSON payload
preparation, Config/ENV generation, Snapshot fixtures
- Added Text Encoder to the overview table in
`docs/integrations/README.md`
- Removed standalone `encoder_block.md` (TEXT category blocks belong in
`text.md` per `CATEGORY_FILE_MAP` in `generate_block_docs.py`)

### Documentation Formatting (CodeRabbit feedback)
- Added blank lines around markdown tables (MD058)
- Added `text` language tags to fenced code blocks (MD040)
- Restructured use case section with bold headings per coding guidelines

## How Docs Were Synced
The `check-docs-sync` CI job runs `poetry run python
scripts/generate_block_docs.py --check` which expects blocks to be
documented in category-grouped files. Since `TextEncoderBlock` uses
`BlockCategory.TEXT`, the `CATEGORY_FILE_MAP` maps it to `text.md` — not
a standalone file. The block entry was added to `text.md` following the
exact format used by the generator (with `<!-- MANUAL -->` markers for
hand-written sections).

## Related Issue
Fixes #11111

---------

Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: lif <19658300+majiayu000@users.noreply.github.com>
Co-authored-by: Aryan Kaul <134673289+aryancodes1@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-05 17:31:02 +00:00
Otto
ed07f02738 fix(copilot): edit_agent updates existing agent instead of creating duplicate (#11981)
## Summary

When editing an agent via CoPilot's `edit_agent` tool, the code was
always creating a new `LibraryAgent` entry instead of updating the
existing one to point to the new graph version. This caused duplicate
agents to appear in the user's library.

## Changes

In `save_agent_to_library()`:
- When `is_update=True`, now checks if there's an existing library agent
for the graph using `get_library_agent_by_graph_id()`
- If found, uses `update_agent_version_in_library()` to update the
existing library agent to point to the new version
- Falls back to creating a new library agent if no existing one is found
(e.g., if editing a graph that wasn't added to library yet)
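
Condensed sketch of that flow (names follow the description above; the actual implementation, including the new library DB helpers, is in the diff below):

```python
# Sketch only: the is_update branch of save_agent_to_library()
if is_update:
    existing = await get_library_agent_by_graph_id(user_id, graph.id)
    if existing:
        # Point the existing library agent at the newly created graph version
        await update_agent_version_in_library(user_id, graph.id, graph.version)
    else:
        # Graph was never added to the library: fall back to creating an entry
        await library_db.create_library_agent(graph=graph, user_id=user_id)
```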

## Testing

- Verified lint/format checks pass
- Plan reviewed and approved by Staff Engineer Plan Reviewer agent

## Related

Fixes [SECRT-1857](https://linear.app/autogpt/issue/SECRT-1857)

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2026-02-05 15:02:26 +00:00
Swifty
b121030c94 feat(frontend): Add progress indicator during agent generation [SECRT-1883] (#11974)
## Summary
- Add asymptotic progress bar that appears during long-running chat
tasks
- Progress bar shows after 10 seconds with "Working on it..." label and
percentage
- Uses half-life formula: ~50% at 30s, ~75% at 60s, ~87.5% at 90s, etc.
- Creates the classic "game loading bar" effect that never reaches 100%
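
The quoted percentages correspond to a 30-second half-life; a minimal sketch of the curve (Python for illustration; the actual component is frontend TypeScript):

```python
def asymptotic_progress(elapsed_s: float, half_life_s: float = 30.0) -> float:
    """Progress that approaches but never reaches 100%."""
    return 1.0 - 0.5 ** (elapsed_s / half_life_s)


# asymptotic_progress(30) -> 0.50, asymptotic_progress(60) -> 0.75, asymptotic_progress(90) -> 0.875
```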



https://github.com/user-attachments/assets/3c59289e-793c-4a08-b3fc-69e1eef28b1f



## Test plan
- [x] Start a chat that triggers agent generation
- [x] Wait 10+ seconds for the progress bar to appear
- [x] Verify progress bar is centered with label and percentage
- [x] Verify progress follows expected timing (~50% at 30s)
- [x] Verify progress bar disappears when task completes

---------

Co-authored-by: Otto <otto@agpt.co>
2026-02-05 15:37:51 +01:00
Swifty
c22c18374d feat(frontend): Add ready-to-test prompt after agent creation [SECRT-1882] (#11975)
## Summary
- Add special UI prompt when agent is successfully created in chat
- Show "Agent Created Successfully" with agent name
- Provide two action buttons:
  - **Run with example values**: Sends chat message asking AI to run with placeholders
  - **Run with my inputs**: Opens RunAgentModal for custom input configuration
- After run/schedule, automatically send chat message with execution
details for AI monitoring



https://github.com/user-attachments/assets/b11e118c-de59-4b79-a629-8bd0d52d9161



## Test plan
- [x] Create an agent through chat
- [x] Verify "Agent Created Successfully" prompt appears
- [x] Click "Run with example values" - verify chat message is sent
- [x] Click "Run with my inputs" - verify RunAgentModal opens
- [x] Fill inputs and run - verify chat message with execution ID is
sent
- [x] Fill inputs and schedule - verify chat message with schedule
details is sent

---------

Co-authored-by: Otto <otto@agpt.co>
2026-02-05 15:37:31 +01:00
Swifty
e40233a3ac fix(backend/chat): Guide find_agent users toward action with CTAs (#11976)
When users search for agents, guide them toward creating custom agents
if no results are found or after showing results. This improves user
engagement by offering a clear next step.

### Changes 🏗️

- Updated `agent_search.py` to add CTAs in search responses
- Added messaging to inform users they can create custom agents based on
their needs
- Applied to both "no results found" and "agents found" scenarios

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for agents in marketplace with matching results
  - [x] Search for agents in marketplace with no results
  - [x] Search for agents in library with matching results  
  - [x] Search for agents in library with no results
  - [x] Verify CTA message appears in all cases

---------

Co-authored-by: Otto <otto@agpt.co>
2026-02-05 15:36:55 +01:00
111 changed files with 2902 additions and 8679 deletions

View File

@@ -152,6 +152,7 @@ REPLICATE_API_KEY=
REVID_API_KEY=
SCREENSHOTONE_API_KEY=
UNREAL_SPEECH_API_KEY=
ELEVENLABS_API_KEY=
# Data & Search Services
E2B_API_KEY=

View File

@@ -19,3 +19,6 @@ load-tests/*.json
load-tests/*.log
load-tests/node_modules/*
migrations/*/rollback*.sql
# Workspace files
workspaces/

View File

@@ -62,10 +62,12 @@ ENV POETRY_HOME=/opt/poetry \
DEBIAN_FRONTEND=noninteractive
ENV PATH=/opt/poetry/bin:$PATH
# Install Python without upgrading system-managed packages
# Install Python, FFmpeg, and ImageMagick (required for video processing blocks)
RUN apt-get update && apt-get install -y \
python3.13 \
python3-pip \
ffmpeg \
imagemagick \
&& rm -rf /var/lib/apt/lists/*
# Copy only necessary files from builder

View File

@@ -11,7 +11,7 @@ class ChatConfig(BaseSettings):
# OpenAI API Configuration
model: str = Field(
default="anthropic/claude-opus-4.5", description="Default model to use"
default="anthropic/claude-opus-4.6", description="Default model to use"
)
title_model: str = Field(
default="openai/gpt-4o-mini",

View File

@@ -57,16 +57,6 @@ class StreamStart(StreamBaseResponse):
description="Task ID for SSE reconnection. Clients can reconnect using GET /tasks/{taskId}/stream",
)
def to_sse(self) -> str:
"""Convert to SSE format, excluding non-protocol fields like taskId."""
import json
data: dict[str, Any] = {
"type": self.type.value,
"messageId": self.messageId,
}
return f"data: {json.dumps(data)}\n\n"
class StreamFinish(StreamBaseResponse):
"""End of message/stream."""
@@ -127,7 +117,7 @@ class StreamToolOutputAvailable(StreamBaseResponse):
type: ResponseType = ResponseType.TOOL_OUTPUT_AVAILABLE
toolCallId: str = Field(..., description="Tool call ID this responds to")
output: str | dict[str, Any] = Field(..., description="Tool execution output")
# Keep these for internal backend use
# Additional fields for internal use (not part of AI SDK spec but useful)
toolName: str | None = Field(
default=None, description="Name of the tool that was executed"
)
@@ -135,15 +125,6 @@ class StreamToolOutputAvailable(StreamBaseResponse):
default=True, description="Whether the tool execution succeeded"
)
def to_sse(self) -> str:
"""Convert to SSE format, excluding non-spec fields."""
import json
data = {
"type": self.type.value,
"toolCallId": self.toolCallId,
"output": self.output,
}
return f"data: {json.dumps(data)}\n\n"
# ========== Other ==========

View File

@@ -18,29 +18,6 @@ from .completion_handler import process_operation_failure, process_operation_suc
from .config import ChatConfig
from .model import ChatSession, create_chat_session, get_chat_session, get_user_sessions
from .response_model import StreamFinish, StreamHeartbeat, StreamStart
from .tools.models import (
AgentDetailsResponse,
AgentOutputResponse,
AgentPreviewResponse,
AgentSavedResponse,
AgentsFoundResponse,
BlockListResponse,
BlockOutputResponse,
ClarificationNeededResponse,
DocPageResponse,
DocSearchResultsResponse,
ErrorResponse,
ExecutionStartedResponse,
InputValidationErrorResponse,
NeedLoginResponse,
NoResultsResponse,
OperationInProgressResponse,
OperationPendingResponse,
OperationStartedResponse,
ResponseType,
SetupRequirementsResponse,
UnderstandingUpdatedResponse,
)
config = ChatConfig()
@@ -774,42 +751,3 @@ async def health_check() -> dict:
"service": "chat",
"version": "0.1.0",
}
# ========== Schema Export (for OpenAPI / Orval codegen) ==========
ToolResponseUnion = (
AgentsFoundResponse
| NoResultsResponse
| AgentDetailsResponse
| SetupRequirementsResponse
| ExecutionStartedResponse
| NeedLoginResponse
| ErrorResponse
| InputValidationErrorResponse
| AgentOutputResponse
| UnderstandingUpdatedResponse
| AgentPreviewResponse
| AgentSavedResponse
| ClarificationNeededResponse
| BlockListResponse
| BlockOutputResponse
| DocSearchResultsResponse
| DocPageResponse
| OperationStartedResponse
| OperationPendingResponse
| OperationInProgressResponse
)
@router.get(
"/schema/tool-responses",
response_model=ToolResponseUnion,
include_in_schema=True,
summary="[Dummy] Tool response type export for codegen",
description="This endpoint is not meant to be called. It exists solely to "
"expose tool response models in the OpenAPI schema for frontend codegen.",
)
async def _tool_response_schema() -> ToolResponseUnion: # type: ignore[return]
"""Never called at runtime. Exists only so Orval generates TS types."""
raise HTTPException(status_code=501, detail="Schema-only endpoint")

View File

@@ -351,7 +351,6 @@ async def stream_chat_completion(
retry_count: int = 0,
session: ChatSession | None = None,
context: dict[str, str] | None = None, # {url: str, content: str}
_continuation_message_id: str | None = None, # Internal: reuse message ID for tool call continuations
) -> AsyncGenerator[StreamBaseResponse, None]:
"""Main entry point for streaming chat completions with database handling.
@@ -480,15 +479,11 @@ async def stream_chat_completion(
# Generate unique IDs for AI SDK protocol
import uuid as uuid_module
# Reuse message ID for continuations (tool call follow-ups) to avoid duplicate messages
is_continuation = _continuation_message_id is not None
message_id = _continuation_message_id or str(uuid_module.uuid4())
message_id = str(uuid_module.uuid4())
text_block_id = str(uuid_module.uuid4())
# Only yield message start for the initial call, not for continuations
# This prevents the AI SDK from creating duplicate message objects
if not is_continuation:
yield StreamStart(messageId=message_id)
# Yield message start
yield StreamStart(messageId=message_id)
try:
async for chunk in _stream_chat_chunks(
@@ -719,7 +714,6 @@ async def stream_chat_completion(
retry_count=retry_count + 1,
session=session,
context=context,
_continuation_message_id=message_id, # Reuse message ID since start was already sent
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
@@ -789,7 +783,6 @@ async def stream_chat_completion(
session=session, # Pass session object to avoid Redis refetch
context=context,
tool_call_response=str(tool_response_messages),
_continuation_message_id=message_id, # Reuse message ID to avoid duplicates
):
yield chunk

View File

@@ -7,15 +7,7 @@ from typing import Any, NotRequired, TypedDict
from backend.api.features.library import db as library_db
from backend.api.features.store import db as store_db
from backend.data.graph import (
Graph,
Link,
Node,
create_graph,
get_graph,
get_graph_all_versions,
get_store_listed_graphs,
)
from backend.data.graph import Graph, Link, Node, get_graph, get_store_listed_graphs
from backend.util.exceptions import DatabaseError, NotFoundError
from .service import (
@@ -28,8 +20,6 @@ from .service import (
logger = logging.getLogger(__name__)
AGENT_EXECUTOR_BLOCK_ID = "e189baac-8c20-45a1-94a7-55177ea42565"
class ExecutionSummary(TypedDict):
"""Summary of a single execution for quality assessment."""
@@ -669,45 +659,6 @@ def json_to_graph(agent_json: dict[str, Any]) -> Graph:
)
def _reassign_node_ids(graph: Graph) -> None:
"""Reassign all node and link IDs to new UUIDs.
This is needed when creating a new version to avoid unique constraint violations.
"""
id_map = {node.id: str(uuid.uuid4()) for node in graph.nodes}
for node in graph.nodes:
node.id = id_map[node.id]
for link in graph.links:
link.id = str(uuid.uuid4())
if link.source_id in id_map:
link.source_id = id_map[link.source_id]
if link.sink_id in id_map:
link.sink_id = id_map[link.sink_id]
def _populate_agent_executor_user_ids(agent_json: dict[str, Any], user_id: str) -> None:
"""Populate user_id in AgentExecutorBlock nodes.
The external agent generator creates AgentExecutorBlock nodes with empty user_id.
This function fills in the actual user_id so sub-agents run with correct permissions.
Args:
agent_json: Agent JSON dict (modified in place)
user_id: User ID to set
"""
for node in agent_json.get("nodes", []):
if node.get("block_id") == AGENT_EXECUTOR_BLOCK_ID:
input_default = node.get("input_default") or {}
if not input_default.get("user_id"):
input_default["user_id"] = user_id
node["input_default"] = input_default
logger.debug(
f"Set user_id for AgentExecutorBlock node {node.get('id')}"
)
async def save_agent_to_library(
agent_json: dict[str, Any], user_id: str, is_update: bool = False
) -> tuple[Graph, Any]:
@@ -721,35 +672,10 @@ async def save_agent_to_library(
Returns:
Tuple of (created Graph, LibraryAgent)
"""
# Populate user_id in AgentExecutorBlock nodes before conversion
_populate_agent_executor_user_ids(agent_json, user_id)
graph = json_to_graph(agent_json)
if is_update:
if graph.id:
existing_versions = await get_graph_all_versions(graph.id, user_id)
if existing_versions:
latest_version = max(v.version for v in existing_versions)
graph.version = latest_version + 1
_reassign_node_ids(graph)
logger.info(f"Updating agent {graph.id} to version {graph.version}")
else:
graph.id = str(uuid.uuid4())
graph.version = 1
_reassign_node_ids(graph)
logger.info(f"Creating new agent with ID {graph.id}")
created_graph = await create_graph(graph, user_id)
library_agents = await library_db.create_library_agent(
graph=created_graph,
user_id=user_id,
sensitive_action_safe_mode=True,
create_library_agents_for_sub_graphs=False,
)
return created_graph, library_agents[0]
return await library_db.update_graph_in_library(graph, user_id)
return await library_db.create_graph_in_library(graph, user_id)
def graph_to_json(graph: Graph) -> dict[str, Any]:

View File

@@ -206,9 +206,9 @@ async def search_agents(
]
)
no_results_msg = (
f"No agents found matching '{query}'. Try different keywords or browse the marketplace."
f"No agents found matching '{query}'. Let the user know they can try different keywords or browse the marketplace. Also let them know you can create a custom agent for them based on their needs."
if source == "marketplace"
else f"No agents matching '{query}' found in your library."
else f"No agents matching '{query}' found in your library. Let the user know you can create a custom agent for them based on their needs."
)
return NoResultsResponse(
message=no_results_msg, session_id=session_id, suggestions=suggestions
@@ -224,10 +224,10 @@ async def search_agents(
message = (
"Now you have found some options for the user to choose from. "
"You can add a link to a recommended agent at: /marketplace/agent/agent_id "
"Please ask the user if they would like to use any of these agents."
"Please ask the user if they would like to use any of these agents. Let the user know we can create a custom agent for them based on their needs."
if source == "marketplace"
else "Found agents in the user's library. You can provide a link to view an agent at: "
"/library/agents/{agent_id}. Use agent_output to get execution results, or run_agent to execute."
"/library/agents/{agent_id}. Use agent_output to get execution results, or run_agent to execute. Let the user know we can create a custom agent for them based on their needs."
)
return AgentsFoundResponse(

View File

@@ -19,7 +19,10 @@ from backend.data.graph import GraphSettings
from backend.data.includes import AGENT_PRESET_INCLUDE, library_agent_include
from backend.data.model import CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.webhooks.graph_lifecycle_hooks import on_graph_activate
from backend.integrations.webhooks.graph_lifecycle_hooks import (
on_graph_activate,
on_graph_deactivate,
)
from backend.util.clients import get_scheduler_client
from backend.util.exceptions import DatabaseError, InvalidInputError, NotFoundError
from backend.util.json import SafeJson
@@ -537,6 +540,92 @@ async def update_agent_version_in_library(
return library_model.LibraryAgent.from_db(lib)
async def create_graph_in_library(
graph: graph_db.Graph,
user_id: str,
) -> tuple[graph_db.GraphModel, library_model.LibraryAgent]:
"""Create a new graph and add it to the user's library."""
graph.version = 1
graph_model = graph_db.make_graph_model(graph, user_id)
graph_model.reassign_ids(user_id=user_id, reassign_graph_id=True)
created_graph = await graph_db.create_graph(graph_model, user_id)
library_agents = await create_library_agent(
graph=created_graph,
user_id=user_id,
sensitive_action_safe_mode=True,
create_library_agents_for_sub_graphs=False,
)
if created_graph.is_active:
created_graph = await on_graph_activate(created_graph, user_id=user_id)
return created_graph, library_agents[0]
async def update_graph_in_library(
graph: graph_db.Graph,
user_id: str,
) -> tuple[graph_db.GraphModel, library_model.LibraryAgent]:
"""Create a new version of an existing graph and update the library entry."""
existing_versions = await graph_db.get_graph_all_versions(graph.id, user_id)
current_active_version = (
next((v for v in existing_versions if v.is_active), None)
if existing_versions
else None
)
graph.version = (
max(v.version for v in existing_versions) + 1 if existing_versions else 1
)
graph_model = graph_db.make_graph_model(graph, user_id)
graph_model.reassign_ids(user_id=user_id, reassign_graph_id=False)
created_graph = await graph_db.create_graph(graph_model, user_id)
library_agent = await get_library_agent_by_graph_id(user_id, created_graph.id)
if not library_agent:
raise NotFoundError(f"Library agent not found for graph {created_graph.id}")
library_agent = await update_library_agent_version_and_settings(
user_id, created_graph
)
if created_graph.is_active:
created_graph = await on_graph_activate(created_graph, user_id=user_id)
await graph_db.set_graph_active_version(
graph_id=created_graph.id,
version=created_graph.version,
user_id=user_id,
)
if current_active_version:
await on_graph_deactivate(current_active_version, user_id=user_id)
return created_graph, library_agent
async def update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_model.LibraryAgent:
"""Update library agent to point to new graph version and sync settings."""
library = await update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
updated_settings = GraphSettings.from_graph(
graph=agent_graph,
hitl_safe_mode=library.settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=library.settings.sensitive_action_safe_mode,
)
if updated_settings != library.settings:
library = await update_library_agent(
library_agent_id=library.id,
user_id=user_id,
settings=updated_settings,
)
return library
async def update_library_agent(
library_agent_id: str,
user_id: str,

View File

@@ -101,7 +101,6 @@ from backend.util.timezone_utils import (
from backend.util.virus_scanner import scan_content_safe
from .library import db as library_db
from .library import model as library_model
from .store.model import StoreAgentDetails
@@ -823,18 +822,16 @@ async def update_graph(
graph: graph_db.Graph,
user_id: Annotated[str, Security(get_user_id)],
) -> graph_db.GraphModel:
# Sanity check
if graph.id and graph.id != graph_id:
raise HTTPException(400, detail="Graph ID does not match ID in URI")
# Determine new version
existing_versions = await graph_db.get_graph_all_versions(graph_id, user_id=user_id)
if not existing_versions:
raise HTTPException(404, detail=f"Graph #{graph_id} not found")
latest_version_number = max(g.version for g in existing_versions)
graph.version = latest_version_number + 1
graph.version = max(g.version for g in existing_versions) + 1
current_active_version = next((v for v in existing_versions if v.is_active), None)
graph = graph_db.make_graph_model(graph, user_id)
graph.reassign_ids(user_id=user_id, reassign_graph_id=False)
graph.validate_graph(for_run=False)
@@ -842,27 +839,23 @@ async def update_graph(
new_graph_version = await graph_db.create_graph(graph, user_id=user_id)
if new_graph_version.is_active:
# Keep the library agent up to date with the new active version
await _update_library_agent_version_and_settings(user_id, new_graph_version)
# Handle activation of the new graph first to ensure continuity
await library_db.update_library_agent_version_and_settings(
user_id, new_graph_version
)
new_graph_version = await on_graph_activate(new_graph_version, user_id=user_id)
# Ensure new version is the only active version
await graph_db.set_graph_active_version(
graph_id=graph_id, version=new_graph_version.version, user_id=user_id
)
if current_active_version:
# Handle deactivation of the previously active version
await on_graph_deactivate(current_active_version, user_id=user_id)
# Fetch new graph version *with sub-graphs* (needed for credentials input schema)
new_graph_version_with_subgraphs = await graph_db.get_graph(
graph_id,
new_graph_version.version,
user_id=user_id,
include_subgraphs=True,
)
assert new_graph_version_with_subgraphs # make type checker happy
assert new_graph_version_with_subgraphs
return new_graph_version_with_subgraphs
@@ -900,33 +893,15 @@ async def set_graph_active_version(
)
# Keep the library agent up to date with the new active version
await _update_library_agent_version_and_settings(user_id, new_active_graph)
await library_db.update_library_agent_version_and_settings(
user_id, new_active_graph
)
if current_active_graph and current_active_graph.version != new_active_version:
# Handle deactivation of the previously active version
await on_graph_deactivate(current_active_graph, user_id=user_id)
async def _update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_model.LibraryAgent:
library = await library_db.update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
updated_settings = GraphSettings.from_graph(
graph=agent_graph,
hitl_safe_mode=library.settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=library.settings.sensitive_action_safe_mode,
)
if updated_settings != library.settings:
library = await library_db.update_library_agent(
library_agent_id=library.id,
user_id=user_id,
settings=updated_settings,
)
return library
@v1_router.patch(
path="/graphs/{graph_id}/settings",
summary="Update graph settings",

View File

@@ -0,0 +1,28 @@
"""ElevenLabs integration blocks - test credentials and shared utilities."""
from typing import Literal
from pydantic import SecretStr
from backend.data.model import APIKeyCredentials, CredentialsMetaInput
from backend.integrations.providers import ProviderName
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="elevenlabs",
api_key=SecretStr("mock-elevenlabs-api-key"),
title="Mock ElevenLabs API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
ElevenLabsCredentials = APIKeyCredentials
ElevenLabsCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.ELEVENLABS], Literal["api_key"]
]

View File

@@ -0,0 +1,77 @@
"""Text encoding block for converting special characters to escape sequences."""
import codecs
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
class TextEncoderBlock(Block):
"""
Encodes a string by converting special characters into escape sequences.
This block is the inverse of TextDecoderBlock. It takes text containing
special characters (like newlines, tabs, etc.) and converts them into
their escape sequence representations (e.g., newline becomes \\n).
"""
class Input(BlockSchemaInput):
"""Input schema for TextEncoderBlock."""
text: str = SchemaField(
description="A string containing special characters to be encoded",
placeholder="Your text with newlines and quotes to encode",
)
class Output(BlockSchemaOutput):
"""Output schema for TextEncoderBlock."""
encoded_text: str = SchemaField(
description="The encoded text with special characters converted to escape sequences"
)
error: str = SchemaField(description="Error message if encoding fails")
def __init__(self):
super().__init__(
id="5185f32e-4b65-4ecf-8fbb-873f003f09d6",
description="Encodes a string by converting special characters into escape sequences",
categories={BlockCategory.TEXT},
input_schema=TextEncoderBlock.Input,
output_schema=TextEncoderBlock.Output,
test_input={
"text": """Hello
World!
This is a "quoted" string."""
},
test_output=[
(
"encoded_text",
"""Hello\\nWorld!\\nThis is a "quoted" string.""",
)
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
"""
Encode the input text by converting special characters to escape sequences.
Args:
input_data: The input containing the text to encode.
**kwargs: Additional keyword arguments (unused).
Yields:
The encoded text with escape sequences, or an error message if encoding fails.
"""
try:
encoded_text = codecs.encode(input_data.text, "unicode_escape").decode(
"utf-8"
)
yield "encoded_text", encoded_text
except Exception as e:
yield "error", f"Encoding error: {str(e)}"

View File

@@ -115,6 +115,7 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
CLAUDE_4_5_OPUS = "claude-opus-4-5-20251101"
CLAUDE_4_5_SONNET = "claude-sonnet-4-5-20250929"
CLAUDE_4_5_HAIKU = "claude-haiku-4-5-20251001"
CLAUDE_4_6_OPUS = "claude-opus-4-6"
CLAUDE_3_HAIKU = "claude-3-haiku-20240307"
# AI/ML API models
AIML_API_QWEN2_5_72B = "Qwen/Qwen2.5-72B-Instruct-Turbo"
@@ -270,6 +271,9 @@ MODEL_METADATA = {
LlmModel.CLAUDE_4_SONNET: ModelMetadata(
"anthropic", 200000, 64000, "Claude Sonnet 4", "Anthropic", "Anthropic", 2
), # claude-4-sonnet-20250514
LlmModel.CLAUDE_4_6_OPUS: ModelMetadata(
"anthropic", 200000, 128000, "Claude Opus 4.6", "Anthropic", "Anthropic", 3
), # claude-opus-4-6
LlmModel.CLAUDE_4_5_OPUS: ModelMetadata(
"anthropic", 200000, 64000, "Claude Opus 4.5", "Anthropic", "Anthropic", 3
), # claude-opus-4-5-20251101

View File

@@ -1,246 +0,0 @@
import os
import tempfile
from typing import Optional
from moviepy.audio.io.AudioFileClip import AudioFileClip
from moviepy.video.fx.Loop import Loop
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class MediaDurationBlock(Block):
class Input(BlockSchemaInput):
media_in: MediaFileType = SchemaField(
description="Media input (URL, data URI, or local path)."
)
is_video: bool = SchemaField(
description="Whether the media is a video (True) or audio (False).",
default=True,
)
class Output(BlockSchemaOutput):
duration: float = SchemaField(
description="Duration of the media file (in seconds)."
)
def __init__(self):
super().__init__(
id="d8b91fd4-da26-42d4-8ecb-8b196c6d84b6",
description="Block to get the duration of a media file.",
categories={BlockCategory.MULTIMEDIA},
input_schema=MediaDurationBlock.Input,
output_schema=MediaDurationBlock.Output,
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
# 1) Store the input media locally
local_media_path = await store_media_file(
file=input_data.media_in,
execution_context=execution_context,
return_format="for_local_processing",
)
assert execution_context.graph_exec_id is not None
media_abspath = get_exec_file_path(
execution_context.graph_exec_id, local_media_path
)
# 2) Load the clip
if input_data.is_video:
clip = VideoFileClip(media_abspath)
else:
clip = AudioFileClip(media_abspath)
yield "duration", clip.duration
class LoopVideoBlock(Block):
"""
Block for looping (repeating) a video clip until a given duration or number of loops.
"""
class Input(BlockSchemaInput):
video_in: MediaFileType = SchemaField(
description="The input video (can be a URL, data URI, or local path)."
)
# Provide EITHER a `duration` or `n_loops` or both. We'll demonstrate `duration`.
duration: Optional[float] = SchemaField(
description="Target duration (in seconds) to loop the video to. If omitted, defaults to no looping.",
default=None,
ge=0.0,
)
n_loops: Optional[int] = SchemaField(
description="Number of times to repeat the video. If omitted, defaults to 1 (no repeat).",
default=None,
ge=1,
)
class Output(BlockSchemaOutput):
video_out: str = SchemaField(
description="Looped video returned either as a relative path or a data URI."
)
def __init__(self):
super().__init__(
id="8bf9eef6-5451-4213-b265-25306446e94b",
description="Block to loop a video to a given duration or number of repeats.",
categories={BlockCategory.MULTIMEDIA},
input_schema=LoopVideoBlock.Input,
output_schema=LoopVideoBlock.Output,
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
assert execution_context.graph_exec_id is not None
assert execution_context.node_exec_id is not None
graph_exec_id = execution_context.graph_exec_id
node_exec_id = execution_context.node_exec_id
# 1) Store the input video locally
local_video_path = await store_media_file(
file=input_data.video_in,
execution_context=execution_context,
return_format="for_local_processing",
)
input_abspath = get_exec_file_path(graph_exec_id, local_video_path)
# 2) Load the clip
clip = VideoFileClip(input_abspath)
# 3) Apply the loop effect
looped_clip = clip
if input_data.duration:
# Loop until we reach the specified duration
looped_clip = looped_clip.with_effects([Loop(duration=input_data.duration)])
elif input_data.n_loops:
looped_clip = looped_clip.with_effects([Loop(n=input_data.n_loops)])
else:
raise ValueError("Either 'duration' or 'n_loops' must be provided.")
assert isinstance(looped_clip, VideoFileClip)
# 4) Save the looped output
output_filename = MediaFileType(
f"{node_exec_id}_looped_{os.path.basename(local_video_path)}"
)
output_abspath = get_exec_file_path(graph_exec_id, output_filename)
looped_clip = looped_clip.with_audio(clip.audio)
looped_clip.write_videofile(output_abspath, codec="libx264", audio_codec="aac")
# Return output - for_block_output returns workspace:// if available, else data URI
video_out = await store_media_file(
file=output_filename,
execution_context=execution_context,
return_format="for_block_output",
)
yield "video_out", video_out
class AddAudioToVideoBlock(Block):
"""
Block that adds (attaches) an audio track to an existing video.
Optionally scale the volume of the new track.
"""
class Input(BlockSchemaInput):
video_in: MediaFileType = SchemaField(
description="Video input (URL, data URI, or local path)."
)
audio_in: MediaFileType = SchemaField(
description="Audio input (URL, data URI, or local path)."
)
volume: float = SchemaField(
description="Volume scale for the newly attached audio track (1.0 = original).",
default=1.0,
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Final video (with attached audio), as a path or data URI."
)
def __init__(self):
super().__init__(
id="3503748d-62b6-4425-91d6-725b064af509",
description="Block to attach an audio file to a video file using moviepy.",
categories={BlockCategory.MULTIMEDIA},
input_schema=AddAudioToVideoBlock.Input,
output_schema=AddAudioToVideoBlock.Output,
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
assert execution_context.graph_exec_id is not None
assert execution_context.node_exec_id is not None
graph_exec_id = execution_context.graph_exec_id
node_exec_id = execution_context.node_exec_id
# 1) Store the inputs locally
local_video_path = await store_media_file(
file=input_data.video_in,
execution_context=execution_context,
return_format="for_local_processing",
)
local_audio_path = await store_media_file(
file=input_data.audio_in,
execution_context=execution_context,
return_format="for_local_processing",
)
abs_temp_dir = os.path.join(tempfile.gettempdir(), "exec_file", graph_exec_id)
video_abspath = os.path.join(abs_temp_dir, local_video_path)
audio_abspath = os.path.join(abs_temp_dir, local_audio_path)
# 2) Load video + audio with moviepy
video_clip = VideoFileClip(video_abspath)
audio_clip = AudioFileClip(audio_abspath)
# Optionally scale volume
if input_data.volume != 1.0:
audio_clip = audio_clip.with_volume_scaled(input_data.volume)
# 3) Attach the new audio track
final_clip = video_clip.with_audio(audio_clip)
# 4) Write to output file
output_filename = MediaFileType(
f"{node_exec_id}_audio_attached_{os.path.basename(local_video_path)}"
)
output_abspath = os.path.join(abs_temp_dir, output_filename)
final_clip.write_videofile(output_abspath, codec="libx264", audio_codec="aac")
# 5) Return output - for_block_output returns workspace:// if available, else data URI
video_out = await store_media_file(
file=output_filename,
execution_context=execution_context,
return_format="for_block_output",
)
yield "video_out", video_out

View File

@@ -0,0 +1,77 @@
import pytest
from backend.blocks.encoder_block import TextEncoderBlock
@pytest.mark.asyncio
async def test_text_encoder_basic():
"""Test basic encoding of newlines and special characters."""
block = TextEncoderBlock()
result = []
async for output in block.run(TextEncoderBlock.Input(text="Hello\nWorld")):
result.append(output)
assert len(result) == 1
assert result[0][0] == "encoded_text"
assert result[0][1] == "Hello\\nWorld"
@pytest.mark.asyncio
async def test_text_encoder_multiple_escapes():
"""Test encoding of multiple escape sequences."""
block = TextEncoderBlock()
result = []
async for output in block.run(
TextEncoderBlock.Input(text="Line1\nLine2\tTabbed\rCarriage")
):
result.append(output)
assert len(result) == 1
assert result[0][0] == "encoded_text"
assert "\\n" in result[0][1]
assert "\\t" in result[0][1]
assert "\\r" in result[0][1]
@pytest.mark.asyncio
async def test_text_encoder_unicode():
"""Test that unicode characters are handled correctly."""
block = TextEncoderBlock()
result = []
async for output in block.run(TextEncoderBlock.Input(text="Hello 世界\n")):
result.append(output)
assert len(result) == 1
assert result[0][0] == "encoded_text"
# Unicode characters should be escaped as \uXXXX sequences
assert "\\n" in result[0][1]
@pytest.mark.asyncio
async def test_text_encoder_empty_string():
"""Test encoding of an empty string."""
block = TextEncoderBlock()
result = []
async for output in block.run(TextEncoderBlock.Input(text="")):
result.append(output)
assert len(result) == 1
assert result[0][0] == "encoded_text"
assert result[0][1] == ""
@pytest.mark.asyncio
async def test_text_encoder_error_handling():
"""Test that encoding errors are handled gracefully."""
from unittest.mock import patch
block = TextEncoderBlock()
result = []
with patch("codecs.encode", side_effect=Exception("Mocked encoding error")):
async for output in block.run(TextEncoderBlock.Input(text="test")):
result.append(output)
assert len(result) == 1
assert result[0][0] == "error"
assert "Mocked encoding error" in result[0][1]

View File

@@ -0,0 +1,37 @@
"""Video editing blocks for AutoGPT Platform.
This module provides blocks for:
- Downloading videos from URLs (YouTube, Vimeo, news sites, direct links)
- Clipping/trimming video segments
- Concatenating multiple videos
- Adding text overlays
- Adding AI-generated narration
- Getting media duration
- Looping videos
- Adding audio to videos
Dependencies:
- yt-dlp: For video downloading
- moviepy: For video editing operations
- elevenlabs: For AI narration (optional)
"""
from backend.blocks.video.add_audio import AddAudioToVideoBlock
from backend.blocks.video.clip import VideoClipBlock
from backend.blocks.video.concat import VideoConcatBlock
from backend.blocks.video.download import VideoDownloadBlock
from backend.blocks.video.duration import MediaDurationBlock
from backend.blocks.video.loop import LoopVideoBlock
from backend.blocks.video.narration import VideoNarrationBlock
from backend.blocks.video.text_overlay import VideoTextOverlayBlock
__all__ = [
"AddAudioToVideoBlock",
"LoopVideoBlock",
"MediaDurationBlock",
"VideoClipBlock",
"VideoConcatBlock",
"VideoDownloadBlock",
"VideoNarrationBlock",
"VideoTextOverlayBlock",
]

View File

@@ -0,0 +1,131 @@
"""Shared utilities for video blocks."""
from __future__ import annotations
import logging
import os
import re
import subprocess
from pathlib import Path
logger = logging.getLogger(__name__)
# Known operation tags added by video blocks
_VIDEO_OPS = (
r"(?:clip|overlay|narrated|looped|concat|audio_attached|with_audio|narration)"
)
# Matches: {node_exec_id}_{operation}_ where node_exec_id contains a UUID
_BLOCK_PREFIX_RE = re.compile(
r"^[a-zA-Z0-9_-]*"
r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
r"[a-zA-Z0-9_-]*"
r"_" + _VIDEO_OPS + r"_"
)
# Matches: a lone {node_exec_id}_ prefix (no operation keyword, e.g. download output)
_UUID_PREFIX_RE = re.compile(
r"^[a-zA-Z0-9_-]*"
r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
r"[a-zA-Z0-9_-]*_"
)
def extract_source_name(input_path: str, max_length: int = 50) -> str:
"""Extract the original source filename by stripping block-generated prefixes.
Iteratively removes {node_exec_id}_{operation}_ prefixes that accumulate
when chaining video blocks, recovering the original human-readable name.
Safe for plain filenames (no UUID -> no stripping).
Falls back to "video" if everything is stripped.
"""
stem = Path(input_path).stem
# Pass 1: strip {node_exec_id}_{operation}_ prefixes iteratively
while _BLOCK_PREFIX_RE.match(stem):
stem = _BLOCK_PREFIX_RE.sub("", stem, count=1)
# Pass 2: strip a lone {node_exec_id}_ prefix (e.g. from download block)
if _UUID_PREFIX_RE.match(stem):
stem = _UUID_PREFIX_RE.sub("", stem, count=1)
if not stem:
return "video"
return stem[:max_length]
def get_video_codecs(output_path: str) -> tuple[str, str]:
"""Get appropriate video and audio codecs based on output file extension.
Args:
output_path: Path to the output file (used to determine extension)
Returns:
Tuple of (video_codec, audio_codec)
Codec mappings:
- .mp4: H.264 + AAC (universal compatibility)
- .webm: VP8 + Vorbis (web streaming)
- .mkv: H.264 + AAC (container supports many codecs)
- .mov: H.264 + AAC (Apple QuickTime, widely compatible)
- .m4v: H.264 + AAC (Apple iTunes/devices)
- .avi: MPEG-4 + MP3 (legacy Windows)
"""
ext = os.path.splitext(output_path)[1].lower()
codec_map: dict[str, tuple[str, str]] = {
".mp4": ("libx264", "aac"),
".webm": ("libvpx", "libvorbis"),
".mkv": ("libx264", "aac"),
".mov": ("libx264", "aac"),
".m4v": ("libx264", "aac"),
".avi": ("mpeg4", "libmp3lame"),
}
return codec_map.get(ext, ("libx264", "aac"))
def strip_chapters_inplace(video_path: str) -> None:
"""Strip chapter metadata from a media file in-place using ffmpeg.
MoviePy 2.x crashes with IndexError when parsing files with embedded
chapter metadata (https://github.com/Zulko/moviepy/issues/2419).
This strips chapters without re-encoding.
Args:
video_path: Absolute path to the media file to strip chapters from.
"""
base, ext = os.path.splitext(video_path)
tmp_path = base + ".tmp" + ext
try:
result = subprocess.run(
[
"ffmpeg",
"-y",
"-i",
video_path,
"-map_chapters",
"-1",
"-codec",
"copy",
tmp_path,
],
capture_output=True,
text=True,
timeout=300,
)
if result.returncode != 0:
logger.warning(
"ffmpeg chapter strip failed (rc=%d): %s",
result.returncode,
result.stderr,
)
return
os.replace(tmp_path, video_path)
except FileNotFoundError:
logger.warning("ffmpeg not found; skipping chapter strip")
finally:
if os.path.exists(tmp_path):
os.unlink(tmp_path)

View File

@@ -0,0 +1,113 @@
"""AddAudioToVideoBlock - Attach an audio track to a video file."""
from moviepy.audio.io.AudioFileClip import AudioFileClip
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.video._utils import extract_source_name, strip_chapters_inplace
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class AddAudioToVideoBlock(Block):
"""Add (attach) an audio track to an existing video."""
class Input(BlockSchemaInput):
video_in: MediaFileType = SchemaField(
description="Video input (URL, data URI, or local path)."
)
audio_in: MediaFileType = SchemaField(
description="Audio input (URL, data URI, or local path)."
)
volume: float = SchemaField(
description="Volume scale for the newly attached audio track (1.0 = original).",
default=1.0,
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Final video (with attached audio), as a path or data URI."
)
def __init__(self):
super().__init__(
id="3503748d-62b6-4425-91d6-725b064af509",
description="Block to attach an audio file to a video file using moviepy.",
categories={BlockCategory.MULTIMEDIA},
input_schema=AddAudioToVideoBlock.Input,
output_schema=AddAudioToVideoBlock.Output,
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
assert execution_context.graph_exec_id is not None
assert execution_context.node_exec_id is not None
graph_exec_id = execution_context.graph_exec_id
node_exec_id = execution_context.node_exec_id
# 1) Store the inputs locally
local_video_path = await store_media_file(
file=input_data.video_in,
execution_context=execution_context,
return_format="for_local_processing",
)
local_audio_path = await store_media_file(
file=input_data.audio_in,
execution_context=execution_context,
return_format="for_local_processing",
)
video_abspath = get_exec_file_path(graph_exec_id, local_video_path)
audio_abspath = get_exec_file_path(graph_exec_id, local_audio_path)
# 2) Load video + audio with moviepy
strip_chapters_inplace(video_abspath)
strip_chapters_inplace(audio_abspath)
video_clip = None
audio_clip = None
final_clip = None
try:
video_clip = VideoFileClip(video_abspath)
audio_clip = AudioFileClip(audio_abspath)
# Optionally scale volume
if input_data.volume != 1.0:
audio_clip = audio_clip.with_volume_scaled(input_data.volume)
# 3) Attach the new audio track
final_clip = video_clip.with_audio(audio_clip)
# 4) Write to output file
source = extract_source_name(local_video_path)
output_filename = MediaFileType(f"{node_exec_id}_with_audio_{source}.mp4")
output_abspath = get_exec_file_path(graph_exec_id, output_filename)
final_clip.write_videofile(
output_abspath, codec="libx264", audio_codec="aac"
)
finally:
if final_clip:
final_clip.close()
if audio_clip:
audio_clip.close()
if video_clip:
video_clip.close()
# 5) Return output - for_block_output returns workspace:// if available, else data URI
video_out = await store_media_file(
file=output_filename,
execution_context=execution_context,
return_format="for_block_output",
)
yield "video_out", video_out

View File

@@ -0,0 +1,167 @@
"""VideoClipBlock - Extract a segment from a video file."""
from typing import Literal
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.video._utils import (
extract_source_name,
get_video_codecs,
strip_chapters_inplace,
)
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.exceptions import BlockExecutionError
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class VideoClipBlock(Block):
"""Extract a time segment from a video."""
class Input(BlockSchemaInput):
video_in: MediaFileType = SchemaField(
description="Input video (URL, data URI, or local path)"
)
start_time: float = SchemaField(description="Start time in seconds", ge=0.0)
end_time: float = SchemaField(description="End time in seconds", ge=0.0)
output_format: Literal["mp4", "webm", "mkv", "mov"] = SchemaField(
description="Output format", default="mp4", advanced=True
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Clipped video file (path or data URI)"
)
duration: float = SchemaField(description="Clip duration in seconds")
def __init__(self):
super().__init__(
id="8f539119-e580-4d86-ad41-86fbcb22abb1",
description="Extract a time segment from a video",
categories={BlockCategory.MULTIMEDIA},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"video_in": "/tmp/test.mp4",
"start_time": 0.0,
"end_time": 10.0,
},
test_output=[("video_out", str), ("duration", float)],
test_mock={
"_clip_video": lambda *args: 10.0,
"_store_input_video": lambda *args, **kwargs: "test.mp4",
"_store_output_video": lambda *args, **kwargs: "clip_test.mp4",
},
)
async def _store_input_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store input video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_local_processing",
)
async def _store_output_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store output video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_block_output",
)
def _clip_video(
self,
video_abspath: str,
output_abspath: str,
start_time: float,
end_time: float,
) -> float:
"""Extract a clip from a video. Extracted for testability."""
clip = None
subclip = None
try:
strip_chapters_inplace(video_abspath)
clip = VideoFileClip(video_abspath)
subclip = clip.subclipped(start_time, end_time)
video_codec, audio_codec = get_video_codecs(output_abspath)
subclip.write_videofile(
output_abspath, codec=video_codec, audio_codec=audio_codec
)
return subclip.duration
finally:
if subclip:
subclip.close()
if clip:
clip.close()
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
node_exec_id: str,
**kwargs,
) -> BlockOutput:
# Validate time range
if input_data.end_time <= input_data.start_time:
raise BlockExecutionError(
message=f"end_time ({input_data.end_time}) must be greater than start_time ({input_data.start_time})",
block_name=self.name,
block_id=str(self.id),
)
try:
assert execution_context.graph_exec_id is not None
# Store the input video locally
local_video_path = await self._store_input_video(
execution_context, input_data.video_in
)
video_abspath = get_exec_file_path(
execution_context.graph_exec_id, local_video_path
)
# Build output path
source = extract_source_name(local_video_path)
output_filename = MediaFileType(
f"{node_exec_id}_clip_{source}.{input_data.output_format}"
)
output_abspath = get_exec_file_path(
execution_context.graph_exec_id, output_filename
)
duration = self._clip_video(
video_abspath,
output_abspath,
input_data.start_time,
input_data.end_time,
)
# Return as workspace path or data URI based on context
video_out = await self._store_output_video(
execution_context, output_filename
)
yield "video_out", video_out
yield "duration", duration
except BlockExecutionError:
raise
except Exception as e:
raise BlockExecutionError(
message=f"Failed to clip video: {e}",
block_name=self.name,
block_id=str(self.id),
) from e

View File

@@ -0,0 +1,227 @@
"""VideoConcatBlock - Concatenate multiple video clips into one."""
from typing import Literal
from moviepy import concatenate_videoclips
from moviepy.video.fx import CrossFadeIn, CrossFadeOut, FadeIn, FadeOut
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.video._utils import (
extract_source_name,
get_video_codecs,
strip_chapters_inplace,
)
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.exceptions import BlockExecutionError
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class VideoConcatBlock(Block):
"""Merge multiple video clips into one continuous video."""
class Input(BlockSchemaInput):
videos: list[MediaFileType] = SchemaField(
description="List of video files to concatenate (in order)"
)
transition: Literal["none", "crossfade", "fade_black"] = SchemaField(
description="Transition between clips", default="none"
)
transition_duration: int = SchemaField(
description="Transition duration in seconds",
default=1,
ge=0,
advanced=True,
)
output_format: Literal["mp4", "webm", "mkv", "mov"] = SchemaField(
description="Output format", default="mp4", advanced=True
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Concatenated video file (path or data URI)"
)
total_duration: float = SchemaField(description="Total duration in seconds")
def __init__(self):
super().__init__(
id="9b0f531a-1118-487f-aeec-3fa63ea8900a",
description="Merge multiple video clips into one continuous video",
categories={BlockCategory.MULTIMEDIA},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"videos": ["/tmp/a.mp4", "/tmp/b.mp4"],
},
test_output=[
("video_out", str),
("total_duration", float),
],
test_mock={
"_concat_videos": lambda *args: 20.0,
"_store_input_video": lambda *args, **kwargs: "test.mp4",
"_store_output_video": lambda *args, **kwargs: "concat_test.mp4",
},
)
async def _store_input_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store input video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_local_processing",
)
async def _store_output_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store output video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_block_output",
)
def _concat_videos(
self,
video_abspaths: list[str],
output_abspath: str,
transition: str,
transition_duration: int,
) -> float:
"""Concatenate videos. Extracted for testability.
Returns:
Total duration of the concatenated video.
"""
clips = []
faded_clips = []
final = None
try:
# Load clips
for v in video_abspaths:
strip_chapters_inplace(v)
clips.append(VideoFileClip(v))
# Validate transition_duration against shortest clip
if transition in {"crossfade", "fade_black"} and transition_duration > 0:
min_duration = min(c.duration for c in clips)
if transition_duration >= min_duration:
raise BlockExecutionError(
message=(
f"transition_duration ({transition_duration}s) must be "
f"shorter than the shortest clip ({min_duration:.2f}s)"
),
block_name=self.name,
block_id=str(self.id),
)
if transition == "crossfade":
for i, clip in enumerate(clips):
effects = []
if i > 0:
effects.append(CrossFadeIn(transition_duration))
if i < len(clips) - 1:
effects.append(CrossFadeOut(transition_duration))
if effects:
clip = clip.with_effects(effects)
faded_clips.append(clip)
final = concatenate_videoclips(
faded_clips,
method="compose",
padding=-transition_duration,
)
elif transition == "fade_black":
for clip in clips:
faded = clip.with_effects(
[FadeIn(transition_duration), FadeOut(transition_duration)]
)
faded_clips.append(faded)
final = concatenate_videoclips(faded_clips)
else:
final = concatenate_videoclips(clips)
video_codec, audio_codec = get_video_codecs(output_abspath)
final.write_videofile(
output_abspath, codec=video_codec, audio_codec=audio_codec
)
return final.duration
finally:
if final:
final.close()
for clip in faded_clips:
clip.close()
for clip in clips:
clip.close()
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
node_exec_id: str,
**kwargs,
) -> BlockOutput:
# Validate minimum clips
if len(input_data.videos) < 2:
raise BlockExecutionError(
message="At least 2 videos are required for concatenation",
block_name=self.name,
block_id=str(self.id),
)
try:
assert execution_context.graph_exec_id is not None
# Store all input videos locally
video_abspaths = []
for video in input_data.videos:
local_path = await self._store_input_video(execution_context, video)
video_abspaths.append(
get_exec_file_path(execution_context.graph_exec_id, local_path)
)
# Build output path
source = (
extract_source_name(video_abspaths[0]) if video_abspaths else "video"
)
output_filename = MediaFileType(
f"{node_exec_id}_concat_{source}.{input_data.output_format}"
)
output_abspath = get_exec_file_path(
execution_context.graph_exec_id, output_filename
)
total_duration = self._concat_videos(
video_abspaths,
output_abspath,
input_data.transition,
input_data.transition_duration,
)
# Return as workspace path or data URI based on context
video_out = await self._store_output_video(
execution_context, output_filename
)
yield "video_out", video_out
yield "total_duration", total_duration
except BlockExecutionError:
raise
except Exception as e:
raise BlockExecutionError(
message=f"Failed to concatenate videos: {e}",
block_name=self.name,
block_id=str(self.id),
) from e
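The crossfade branch passes a negative `padding` to `concatenate_videoclips`, so adjacent clips overlap by the transition length. A rough back-of-the-envelope check of the resulting `total_duration` (hypothetical clip lengths, not values from the PR):

```python
# Expected crossfade output length: consecutive clips overlap by
# transition_duration seconds, so each join removes that much time.
clip_durations = [8.0, 12.0, 10.0]   # hypothetical input clip lengths (seconds)
transition_duration = 1

expected = sum(clip_durations) - (len(clip_durations) - 1) * transition_duration
print(expected)  # 28.0 -> two joins, each overlapping by one second
```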


@@ -0,0 +1,172 @@
"""VideoDownloadBlock - Download video from URL (YouTube, Vimeo, news sites, direct links)."""
import os
import typing
from typing import Literal
import yt_dlp
if typing.TYPE_CHECKING:
from yt_dlp import _Params
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.exceptions import BlockExecutionError
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class VideoDownloadBlock(Block):
"""Download video from URL using yt-dlp."""
class Input(BlockSchemaInput):
url: str = SchemaField(
description="URL of the video to download (YouTube, Vimeo, direct link, etc.)",
placeholder="https://www.youtube.com/watch?v=...",
)
quality: Literal["best", "1080p", "720p", "480p", "audio_only"] = SchemaField(
description="Video quality preference", default="720p"
)
output_format: Literal["mp4", "webm", "mkv"] = SchemaField(
description="Output video format", default="mp4", advanced=True
)
class Output(BlockSchemaOutput):
video_file: MediaFileType = SchemaField(
description="Downloaded video (path or data URI)"
)
duration: float = SchemaField(description="Video duration in seconds")
title: str = SchemaField(description="Video title from source")
source_url: str = SchemaField(description="Original source URL")
def __init__(self):
super().__init__(
id="c35daabb-cd60-493b-b9ad-51f1fe4b50c4",
description="Download video from URL (YouTube, Vimeo, news sites, direct links)",
categories={BlockCategory.MULTIMEDIA},
input_schema=self.Input,
output_schema=self.Output,
disabled=True, # Disabled until we can sandbox yt-dlp and handle the security implications
test_input={
"url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
"quality": "480p",
},
test_output=[
("video_file", str),
("duration", float),
("title", str),
("source_url", str),
],
test_mock={
"_download_video": lambda *args: (
"video.mp4",
212.0,
"Test Video",
),
"_store_output_video": lambda *args, **kwargs: "video.mp4",
},
)
async def _store_output_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store output video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_block_output",
)
def _get_format_string(self, quality: str) -> str:
formats = {
"best": "bestvideo+bestaudio/best",
"1080p": "bestvideo[height<=1080]+bestaudio/best[height<=1080]",
"720p": "bestvideo[height<=720]+bestaudio/best[height<=720]",
"480p": "bestvideo[height<=480]+bestaudio/best[height<=480]",
"audio_only": "bestaudio/best",
}
return formats.get(quality, formats["720p"])
def _download_video(
self,
url: str,
quality: str,
output_format: str,
output_dir: str,
node_exec_id: str,
) -> tuple[str, float, str]:
"""Download video. Extracted for testability."""
output_template = os.path.join(
output_dir, f"{node_exec_id}_%(title).50s.%(ext)s"
)
ydl_opts: "_Params" = {
"format": f"{self._get_format_string(quality)}/best",
"outtmpl": output_template,
"merge_output_format": output_format,
"quiet": True,
"no_warnings": True,
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
info = ydl.extract_info(url, download=True)
video_path = ydl.prepare_filename(info)
# Handle format conversion in filename
if not video_path.endswith(f".{output_format}"):
video_path = video_path.rsplit(".", 1)[0] + f".{output_format}"
# Return just the filename, not the full path
filename = os.path.basename(video_path)
return (
filename,
info.get("duration") or 0.0,
info.get("title") or "Unknown",
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
node_exec_id: str,
**kwargs,
) -> BlockOutput:
try:
assert execution_context.graph_exec_id is not None
# Get the exec file directory
output_dir = get_exec_file_path(execution_context.graph_exec_id, "")
os.makedirs(output_dir, exist_ok=True)
filename, duration, title = self._download_video(
input_data.url,
input_data.quality,
input_data.output_format,
output_dir,
node_exec_id,
)
# Return as workspace path or data URI based on context
video_out = await self._store_output_video(
execution_context, MediaFileType(filename)
)
yield "video_file", video_out
yield "duration", duration
yield "title", title
yield "source_url", input_data.url
except Exception as e:
raise BlockExecutionError(
message=f"Failed to download video: {e}",
block_name=self.name,
block_id=str(self.id),
) from e
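For reference, this is roughly what the options built by `_get_format_string` and `_download_video` look like for a 480p MP4 download; the URL and output template are placeholders, and the block itself ships disabled until yt-dlp can be sandboxed:

```python
import yt_dlp

URL = "https://example.com/watch?v=abc123"  # placeholder URL

ydl_opts = {
    # "480p" maps to this selector; the block appends a final "/best" fallback.
    "format": "bestvideo[height<=480]+bestaudio/best[height<=480]/best",
    "outtmpl": "/tmp/%(title).50s.%(ext)s",  # simplified; the block prefixes node_exec_id
    "merge_output_format": "mp4",
    "quiet": True,
    "no_warnings": True,
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    info = ydl.extract_info(URL, download=True)
    print(ydl.prepare_filename(info), info.get("duration"), info.get("title"))
```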


@@ -0,0 +1,77 @@
"""MediaDurationBlock - Get the duration of a media file."""
from moviepy.audio.io.AudioFileClip import AudioFileClip
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.video._utils import strip_chapters_inplace
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class MediaDurationBlock(Block):
"""Get the duration of a media file (video or audio)."""
class Input(BlockSchemaInput):
media_in: MediaFileType = SchemaField(
description="Media input (URL, data URI, or local path)."
)
is_video: bool = SchemaField(
description="Whether the media is a video (True) or audio (False).",
default=True,
)
class Output(BlockSchemaOutput):
duration: float = SchemaField(
description="Duration of the media file (in seconds)."
)
def __init__(self):
super().__init__(
id="d8b91fd4-da26-42d4-8ecb-8b196c6d84b6",
description="Block to get the duration of a media file.",
categories={BlockCategory.MULTIMEDIA},
input_schema=MediaDurationBlock.Input,
output_schema=MediaDurationBlock.Output,
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
# 1) Store the input media locally
local_media_path = await store_media_file(
file=input_data.media_in,
execution_context=execution_context,
return_format="for_local_processing",
)
assert execution_context.graph_exec_id is not None
media_abspath = get_exec_file_path(
execution_context.graph_exec_id, local_media_path
)
# 2) Strip chapters to avoid MoviePy crash, then load the clip
strip_chapters_inplace(media_abspath)
clip = None
try:
if input_data.is_video:
clip = VideoFileClip(media_abspath)
else:
clip = AudioFileClip(media_abspath)
duration = clip.duration
finally:
if clip:
clip.close()
yield "duration", duration


@@ -0,0 +1,115 @@
"""LoopVideoBlock - Loop a video to a given duration or number of repeats."""
from typing import Optional
from moviepy.video.fx.Loop import Loop
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.video._utils import extract_source_name, strip_chapters_inplace
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class LoopVideoBlock(Block):
"""Loop (repeat) a video clip until a given duration or number of loops."""
class Input(BlockSchemaInput):
video_in: MediaFileType = SchemaField(
description="The input video (can be a URL, data URI, or local path)."
)
duration: Optional[float] = SchemaField(
description="Target duration (in seconds) to loop the video to. Either duration or n_loops must be provided.",
default=None,
ge=0.0,
le=3600.0, # Max 1 hour to prevent disk exhaustion
)
n_loops: Optional[int] = SchemaField(
description="Number of times to repeat the video. Either n_loops or duration must be provided.",
default=None,
ge=1,
le=10, # Max 10 loops to prevent disk exhaustion
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Looped video returned either as a relative path or a data URI."
)
def __init__(self):
super().__init__(
id="8bf9eef6-5451-4213-b265-25306446e94b",
description="Block to loop a video to a given duration or number of repeats.",
categories={BlockCategory.MULTIMEDIA},
input_schema=LoopVideoBlock.Input,
output_schema=LoopVideoBlock.Output,
)
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
assert execution_context.graph_exec_id is not None
assert execution_context.node_exec_id is not None
graph_exec_id = execution_context.graph_exec_id
node_exec_id = execution_context.node_exec_id
# 1) Store the input video locally
local_video_path = await store_media_file(
file=input_data.video_in,
execution_context=execution_context,
return_format="for_local_processing",
)
input_abspath = get_exec_file_path(graph_exec_id, local_video_path)
# 2) Load the clip
strip_chapters_inplace(input_abspath)
clip = None
looped_clip = None
try:
clip = VideoFileClip(input_abspath)
# 3) Apply the loop effect
if input_data.duration:
# Loop until we reach the specified duration
looped_clip = clip.with_effects([Loop(duration=input_data.duration)])
elif input_data.n_loops:
looped_clip = clip.with_effects([Loop(n=input_data.n_loops)])
else:
raise ValueError("Either 'duration' or 'n_loops' must be provided.")
assert isinstance(looped_clip, VideoFileClip)
# 4) Save the looped output
source = extract_source_name(local_video_path)
output_filename = MediaFileType(f"{node_exec_id}_looped_{source}.mp4")
output_abspath = get_exec_file_path(graph_exec_id, output_filename)
looped_clip = looped_clip.with_audio(clip.audio)
looped_clip.write_videofile(
output_abspath, codec="libx264", audio_codec="aac"
)
finally:
if looped_clip:
looped_clip.close()
if clip:
clip.close()
# Return output - for_block_output returns workspace:// if available, else data URI
video_out = await store_media_file(
file=output_filename,
execution_context=execution_context,
return_format="for_block_output",
)
yield "video_out", video_out


@@ -0,0 +1,267 @@
"""VideoNarrationBlock - Generate AI voice narration and add to video."""
import os
from typing import Literal
from elevenlabs import ElevenLabs
from moviepy import CompositeAudioClip
from moviepy.audio.io.AudioFileClip import AudioFileClip
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.elevenlabs._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
ElevenLabsCredentials,
ElevenLabsCredentialsInput,
)
from backend.blocks.video._utils import (
extract_source_name,
get_video_codecs,
strip_chapters_inplace,
)
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import CredentialsField, SchemaField
from backend.util.exceptions import BlockExecutionError
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class VideoNarrationBlock(Block):
"""Generate AI narration and add to video."""
class Input(BlockSchemaInput):
credentials: ElevenLabsCredentialsInput = CredentialsField(
description="ElevenLabs API key for voice synthesis"
)
video_in: MediaFileType = SchemaField(
description="Input video (URL, data URI, or local path)"
)
script: str = SchemaField(description="Narration script text")
voice_id: str = SchemaField(
description="ElevenLabs voice ID", default="21m00Tcm4TlvDq8ikWAM" # Rachel
)
model_id: Literal[
"eleven_multilingual_v2",
"eleven_flash_v2_5",
"eleven_turbo_v2_5",
"eleven_turbo_v2",
] = SchemaField(
description="ElevenLabs TTS model",
default="eleven_multilingual_v2",
)
mix_mode: Literal["replace", "mix", "ducking"] = SchemaField(
description="How to combine with original audio. 'ducking' applies stronger attenuation than 'mix'.",
default="ducking",
)
narration_volume: float = SchemaField(
description="Narration volume (0.0 to 2.0)",
default=1.0,
ge=0.0,
le=2.0,
advanced=True,
)
original_volume: float = SchemaField(
description="Original audio volume when mixing (0.0 to 1.0)",
default=0.3,
ge=0.0,
le=1.0,
advanced=True,
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Video with narration (path or data URI)"
)
audio_file: MediaFileType = SchemaField(
description="Generated audio file (path or data URI)"
)
def __init__(self):
super().__init__(
id="3d036b53-859c-4b17-9826-ca340f736e0e",
description="Generate AI narration and add to video",
categories={BlockCategory.MULTIMEDIA, BlockCategory.AI},
input_schema=self.Input,
output_schema=self.Output,
test_input={
"video_in": "/tmp/test.mp4",
"script": "Hello world",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("video_out", str), ("audio_file", str)],
test_mock={
"_generate_narration_audio": lambda *args: b"mock audio content",
"_add_narration_to_video": lambda *args: None,
"_store_input_video": lambda *args, **kwargs: "test.mp4",
"_store_output_video": lambda *args, **kwargs: "narrated_test.mp4",
},
)
async def _store_input_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store input video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_local_processing",
)
async def _store_output_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store output video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_block_output",
)
def _generate_narration_audio(
self, api_key: str, script: str, voice_id: str, model_id: str
) -> bytes:
"""Generate narration audio via ElevenLabs API."""
client = ElevenLabs(api_key=api_key)
audio_generator = client.text_to_speech.convert(
voice_id=voice_id,
text=script,
model_id=model_id,
)
# The SDK returns a generator, collect all chunks
return b"".join(audio_generator)
def _add_narration_to_video(
self,
video_abspath: str,
audio_abspath: str,
output_abspath: str,
mix_mode: str,
narration_volume: float,
original_volume: float,
) -> None:
"""Add narration audio to video. Extracted for testability."""
video = None
final = None
narration_original = None
narration_scaled = None
original = None
try:
strip_chapters_inplace(video_abspath)
video = VideoFileClip(video_abspath)
narration_original = AudioFileClip(audio_abspath)
narration_scaled = narration_original.with_volume_scaled(narration_volume)
narration = narration_scaled
if mix_mode == "replace":
final_audio = narration
elif mix_mode == "mix":
if video.audio:
original = video.audio.with_volume_scaled(original_volume)
final_audio = CompositeAudioClip([original, narration])
else:
final_audio = narration
else: # ducking - apply stronger attenuation
if video.audio:
# Ducking uses a much lower volume for original audio
ducking_volume = original_volume * 0.3
original = video.audio.with_volume_scaled(ducking_volume)
final_audio = CompositeAudioClip([original, narration])
else:
final_audio = narration
final = video.with_audio(final_audio)
video_codec, audio_codec = get_video_codecs(output_abspath)
final.write_videofile(
output_abspath, codec=video_codec, audio_codec=audio_codec
)
finally:
if original:
original.close()
if narration_scaled:
narration_scaled.close()
if narration_original:
narration_original.close()
if final:
final.close()
if video:
video.close()
async def run(
self,
input_data: Input,
*,
credentials: ElevenLabsCredentials,
execution_context: ExecutionContext,
node_exec_id: str,
**kwargs,
) -> BlockOutput:
try:
assert execution_context.graph_exec_id is not None
# Store the input video locally
local_video_path = await self._store_input_video(
execution_context, input_data.video_in
)
video_abspath = get_exec_file_path(
execution_context.graph_exec_id, local_video_path
)
# Generate narration audio via ElevenLabs
audio_content = self._generate_narration_audio(
credentials.api_key.get_secret_value(),
input_data.script,
input_data.voice_id,
input_data.model_id,
)
# Save audio to exec file path
audio_filename = MediaFileType(f"{node_exec_id}_narration.mp3")
audio_abspath = get_exec_file_path(
execution_context.graph_exec_id, audio_filename
)
os.makedirs(os.path.dirname(audio_abspath), exist_ok=True)
with open(audio_abspath, "wb") as f:
f.write(audio_content)
# Add narration to video
source = extract_source_name(local_video_path)
output_filename = MediaFileType(f"{node_exec_id}_narrated_{source}.mp4")
output_abspath = get_exec_file_path(
execution_context.graph_exec_id, output_filename
)
self._add_narration_to_video(
video_abspath,
audio_abspath,
output_abspath,
input_data.mix_mode,
input_data.narration_volume,
input_data.original_volume,
)
# Return as workspace path or data URI based on context
video_out = await self._store_output_video(
execution_context, output_filename
)
audio_out = await self._store_output_video(
execution_context, audio_filename
)
yield "video_out", video_out
yield "audio_file", audio_out
except Exception as e:
raise BlockExecutionError(
message=f"Failed to add narration: {e}",
block_name=self.name,
block_id=str(self.id),
) from e
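The three mix modes only differ in how much of the original track survives under the narration; a quick illustration of the volume arithmetic used above (values are hypothetical defaults):

```python
# Volume applied to the original audio track per mix mode.
narration_volume = 1.0   # narration is always scaled by this
original_volume = 0.3    # user-facing setting used by "mix"

for mode in ("replace", "mix", "ducking"):
    if mode == "replace":
        kept = 0.0                      # original audio dropped entirely
    elif mode == "mix":
        kept = original_volume          # 0.30
    else:  # ducking
        kept = original_volume * 0.3    # 0.09, much quieter under the narration
    print(f"{mode:8s} original={kept:.2f} narration={narration_volume:.2f}")
```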


@@ -0,0 +1,231 @@
"""VideoTextOverlayBlock - Add text overlay to video."""
from typing import Literal
from moviepy import CompositeVideoClip, TextClip
from moviepy.video.io.VideoFileClip import VideoFileClip
from backend.blocks.video._utils import (
extract_source_name,
get_video_codecs,
strip_chapters_inplace,
)
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.exceptions import BlockExecutionError
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
class VideoTextOverlayBlock(Block):
"""Add text overlay/caption to video."""
class Input(BlockSchemaInput):
video_in: MediaFileType = SchemaField(
description="Input video (URL, data URI, or local path)"
)
text: str = SchemaField(description="Text to overlay on video")
position: Literal[
"top",
"center",
"bottom",
"top-left",
"top-right",
"bottom-left",
"bottom-right",
] = SchemaField(description="Position of text on screen", default="bottom")
start_time: float | None = SchemaField(
description="When to show text (seconds). None = entire video",
default=None,
advanced=True,
)
end_time: float | None = SchemaField(
description="When to hide text (seconds). None = until end",
default=None,
advanced=True,
)
font_size: int = SchemaField(
description="Font size", default=48, ge=12, le=200, advanced=True
)
font_color: str = SchemaField(
description="Font color (hex or name)", default="white", advanced=True
)
bg_color: str | None = SchemaField(
description="Background color behind text (None for transparent)",
default=None,
advanced=True,
)
class Output(BlockSchemaOutput):
video_out: MediaFileType = SchemaField(
description="Video with text overlay (path or data URI)"
)
def __init__(self):
super().__init__(
id="8ef14de6-cc90-430a-8cfa-3a003be92454",
description="Add text overlay/caption to video",
categories={BlockCategory.MULTIMEDIA},
input_schema=self.Input,
output_schema=self.Output,
disabled=True, # Disabled until we can lock down the imagemagick security policy
test_input={"video_in": "/tmp/test.mp4", "text": "Hello World"},
test_output=[("video_out", str)],
test_mock={
"_add_text_overlay": lambda *args: None,
"_store_input_video": lambda *args, **kwargs: "test.mp4",
"_store_output_video": lambda *args, **kwargs: "overlay_test.mp4",
},
)
async def _store_input_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store input video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_local_processing",
)
async def _store_output_video(
self, execution_context: ExecutionContext, file: MediaFileType
) -> MediaFileType:
"""Store output video. Extracted for testability."""
return await store_media_file(
file=file,
execution_context=execution_context,
return_format="for_block_output",
)
def _add_text_overlay(
self,
video_abspath: str,
output_abspath: str,
text: str,
position: str,
start_time: float | None,
end_time: float | None,
font_size: int,
font_color: str,
bg_color: str | None,
) -> None:
"""Add text overlay to video. Extracted for testability."""
video = None
final = None
txt_clip = None
try:
strip_chapters_inplace(video_abspath)
video = VideoFileClip(video_abspath)
txt_clip = TextClip(
text=text,
font_size=font_size,
color=font_color,
bg_color=bg_color,
)
# Position mapping
pos_map = {
"top": ("center", "top"),
"center": ("center", "center"),
"bottom": ("center", "bottom"),
"top-left": ("left", "top"),
"top-right": ("right", "top"),
"bottom-left": ("left", "bottom"),
"bottom-right": ("right", "bottom"),
}
txt_clip = txt_clip.with_position(pos_map[position])
# Set timing
start = start_time or 0
end = end_time or video.duration
duration = max(0, end - start)
txt_clip = txt_clip.with_start(start).with_end(end).with_duration(duration)
final = CompositeVideoClip([video, txt_clip])
video_codec, audio_codec = get_video_codecs(output_abspath)
final.write_videofile(
output_abspath, codec=video_codec, audio_codec=audio_codec
)
finally:
if txt_clip:
txt_clip.close()
if final:
final.close()
if video:
video.close()
async def run(
self,
input_data: Input,
*,
execution_context: ExecutionContext,
node_exec_id: str,
**kwargs,
) -> BlockOutput:
# Validate time range if both are provided
if (
input_data.start_time is not None
and input_data.end_time is not None
and input_data.end_time <= input_data.start_time
):
raise BlockExecutionError(
message=f"end_time ({input_data.end_time}) must be greater than start_time ({input_data.start_time})",
block_name=self.name,
block_id=str(self.id),
)
try:
assert execution_context.graph_exec_id is not None
# Store the input video locally
local_video_path = await self._store_input_video(
execution_context, input_data.video_in
)
video_abspath = get_exec_file_path(
execution_context.graph_exec_id, local_video_path
)
# Build output path
source = extract_source_name(local_video_path)
output_filename = MediaFileType(f"{node_exec_id}_overlay_{source}.mp4")
output_abspath = get_exec_file_path(
execution_context.graph_exec_id, output_filename
)
self._add_text_overlay(
video_abspath,
output_abspath,
input_data.text,
input_data.position,
input_data.start_time,
input_data.end_time,
input_data.font_size,
input_data.font_color,
input_data.bg_color,
)
# Return as workspace path or data URI based on context
video_out = await self._store_output_video(
execution_context, output_filename
)
yield "video_out", video_out
except BlockExecutionError:
raise
except Exception as e:
raise BlockExecutionError(
message=f"Failed to add text overlay: {e}",
block_name=self.name,
block_id=str(self.id),
) from e
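A minimal sketch of the TextClip/CompositeVideoClip composition that `_add_text_overlay` performs for a bottom-centred caption shown for the whole video; the paths and text are placeholders, and note that the block ships disabled until the ImageMagick policy is restricted:

```python
from moviepy import CompositeVideoClip, TextClip
from moviepy.video.io.VideoFileClip import VideoFileClip

video = txt = final = None
try:
    video = VideoFileClip("/tmp/input.mp4")   # hypothetical input
    txt = TextClip(text="Hello World", font_size=48, color="white")
    txt = txt.with_position(("center", "bottom"))          # pos_map["bottom"]
    txt = txt.with_start(0).with_duration(video.duration)  # show for the entire video
    final = CompositeVideoClip([video, txt])
    final.write_videofile("/tmp/input_captioned.mp4", codec="libx264", audio_codec="aac")
finally:
    for c in (txt, final, video):
        if c:
            c.close()
```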


@@ -36,12 +36,14 @@ from backend.blocks.replicate.replicate_block import ReplicateModelBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.blocks.talking_head import CreateTalkingAvatarVideoBlock
from backend.blocks.text_to_speech_block import UnrealTextToSpeechBlock
from backend.blocks.video.narration import VideoNarrationBlock
from backend.data.block import Block, BlockCost, BlockCostType
from backend.integrations.credentials_store import (
aiml_api_credentials,
anthropic_credentials,
apollo_credentials,
did_credentials,
elevenlabs_credentials,
enrichlayer_credentials,
groq_credentials,
ideogram_credentials,
@@ -78,6 +80,7 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.CLAUDE_4_1_OPUS: 21,
LlmModel.CLAUDE_4_OPUS: 21,
LlmModel.CLAUDE_4_SONNET: 5,
LlmModel.CLAUDE_4_6_OPUS: 14,
LlmModel.CLAUDE_4_5_HAIKU: 4,
LlmModel.CLAUDE_4_5_OPUS: 14,
LlmModel.CLAUDE_4_5_SONNET: 9,
@@ -639,4 +642,16 @@ BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = {
},
),
],
VideoNarrationBlock: [
BlockCost(
cost_amount=5, # ElevenLabs TTS cost
cost_filter={
"credentials": {
"id": elevenlabs_credentials.id,
"provider": elevenlabs_credentials.provider,
"type": elevenlabs_credentials.type,
}
},
)
],
}


@@ -224,6 +224,14 @@ openweathermap_credentials = APIKeyCredentials(
expires_at=None,
)
elevenlabs_credentials = APIKeyCredentials(
id="f4a8b6c2-3d1e-4f5a-9b8c-7d6e5f4a3b2c",
provider="elevenlabs",
api_key=SecretStr(settings.secrets.elevenlabs_api_key),
title="Use Credits for ElevenLabs",
expires_at=None,
)
DEFAULT_CREDENTIALS = [
ollama_credentials,
revid_credentials,
@@ -252,6 +260,7 @@ DEFAULT_CREDENTIALS = [
v0_credentials,
webshare_proxy_credentials,
openweathermap_credentials,
elevenlabs_credentials,
]
SYSTEM_CREDENTIAL_IDS = {cred.id for cred in DEFAULT_CREDENTIALS}
@@ -366,6 +375,8 @@ class IntegrationCredentialsStore:
all_credentials.append(webshare_proxy_credentials)
if settings.secrets.openweathermap_api_key:
all_credentials.append(openweathermap_credentials)
if settings.secrets.elevenlabs_api_key:
all_credentials.append(elevenlabs_credentials)
return all_credentials
async def get_creds_by_id(


@@ -18,6 +18,7 @@ class ProviderName(str, Enum):
DISCORD = "discord"
D_ID = "d_id"
E2B = "e2b"
ELEVENLABS = "elevenlabs"
FAL = "fal"
GITHUB = "github"
GOOGLE = "google"


@@ -8,6 +8,8 @@ from pathlib import Path
from typing import TYPE_CHECKING, Literal
from urllib.parse import urlparse
from pydantic import BaseModel
from backend.util.cloud_storage import get_cloud_storage_handler
from backend.util.request import Requests
from backend.util.settings import Config
@@ -17,6 +19,35 @@ from backend.util.virus_scanner import scan_content_safe
if TYPE_CHECKING:
from backend.data.execution import ExecutionContext
class WorkspaceUri(BaseModel):
"""Parsed workspace:// URI."""
file_ref: str # File ID or path (e.g. "abc123" or "/path/to/file.txt")
mime_type: str | None = None # MIME type from fragment (e.g. "video/mp4")
is_path: bool = False # True if file_ref is a path (starts with "/")
def parse_workspace_uri(uri: str) -> WorkspaceUri:
"""Parse a workspace:// URI into its components.
Examples:
"workspace://abc123" → WorkspaceUri(file_ref="abc123", mime_type=None, is_path=False)
"workspace://abc123#video/mp4" → WorkspaceUri(file_ref="abc123", mime_type="video/mp4", is_path=False)
"workspace:///path/to/file.txt" → WorkspaceUri(file_ref="/path/to/file.txt", mime_type=None, is_path=True)
"""
raw = uri.removeprefix("workspace://")
mime_type: str | None = None
if "#" in raw:
raw, fragment = raw.split("#", 1)
mime_type = fragment or None
return WorkspaceUri(
file_ref=raw,
mime_type=mime_type,
is_path=raw.startswith("/"),
)
# Return format options for store_media_file
# - "for_local_processing": Returns local file path - use with ffmpeg, MoviePy, PIL, etc.
# - "for_external_api": Returns data URI (base64) - use when sending content to external APIs
@@ -183,22 +214,20 @@ async def store_media_file(
"This file type is only available in CoPilot sessions."
)
# Parse workspace reference
# workspace://abc123 - by file ID
# workspace:///path/to/file.txt - by virtual path
file_ref = file[12:] # Remove "workspace://"
# Parse workspace reference (strips #mimeType fragment from file ID)
ws = parse_workspace_uri(file)
if file_ref.startswith("/"):
# Path reference
workspace_content = await workspace_manager.read_file(file_ref)
file_info = await workspace_manager.get_file_info_by_path(file_ref)
if ws.is_path:
# Path reference: workspace:///path/to/file.txt
workspace_content = await workspace_manager.read_file(ws.file_ref)
file_info = await workspace_manager.get_file_info_by_path(ws.file_ref)
filename = sanitize_filename(
file_info.name if file_info else f"{uuid.uuid4()}.bin"
)
else:
# ID reference
workspace_content = await workspace_manager.read_file_by_id(file_ref)
file_info = await workspace_manager.get_file_info(file_ref)
# ID reference: workspace://abc123 or workspace://abc123#video/mp4
workspace_content = await workspace_manager.read_file_by_id(ws.file_ref)
file_info = await workspace_manager.get_file_info(ws.file_ref)
filename = sanitize_filename(
file_info.name if file_info else f"{uuid.uuid4()}.bin"
)
@@ -334,7 +363,21 @@ async def store_media_file(
# Don't re-save if input was already from workspace
if is_from_workspace:
# Return original workspace reference
# Return original workspace reference, ensuring MIME type fragment
ws = parse_workspace_uri(file)
if not ws.mime_type:
# Add MIME type fragment if missing (older refs without it)
try:
if ws.is_path:
info = await workspace_manager.get_file_info_by_path(
ws.file_ref
)
else:
info = await workspace_manager.get_file_info(ws.file_ref)
if info:
return MediaFileType(f"{file}#{info.mimeType}")
except Exception:
pass
return MediaFileType(file)
# Save new content to workspace
@@ -346,7 +389,7 @@ async def store_media_file(
filename=filename,
overwrite=True,
)
return MediaFileType(f"workspace://{file_record.id}")
return MediaFileType(f"workspace://{file_record.id}#{file_record.mimeType}")
else:
raise ValueError(f"Invalid return_format: {return_format}")


@@ -656,6 +656,7 @@ class Secrets(UpdateTrackingModel["Secrets"], BaseSettings):
e2b_api_key: str = Field(default="", description="E2B API key")
nvidia_api_key: str = Field(default="", description="Nvidia API key")
mem0_api_key: str = Field(default="", description="Mem0 API key")
elevenlabs_api_key: str = Field(default="", description="ElevenLabs API key")
linear_client_id: str = Field(default="", description="Linear client ID")
linear_client_secret: str = Field(default="", description="Linear client secret")


@@ -1169,6 +1169,29 @@ attrs = ">=21.3.0"
e2b = ">=1.5.4,<2.0.0"
httpx = ">=0.20.0,<1.0.0"
[[package]]
name = "elevenlabs"
version = "1.59.0"
description = ""
optional = false
python-versions = "<4.0,>=3.8"
groups = ["main"]
files = [
{file = "elevenlabs-1.59.0-py3-none-any.whl", hash = "sha256:468145db81a0bc867708b4a8619699f75583e9481b395ec1339d0b443da771ed"},
{file = "elevenlabs-1.59.0.tar.gz", hash = "sha256:16e735bd594e86d415dd445d249c8cc28b09996cfd627fbc10102c0a84698859"},
]
[package.dependencies]
httpx = ">=0.21.2"
pydantic = ">=1.9.2"
pydantic-core = ">=2.18.2,<3.0.0"
requests = ">=2.20"
typing_extensions = ">=4.0.0"
websockets = ">=11.0"
[package.extras]
pyaudio = ["pyaudio (>=0.2.14)"]
[[package]]
name = "email-validator"
version = "2.2.0"
@@ -7361,6 +7384,28 @@ files = [
defusedxml = ">=0.7.1,<0.8.0"
requests = "*"
[[package]]
name = "yt-dlp"
version = "2025.12.8"
description = "A feature-rich command-line audio/video downloader"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "yt_dlp-2025.12.8-py3-none-any.whl", hash = "sha256:36e2584342e409cfbfa0b5e61448a1c5189e345cf4564294456ee509e7d3e065"},
{file = "yt_dlp-2025.12.8.tar.gz", hash = "sha256:b773c81bb6b71cb2c111cfb859f453c7a71cf2ef44eff234ff155877184c3e4f"},
]
[package.extras]
build = ["build", "hatchling (>=1.27.0)", "pip", "setuptools (>=71.0.2)", "wheel"]
curl-cffi = ["curl-cffi (>=0.5.10,<0.6.dev0 || >=0.10.dev0,<0.14) ; implementation_name == \"cpython\""]
default = ["brotli ; implementation_name == \"cpython\"", "brotlicffi ; implementation_name != \"cpython\"", "certifi", "mutagen", "pycryptodomex", "requests (>=2.32.2,<3)", "urllib3 (>=2.0.2,<3)", "websockets (>=13.0)", "yt-dlp-ejs (==0.3.2)"]
dev = ["autopep8 (>=2.0,<3.0)", "pre-commit", "pytest (>=8.1,<9.0)", "pytest-rerunfailures (>=14.0,<15.0)", "ruff (>=0.14.0,<0.15.0)"]
pyinstaller = ["pyinstaller (>=6.17.0)"]
secretstorage = ["cffi", "secretstorage"]
static-analysis = ["autopep8 (>=2.0,<3.0)", "ruff (>=0.14.0,<0.15.0)"]
test = ["pytest (>=8.1,<9.0)", "pytest-rerunfailures (>=14.0,<15.0)"]
[[package]]
name = "zerobouncesdk"
version = "1.1.2"
@@ -7512,4 +7557,4 @@ cffi = ["cffi (>=1.11)"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<3.14"
content-hash = "ee5742dc1a9df50dfc06d4b26a1682cbb2b25cab6b79ce5625ec272f93e4f4bf"
content-hash = "8239323f9ae6713224dffd1fe8ba8b449fe88b6c3c7a90940294a74f43a0387a"


@@ -20,6 +20,7 @@ click = "^8.2.0"
cryptography = "^45.0"
discord-py = "^2.5.2"
e2b-code-interpreter = "^1.5.2"
elevenlabs = "^1.50.0"
fastapi = "^0.116.1"
feedparser = "^6.0.11"
flake8 = "^7.3.0"
@@ -71,6 +72,7 @@ tweepy = "^4.16.0"
uvicorn = { extras = ["standard"], version = "^0.35.0" }
websockets = "^15.0"
youtube-transcript-api = "^1.2.1"
yt-dlp = "2025.12.08"
zerobouncesdk = "^1.1.2"
# NOTE: please insert new dependencies in their alphabetical location
pytest-snapshot = "^0.9.0"


@@ -30,7 +30,6 @@
"defaults"
],
"dependencies": {
"@ai-sdk/react": "3.0.61",
"@faker-js/faker": "10.0.0",
"@hookform/resolvers": "5.2.2",
"@next/third-parties": "15.4.6",
@@ -61,10 +60,6 @@
"@rjsf/utils": "6.1.2",
"@rjsf/validator-ajv8": "6.1.2",
"@sentry/nextjs": "10.27.0",
"@streamdown/cjk": "1.0.1",
"@streamdown/code": "1.0.1",
"@streamdown/math": "1.0.1",
"@streamdown/mermaid": "1.0.1",
"@supabase/ssr": "0.7.0",
"@supabase/supabase-js": "2.78.0",
"@tanstack/react-query": "5.90.6",
@@ -73,7 +68,6 @@
"@vercel/analytics": "1.5.0",
"@vercel/speed-insights": "1.2.0",
"@xyflow/react": "12.9.2",
"ai": "6.0.59",
"boring-avatars": "1.11.2",
"class-variance-authority": "0.7.1",
"clsx": "2.1.1",
@@ -118,11 +112,9 @@
"remark-math": "6.0.0",
"shepherd.js": "14.5.1",
"sonner": "2.0.7",
"streamdown": "2.1.0",
"tailwind-merge": "2.6.0",
"tailwind-scrollbar": "3.1.0",
"tailwindcss-animate": "1.0.7",
"use-stick-to-bottom": "1.1.2",
"uuid": "11.1.0",
"vaul": "1.1.2",
"zod": "3.25.76",

File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
import { beautifyString } from "@/lib/utils";
import { Clipboard, Maximize2 } from "lucide-react";
import React, { useState } from "react";
import React, { useMemo, useState } from "react";
import { Button } from "../../../../../components/__legacy__/ui/button";
import { ContentRenderer } from "../../../../../components/__legacy__/ui/render";
import {
@@ -11,6 +11,12 @@ import {
TableHeader,
TableRow,
} from "../../../../../components/__legacy__/ui/table";
import type { OutputMetadata } from "@/components/contextual/OutputRenderers";
import {
globalRegistry,
OutputItem,
} from "@/components/contextual/OutputRenderers";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { useToast } from "../../../../../components/molecules/Toast/use-toast";
import ExpandableOutputDialog from "./ExpandableOutputDialog";
@@ -26,6 +32,9 @@ export default function DataTable({
data,
}: DataTableProps) {
const { toast } = useToast();
const enableEnhancedOutputHandling = useGetFlag(
Flag.ENABLE_ENHANCED_OUTPUT_HANDLING,
);
const [expandedDialog, setExpandedDialog] = useState<{
isOpen: boolean;
execId: string;
@@ -33,6 +42,15 @@ export default function DataTable({
data: any[];
} | null>(null);
// Prepare renderers for each item when enhanced mode is enabled
const getItemRenderer = useMemo(() => {
if (!enableEnhancedOutputHandling) return null;
return (item: unknown) => {
const metadata: OutputMetadata = {};
return globalRegistry.getRenderer(item, metadata);
};
}, [enableEnhancedOutputHandling]);
const copyData = (pin: string, data: string) => {
navigator.clipboard.writeText(data).then(() => {
toast({
@@ -102,15 +120,31 @@ export default function DataTable({
<Clipboard size={18} />
</Button>
</div>
{value.map((item, index) => (
<React.Fragment key={index}>
<ContentRenderer
value={item}
truncateLongData={truncateLongData}
/>
{index < value.length - 1 && ", "}
</React.Fragment>
))}
{value.map((item, index) => {
const renderer = getItemRenderer?.(item);
if (enableEnhancedOutputHandling && renderer) {
const metadata: OutputMetadata = {};
return (
<React.Fragment key={index}>
<OutputItem
value={item}
metadata={metadata}
renderer={renderer}
/>
{index < value.length - 1 && ", "}
</React.Fragment>
);
}
return (
<React.Fragment key={index}>
<ContentRenderer
value={item}
truncateLongData={truncateLongData}
/>
{index < value.length - 1 && ", "}
</React.Fragment>
);
})}
</div>
</TableCell>
</TableRow>


@@ -1,8 +1,14 @@
import React, { useContext, useState } from "react";
import React, { useContext, useMemo, useState } from "react";
import { Button } from "@/components/__legacy__/ui/button";
import { Maximize2 } from "lucide-react";
import * as Separator from "@radix-ui/react-separator";
import { ContentRenderer } from "@/components/__legacy__/ui/render";
import type { OutputMetadata } from "@/components/contextual/OutputRenderers";
import {
globalRegistry,
OutputItem,
} from "@/components/contextual/OutputRenderers";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import { beautifyString } from "@/lib/utils";
@@ -21,6 +27,9 @@ export default function NodeOutputs({
data,
}: NodeOutputsProps) {
const builderContext = useContext(BuilderContext);
const enableEnhancedOutputHandling = useGetFlag(
Flag.ENABLE_ENHANCED_OUTPUT_HANDLING,
);
const [expandedDialog, setExpandedDialog] = useState<{
isOpen: boolean;
@@ -37,6 +46,15 @@ export default function NodeOutputs({
const { getNodeTitle } = builderContext;
// Prepare renderers for each item when enhanced mode is enabled
const getItemRenderer = useMemo(() => {
if (!enableEnhancedOutputHandling) return null;
return (item: unknown) => {
const metadata: OutputMetadata = {};
return globalRegistry.getRenderer(item, metadata);
};
}, [enableEnhancedOutputHandling]);
const getBeautifiedPinName = (pin: string) => {
if (!pin.startsWith("tools_^_")) {
return beautifyString(pin);
@@ -87,15 +105,31 @@ export default function NodeOutputs({
<div className="mt-2">
<strong className="mr-2">Data:</strong>
<div className="mt-1">
{dataArray.slice(0, 10).map((item, index) => (
<React.Fragment key={index}>
<ContentRenderer
value={item}
truncateLongData={truncateLongData}
/>
{index < Math.min(dataArray.length, 10) - 1 && ", "}
</React.Fragment>
))}
{dataArray.slice(0, 10).map((item, index) => {
const renderer = getItemRenderer?.(item);
if (enableEnhancedOutputHandling && renderer) {
const metadata: OutputMetadata = {};
return (
<React.Fragment key={index}>
<OutputItem
value={item}
metadata={metadata}
renderer={renderer}
/>
{index < Math.min(dataArray.length, 10) - 1 && ", "}
</React.Fragment>
);
}
return (
<React.Fragment key={index}>
<ContentRenderer
value={item}
truncateLongData={truncateLongData}
/>
{index < Math.min(dataArray.length, 10) - 1 && ", "}
</React.Fragment>
);
})}
{dataArray.length > 10 && (
<span style={{ color: "#888" }}>
<br />


@@ -1,68 +0,0 @@
"use client";
import { ChatInput } from "@/components/contextual/Chat/components/ChatInput/ChatInput";
import { UIDataTypes, UIMessage, UITools } from "ai";
import { LayoutGroup, motion } from "framer-motion";
import { ChatMessagesContainer } from "../ChatMessagesContainer/ChatMessagesContainer";
import { CopilotChatActionsProvider } from "../CopilotChatActionsProvider/CopilotChatActionsProvider";
import { EmptySession } from "../EmptySession/EmptySession";
export interface ChatContainerProps {
messages: UIMessage<unknown, UIDataTypes, UITools>[];
status: string;
error: Error | undefined;
sessionId: string | null;
isCreatingSession: boolean;
onCreateSession: () => void | Promise<string>;
onSend: (message: string) => void | Promise<void>;
}
export const ChatContainer = ({
messages,
status,
error,
sessionId,
isCreatingSession,
onCreateSession,
onSend,
}: ChatContainerProps) => {
const inputLayoutId = "copilot-2-chat-input";
return (
<CopilotChatActionsProvider onSend={onSend}>
<LayoutGroup id="copilot-2-chat-layout">
<div className="flex h-full min-h-0 w-full flex-col bg-[#f8f8f9] px-2 lg:px-0">
{sessionId ? (
<div className="mx-auto flex h-full min-h-0 w-full max-w-3xl flex-col">
<ChatMessagesContainer
messages={messages}
status={status}
error={error}
/>
<motion.div
layoutId={inputLayoutId}
transition={{ type: "spring", bounce: 0.2, duration: 0.65 }}
className="relative px-3 pb-2 pt-2"
>
<div className="pointer-events-none absolute left-0 right-0 top-[-18px] z-10 h-6 bg-gradient-to-b from-transparent to-[#f8f8f9]" />
<ChatInput
inputId="chat-input-session"
onSend={onSend}
disabled={status === "streaming"}
isStreaming={status === "streaming"}
onStop={() => {}}
placeholder="What else can I help with?"
/>
</motion.div>
</div>
) : (
<EmptySession
inputLayoutId={inputLayoutId}
isCreatingSession={isCreatingSession}
onCreateSession={onCreateSession}
onSend={onSend}
/>
)}
</div>
</LayoutGroup>
</CopilotChatActionsProvider>
);
};


@@ -1,136 +0,0 @@
import {
Conversation,
ConversationContent,
ConversationScrollButton,
} from "@/components/ai-elements/conversation";
import {
Message,
MessageContent,
MessageResponse,
} from "@/components/ai-elements/message";
import { UIDataTypes, UIMessage, UITools, ToolUIPart } from "ai";
import { FindBlocksTool } from "../../tools/FindBlocks/FindBlocks";
import { FindAgentsTool } from "../../tools/FindAgents/FindAgents";
import { SearchDocsTool } from "../../tools/SearchDocs/SearchDocs";
import { RunBlockTool } from "../../tools/RunBlock/RunBlock";
import { RunAgentTool } from "../../tools/RunAgent/RunAgent";
import { ViewAgentOutputTool } from "../../tools/ViewAgentOutput/ViewAgentOutput";
import { CreateAgentTool } from "../../tools/CreateAgent/CreateAgent";
import { EditAgentTool } from "../../tools/EditAgent/EditAgent";
interface ChatMessagesContainerProps {
messages: UIMessage<unknown, UIDataTypes, UITools>[];
status: string;
error: Error | undefined;
}
export const ChatMessagesContainer = ({
messages,
status,
error,
}: ChatMessagesContainerProps) => {
return (
<Conversation className="min-h-0 flex-1">
<ConversationContent className="gap-6 px-3 py-6">
{messages.map((message) => (
<Message from={message.role} key={message.id}>
<MessageContent
className={
"text-[1rem] leading-relaxed " +
"group-[.is-user]:rounded-xl group-[.is-user]:bg-purple-100 group-[.is-user]:px-3 group-[.is-user]:py-2.5 group-[.is-user]:text-slate-900 group-[.is-user]:[border-bottom-right-radius:0] " +
"group-[.is-assistant]:bg-transparent group-[.is-assistant]:text-slate-900"
}
>
{message.parts.map((part, i) => {
switch (part.type) {
case "text":
return (
<MessageResponse key={`${message.id}-${i}`}>
{part.text}
</MessageResponse>
);
case "tool-find_block":
return (
<FindBlocksTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-find_agent":
case "tool-find_library_agent":
return (
<FindAgentsTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-search_docs":
case "tool-get_doc_page":
return (
<SearchDocsTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-run_block":
return (
<RunBlockTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-run_agent":
case "tool-schedule_agent":
return (
<RunAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-create_agent":
return (
<CreateAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-edit_agent":
return (
<EditAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-view_agent_output":
return (
<ViewAgentOutputTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
default:
return null;
}
})}
</MessageContent>
</Message>
))}
{status === "submitted" && (
<Message from="assistant">
<MessageContent className="text-[1rem] leading-relaxed">
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
Thinking...
</span>
</MessageContent>
</Message>
)}
{error && (
<div className="rounded-lg bg-red-50 p-3 text-red-600">
Error: {error.message}
</div>
)}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
);
};


@@ -1,191 +0,0 @@
"use client";
import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import {
Sidebar,
SidebarContent,
SidebarFooter,
SidebarHeader,
SidebarTrigger,
useSidebar,
} from "@/components/ui/sidebar";
import { cn } from "@/lib/utils";
import {
PlusCircleIcon,
PlusIcon,
SpinnerGapIcon,
} from "@phosphor-icons/react";
import { motion } from "framer-motion";
import { parseAsString, useQueryState } from "nuqs";
export function ChatSidebar() {
const { state } = useSidebar();
const isCollapsed = state === "collapsed";
const [sessionId, setSessionId] = useQueryState("sessionId", parseAsString);
const { data: sessionsResponse, isLoading: isLoadingSessions } =
useGetV2ListSessions({ limit: 50 });
const sessions =
sessionsResponse?.status === 200 ? sessionsResponse.data.sessions : [];
function handleNewChat() {
setSessionId(null);
}
function handleSelectSession(id: string) {
setSessionId(id);
}
function formatDate(dateString: string) {
const date = new Date(dateString);
const now = new Date();
const diffMs = now.getTime() - date.getTime();
const diffDays = Math.floor(diffMs / (1000 * 60 * 60 * 24));
if (diffDays === 0) return "Today";
if (diffDays === 1) return "Yesterday";
if (diffDays < 7) return `${diffDays} days ago`;
const day = date.getDate();
const ordinal =
day % 10 === 1 && day !== 11
? "st"
: day % 10 === 2 && day !== 12
? "nd"
: day % 10 === 3 && day !== 13
? "rd"
: "th";
const month = date.toLocaleDateString("en-US", { month: "short" });
const year = date.getFullYear();
return `${day}${ordinal} ${month} ${year}`;
}
return (
<Sidebar
variant="inset"
collapsible="icon"
className="!top-[50px] !h-[calc(100vh-50px)] border-r border-zinc-100 px-0"
>
{isCollapsed && (
<SidebarHeader
className={cn(
"flex",
isCollapsed
? "flex-row items-center justify-between gap-y-4 md:flex-col md:items-start md:justify-start"
: "flex-row items-center justify-between",
)}
>
<motion.div
key={isCollapsed ? "header-collapsed" : "header-expanded"}
className="flex flex-col items-center gap-3 pt-4"
initial={{ opacity: 0, filter: "blur(3px)" }}
animate={{ opacity: 1, filter: "blur(0px)" }}
transition={{ type: "spring", bounce: 0.2 }}
>
<div className="flex flex-col items-center gap-2">
<SidebarTrigger />
<Button
variant="ghost"
onClick={handleNewChat}
style={{ minWidth: "auto", width: "auto" }}
>
<PlusCircleIcon className="!size-5" />
<span className="sr-only">New Chat</span>
</Button>
</div>
</motion.div>
</SidebarHeader>
)}
<SidebarContent className="gap-4 overflow-y-auto px-4 py-4 [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden">
{!isCollapsed && (
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.2, delay: 0.1 }}
className="flex items-center justify-between px-3"
>
<Text variant="h3" size="body-medium">
Your chats
</Text>
<div className="relative left-6">
<SidebarTrigger />
</div>
</motion.div>
)}
{!isCollapsed && (
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.2, delay: 0.15 }}
className="mt-4 flex flex-col gap-1"
>
{isLoadingSessions ? (
<div className="flex items-center justify-center py-4">
<SpinnerGapIcon className="h-5 w-5 animate-spin text-neutral-400" />
</div>
) : sessions.length === 0 ? (
<p className="py-4 text-center text-sm text-neutral-500">
No conversations yet
</p>
) : (
sessions.map((session) => (
<button
key={session.id}
onClick={() => handleSelectSession(session.id)}
className={cn(
"w-full rounded-lg px-3 py-2.5 text-left transition-colors",
session.id === sessionId
? "bg-zinc-100"
: "hover:bg-zinc-50",
)}
>
<div className="flex min-w-0 max-w-full flex-col overflow-hidden">
<div className="min-w-0 max-w-full">
<Text
variant="body"
className={cn(
"truncate font-normal",
session.id === sessionId
? "text-zinc-600"
: "text-zinc-800",
)}
>
{session.title || `Untitled chat`}
</Text>
</div>
<Text variant="small" className="text-neutral-400">
{formatDate(session.updated_at)}
</Text>
</div>
</button>
))
)}
</motion.div>
)}
</SidebarContent>
{!isCollapsed && sessionId && (
<SidebarFooter className="shrink-0 bg-zinc-50 p-3 pb-1 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.2, delay: 0.2 }}
>
<Button
variant="primary"
size="small"
onClick={handleNewChat}
className="w-full"
leftIcon={<PlusIcon className="h-4 w-4" weight="bold" />}
>
New Chat
</Button>
</motion.div>
</SidebarFooter>
)}
</Sidebar>
);
}


@@ -1,16 +0,0 @@
"use client";
import { CopilotChatActionsContext } from "./useCopilotChatActions";
interface Props {
onSend: (message: string) => void | Promise<void>;
children: React.ReactNode;
}
export function CopilotChatActionsProvider({ onSend, children }: Props) {
return (
<CopilotChatActionsContext.Provider value={{ onSend }}>
{children}
</CopilotChatActionsContext.Provider>
);
}


@@ -1,23 +0,0 @@
"use client";
import { createContext, useContext } from "react";
interface CopilotChatActions {
onSend: (message: string) => void | Promise<void>;
}
const CopilotChatActionsContext = createContext<CopilotChatActions | null>(
null,
);
export function useCopilotChatActions(): CopilotChatActions {
const ctx = useContext(CopilotChatActionsContext);
if (!ctx) {
throw new Error(
"useCopilotChatActions must be used within CopilotChatActionsProvider",
);
}
return ctx;
}
export { CopilotChatActionsContext };


@@ -1,111 +0,0 @@
"use client";
import {
getGreetingName,
getInputPlaceholder,
getQuickActions,
} from "@/app/(platform)/copilot/helpers";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { ChatInput } from "@/components/contextual/Chat/components/ChatInput/ChatInput";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { SpinnerGapIcon } from "@phosphor-icons/react";
import { motion } from "framer-motion";
import { useEffect, useState } from "react";
interface Props {
inputLayoutId: string;
isCreatingSession: boolean;
onCreateSession: () => void | Promise<string>;
onSend: (message: string) => void | Promise<void>;
}
export function EmptySession({
inputLayoutId,
isCreatingSession,
onSend,
}: Props) {
const { user } = useSupabase();
const greetingName = getGreetingName(user);
const quickActions = getQuickActions();
const [loadingAction, setLoadingAction] = useState<string | null>(null);
const [inputPlaceholder, setInputPlaceholder] = useState(
getInputPlaceholder(),
);
useEffect(() => {
setInputPlaceholder(getInputPlaceholder(window.innerWidth));
}, [window.innerWidth]);
async function handleQuickActionClick(action: string) {
if (isCreatingSession || loadingAction) return;
setLoadingAction(action);
try {
await onSend(action);
} finally {
setLoadingAction(null);
}
}
return (
<div className="flex h-full flex-1 items-center justify-center overflow-y-auto bg-[#f8f8f9] px-0 py-5 md:px-6 md:py-10">
<motion.div
className="w-full max-w-3xl text-center"
initial={{ opacity: 0, y: 14, filter: "blur(6px)" }}
animate={{ opacity: 1, y: 0, filter: "blur(0px)" }}
transition={{ type: "spring", bounce: 0.2, duration: 0.7 }}
>
<div className="mx-auto max-w-3xl">
<Text variant="h3" className="mb-1 !text-[1.375rem] text-zinc-700">
Hey, <span className="text-violet-600">{greetingName}</span>
</Text>
<Text variant="h3" className="mb-8 !font-normal">
Tell me about your work I&apos;ll find what to automate.
</Text>
<div className="mb-6">
<motion.div
layoutId={inputLayoutId}
transition={{ type: "spring", bounce: 0.2, duration: 0.65 }}
className="w-full px-2"
>
<ChatInput
inputId="chat-input-empty"
onSend={onSend}
disabled={isCreatingSession}
placeholder={inputPlaceholder}
className="w-full"
/>
</motion.div>
</div>
</div>
<div className="flex flex-wrap items-center justify-center gap-3 overflow-x-auto [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden">
{quickActions.map((action) => (
<Button
key={action}
type="button"
variant="outline"
size="small"
onClick={() => void handleQuickActionClick(action)}
disabled={isCreatingSession || loadingAction !== null}
aria-busy={loadingAction === action}
leftIcon={
loadingAction === action ? (
<SpinnerGapIcon
className="h-4 w-4 animate-spin"
weight="bold"
/>
) : null
}
className="h-auto shrink-0 border-zinc-300 px-3 py-2 text-[.9rem] text-zinc-600"
>
{action}
</Button>
))}
</div>
</motion.div>
</div>
);
}

View File

@@ -1,11 +0,0 @@
export function getInputPlaceholder(width?: number) {
if (!width) return "What's your role and what eats up most of your day?";
if (width < 500) {
return "I'm a chef and I hate...";
}
if (width <= 1080) {
return "What's your role and what eats up most of your day?";
}
return "What's your role and what eats up most of your day? e.g. 'I'm a recruiter and I hate...'";
}
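
A quick sketch of the breakpoints handled above; the import path is assumed to match the helpers module imported by EmptySession:

import { getInputPlaceholder } from "@/app/(platform)/copilot/helpers";

getInputPlaceholder();     // no width yet (e.g. before measuring): default prompt
getInputPlaceholder(400);  // < 500px: short "I'm a chef and I hate..." prompt
getInputPlaceholder(800);  // <= 1080px: default prompt
getInputPlaceholder(1440); // > 1080px: long prompt with the recruiter example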

View File

@@ -1,140 +0,0 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { PlusIcon, SpinnerGapIcon, X } from "@phosphor-icons/react";
import { Drawer } from "vaul";
interface Props {
isOpen: boolean;
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
onSelectSession: (sessionId: string) => void;
onNewChat: () => void;
onClose: () => void;
onOpenChange: (open: boolean) => void;
}
function formatDate(dateString: string) {
const date = new Date(dateString);
const now = new Date();
const diffMs = now.getTime() - date.getTime();
const diffDays = Math.floor(diffMs / (1000 * 60 * 60 * 24));
if (diffDays === 0) return "Today";
if (diffDays === 1) return "Yesterday";
if (diffDays < 7) return `${diffDays} days ago`;
const day = date.getDate();
const ordinal =
day % 10 === 1 && day !== 11
? "st"
: day % 10 === 2 && day !== 12
? "nd"
: day % 10 === 3 && day !== 13
? "rd"
: "th";
const month = date.toLocaleDateString("en-US", { month: "short" });
const year = date.getFullYear();
return `${day}${ordinal} ${month} ${year}`;
}
export function MobileDrawer({
isOpen,
sessions,
currentSessionId,
isLoading,
onSelectSession,
onNewChat,
onClose,
onOpenChange,
}: Props) {
return (
<Drawer.Root open={isOpen} onOpenChange={onOpenChange} direction="left">
<Drawer.Portal>
<Drawer.Overlay className="fixed inset-0 z-[60] bg-black/10 backdrop-blur-sm" />
<Drawer.Content className="fixed left-0 top-0 z-[70] flex h-full w-80 flex-col border-r border-zinc-200 bg-zinc-50">
<div className="shrink-0 border-b border-zinc-200 px-4 py-2">
<div className="flex items-center justify-between">
<Drawer.Title className="text-lg font-semibold text-zinc-800">
Your chats
</Drawer.Title>
<Button
variant="icon"
size="icon"
aria-label="Close sessions"
onClick={onClose}
>
<X width="1rem" height="1rem" />
</Button>
</div>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col gap-1 overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
{isLoading ? (
<div className="flex items-center justify-center py-4">
<SpinnerGapIcon className="h-5 w-5 animate-spin text-neutral-400" />
</div>
) : sessions.length === 0 ? (
<p className="py-4 text-center text-sm text-neutral-500">
No conversations yet
</p>
) : (
sessions.map((session) => (
<button
key={session.id}
onClick={() => onSelectSession(session.id)}
className={cn(
"w-full rounded-lg px-3 py-2.5 text-left transition-colors",
session.id === currentSessionId
? "bg-zinc-100"
: "hover:bg-zinc-50",
)}
>
<div className="flex min-w-0 max-w-full flex-col overflow-hidden">
<div className="min-w-0 max-w-full">
<Text
variant="body"
className={cn(
"truncate font-normal",
session.id === currentSessionId
? "text-zinc-600"
: "text-zinc-800",
)}
>
{session.title || "Untitled chat"}
</Text>
</div>
<Text variant="small" className="text-neutral-400">
{formatDate(session.updated_at)}
</Text>
</div>
</button>
))
)}
</div>
{currentSessionId && (
<div className="shrink-0 bg-white p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<PlusIcon width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
)}
</Drawer.Content>
</Drawer.Portal>
</Drawer.Root>
);
}
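
A few worked examples for the private formatDate helper defined near the top of this file (dates are invented, and the exact strings depend on the viewer's local date):

// Assuming "now" is 15 Mar 2026 local time:
formatDate("2026-03-15T09:00:00"); // "Today"
formatDate("2026-03-14T09:00:00"); // "Yesterday"
formatDate("2026-03-12T09:00:00"); // "3 days ago"
formatDate("2026-03-02T09:00:00"); // "2nd Mar 2026"  (a week or older: ordinal day + short month + year)
formatDate("2026-02-11T09:00:00"); // "11th Feb 2026" (11/12/13 always take "th")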

View File

@@ -1,22 +0,0 @@
import { Button } from "@/components/atoms/Button/Button";
import { NAVBAR_HEIGHT_PX } from "@/lib/constants";
import { ListIcon } from "@phosphor-icons/react";
interface Props {
onOpenDrawer: () => void;
}
export function MobileHeader({ onOpenDrawer }: Props) {
return (
<Button
variant="icon"
size="icon"
aria-label="Open sessions"
onClick={onOpenDrawer}
className="fixed z-50 bg-white shadow-md"
style={{ left: "1rem", top: `${NAVBAR_HEIGHT_PX + 20}px` }}
>
<ListIcon width="1.25rem" height="1.25rem" />
</Button>
);
}

View File

@@ -1,54 +0,0 @@
import { cn } from "@/lib/utils";
import { AnimatePresence, motion } from "framer-motion";
interface Props {
text: string;
className?: string;
}
export function MorphingTextAnimation({ text, className }: Props) {
const letters = text.split("");
return (
<div className={cn(className)}>
<AnimatePresence mode="popLayout" initial={false}>
<motion.div key={text} className="whitespace-nowrap">
<motion.span className="inline-flex overflow-hidden">
{letters.map((char, index) => (
<motion.span
key={`${text}-${index}`}
initial={{
opacity: 0,
y: 8,
rotateX: "80deg",
filter: "blur(6px)",
}}
animate={{
opacity: 1,
y: 0,
rotateX: "0deg",
filter: "blur(0px)",
}}
exit={{
opacity: 0,
y: -8,
rotateX: "-80deg",
filter: "blur(6px)",
}}
style={{ willChange: "transform" }}
transition={{
delay: 0.015 * index,
type: "spring",
bounce: 0.5,
}}
className="inline-block"
>
{char === " " ? "\u00A0" : char}
</motion.span>
))}
</motion.span>
</motion.div>
</AnimatePresence>
</div>
);
}

View File

@@ -1,92 +0,0 @@
"use client";
import { cn } from "@/lib/utils";
import { CaretDownIcon } from "@phosphor-icons/react";
import { AnimatePresence, motion, useReducedMotion } from "framer-motion";
import { useId } from "react";
import { useToolAccordion } from "./useToolAccordion";
interface Props {
badgeText: string;
title: React.ReactNode;
description?: React.ReactNode;
children: React.ReactNode;
className?: string;
defaultExpanded?: boolean;
expanded?: boolean;
onExpandedChange?: (expanded: boolean) => void;
}
export function ToolAccordion({
badgeText,
title,
description,
children,
className,
defaultExpanded,
expanded,
onExpandedChange,
}: Props) {
const shouldReduceMotion = useReducedMotion();
const contentId = useId();
const { isExpanded, toggle } = useToolAccordion({
expanded,
defaultExpanded,
onExpandedChange,
});
return (
<div className={cn("mt-2 w-full rounded-lg border px-3 py-2", className)}>
<button
type="button"
aria-expanded={isExpanded}
aria-controls={contentId}
onClick={toggle}
className="flex w-full items-center justify-between gap-3 py-1 text-left"
>
<div className="flex min-w-0 items-center gap-2">
<span className="px-2 py-0.5 text-[11px] font-medium text-muted-foreground">
{badgeText}
</span>
<div className="min-w-0">
<p className="truncate text-sm font-medium text-foreground">
{title}
</p>
{description && (
<p className="truncate text-xs text-muted-foreground">
{description}
</p>
)}
</div>
</div>
<CaretDownIcon
className={cn(
"h-4 w-4 shrink-0 text-muted-foreground transition-transform",
isExpanded && "rotate-180",
)}
weight="bold"
/>
</button>
<AnimatePresence initial={false}>
{isExpanded && (
<motion.div
id={contentId}
initial={{ height: 0, opacity: 0, filter: "blur(10px)" }}
animate={{ height: "auto", opacity: 1, filter: "blur(0px)" }}
exit={{ height: 0, opacity: 0, filter: "blur(10px)" }}
transition={
shouldReduceMotion
? { duration: 0 }
: { type: "spring", bounce: 0.35, duration: 0.55 }
}
className="overflow-hidden"
style={{ willChange: "height, opacity, filter" }}
>
<div className="pb-2 pt-3">{children}</div>
</motion.div>
)}
</AnimatePresence>
</div>
);
}

View File

@@ -1,32 +0,0 @@
import { useState } from "react";
interface UseToolAccordionOptions {
expanded?: boolean;
defaultExpanded?: boolean;
onExpandedChange?: (expanded: boolean) => void;
}
interface UseToolAccordionResult {
isExpanded: boolean;
toggle: () => void;
}
export function useToolAccordion({
expanded,
defaultExpanded = false,
onExpandedChange,
}: UseToolAccordionOptions): UseToolAccordionResult {
const [uncontrolledExpanded, setUncontrolledExpanded] =
useState(defaultExpanded);
const isControlled = typeof expanded === "boolean";
const isExpanded = isControlled ? expanded : uncontrolledExpanded;
function toggle() {
const next = !isExpanded;
if (!isControlled) setUncontrolledExpanded(next);
onExpandedChange?.(next);
}
return { isExpanded, toggle };
}
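
A small, hypothetical wrapper sketching both modes of the hook above; the component name and import path are illustrative:

import { useState } from "react";
import { useToolAccordion } from "./useToolAccordion";

export function AccordionModesExample() {
  // Uncontrolled: the hook owns the state, seeded from defaultExpanded.
  const uncontrolled = useToolAccordion({ defaultExpanded: true });

  // Controlled: the parent owns `open`; toggle() only reports the next value upward.
  const [open, setOpen] = useState(false);
  const controlled = useToolAccordion({ expanded: open, onExpandedChange: setOpen });

  return (
    <>
      <button type="button" onClick={uncontrolled.toggle}>
        Uncontrolled: {uncontrolled.isExpanded ? "open" : "closed"}
      </button>
      <button type="button" onClick={controlled.toggle}>
        Controlled: {controlled.isExpanded ? "open" : "closed"}
      </button>
    </>
  );
}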

View File

@@ -1,128 +0,0 @@
import type { UIMessage, UIDataTypes, UITools } from "ai";
interface SessionChatMessage {
role: string;
content: string | null;
tool_call_id: string | null;
tool_calls: unknown[] | null;
}
function coerceSessionChatMessages(
rawMessages: unknown[],
): SessionChatMessage[] {
return rawMessages
.map((m) => {
if (!m || typeof m !== "object") return null;
const msg = m as Record<string, unknown>;
const role = typeof msg.role === "string" ? msg.role : null;
if (!role) return null;
return {
role,
content:
typeof msg.content === "string"
? msg.content
: msg.content == null
? null
: String(msg.content),
tool_call_id:
typeof msg.tool_call_id === "string"
? msg.tool_call_id
: msg.tool_call_id == null
? null
: String(msg.tool_call_id),
tool_calls: Array.isArray(msg.tool_calls) ? msg.tool_calls : null,
};
})
.filter((m): m is SessionChatMessage => m !== null);
}
function safeJsonParse(value: string): unknown {
try {
return JSON.parse(value) as unknown;
} catch {
return value;
}
}
function toToolInput(rawArguments: unknown): unknown {
if (typeof rawArguments === "string") {
const trimmed = rawArguments.trim();
return trimmed ? safeJsonParse(trimmed) : {};
}
if (rawArguments && typeof rawArguments === "object") return rawArguments;
return {};
}
export function convertChatSessionMessagesToUiMessages(
sessionId: string,
rawMessages: unknown[],
): UIMessage<unknown, UIDataTypes, UITools>[] {
const messages = coerceSessionChatMessages(rawMessages);
const toolOutputsByCallId = new Map<string, unknown>();
for (const msg of messages) {
if (msg.role !== "tool") continue;
if (!msg.tool_call_id) continue;
if (msg.content == null) continue;
toolOutputsByCallId.set(msg.tool_call_id, msg.content);
}
const uiMessages: UIMessage<unknown, UIDataTypes, UITools>[] = [];
messages.forEach((msg, index) => {
if (msg.role === "tool") return;
if (msg.role !== "user" && msg.role !== "assistant") return;
const parts: UIMessage<unknown, UIDataTypes, UITools>["parts"] = [];
if (typeof msg.content === "string" && msg.content.trim()) {
parts.push({ type: "text", text: msg.content, state: "done" });
}
if (msg.role === "assistant" && Array.isArray(msg.tool_calls)) {
for (const rawToolCall of msg.tool_calls) {
if (!rawToolCall || typeof rawToolCall !== "object") continue;
const toolCall = rawToolCall as {
id?: unknown;
function?: { name?: unknown; arguments?: unknown };
};
const toolCallId = String(toolCall.id ?? "").trim();
const toolName = String(toolCall.function?.name ?? "").trim();
if (!toolCallId || !toolName) continue;
const input = toToolInput(toolCall.function?.arguments);
const output = toolOutputsByCallId.get(toolCallId);
if (output !== undefined) {
parts.push({
type: `tool-${toolName}`,
toolCallId,
state: "output-available",
input,
output: typeof output === "string" ? safeJsonParse(output) : output,
});
} else {
parts.push({
type: `tool-${toolName}`,
toolCallId,
state: "input-available",
input,
});
}
}
}
if (parts.length === 0) return;
uiMessages.push({
id: `${sessionId}-${index}`,
role: msg.role,
parts,
});
});
return uiMessages;
}
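
A sketch of what the converter above produces for an invented session containing one assistant tool call plus its matching tool result (the import path is assumed):

import { convertChatSessionMessagesToUiMessages } from "./convertChatSessionMessagesToUiMessages";

const raw = [
  { role: "user", content: "Find me an agent", tool_call_id: null, tool_calls: null },
  {
    role: "assistant",
    content: null,
    tool_call_id: null,
    tool_calls: [
      { id: "call_1", function: { name: "find_agent", arguments: '{"query":"email"}' } },
    ],
  },
  {
    role: "tool",
    content: '{"type":"agents_found","count":2}',
    tool_call_id: "call_1",
    tool_calls: null,
  },
];

const ui = convertChatSessionMessagesToUiMessages("session-123", raw);
// ui[0]: id "session-123-0", role "user", one text part ("Find me an agent").
// ui[1]: id "session-123-1", role "assistant", one "tool-find_agent" part in state
//        "output-available" with input { query: "email" } and the parsed tool output.
// The standalone "tool" message is folded into the assistant part rather than emitted on its own.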

View File

@@ -1,65 +0,0 @@
"use client";
import { SidebarProvider } from "@/components/ui/sidebar";
import { ChatContainer } from "./components/ChatContainer/ChatContainer";
import { ChatSidebar } from "./components/ChatSidebar/ChatSidebar";
import { MobileDrawer } from "./components/MobileDrawer/MobileDrawer";
import { MobileHeader } from "./components/MobileHeader/MobileHeader";
import { useCopilotPage } from "./useCopilotPage";
export default function Page() {
const {
sessionId,
messages,
status,
error,
isCreatingSession,
createSession,
onSend,
// Mobile drawer
isMobile,
isDrawerOpen,
sessions,
isLoadingSessions,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleSelectSession,
handleNewChat,
} = useCopilotPage();
return (
<SidebarProvider
defaultOpen={true}
className="h-[calc(100vh-72px)] min-h-0"
>
{!isMobile && <ChatSidebar />}
<div className="relative flex h-full w-full flex-col overflow-hidden bg-[#f8f8f9] px-0">
{isMobile && <MobileHeader onOpenDrawer={handleOpenDrawer} />}
<div className="flex-1 overflow-hidden">
<ChatContainer
messages={messages}
status={status}
error={error}
sessionId={sessionId}
isCreatingSession={isCreatingSession}
onCreateSession={createSession}
onSend={onSend}
/>
</div>
</div>
{isMobile && (
<MobileDrawer
isOpen={isDrawerOpen}
sessions={sessions}
currentSessionId={sessionId}
isLoading={isLoadingSessions}
onSelectSession={handleSelectSession}
onNewChat={handleNewChat}
onClose={handleCloseDrawer}
onOpenChange={handleDrawerOpenChange}
/>
)}
</SidebarProvider>
);
}

View File

@@ -1,218 +0,0 @@
"use client";
import type { ToolUIPart } from "ai";
import Link from "next/link";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import {
ClarificationQuestionsWidget,
type ClarifyingQuestion as WidgetClarifyingQuestion,
} from "@/components/contextual/Chat/components/ClarificationQuestionsWidget/ClarificationQuestionsWidget";
import {
formatMaybeJson,
getAnimationText,
getCreateAgentToolOutput,
isAgentPreviewOutput,
isAgentSavedOutput,
isClarificationNeededOutput,
isErrorOutput,
isOperationInProgressOutput,
isOperationPendingOutput,
isOperationStartedOutput,
ToolIcon,
truncateText,
type CreateAgentToolOutput,
} from "./helpers";
export interface CreateAgentToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: CreateAgentToolPart;
}
function getAccordionMeta(output: CreateAgentToolOutput): {
badgeText: string;
title: string;
description?: string;
} {
if (isAgentSavedOutput(output)) {
return { badgeText: "Create agent", title: output.agent_name };
}
if (isAgentPreviewOutput(output)) {
return {
badgeText: "Create agent",
title: output.agent_name,
description: `${output.node_count} block${output.node_count === 1 ? "" : "s"}`,
};
}
if (isClarificationNeededOutput(output)) {
const questions = output.questions ?? [];
return {
badgeText: "Create agent",
title: "Needs clarification",
description: `${questions.length} question${questions.length === 1 ? "" : "s"}`,
};
}
if (
isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output)
) {
return { badgeText: "Create agent", title: "Creating agent" };
}
return { badgeText: "Create agent", title: "Error" };
}
export function CreateAgentTool({ part }: Props) {
const text = getAnimationText(part);
const { onSend } = useCopilotChatActions();
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getCreateAgentToolOutput(part);
const isError =
part.state === "output-error" || (!!output && isErrorOutput(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output) ||
isAgentPreviewOutput(output) ||
isAgentSavedOutput(output) ||
isClarificationNeededOutput(output) ||
isErrorOutput(output));
function handleClarificationAnswers(answers: Record<string, string>) {
const contextMessage = Object.entries(answers)
.map(([keyword, answer]) => `${keyword}: ${answer}`)
.join("\n");
onSend(
`I have the answers to your questions:\n\n${contextMessage}\n\nPlease proceed with creating the agent.`,
);
}
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion
{...getAccordionMeta(output)}
defaultExpanded={isClarificationNeededOutput(output)}
>
{(isOperationStartedOutput(output) ||
isOperationPendingOutput(output)) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
<p className="text-xs text-muted-foreground">
Operation: {output.operation_id}
</p>
<p className="text-xs italic text-muted-foreground">
Check your library in a few minutes.
</p>
</div>
)}
{isOperationInProgressOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
<p className="text-xs italic text-muted-foreground">
Please wait for the current operation to finish.
</p>
</div>
)}
{isAgentSavedOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
<div className="flex flex-wrap gap-2">
<Link
href={output.library_agent_link}
className="text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open in library
</Link>
<Link
href={output.agent_page_link}
className="text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open in builder
</Link>
</div>
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{truncateText(
formatMaybeJson({ agent_id: output.agent_id }),
800,
)}
</pre>
</div>
)}
{isAgentPreviewOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.description?.trim() && (
<p className="text-xs text-muted-foreground">
{output.description}
</p>
)}
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{truncateText(formatMaybeJson(output.agent_json), 1600)}
</pre>
</div>
)}
{isClarificationNeededOutput(output) && (
<ClarificationQuestionsWidget
questions={(output.questions ?? []).map((q) => {
const item: WidgetClarifyingQuestion = {
question: q.question,
keyword: q.keyword,
};
const example =
typeof q.example === "string" && q.example.trim()
? q.example.trim()
: null;
if (example) item.example = example;
return item;
})}
message={output.message}
onSubmitAnswers={handleClarificationAnswers}
/>
)}
{isErrorOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.error && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.error)}
</pre>
)}
{output.details && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.details)}
</pre>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,181 +0,0 @@
import type { ToolUIPart } from "ai";
import { PlusIcon } from "@phosphor-icons/react";
import type { AgentPreviewResponse } from "@/app/api/__generated__/models/agentPreviewResponse";
import type { AgentSavedResponse } from "@/app/api/__generated__/models/agentSavedResponse";
import type { ClarificationNeededResponse } from "@/app/api/__generated__/models/clarificationNeededResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { OperationInProgressResponse } from "@/app/api/__generated__/models/operationInProgressResponse";
import type { OperationPendingResponse } from "@/app/api/__generated__/models/operationPendingResponse";
import type { OperationStartedResponse } from "@/app/api/__generated__/models/operationStartedResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
export type CreateAgentToolOutput =
| OperationStartedResponse
| OperationPendingResponse
| OperationInProgressResponse
| AgentPreviewResponse
| AgentSavedResponse
| ClarificationNeededResponse
| ErrorResponse;
function parseOutput(output: unknown): CreateAgentToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.operation_started ||
type === ResponseType.operation_pending ||
type === ResponseType.operation_in_progress ||
type === ResponseType.agent_preview ||
type === ResponseType.agent_saved ||
type === ResponseType.clarification_needed ||
type === ResponseType.error
) {
return output as CreateAgentToolOutput;
}
if ("operation_id" in output && "tool_name" in output)
return output as OperationStartedResponse | OperationPendingResponse;
if ("tool_call_id" in output) return output as OperationInProgressResponse;
if ("agent_json" in output && "agent_name" in output)
return output as AgentPreviewResponse;
if ("agent_id" in output && "library_agent_id" in output)
return output as AgentSavedResponse;
if ("questions" in output) return output as ClarificationNeededResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getCreateAgentToolOutput(
part: unknown,
): CreateAgentToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function isOperationStartedOutput(
output: CreateAgentToolOutput,
): output is OperationStartedResponse {
return (
output.type === ResponseType.operation_started ||
("operation_id" in output && "tool_name" in output)
);
}
export function isOperationPendingOutput(
output: CreateAgentToolOutput,
): output is OperationPendingResponse {
return output.type === ResponseType.operation_pending;
}
export function isOperationInProgressOutput(
output: CreateAgentToolOutput,
): output is OperationInProgressResponse {
return (
output.type === ResponseType.operation_in_progress ||
"tool_call_id" in output
);
}
export function isAgentPreviewOutput(
output: CreateAgentToolOutput,
): output is AgentPreviewResponse {
return output.type === ResponseType.agent_preview || "agent_json" in output;
}
export function isAgentSavedOutput(
output: CreateAgentToolOutput,
): output is AgentSavedResponse {
return (
output.type === ResponseType.agent_saved || "agent_page_link" in output
);
}
export function isClarificationNeededOutput(
output: CreateAgentToolOutput,
): output is ClarificationNeededResponse {
return (
output.type === ResponseType.clarification_needed || "questions" in output
);
}
export function isErrorOutput(
output: CreateAgentToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
switch (part.state) {
case "input-streaming":
case "input-available":
return "Creating a new agent";
case "output-available": {
const output = parseOutput(part.output);
if (!output) return "Creating a new agent";
if (isOperationStartedOutput(output)) return "Agent creation started";
if (isOperationPendingOutput(output)) return "Agent creation in progress";
if (isOperationInProgressOutput(output))
return "Agent creation already in progress";
if (isAgentSavedOutput(output)) return `Saved "${output.agent_name}"`;
if (isAgentPreviewOutput(output)) return `Preview "${output.agent_name}"`;
if (isClarificationNeededOutput(output)) return "Needs clarification";
return "Error creating agent";
}
case "output-error":
return "Error creating agent";
default:
return "Creating a new agent";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<PlusIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}
export function truncateText(text: string, maxChars: number): string {
const trimmed = text.trim();
if (trimmed.length <= maxChars) return trimmed;
return `${trimmed.slice(0, maxChars).trimEnd()}…`;
}
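
To illustrate the fallback behaviour of parseOutput and the guards above, a sketch with invented payloads (imports assumed from this helpers module):

import { getCreateAgentToolOutput, isClarificationNeededOutput, isErrorOutput } from "./helpers";

// String outputs are JSON-parsed first; objects without a "type" field are
// classified by their distinguishing keys.
const clarification = getCreateAgentToolOutput({
  output:
    '{"questions":[{"question":"Which inbox should the agent watch?","keyword":"inbox"}],"message":"I need a bit more detail."}',
});
// clarification !== null, and isClarificationNeededOutput(clarification) is true
// (matched via the "questions" key even without a "type" field).

const failure = getCreateAgentToolOutput({
  output: { error: "LLM timeout", message: "Something went wrong" },
});
// isErrorOutput(failure!) is true, so CreateAgentTool renders the error branch.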

View File

@@ -1,218 +0,0 @@
"use client";
import type { ToolUIPart } from "ai";
import Link from "next/link";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import {
ClarificationQuestionsWidget,
type ClarifyingQuestion as WidgetClarifyingQuestion,
} from "@/components/contextual/Chat/components/ClarificationQuestionsWidget/ClarificationQuestionsWidget";
import {
formatMaybeJson,
getAnimationText,
getEditAgentToolOutput,
isAgentPreviewOutput,
isAgentSavedOutput,
isClarificationNeededOutput,
isErrorOutput,
isOperationInProgressOutput,
isOperationPendingOutput,
isOperationStartedOutput,
ToolIcon,
truncateText,
type EditAgentToolOutput,
} from "./helpers";
export interface EditAgentToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: EditAgentToolPart;
}
function getAccordionMeta(output: EditAgentToolOutput): {
badgeText: string;
title: string;
description?: string;
} {
if (isAgentSavedOutput(output)) {
return { badgeText: "Edit agent", title: output.agent_name };
}
if (isAgentPreviewOutput(output)) {
return {
badgeText: "Edit agent",
title: output.agent_name,
description: `${output.node_count} block${output.node_count === 1 ? "" : "s"}`,
};
}
if (isClarificationNeededOutput(output)) {
const questions = output.questions ?? [];
return {
badgeText: "Edit agent",
title: "Needs clarification",
description: `${questions.length} question${questions.length === 1 ? "" : "s"}`,
};
}
if (
isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output)
) {
return { badgeText: "Edit agent", title: "Editing agent" };
}
return { badgeText: "Edit agent", title: "Error" };
}
export function EditAgentTool({ part }: Props) {
const text = getAnimationText(part);
const { onSend } = useCopilotChatActions();
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getEditAgentToolOutput(part);
const isError =
part.state === "output-error" || (!!output && isErrorOutput(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output) ||
isAgentPreviewOutput(output) ||
isAgentSavedOutput(output) ||
isClarificationNeededOutput(output) ||
isErrorOutput(output));
function handleClarificationAnswers(answers: Record<string, string>) {
const contextMessage = Object.entries(answers)
.map(([keyword, answer]) => `${keyword}: ${answer}`)
.join("\n");
onSend(
`I have the answers to your questions:\n\n${contextMessage}\n\nPlease proceed with editing the agent.`,
);
}
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion
{...getAccordionMeta(output)}
defaultExpanded={isClarificationNeededOutput(output)}
>
{(isOperationStartedOutput(output) ||
isOperationPendingOutput(output)) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
<p className="text-xs text-muted-foreground">
Operation: {output.operation_id}
</p>
<p className="text-xs italic text-muted-foreground">
Check your library in a few minutes.
</p>
</div>
)}
{isOperationInProgressOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
<p className="text-xs italic text-muted-foreground">
Please wait for the current operation to finish.
</p>
</div>
)}
{isAgentSavedOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
<div className="flex flex-wrap gap-2">
<Link
href={output.library_agent_link}
className="text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open in library
</Link>
<Link
href={output.agent_page_link}
className="text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open in builder
</Link>
</div>
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{truncateText(
formatMaybeJson({ agent_id: output.agent_id }),
800,
)}
</pre>
</div>
)}
{isAgentPreviewOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.description?.trim() && (
<p className="text-xs text-muted-foreground">
{output.description}
</p>
)}
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{truncateText(formatMaybeJson(output.agent_json), 1600)}
</pre>
</div>
)}
{isClarificationNeededOutput(output) && (
<ClarificationQuestionsWidget
questions={(output.questions ?? []).map((q) => {
const item: WidgetClarifyingQuestion = {
question: q.question,
keyword: q.keyword,
};
const example =
typeof q.example === "string" && q.example.trim()
? q.example.trim()
: null;
if (example) item.example = example;
return item;
})}
message={output.message}
onSubmitAnswers={handleClarificationAnswers}
/>
)}
{isErrorOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.error && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.error)}
</pre>
)}
{output.details && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.details)}
</pre>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,181 +0,0 @@
import type { ToolUIPart } from "ai";
import { PencilLineIcon } from "@phosphor-icons/react";
import type { AgentPreviewResponse } from "@/app/api/__generated__/models/agentPreviewResponse";
import type { AgentSavedResponse } from "@/app/api/__generated__/models/agentSavedResponse";
import type { ClarificationNeededResponse } from "@/app/api/__generated__/models/clarificationNeededResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { OperationInProgressResponse } from "@/app/api/__generated__/models/operationInProgressResponse";
import type { OperationPendingResponse } from "@/app/api/__generated__/models/operationPendingResponse";
import type { OperationStartedResponse } from "@/app/api/__generated__/models/operationStartedResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
export type EditAgentToolOutput =
| OperationStartedResponse
| OperationPendingResponse
| OperationInProgressResponse
| AgentPreviewResponse
| AgentSavedResponse
| ClarificationNeededResponse
| ErrorResponse;
function parseOutput(output: unknown): EditAgentToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.operation_started ||
type === ResponseType.operation_pending ||
type === ResponseType.operation_in_progress ||
type === ResponseType.agent_preview ||
type === ResponseType.agent_saved ||
type === ResponseType.clarification_needed ||
type === ResponseType.error
) {
return output as EditAgentToolOutput;
}
if ("operation_id" in output && "tool_name" in output)
return output as OperationStartedResponse | OperationPendingResponse;
if ("tool_call_id" in output) return output as OperationInProgressResponse;
if ("agent_json" in output && "agent_name" in output)
return output as AgentPreviewResponse;
if ("agent_id" in output && "library_agent_id" in output)
return output as AgentSavedResponse;
if ("questions" in output) return output as ClarificationNeededResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getEditAgentToolOutput(
part: unknown,
): EditAgentToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function isOperationStartedOutput(
output: EditAgentToolOutput,
): output is OperationStartedResponse {
return (
output.type === ResponseType.operation_started ||
("operation_id" in output && "tool_name" in output)
);
}
export function isOperationPendingOutput(
output: EditAgentToolOutput,
): output is OperationPendingResponse {
return output.type === ResponseType.operation_pending;
}
export function isOperationInProgressOutput(
output: EditAgentToolOutput,
): output is OperationInProgressResponse {
return (
output.type === ResponseType.operation_in_progress ||
"tool_call_id" in output
);
}
export function isAgentPreviewOutput(
output: EditAgentToolOutput,
): output is AgentPreviewResponse {
return output.type === ResponseType.agent_preview || "agent_json" in output;
}
export function isAgentSavedOutput(
output: EditAgentToolOutput,
): output is AgentSavedResponse {
return (
output.type === ResponseType.agent_saved || "agent_page_link" in output
);
}
export function isClarificationNeededOutput(
output: EditAgentToolOutput,
): output is ClarificationNeededResponse {
return (
output.type === ResponseType.clarification_needed || "questions" in output
);
}
export function isErrorOutput(
output: EditAgentToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
switch (part.state) {
case "input-streaming":
case "input-available":
return "Editing the agent";
case "output-available": {
const output = parseOutput(part.output);
if (!output) return "Editing the agent";
if (isOperationStartedOutput(output)) return "Agent update started";
if (isOperationPendingOutput(output)) return "Agent update in progress";
if (isOperationInProgressOutput(output))
return "Agent update already in progress";
if (isAgentSavedOutput(output)) return `Saved "${output.agent_name}"`;
if (isAgentPreviewOutput(output)) return `Preview "${output.agent_name}"`;
if (isClarificationNeededOutput(output)) return "Needs clarification";
return "Error editing agent";
}
case "output-error":
return "Error editing agent";
default:
return "Editing the agent";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<PencilLineIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}
export function truncateText(text: string, maxChars: number): string {
const trimmed = text.trim();
if (trimmed.length <= maxChars) return trimmed;
return `${trimmed.slice(0, maxChars).trimEnd()}…`;
}

View File

@@ -1,127 +0,0 @@
"use client";
import { ToolUIPart } from "ai";
import Link from "next/link";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import {
getAgentHref,
getAnimationText,
getFindAgentsOutput,
getSourceLabelFromToolType,
isAgentsFoundOutput,
isErrorOutput,
ToolIcon,
} from "./helpers";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
export interface FindAgentsToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: FindAgentsToolPart;
}
export function FindAgentsTool({ part }: Props) {
const text = getAnimationText(part);
const output = getFindAgentsOutput(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const isError =
part.state === "output-error" || (!!output && isErrorOutput(output));
const query =
typeof part.input === "object" && part.input !== null
? String((part.input as { query?: unknown }).query ?? "").trim()
: "";
const agentsFoundOutput =
part.state === "output-available" && output && isAgentsFoundOutput(output)
? output
: null;
const hasAgents =
!!agentsFoundOutput &&
agentsFoundOutput.agents.length > 0 &&
(typeof agentsFoundOutput.count !== "number" ||
agentsFoundOutput.count > 0);
const totalCount = agentsFoundOutput ? agentsFoundOutput.count : 0;
const { label: sourceLabel, source } = getSourceLabelFromToolType(part.type);
const scopeText =
source === "library"
? "in your library"
: source === "marketplace"
? "in marketplace"
: "";
const accordionDescription = `Found ${totalCount}${scopeText ? ` ${scopeText}` : ""}${
query ? ` for "${query}"` : ""
}`;
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon toolType={part.type} isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasAgents && agentsFoundOutput && (
<ToolAccordion
badgeText={sourceLabel}
title="Agent results"
description={accordionDescription}
>
<div className="grid gap-2 sm:grid-cols-2">
{agentsFoundOutput.agents.map((agent) => {
const href = getAgentHref(agent);
const agentSource =
agent.source === "library"
? "Library"
: agent.source === "marketplace"
? "Marketplace"
: null;
return (
<div
key={agent.id}
className="rounded-2xl border bg-background p-3"
>
<div className="flex items-start justify-between gap-2">
<div className="min-w-0">
<div className="flex items-center gap-2">
<p className="truncate text-sm font-medium text-foreground">
{agent.name}
</p>
{agentSource && (
<span className="shrink-0 rounded-full border bg-muted px-2 py-0.5 text-[11px] text-muted-foreground">
{agentSource}
</span>
)}
</div>
<p className="mt-1 line-clamp-2 text-xs text-muted-foreground">
{agent.description}
</p>
</div>
{href && (
<Link
href={href}
className="shrink-0 text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open
</Link>
)}
</div>
</div>
);
})}
</div>
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,179 +0,0 @@
import { ToolUIPart } from "ai";
import {
MagnifyingGlassIcon,
SquaresFourIcon,
} from "@phosphor-icons/react";
import type { AgentInfo } from "@/app/api/__generated__/models/agentInfo";
import type { AgentsFoundResponse } from "@/app/api/__generated__/models/agentsFoundResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { NoResultsResponse } from "@/app/api/__generated__/models/noResultsResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
export interface FindAgentInput {
query: string;
}
export type FindAgentsOutput =
| AgentsFoundResponse
| NoResultsResponse
| ErrorResponse;
export type FindAgentsToolType =
| "tool-find_agent"
| "tool-find_library_agent"
| (string & {});
function parseOutput(output: unknown): FindAgentsOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.agents_found ||
type === ResponseType.no_results ||
type === ResponseType.error
) {
return output as FindAgentsOutput;
}
if ("agents" in output && "count" in output)
return output as AgentsFoundResponse;
if ("suggestions" in output && !("error" in output))
return output as NoResultsResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getFindAgentsOutput(part: unknown): FindAgentsOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function isAgentsFoundOutput(
output: FindAgentsOutput,
): output is AgentsFoundResponse {
return output.type === ResponseType.agents_found || "agents" in output;
}
export function isNoResultsOutput(
output: FindAgentsOutput,
): output is NoResultsResponse {
return (
output.type === ResponseType.no_results ||
("suggestions" in output && !("error" in output))
);
}
export function isErrorOutput(
output: FindAgentsOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getSourceLabelFromToolType(toolType?: FindAgentsToolType): {
source: "marketplace" | "library" | "unknown";
label: string;
} {
if (toolType === "tool-find_library_agent") {
return { source: "library", label: "Library" };
}
if (toolType === "tool-find_agent") {
return { source: "marketplace", label: "Marketplace" };
}
return { source: "unknown", label: "Agents" };
}
export function getAnimationText(part: {
type?: FindAgentsToolType;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
const { source } = getSourceLabelFromToolType(part.type);
const query = (part.input as FindAgentInput | undefined)?.query?.trim();
// Action phrase matching legacy ToolCallMessage
const actionPhrase =
source === "library"
? "Looking for library agents"
: "Looking for agents in the marketplace";
const queryText = query ? ` matching "${query}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `${actionPhrase}${queryText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) {
return `${actionPhrase}${queryText}`;
}
if (isNoResultsOutput(output)) {
return `No agents found${queryText}`;
}
if (isAgentsFoundOutput(output)) {
const count = output.count ?? output.agents?.length ?? 0;
return `Found ${count} agent${count === 1 ? "" : "s"}${queryText}`;
}
if (isErrorOutput(output)) {
return `Error finding agents${queryText}`;
}
return `${actionPhrase}${queryText}`;
}
case "output-error":
return `Error finding agents${queryText}`;
default:
return actionPhrase;
}
}
export function getAgentHref(agent: AgentInfo): string | null {
if (agent.source === "library") {
return `/library/agents/${encodeURIComponent(agent.id)}`;
}
const [creator, slug, ...rest] = agent.id.split("/");
if (!creator || !slug || rest.length > 0) return null;
return `/marketplace/agent/${encodeURIComponent(creator)}/${encodeURIComponent(slug)}`;
}
export function ToolIcon({
toolType,
isStreaming,
isError,
}: {
toolType?: FindAgentsToolType;
isStreaming?: boolean;
isError?: boolean;
}) {
const { source } = getSourceLabelFromToolType(toolType);
const IconComponent =
source === "library" ? MagnifyingGlassIcon : SquaresFourIcon;
return (
<IconComponent
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
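
A quick sketch of the href rules in getAgentHref above (agent data is invented; the AgentInfo import path is the one used by this module):

import type { AgentInfo } from "@/app/api/__generated__/models/agentInfo";
import { getAgentHref } from "./helpers";

// Library agents link into the user's library by id.
getAgentHref({ id: "lib-123", name: "My agent", description: "", source: "library" } as AgentInfo);
// => "/library/agents/lib-123"

// Marketplace agents are identified as "creator/slug"; anything else yields null.
getAgentHref({ id: "ada/email-triage", name: "Email triage", description: "", source: "marketplace" } as AgentInfo);
// => "/marketplace/agent/ada/email-triage"
getAgentHref({ id: "not-a-slug", name: "Broken", description: "", source: "marketplace" } as AgentInfo);
// => null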

View File

@@ -1,41 +0,0 @@
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import type { BlockListResponse } from "@/app/api/__generated__/models/blockListResponse";
import { ToolUIPart } from "ai";
import { getAnimationText, ToolIcon } from "./helpers";
export interface FindBlockInput {
query: string;
}
export type FindBlockOutput = BlockListResponse;
export interface FindBlockToolPart {
type: string;
toolName?: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: FindBlockInput | unknown;
output?: string | FindBlockOutput | unknown;
title?: string;
}
interface Props {
part: FindBlockToolPart;
}
export function FindBlocksTool({ part }: Props) {
const text = getAnimationText(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const isError = part.state === "output-error";
return (
<div className="flex items-center gap-2 py-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
);
}

View File

@@ -1,72 +0,0 @@
import { ToolUIPart } from "ai";
import type { BlockListResponse } from "@/app/api/__generated__/models/blockListResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import { FindBlockInput, FindBlockToolPart } from "./FindBlocks";
import { PackageIcon } from "@phosphor-icons/react";
function parseOutput(output: unknown): BlockListResponse | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (type === ResponseType.block_list || "blocks" in output) {
return output as BlockListResponse;
}
}
return null;
}
export function getAnimationText(part: FindBlockToolPart): string {
const query = (part.input as FindBlockInput | undefined)?.query?.trim();
const queryText = query ? ` matching "${query}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Searching for blocks${queryText}`;
case "output-available": {
const parsed = parseOutput(part.output);
if (parsed) {
return `Found ${parsed.count} block${parsed.count === 1 ? "" : "s"}${queryText}`;
}
return `Searching for blocks${queryText}`;
}
case "output-error":
return `Error finding blocks${queryText}`;
default:
return "Searching for blocks";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<PackageIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}

View File

@@ -1,376 +0,0 @@
"use client";
import type { ToolUIPart } from "ai";
import Link from "next/link";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import {
ChatCredentialsSetup,
type CredentialInfo,
} from "@/components/contextual/Chat/components/ChatCredentialsSetup/ChatCredentialsSetup";
import {
formatMaybeJson,
getAnimationText,
getRunAgentToolOutput,
isRunAgentAgentDetailsOutput,
isRunAgentErrorOutput,
isRunAgentExecutionStartedOutput,
isRunAgentNeedLoginOutput,
isRunAgentSetupRequirementsOutput,
ToolIcon,
type RunAgentToolOutput,
} from "./helpers";
export interface RunAgentToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: RunAgentToolPart;
}
function getAccordionMeta(output: RunAgentToolOutput): {
badgeText: string;
title: string;
description?: string;
} {
if (isRunAgentExecutionStartedOutput(output)) {
const statusText =
typeof output.status === "string" && output.status.trim()
? output.status.trim()
: "started";
return {
badgeText: "Run agent",
title: output.graph_name,
description: `Status: ${statusText}`,
};
}
if (isRunAgentAgentDetailsOutput(output)) {
return {
badgeText: "Run agent",
title: output.agent.name,
description: "Inputs required",
};
}
if (isRunAgentSetupRequirementsOutput(output)) {
const missingCredsCount = Object.keys(
(output.setup_info.user_readiness?.missing_credentials ?? {}) as Record<
string,
unknown
>,
).length;
return {
badgeText: "Run agent",
title: output.setup_info.agent_name,
description:
missingCredsCount > 0
? `Missing ${missingCredsCount} credential${missingCredsCount === 1 ? "" : "s"}`
: output.message,
};
}
if (isRunAgentNeedLoginOutput(output)) {
return { badgeText: "Run agent", title: "Sign in required" };
}
return { badgeText: "Run agent", title: "Error" };
}
function coerceMissingCredentials(
rawMissingCredentials: unknown,
): CredentialInfo[] {
const missing =
rawMissingCredentials && typeof rawMissingCredentials === "object"
? (rawMissingCredentials as Record<string, unknown>)
: {};
const validTypes = new Set([
"api_key",
"oauth2",
"user_password",
"host_scoped",
]);
const results: CredentialInfo[] = [];
Object.values(missing).forEach((value) => {
if (!value || typeof value !== "object") return;
const cred = value as Record<string, unknown>;
const provider =
typeof cred.provider === "string" ? cred.provider.trim() : "";
if (!provider) return;
const providerName =
typeof cred.provider_name === "string" && cred.provider_name.trim()
? cred.provider_name.trim()
: provider.replace(/_/g, " ");
const title =
typeof cred.title === "string" && cred.title.trim()
? cred.title.trim()
: providerName;
const types =
Array.isArray(cred.types) && cred.types.length > 0
? cred.types
: typeof cred.type === "string"
? [cred.type]
: [];
const credentialTypes = types
.map((t) => (typeof t === "string" ? t.trim() : ""))
.filter(
(t): t is "api_key" | "oauth2" | "user_password" | "host_scoped" =>
validTypes.has(t),
);
if (credentialTypes.length === 0) return;
const scopes = Array.isArray(cred.scopes)
? cred.scopes.filter((s): s is string => typeof s === "string")
: undefined;
const item: CredentialInfo = {
provider,
providerName,
credentialTypes,
title,
};
if (scopes && scopes.length > 0) {
item.scopes = scopes;
}
results.push(item);
});
return results;
}
function coerceExpectedInputs(rawInputs: unknown): Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}> {
if (!Array.isArray(rawInputs)) return [];
const results: Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}> = [];
rawInputs.forEach((value, index) => {
if (!value || typeof value !== "object") return;
const input = value as Record<string, unknown>;
const name =
typeof input.name === "string" && input.name.trim()
? input.name.trim()
: `input-${index}`;
const title =
typeof input.title === "string" && input.title.trim()
? input.title.trim()
: name;
const type = typeof input.type === "string" ? input.type : "unknown";
const description =
typeof input.description === "string" && input.description.trim()
? input.description.trim()
: undefined;
const required = Boolean(input.required);
const item: {
name: string;
title: string;
type: string;
description?: string;
required: boolean;
} = { name, title, type, required };
if (description) item.description = description;
results.push(item);
});
return results;
}
export function RunAgentTool({ part }: Props) {
const text = getAnimationText(part);
const { onSend } = useCopilotChatActions();
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getRunAgentToolOutput(part);
const isError =
part.state === "output-error" || (!!output && isRunAgentErrorOutput(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isRunAgentExecutionStartedOutput(output) ||
isRunAgentAgentDetailsOutput(output) ||
isRunAgentSetupRequirementsOutput(output) ||
isRunAgentNeedLoginOutput(output) ||
isRunAgentErrorOutput(output));
function handleAllCredentialsComplete() {
onSend(
"I've configured the required credentials. Please check if everything is ready and proceed with running the agent.",
);
}
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion
{...getAccordionMeta(output)}
defaultExpanded={
isRunAgentSetupRequirementsOutput(output) ||
isRunAgentAgentDetailsOutput(output)
}
>
{isRunAgentExecutionStartedOutput(output) && (
<div className="grid gap-2">
<div className="rounded-2xl border bg-background p-3">
<div className="flex items-start justify-between gap-3">
<div className="min-w-0">
<p className="text-sm font-medium text-foreground">
Execution started
</p>
<p className="mt-0.5 truncate text-xs text-muted-foreground">
{output.execution_id}
</p>
<p className="mt-2 text-xs text-muted-foreground">
{output.message}
</p>
</div>
{output.library_agent_link && (
<Link
href={output.library_agent_link}
className="shrink-0 text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open
</Link>
)}
</div>
</div>
</div>
)}
{isRunAgentAgentDetailsOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.agent.description?.trim() && (
<p className="text-xs text-muted-foreground">
{output.agent.description}
</p>
)}
<div className="rounded-2xl border bg-background p-3">
<p className="text-xs font-medium text-foreground">Inputs</p>
<p className="mt-1 text-xs text-muted-foreground">
Provide required inputs in chat, or ask to run with defaults.
</p>
<pre className="mt-2 whitespace-pre-wrap text-xs text-muted-foreground">
{formatMaybeJson(output.agent.inputs)}
</pre>
</div>
</div>
)}
{isRunAgentSetupRequirementsOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{coerceMissingCredentials(
output.setup_info.user_readiness?.missing_credentials,
).length > 0 && (
<ChatCredentialsSetup
credentials={coerceMissingCredentials(
output.setup_info.user_readiness?.missing_credentials,
)}
agentName={output.setup_info.agent_name}
message={output.message}
onAllCredentialsComplete={handleAllCredentialsComplete}
onCancel={() => {}}
/>
)}
{coerceExpectedInputs(
(output.setup_info.requirements as Record<string, unknown>)
?.inputs,
).length > 0 && (
<div className="rounded-2xl border bg-background p-3">
<p className="text-xs font-medium text-foreground">
Expected inputs
</p>
<div className="mt-2 grid gap-2">
{coerceExpectedInputs(
(
output.setup_info.requirements as Record<
string,
unknown
>
)?.inputs,
).map((input) => (
<div key={input.name} className="rounded-xl border p-2">
<div className="flex items-center justify-between gap-2">
<p className="truncate text-xs font-medium text-foreground">
{input.title}
</p>
<span className="shrink-0 rounded-full border bg-muted px-2 py-0.5 text-[11px] text-muted-foreground">
{input.required ? "Required" : "Optional"}
</span>
</div>
<p className="mt-1 text-xs text-muted-foreground">
{input.name} · {input.type}
{input.description ? ` · ${input.description}` : ""}
</p>
</div>
))}
</div>
</div>
)}
</div>
)}
{isRunAgentNeedLoginOutput(output) && (
<p className="text-sm text-foreground">{output.message}</p>
)}
{isRunAgentErrorOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.error && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.error)}
</pre>
)}
{output.details && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.details)}
</pre>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,191 +0,0 @@
import type { ToolUIPart } from "ai";
import { PlayIcon } from "@phosphor-icons/react";
import type { AgentDetailsResponse } from "@/app/api/__generated__/models/agentDetailsResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { ExecutionStartedResponse } from "@/app/api/__generated__/models/executionStartedResponse";
import type { NeedLoginResponse } from "@/app/api/__generated__/models/needLoginResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import type { SetupRequirementsResponse } from "@/app/api/__generated__/models/setupRequirementsResponse";
export interface RunAgentInput {
username_agent_slug?: string;
library_agent_id?: string;
inputs?: Record<string, unknown>;
use_defaults?: boolean;
schedule_name?: string;
cron?: string;
timezone?: string;
}
export type RunAgentToolOutput =
| SetupRequirementsResponse
| ExecutionStartedResponse
| AgentDetailsResponse
| NeedLoginResponse
| ErrorResponse;
const RUN_AGENT_OUTPUT_TYPES = new Set<string>([
ResponseType.setup_requirements,
ResponseType.execution_started,
ResponseType.agent_details,
ResponseType.need_login,
ResponseType.error,
]);
export function isRunAgentSetupRequirementsOutput(
output: RunAgentToolOutput,
): output is SetupRequirementsResponse {
return (
output.type === ResponseType.setup_requirements ||
("setup_info" in output && typeof output.setup_info === "object")
);
}
export function isRunAgentExecutionStartedOutput(
output: RunAgentToolOutput,
): output is ExecutionStartedResponse {
return (
output.type === ResponseType.execution_started || "execution_id" in output
);
}
export function isRunAgentAgentDetailsOutput(
output: RunAgentToolOutput,
): output is AgentDetailsResponse {
return output.type === ResponseType.agent_details || "agent" in output;
}
export function isRunAgentNeedLoginOutput(
output: RunAgentToolOutput,
): output is NeedLoginResponse {
return output.type === ResponseType.need_login;
}
export function isRunAgentErrorOutput(
output: RunAgentToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
function parseOutput(output: unknown): RunAgentToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (typeof type === "string" && RUN_AGENT_OUTPUT_TYPES.has(type)) {
return output as RunAgentToolOutput;
}
if ("execution_id" in output) return output as ExecutionStartedResponse;
if ("setup_info" in output) return output as SetupRequirementsResponse;
if ("agent" in output) return output as AgentDetailsResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
if (type === ResponseType.need_login) return output as NeedLoginResponse;
}
return null;
}
export function getRunAgentToolOutput(
part: unknown,
): RunAgentToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
function getAgentIdentifierText(
input: RunAgentInput | undefined,
): string | null {
if (!input) return null;
const slug = input.username_agent_slug?.trim();
if (slug) return slug;
const libraryId = input.library_agent_id?.trim();
if (libraryId) return `Library agent ${libraryId}`;
return null;
}
function getExecutionModeText(input: RunAgentInput | undefined): string | null {
if (!input) return null;
const isSchedule = Boolean(input.schedule_name?.trim() || input.cron?.trim());
return isSchedule ? "Scheduled run" : "Run";
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
const input = part.input as RunAgentInput | undefined;
const agentIdentifier = getAgentIdentifierText(input);
const isSchedule = Boolean(
input?.schedule_name?.trim() || input?.cron?.trim(),
);
const actionPhrase = isSchedule
? "Scheduling the agent to run"
: "Running the agent";
const identifierText = agentIdentifier ? ` "${agentIdentifier}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `${actionPhrase}${identifierText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `${actionPhrase}${identifierText}`;
if (isRunAgentExecutionStartedOutput(output)) {
return `Started "${output.graph_name}"`;
}
if (isRunAgentAgentDetailsOutput(output)) {
return `Agent inputs needed for "${output.agent.name}"`;
}
if (isRunAgentSetupRequirementsOutput(output)) {
return `Setup needed for "${output.setup_info.agent_name}"`;
}
if (isRunAgentNeedLoginOutput(output))
return "Sign in required to run agent";
return "Error running agent";
}
case "output-error":
return "Error running agent";
default:
return actionPhrase;
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<PlayIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}
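
For context, a minimal sketch of how these run_agent helpers are consumed: the tool part's raw `output` can arrive either as an object or as a JSON string, so `getRunAgentToolOutput` normalizes it before the type guards narrow the union. The payload below is hypothetical.

```ts
import {
  getRunAgentToolOutput,
  isRunAgentExecutionStartedOutput,
} from "./helpers";

// Hypothetical tool part as it might arrive from the chat stream; the output
// is a JSON string, which parseOutput unwraps recursively.
const part = {
  state: "output-available",
  output: JSON.stringify({
    type: "execution_started",
    execution_id: "exec_123",
    graph_name: "Daily Digest",
  }),
};

const output = getRunAgentToolOutput(part);
if (output && isRunAgentExecutionStartedOutput(output)) {
  // Narrowed to ExecutionStartedResponse.
  console.log(`Started "${output.graph_name}"`);
}
```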

View File

@@ -1,324 +0,0 @@
"use client";
import type { ToolUIPart } from "ai";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import {
ChatCredentialsSetup,
type CredentialInfo,
} from "@/components/contextual/Chat/components/ChatCredentialsSetup/ChatCredentialsSetup";
import {
formatMaybeJson,
getAnimationText,
getRunBlockToolOutput,
isRunBlockBlockOutput,
isRunBlockErrorOutput,
isRunBlockSetupRequirementsOutput,
ToolIcon,
type RunBlockToolOutput,
} from "./helpers";
export interface RunBlockToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: RunBlockToolPart;
}
function getAccordionMeta(output: RunBlockToolOutput): {
badgeText: string;
title: string;
description?: string;
} {
if (isRunBlockBlockOutput(output)) {
const keys = Object.keys(output.outputs ?? {});
return {
badgeText: "Run block",
title: output.block_name,
description:
keys.length > 0
? `${keys.length} output key${keys.length === 1 ? "" : "s"}`
: output.message,
};
}
if (isRunBlockSetupRequirementsOutput(output)) {
const missingCredsCount = Object.keys(
(output.setup_info.user_readiness?.missing_credentials ?? {}) as Record<
string,
unknown
>,
).length;
return {
badgeText: "Run block",
title: output.setup_info.agent_name,
description:
missingCredsCount > 0
? `Missing ${missingCredsCount} credential${missingCredsCount === 1 ? "" : "s"}`
: output.message,
};
}
return { badgeText: "Run block", title: "Error" };
}
function coerceMissingCredentials(
rawMissingCredentials: unknown,
): CredentialInfo[] {
const missing =
rawMissingCredentials && typeof rawMissingCredentials === "object"
? (rawMissingCredentials as Record<string, unknown>)
: {};
const validTypes = new Set([
"api_key",
"oauth2",
"user_password",
"host_scoped",
]);
const results: CredentialInfo[] = [];
Object.values(missing).forEach((value) => {
if (!value || typeof value !== "object") return;
const cred = value as Record<string, unknown>;
const provider =
typeof cred.provider === "string" ? cred.provider.trim() : "";
if (!provider) return;
const providerName =
typeof cred.provider_name === "string" && cred.provider_name.trim()
? cred.provider_name.trim()
: provider.replace(/_/g, " ");
const title =
typeof cred.title === "string" && cred.title.trim()
? cred.title.trim()
: providerName;
const types =
Array.isArray(cred.types) && cred.types.length > 0
? cred.types
: typeof cred.type === "string"
? [cred.type]
: [];
const credentialTypes = types
.map((t) => (typeof t === "string" ? t.trim() : ""))
.filter(
(t): t is "api_key" | "oauth2" | "user_password" | "host_scoped" =>
validTypes.has(t),
);
if (credentialTypes.length === 0) return;
const scopes = Array.isArray(cred.scopes)
? cred.scopes.filter((s): s is string => typeof s === "string")
: undefined;
const item: CredentialInfo = {
provider,
providerName,
credentialTypes,
title,
};
if (scopes && scopes.length > 0) {
item.scopes = scopes;
}
results.push(item);
});
return results;
}
function coerceExpectedInputs(rawInputs: unknown): Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}> {
if (!Array.isArray(rawInputs)) return [];
const results: Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}> = [];
rawInputs.forEach((value, index) => {
if (!value || typeof value !== "object") return;
const input = value as Record<string, unknown>;
const name =
typeof input.name === "string" && input.name.trim()
? input.name.trim()
: `input-${index}`;
const title =
typeof input.title === "string" && input.title.trim()
? input.title.trim()
: name;
const type = typeof input.type === "string" ? input.type : "unknown";
const description =
typeof input.description === "string" && input.description.trim()
? input.description.trim()
: undefined;
const required = Boolean(input.required);
const item: {
name: string;
title: string;
type: string;
description?: string;
required: boolean;
} = { name, title, type, required };
if (description) item.description = description;
results.push(item);
});
return results;
}
export function RunBlockTool({ part }: Props) {
const text = getAnimationText(part);
const { onSend } = useCopilotChatActions();
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getRunBlockToolOutput(part);
const isError =
part.state === "output-error" || (!!output && isRunBlockErrorOutput(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isRunBlockBlockOutput(output) ||
isRunBlockSetupRequirementsOutput(output) ||
isRunBlockErrorOutput(output));
function handleAllCredentialsComplete() {
onSend(
"I've configured the required credentials. Please re-run the block now.",
);
}
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion
{...getAccordionMeta(output)}
defaultExpanded={isRunBlockSetupRequirementsOutput(output)}
>
{isRunBlockBlockOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{Object.entries(output.outputs ?? {}).map(([key, items]) => (
<div key={key} className="rounded-2xl border bg-background p-3">
<div className="flex items-center justify-between gap-2">
<p className="truncate text-xs font-medium text-foreground">
{key}
</p>
<span className="shrink-0 rounded-full border bg-muted px-2 py-0.5 text-[11px] text-muted-foreground">
{items.length} item{items.length === 1 ? "" : "s"}
</span>
</div>
<pre className="mt-2 whitespace-pre-wrap text-xs text-muted-foreground">
{formatMaybeJson(items.slice(0, 3))}
</pre>
</div>
))}
</div>
)}
{isRunBlockSetupRequirementsOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{coerceMissingCredentials(
output.setup_info.user_readiness?.missing_credentials,
).length > 0 && (
<ChatCredentialsSetup
credentials={coerceMissingCredentials(
output.setup_info.user_readiness?.missing_credentials,
)}
agentName={output.setup_info.agent_name}
message={output.message}
onAllCredentialsComplete={handleAllCredentialsComplete}
onCancel={() => {}}
/>
)}
{coerceExpectedInputs(
(output.setup_info.requirements as Record<string, unknown>)
?.inputs,
).length > 0 && (
<div className="rounded-2xl border bg-background p-3">
<p className="text-xs font-medium text-foreground">
Expected inputs
</p>
<div className="mt-2 grid gap-2">
{coerceExpectedInputs(
(
output.setup_info.requirements as Record<
string,
unknown
>
)?.inputs,
).map((input) => (
<div key={input.name} className="rounded-xl border p-2">
<div className="flex items-center justify-between gap-2">
<p className="truncate text-xs font-medium text-foreground">
{input.title}
</p>
<span className="shrink-0 rounded-full border bg-muted px-2 py-0.5 text-[11px] text-muted-foreground">
{input.required ? "Required" : "Optional"}
</span>
</div>
<p className="mt-1 text-xs text-muted-foreground">
{input.name} {input.type}
{input.description ? ` · ${input.description}` : ""}
</p>
</div>
))}
</div>
</div>
)}
</div>
)}
{isRunBlockErrorOutput(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.error && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.error)}
</pre>
)}
{output.details && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.details)}
</pre>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,141 +0,0 @@
import type { ToolUIPart } from "ai";
import { PlayIcon } from "@phosphor-icons/react";
import type { BlockOutputResponse } from "@/app/api/__generated__/models/blockOutputResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import type { SetupRequirementsResponse } from "@/app/api/__generated__/models/setupRequirementsResponse";
export interface RunBlockInput {
block_id?: string;
input_data?: Record<string, unknown>;
}
export type RunBlockToolOutput =
| SetupRequirementsResponse
| BlockOutputResponse
| ErrorResponse;
const RUN_BLOCK_OUTPUT_TYPES = new Set<string>([
ResponseType.setup_requirements,
ResponseType.block_output,
ResponseType.error,
]);
export function isRunBlockSetupRequirementsOutput(
output: RunBlockToolOutput,
): output is SetupRequirementsResponse {
return (
output.type === ResponseType.setup_requirements ||
("setup_info" in output && typeof output.setup_info === "object")
);
}
export function isRunBlockBlockOutput(
output: RunBlockToolOutput,
): output is BlockOutputResponse {
return output.type === ResponseType.block_output || "block_id" in output;
}
export function isRunBlockErrorOutput(
output: RunBlockToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
function parseOutput(output: unknown): RunBlockToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (typeof type === "string" && RUN_BLOCK_OUTPUT_TYPES.has(type)) {
return output as RunBlockToolOutput;
}
if ("block_id" in output) return output as BlockOutputResponse;
if ("setup_info" in output) return output as SetupRequirementsResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getRunBlockToolOutput(
part: unknown,
): RunBlockToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
function getBlockLabel(input: RunBlockInput | undefined): string | null {
const blockId = input?.block_id?.trim();
if (!blockId) return null;
return `Block ${blockId.slice(0, 8)}`;
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
const input = part.input as RunBlockInput | undefined;
const blockId = input?.block_id?.trim();
const blockText = blockId ? ` "${blockId}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Running the block${blockText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `Running the block${blockText}`;
if (isRunBlockBlockOutput(output))
return `Ran "${output.block_name}"`;
if (isRunBlockSetupRequirementsOutput(output)) {
return `Setup needed for "${output.setup_info.agent_name}"`;
}
return "Error running block";
}
case "output-error":
return "Error running block";
default:
return "Running the block";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<PlayIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}

View File

@@ -1,193 +0,0 @@
"use client";
import type { ToolUIPart } from "ai";
import Link from "next/link";
import { useMemo } from "react";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import {
getDocsToolOutput,
getDocsToolTitle,
getToolLabel,
getAnimationText,
isDocPageOutput,
isDocSearchResultsOutput,
isErrorOutput,
isNoResultsOutput,
ToolIcon,
toDocsUrl,
type DocsToolType,
} from "./helpers";
export interface DocsToolPart {
type: DocsToolType;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: DocsToolPart;
}
function truncate(text: string, maxChars: number): string {
const trimmed = text.trim();
if (trimmed.length <= maxChars) return trimmed;
return `${trimmed.slice(0, maxChars).trimEnd()}…`;
}
export function SearchDocsTool({ part }: Props) {
const output = getDocsToolOutput(part);
const text = getAnimationText(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const isError =
part.state === "output-error" || (!!output && isErrorOutput(output));
const normalized = useMemo(() => {
if (!output) return null;
const title = getDocsToolTitle(part.type, output);
const label = getToolLabel(part.type);
return { title, label };
}, [output, part.type]);
const isOutputAvailable = part.state === "output-available" && !!output;
const docSearchOutput =
isOutputAvailable && output && isDocSearchResultsOutput(output)
? output
: null;
const docPageOutput =
isOutputAvailable && output && isDocPageOutput(output) ? output : null;
const noResultsOutput =
isOutputAvailable && output && isNoResultsOutput(output) ? output : null;
const errorOutput =
isOutputAvailable && output && isErrorOutput(output) ? output : null;
const hasExpandableContent =
isOutputAvailable &&
((!!docSearchOutput && docSearchOutput.count > 0) ||
!!docPageOutput ||
!!noResultsOutput ||
!!errorOutput);
const accordionDescription =
hasExpandableContent && docSearchOutput
? `Found ${docSearchOutput.count} result${docSearchOutput.count === 1 ? "" : "s"} for "${docSearchOutput.query}"`
: hasExpandableContent && docPageOutput
? docPageOutput.path
: hasExpandableContent && (noResultsOutput || errorOutput)
? ((noResultsOutput ?? errorOutput)?.message ?? null)
: null;
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon toolType={part.type} isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && normalized && (
<ToolAccordion
badgeText={normalized.label}
title={normalized.title}
description={accordionDescription}
>
{docSearchOutput && (
<div className="grid gap-2">
{docSearchOutput.results.map((r) => {
const href = r.doc_url ?? toDocsUrl(r.path);
return (
<div
key={r.path}
className="rounded-2xl border bg-background p-3"
>
<div className="flex items-start justify-between gap-2">
<div className="min-w-0">
<p className="truncate text-sm font-medium text-foreground">
{r.title}
</p>
<p className="mt-0.5 truncate text-xs text-muted-foreground">
{r.path}
{r.section ? ` · ${r.section}` : ""}
</p>
<p className="mt-2 text-xs text-muted-foreground">
{truncate(r.snippet, 240)}
</p>
</div>
<Link
href={href}
target="_blank"
rel="noreferrer"
className="shrink-0 text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open
</Link>
</div>
</div>
);
})}
</div>
)}
{docPageOutput && (
<div>
<div className="flex items-start justify-between gap-2">
<div className="min-w-0">
<p className="truncate text-sm font-medium text-foreground">
{docPageOutput.title}
</p>
<p className="mt-0.5 truncate text-xs text-muted-foreground">
{docPageOutput.path}
</p>
</div>
<Link
href={docPageOutput.doc_url ?? toDocsUrl(docPageOutput.path)}
target="_blank"
rel="noreferrer"
className="shrink-0 text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open
</Link>
</div>
<p className="mt-2 whitespace-pre-wrap text-xs text-muted-foreground">
{truncate(docPageOutput.content, 800)}
</p>
</div>
)}
{noResultsOutput && (
<div>
<p className="text-sm text-foreground">
{noResultsOutput.message}
</p>
{noResultsOutput.suggestions &&
noResultsOutput.suggestions.length > 0 && (
<ul className="mt-2 list-disc space-y-1 pl-5 text-xs text-muted-foreground">
{noResultsOutput.suggestions.slice(0, 5).map((s) => (
<li key={s}>{s}</li>
))}
</ul>
)}
</div>
)}
{errorOutput && (
<div>
<p className="text-sm text-foreground">{errorOutput.message}</p>
{errorOutput.error && (
<p className="mt-2 text-xs text-muted-foreground">
{errorOutput.error}
</p>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,205 +0,0 @@
import { ToolUIPart } from "ai";
import { FileMagnifyingGlassIcon, FileTextIcon } from "@phosphor-icons/react";
import type { DocPageResponse } from "@/app/api/__generated__/models/docPageResponse";
import type { DocSearchResultsResponse } from "@/app/api/__generated__/models/docSearchResultsResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { NoResultsResponse } from "@/app/api/__generated__/models/noResultsResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
export interface SearchDocsInput {
query: string;
}
export interface GetDocPageInput {
path: string;
}
export type DocsToolOutput =
| DocSearchResultsResponse
| DocPageResponse
| NoResultsResponse
| ErrorResponse;
export type DocsToolType = "tool-search_docs" | "tool-get_doc_page" | string;
export function getToolLabel(toolType: DocsToolType): string {
switch (toolType) {
case "tool-search_docs":
return "Docs";
case "tool-get_doc_page":
return "Docs page";
default:
return "Docs";
}
}
function parseOutput(output: unknown): DocsToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.doc_search_results ||
type === ResponseType.doc_page ||
type === ResponseType.no_results ||
type === ResponseType.error
) {
return output as DocsToolOutput;
}
if ("results" in output && "query" in output)
return output as DocSearchResultsResponse;
if ("content" in output && "path" in output)
return output as DocPageResponse;
if ("suggestions" in output && !("error" in output))
return output as NoResultsResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getDocsToolOutput(part: unknown): DocsToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function isDocSearchResultsOutput(
output: DocsToolOutput,
): output is DocSearchResultsResponse {
return output.type === ResponseType.doc_search_results || "results" in output;
}
export function isDocPageOutput(
output: DocsToolOutput,
): output is DocPageResponse {
return output.type === ResponseType.doc_page || "content" in output;
}
export function isNoResultsOutput(
output: DocsToolOutput,
): output is NoResultsResponse {
return (
output.type === ResponseType.no_results ||
("suggestions" in output && !("error" in output))
);
}
export function isErrorOutput(output: DocsToolOutput): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getDocsToolTitle(
toolType: DocsToolType,
output: DocsToolOutput,
): string {
if (toolType === "tool-search_docs") {
if (isDocSearchResultsOutput(output)) return "Documentation results";
if (isNoResultsOutput(output)) return "No documentation found";
return "Documentation search error";
}
if (isDocPageOutput(output)) return "Documentation page";
if (isNoResultsOutput(output)) return "No documentation found";
return "Documentation page error";
}
export function getAnimationText(part: {
type: DocsToolType;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
switch (part.type) {
case "tool-search_docs": {
const query = (part.input as SearchDocsInput | undefined)?.query?.trim();
const queryText = query ? ` for "${query}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Searching documentation${queryText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `Searching documentation${queryText}`;
if (isDocSearchResultsOutput(output)) {
const count = output.count ?? output.results.length;
return `Found ${count} result${count === 1 ? "" : "s"}${queryText}`;
}
if (isNoResultsOutput(output)) {
return `No results found${queryText}`;
}
return `Error searching documentation${queryText}`;
}
case "output-error":
return `Error searching documentation${queryText}`;
default:
return "Searching documentation";
}
}
case "tool-get_doc_page": {
const path = (part.input as GetDocPageInput | undefined)?.path?.trim();
const pathText = path ? ` "${path}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Loading documentation page${pathText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `Loading documentation page${pathText}`;
if (isDocPageOutput(output)) return `Loaded "${output.title}"`;
if (isNoResultsOutput(output)) return "Documentation page not found";
return "Error loading documentation page";
}
case "output-error":
return "Error loading documentation page";
default:
return "Loading documentation page";
}
}
}
return "Processing";
}
export function ToolIcon({
toolType,
isStreaming,
isError,
}: {
toolType: DocsToolType;
isStreaming?: boolean;
isError?: boolean;
}) {
const IconComponent =
toolType === "tool-get_doc_page" ? FileTextIcon : FileMagnifyingGlassIcon;
return (
<IconComponent
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function toDocsUrl(path: string): string {
const urlPath = path.includes(".")
? path.slice(0, path.lastIndexOf("."))
: path;
return `https://docs.agpt.co/${urlPath}`;
}
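
For reference, what `toDocsUrl` produces for a couple of hypothetical paths: the final extension is dropped and the remainder is appended to the docs host. Note the check is on any dot, so an extension-less path that happens to contain a dot would also be truncated at its last dot.

```ts
import { toDocsUrl } from "./helpers";

toDocsUrl("platform/getting-started.md");
// => "https://docs.agpt.co/platform/getting-started"

toDocsUrl("platform/blocks");
// => "https://docs.agpt.co/platform/blocks" (no dot, path kept as-is)
```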

View File

@@ -1,181 +0,0 @@
"use client";
import type { ToolUIPart } from "ai";
import Link from "next/link";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import {
formatMaybeJson,
getAnimationText,
getViewAgentOutputToolOutput,
isAgentOutputResponse,
isErrorResponse,
isNoResultsResponse,
ToolIcon,
type ViewAgentOutputToolOutput,
} from "./helpers";
export interface ViewAgentOutputToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: ViewAgentOutputToolPart;
}
function getAccordionMeta(output: ViewAgentOutputToolOutput): {
badgeText: string;
title: string;
description?: string;
} {
if (isAgentOutputResponse(output)) {
const status = output.execution?.status;
return {
badgeText: "Agent output",
title: output.agent_name,
description: status ? `Status: ${status}` : output.message,
};
}
if (isNoResultsResponse(output)) {
return { badgeText: "Agent output", title: "No results" };
}
return { badgeText: "Agent output", title: "Error" };
}
export function ViewAgentOutputTool({ part }: Props) {
const text = getAnimationText(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getViewAgentOutputToolOutput(part);
const isError =
part.state === "output-error" || (!!output && isErrorResponse(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isAgentOutputResponse(output) ||
isNoResultsResponse(output) ||
isErrorResponse(output));
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion {...getAccordionMeta(output)}>
{isAgentOutputResponse(output) && (
<div className="grid gap-2">
<div className="flex items-start justify-between gap-3">
<p className="text-sm text-foreground">{output.message}</p>
{output.library_agent_link && (
<Link
href={output.library_agent_link}
className="shrink-0 text-xs font-medium text-purple-600 hover:text-purple-700"
>
Open
</Link>
)}
</div>
{output.execution ? (
<div className="grid gap-2">
<div className="rounded-2xl border bg-background p-3">
<p className="text-xs font-medium text-foreground">
Execution
</p>
<p className="mt-1 truncate text-xs text-muted-foreground">
{output.execution.execution_id}
</p>
<p className="mt-1 text-xs text-muted-foreground">
Status: {output.execution.status}
</p>
</div>
{output.execution.inputs_summary && (
<div className="rounded-2xl border bg-background p-3">
<p className="text-xs font-medium text-foreground">
Inputs summary
</p>
<pre className="mt-2 whitespace-pre-wrap text-xs text-muted-foreground">
{formatMaybeJson(output.execution.inputs_summary)}
</pre>
</div>
)}
{Object.entries(output.execution.outputs ?? {}).map(
([key, items]) => (
<div
key={key}
className="rounded-2xl border bg-background p-3"
>
<div className="flex items-center justify-between gap-2">
<p className="truncate text-xs font-medium text-foreground">
{key}
</p>
<span className="shrink-0 rounded-full border bg-muted px-2 py-0.5 text-[11px] text-muted-foreground">
{items.length} item{items.length === 1 ? "" : "s"}
</span>
</div>
<pre className="mt-2 whitespace-pre-wrap text-xs text-muted-foreground">
{formatMaybeJson(items.slice(0, 3))}
</pre>
</div>
),
)}
</div>
) : (
<div className="rounded-2xl border bg-background p-3">
<p className="text-sm text-foreground">
No execution selected.
</p>
<p className="mt-1 text-xs text-muted-foreground">
Try asking for a specific run or execution_id.
</p>
</div>
)}
</div>
)}
{isNoResultsResponse(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.suggestions && output.suggestions.length > 0 && (
<ul className="mt-1 list-disc space-y-1 pl-5 text-xs text-muted-foreground">
{output.suggestions.slice(0, 5).map((s) => (
<li key={s}>{s}</li>
))}
</ul>
)}
</div>
)}
{isErrorResponse(output) && (
<div className="grid gap-2">
<p className="text-sm text-foreground">{output.message}</p>
{output.error && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.error)}
</pre>
)}
{output.details && (
<pre className="whitespace-pre-wrap rounded-2xl border bg-muted/30 p-3 text-xs text-muted-foreground">
{formatMaybeJson(output.details)}
</pre>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -1,154 +0,0 @@
import type { ToolUIPart } from "ai";
import { EyeIcon } from "@phosphor-icons/react";
import type { AgentOutputResponse } from "@/app/api/__generated__/models/agentOutputResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { NoResultsResponse } from "@/app/api/__generated__/models/noResultsResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
export interface ViewAgentOutputInput {
agent_name?: string;
library_agent_id?: string;
store_slug?: string;
execution_id?: string;
run_time?: string;
}
export type ViewAgentOutputToolOutput =
| AgentOutputResponse
| NoResultsResponse
| ErrorResponse;
function parseOutput(output: unknown): ViewAgentOutputToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.agent_output ||
type === ResponseType.no_results ||
type === ResponseType.error
) {
return output as ViewAgentOutputToolOutput;
}
if ("agent_id" in output && "agent_name" in output) {
return output as AgentOutputResponse;
}
if ("suggestions" in output && !("error" in output)) {
return output as NoResultsResponse;
}
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function isAgentOutputResponse(
output: ViewAgentOutputToolOutput,
): output is AgentOutputResponse {
return output.type === ResponseType.agent_output || "agent_id" in output;
}
export function isNoResultsResponse(
output: ViewAgentOutputToolOutput,
): output is NoResultsResponse {
return (
output.type === ResponseType.no_results ||
("suggestions" in output && !("error" in output))
);
}
export function isErrorResponse(
output: ViewAgentOutputToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getViewAgentOutputToolOutput(
part: unknown,
): ViewAgentOutputToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
function getAgentIdentifierText(
input: ViewAgentOutputInput | undefined,
): string | null {
if (!input) return null;
const libraryId = input.library_agent_id?.trim();
if (libraryId) return `Library agent ${libraryId}`;
const slug = input.store_slug?.trim();
if (slug) return slug;
const name = input.agent_name?.trim();
if (name) return name;
return null;
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
const input = part.input as ViewAgentOutputInput | undefined;
const agent = getAgentIdentifierText(input);
const agentText = agent ? ` "${agent}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Retrieving agent output${agentText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `Retrieving agent output${agentText}`;
if (isAgentOutputResponse(output)) {
if (output.execution)
return `Retrieved output (${output.execution.status})`;
return "Retrieved agent output";
}
if (isNoResultsResponse(output)) return "No outputs found";
return "Error loading agent output";
}
case "output-error":
return "Error loading agent output";
default:
return "Retrieving agent output";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<EyeIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}
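
One behavior worth noting from the helpers above: when several identifiers are present, the library agent id takes precedence over the store slug and the plain agent name. A hypothetical call through the exported `getAnimationText` illustrates this.

```ts
import { getAnimationText } from "./helpers";

// Hypothetical input: library_agent_id wins over store_slug and agent_name.
getAnimationText({
  state: "input-available",
  input: {
    agent_name: "Daily Digest",
    store_slug: "daily-digest",
    library_agent_id: "lib_42",
  },
});
// => 'Retrieving agent output "Library agent lib_42"'
```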

View File

@@ -1,219 +0,0 @@
import {
getGetV2ListSessionsQueryKey,
getV2GetSession,
postV2CreateSession,
useGetV2ListSessions,
} from "@/app/api/__generated__/endpoints/chat/chat";
import { useBreakpoint } from "@/lib/hooks/useBreakpoint";
import { useChat } from "@ai-sdk/react";
import { useQueryClient } from "@tanstack/react-query";
import { DefaultChatTransport } from "ai";
import { parseAsString, useQueryState } from "nuqs";
import { useCallback, useEffect, useRef, useState } from "react";
import { convertChatSessionMessagesToUiMessages } from "./helpers/convertChatSessionToUiMessages";
export function useCopilotPage() {
const [isCreatingSession, setIsCreatingSession] = useState(false);
const [isDrawerOpen, setIsDrawerOpen] = useState(false);
const [sessionId, setSessionId] = useQueryState("sessionId", parseAsString);
const breakpoint = useBreakpoint();
const isMobile =
breakpoint === "base" || breakpoint === "sm" || breakpoint === "md";
const hydrationSeq = useRef(0);
const lastHydratedSessionIdRef = useRef<string | null>(null);
const createSessionPromiseRef = useRef<Promise<string> | null>(null);
const queuedFirstMessageRef = useRef<string | null>(null);
const queuedFirstMessageResolverRef = useRef<(() => void) | null>(null);
const queryClient = useQueryClient();
const transport = sessionId
? new DefaultChatTransport({
api: `/api/chat/sessions/${sessionId}/stream`,
prepareSendMessagesRequest: ({ messages }) => {
const last = messages[messages.length - 1];
return {
body: {
message: last.parts
?.map((p) => (p.type === "text" ? p.text : ""))
.join(""),
is_user_message: last.role === "user",
context: null,
},
};
},
})
: null;
const { messages, sendMessage, status, error, setMessages } = useChat({
id: sessionId ?? undefined,
transport: transport ?? undefined,
});
const messagesRef = useRef(messages);
useEffect(() => {
messagesRef.current = messages;
}, [messages]);
async function createSession() {
if (sessionId) return sessionId;
if (createSessionPromiseRef.current) return createSessionPromiseRef.current;
setIsCreatingSession(true);
const promise = (async () => {
const response = await postV2CreateSession({
body: JSON.stringify({}),
});
if (response.status !== 200 || !response.data?.id) {
throw new Error("Failed to create chat session");
}
setSessionId(response.data.id);
queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
return response.data.id;
})();
createSessionPromiseRef.current = promise;
try {
return await promise;
} finally {
createSessionPromiseRef.current = null;
setIsCreatingSession(false);
}
}
useEffect(() => {
hydrationSeq.current += 1;
const seq = hydrationSeq.current;
const controller = new AbortController();
if (!sessionId) {
setMessages([]);
lastHydratedSessionIdRef.current = null;
return;
}
const currentSessionId = sessionId;
if (lastHydratedSessionIdRef.current !== currentSessionId) {
setMessages([]);
lastHydratedSessionIdRef.current = currentSessionId;
}
async function hydrate() {
try {
const response = await getV2GetSession(currentSessionId, {
signal: controller.signal,
});
if (response.status !== 200) return;
const uiMessages = convertChatSessionMessagesToUiMessages(
currentSessionId,
response.data.messages ?? [],
);
if (controller.signal.aborted) return;
if (hydrationSeq.current !== seq) return;
const localMessagesCount = messagesRef.current.length;
const remoteMessagesCount = uiMessages.length;
if (remoteMessagesCount === 0) return;
if (localMessagesCount > remoteMessagesCount) return;
setMessages(uiMessages);
} catch (error) {
if ((error as { name?: string } | null)?.name === "AbortError") return;
console.warn("Failed to hydrate chat session:", error);
}
}
void hydrate();
return () => controller.abort();
}, [sessionId, setMessages]);
useEffect(() => {
if (!sessionId) return;
const firstMessage = queuedFirstMessageRef.current;
if (!firstMessage) return;
queuedFirstMessageRef.current = null;
sendMessage({ text: firstMessage });
queuedFirstMessageResolverRef.current?.();
queuedFirstMessageResolverRef.current = null;
}, [sendMessage, sessionId]);
async function onSend(message: string) {
const trimmed = message.trim();
if (!trimmed) return;
if (sessionId) {
sendMessage({ text: trimmed });
return;
}
queuedFirstMessageRef.current = trimmed;
const sentPromise = new Promise<void>((resolve) => {
queuedFirstMessageResolverRef.current = resolve;
});
await createSession();
await sentPromise;
}
// Sessions list for mobile drawer
const { data: sessionsResponse, isLoading: isLoadingSessions } =
useGetV2ListSessions({ limit: 50 });
const sessions =
sessionsResponse?.status === 200 ? sessionsResponse.data.sessions : [];
// Drawer handlers
const handleOpenDrawer = useCallback(() => {
setIsDrawerOpen(true);
}, []);
const handleCloseDrawer = useCallback(() => {
setIsDrawerOpen(false);
}, []);
const handleDrawerOpenChange = useCallback((open: boolean) => {
setIsDrawerOpen(open);
}, []);
const handleSelectSession = useCallback(
(id: string) => {
setSessionId(id);
if (isMobile) setIsDrawerOpen(false);
},
[setSessionId, isMobile],
);
const handleNewChat = useCallback(() => {
setSessionId(null);
if (isMobile) setIsDrawerOpen(false);
}, [setSessionId, isMobile]);
return {
sessionId,
messages,
status,
error,
isCreatingSession,
createSession,
onSend,
// Mobile drawer
isMobile,
isDrawerOpen,
sessions,
isLoadingSessions,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleSelectSession,
handleNewChat,
};
}
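
The first-message flow above is the subtle part: when no session exists yet, `onSend` stashes the message in a ref, creates the session, and a `useEffect` keyed on `sessionId` flushes the queued message and resolves the waiting promise. A framework-free sketch of that handshake, with `createSession` and `sendMessage` as stand-ins for the real hook internals:

```ts
let queuedFirstMessage: string | null = null;
let queuedResolver: (() => void) | null = null;

// Mirrors the effect that fires once sessionId becomes available.
function flushQueuedMessage(sendMessage: (text: string) => void) {
  if (!queuedFirstMessage) return;
  const text = queuedFirstMessage;
  queuedFirstMessage = null;
  sendMessage(text);
  queuedResolver?.();
  queuedResolver = null;
}

// Mirrors onSend when there is no session yet: queue, create, then await
// until the queued message has actually been dispatched.
async function sendFirstMessage(
  message: string,
  createSession: () => Promise<string>,
  sendMessage: (text: string) => void,
) {
  queuedFirstMessage = message;
  const sent = new Promise<void>((resolve) => {
    queuedResolver = resolve;
  });
  await createSession();
  flushQueuedMessage(sendMessage); // in the hook this runs inside useEffect
  await sent;
}
```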

File diff suppressed because it is too large

View File

@@ -1,7 +1,6 @@
@tailwind base;
@tailwind components;
@tailwind utilities;
@source "../node_modules/streamdown/dist/*.js";
@layer base {
:root {
@@ -30,14 +29,6 @@
--chart-3: 197 37% 24%;
--chart-4: 43 74% 66%;
--chart-5: 27 87% 67%;
--sidebar-background: 0 0% 98%;
--sidebar-foreground: 240 5.3% 26.1%;
--sidebar-primary: 240 5.9% 10%;
--sidebar-primary-foreground: 0 0% 98%;
--sidebar-accent: 240 4.8% 95.9%;
--sidebar-accent-foreground: 240 5.9% 10%;
--sidebar-border: 220 13% 91%;
--sidebar-ring: 217.2 91.2% 59.8%;
}
.dark {
@@ -65,14 +56,6 @@
--chart-3: 30 80% 55%;
--chart-4: 280 65% 60%;
--chart-5: 340 75% 55%;
--sidebar-background: 240 5.9% 10%;
--sidebar-foreground: 240 4.8% 95.9%;
--sidebar-primary: 224.3 76.3% 48%;
--sidebar-primary-foreground: 0 0% 100%;
--sidebar-accent: 240 3.7% 15.9%;
--sidebar-accent-foreground: 240 4.8% 95.9%;
--sidebar-border: 240 3.7% 15.9%;
--sidebar-ring: 217.2 91.2% 59.8%;
}
* {

View File

@@ -22,7 +22,7 @@ const isValidVideoUrl = (url: string): boolean => {
if (url.startsWith("data:video")) {
return true;
}
const videoExtensions = /\.(mp4|webm|ogg)$/i;
const videoExtensions = /\.(mp4|webm|ogg|mov|avi|mkv|m4v)$/i;
const youtubeRegex = /^(https?:\/\/)?(www\.)?(youtube\.com|youtu\.?be)\/.+$/;
const cleanedUrl = url.split("?")[0];
return (
@@ -44,11 +44,29 @@ const isValidAudioUrl = (url: string): boolean => {
if (url.startsWith("data:audio")) {
return true;
}
const audioExtensions = /\.(mp3|wav)$/i;
const audioExtensions = /\.(mp3|wav|ogg|m4a|aac|flac)$/i;
const cleanedUrl = url.split("?")[0];
return isValidMediaUri(url) && audioExtensions.test(cleanedUrl);
};
const getVideoMimeType = (url: string): string => {
if (url.startsWith("data:video/")) {
const match = url.match(/^data:(video\/[^;]+)/);
return match?.[1] || "video/mp4";
}
const extension = url.split("?")[0].split(".").pop()?.toLowerCase();
const mimeMap: Record<string, string> = {
mp4: "video/mp4",
webm: "video/webm",
ogg: "video/ogg",
mov: "video/quicktime",
avi: "video/x-msvideo",
mkv: "video/x-matroska",
m4v: "video/mp4",
};
return mimeMap[extension || ""] || "video/mp4";
};
const VideoRenderer: React.FC<{ videoUrl: string }> = ({ videoUrl }) => {
const videoId = getYouTubeVideoId(videoUrl);
return (
@@ -63,7 +81,7 @@ const VideoRenderer: React.FC<{ videoUrl: string }> = ({ videoUrl }) => {
></iframe>
) : (
<video controls width="100%" height="315">
<source src={videoUrl} type="video/mp4" />
<source src={videoUrl} type={getVideoMimeType(videoUrl)} />
Your browser does not support the video tag.
</video>
)}
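
To make the new mapping concrete, here are the values `getVideoMimeType` resolves for a few hypothetical URLs (the helper is module-private, so this assumes same-module scope rather than an import):

```ts
// Extension lookup runs after stripping the query string; unknown or missing
// extensions fall back to "video/mp4".
getVideoMimeType("https://example.com/clip.webm?dl=1"); // "video/webm"
getVideoMimeType("https://example.com/raw.mkv");        // "video/x-matroska"
getVideoMimeType("data:video/quicktime;base64,AAAA");   // "video/quicktime"
getVideoMimeType("https://example.com/unknown.bin");    // "video/mp4" (fallback)
```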

View File

@@ -1,105 +0,0 @@
"use client";
import { Button } from "@/components/ui/button";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { ArrowDownIcon } from "lucide-react";
import type { ComponentProps } from "react";
import { useCallback } from "react";
import { StickToBottom, useStickToBottomContext } from "use-stick-to-bottom";
export type ConversationProps = ComponentProps<typeof StickToBottom>;
export const Conversation = ({ className, ...props }: ConversationProps) => (
<StickToBottom
className={cn("relative flex-1 overflow-y-hidden", scrollbarStyles, className)}
initial="smooth"
resize="smooth"
role="log"
{...props}
/>
);
export type ConversationContentProps = ComponentProps<
typeof StickToBottom.Content
>;
export const ConversationContent = ({
className,
...props
}: ConversationContentProps) => (
<StickToBottom.Content
className={cn("flex flex-col gap-8 p-4", className)}
{...props}
/>
);
export type ConversationEmptyStateProps = ComponentProps<"div"> & {
title?: string;
description?: string;
icon?: React.ReactNode;
};
export const ConversationEmptyState = ({
className,
title = "No messages yet",
description = "Start a conversation to see messages here",
icon,
children,
...props
}: ConversationEmptyStateProps) => (
<div
className={cn(
"flex size-full flex-col items-center justify-center gap-3 p-8 text-center",
className,
)}
{...props}
>
{children ?? (
<>
{icon && (
<div className="text-neutral-500 dark:text-neutral-400">{icon}</div>
)}
<div className="space-y-1">
<h3 className="text-sm font-medium">{title}</h3>
{description && (
<p className="text-sm text-neutral-500 dark:text-neutral-400">
{description}
</p>
)}
</div>
</>
)}
</div>
);
export type ConversationScrollButtonProps = ComponentProps<typeof Button>;
export const ConversationScrollButton = ({
className,
...props
}: ConversationScrollButtonProps) => {
const { isAtBottom, scrollToBottom } = useStickToBottomContext();
const handleScrollToBottom = useCallback(() => {
scrollToBottom();
}, [scrollToBottom]);
return (
!isAtBottom && (
<Button
className={cn(
"absolute bottom-4 left-[50%] translate-x-[-50%] rounded-full dark:bg-white dark:dark:bg-neutral-950 dark:dark:hover:bg-neutral-800 dark:hover:bg-neutral-100",
className,
)}
onClick={handleScrollToBottom}
size="icon"
type="button"
variant="outline"
{...props}
>
<ArrowDownIcon className="size-4" />
</Button>
)
);
};

View File

@@ -1,338 +0,0 @@
"use client";
import { Button } from "@/components/ui/button";
import { ButtonGroup, ButtonGroupText } from "@/components/ui/button-group";
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/ui/tooltip";
import { cn } from "@/lib/utils";
import { cjk } from "@streamdown/cjk";
import { code } from "@streamdown/code";
import { math } from "@streamdown/math";
import { mermaid } from "@streamdown/mermaid";
import type { UIMessage } from "ai";
import { ChevronLeftIcon, ChevronRightIcon } from "lucide-react";
import type { ComponentProps, HTMLAttributes, ReactElement } from "react";
import { createContext, memo, useContext, useEffect, useState } from "react";
import { Streamdown } from "streamdown";
export type MessageProps = HTMLAttributes<HTMLDivElement> & {
from: UIMessage["role"];
};
export const Message = ({ className, from, ...props }: MessageProps) => (
<div
className={cn(
"group flex w-full max-w-[95%] flex-col gap-2",
from === "user" ? "is-user ml-auto justify-end" : "is-assistant",
className,
)}
{...props}
/>
);
export type MessageContentProps = HTMLAttributes<HTMLDivElement>;
export const MessageContent = ({
children,
className,
...props
}: MessageContentProps) => (
<div
className={cn(
"is-user:dark flex w-full min-w-0 max-w-full flex-col gap-2 overflow-hidden text-sm",
"group-[.is-user]:w-fit",
"group-[.is-user]:ml-auto group-[.is-user]:rounded-lg group-[.is-user]:bg-neutral-100 group-[.is-user]:px-4 group-[.is-user]:py-3 group-[.is-user]:text-neutral-950 dark:group-[.is-user]:bg-neutral-800 dark:group-[.is-user]:text-neutral-50",
"group-[.is-assistant]:text-neutral-950 dark:group-[.is-assistant]:text-neutral-50",
className,
)}
{...props}
>
{children}
</div>
);
export type MessageActionsProps = ComponentProps<"div">;
export const MessageActions = ({
className,
children,
...props
}: MessageActionsProps) => (
<div className={cn("flex items-center gap-1", className)} {...props}>
{children}
</div>
);
export type MessageActionProps = ComponentProps<typeof Button> & {
tooltip?: string;
label?: string;
};
export const MessageAction = ({
tooltip,
children,
label,
variant = "ghost",
size = "icon-sm",
...props
}: MessageActionProps) => {
const button = (
<Button size={size} type="button" variant={variant} {...props}>
{children}
<span className="sr-only">{label || tooltip}</span>
</Button>
);
if (tooltip) {
return (
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>{button}</TooltipTrigger>
<TooltipContent>
<p>{tooltip}</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>
);
}
return button;
};
interface MessageBranchContextType {
currentBranch: number;
totalBranches: number;
goToPrevious: () => void;
goToNext: () => void;
branches: ReactElement[];
setBranches: (branches: ReactElement[]) => void;
}
const MessageBranchContext = createContext<MessageBranchContextType | null>(
null,
);
const useMessageBranch = () => {
const context = useContext(MessageBranchContext);
if (!context) {
throw new Error("MessageBranch components must be used within");
}
return context;
};
export type MessageBranchProps = HTMLAttributes<HTMLDivElement> & {
defaultBranch?: number;
onBranchChange?: (branchIndex: number) => void;
};
export const MessageBranch = ({
defaultBranch = 0,
onBranchChange,
className,
...props
}: MessageBranchProps) => {
const [currentBranch, setCurrentBranch] = useState(defaultBranch);
const [branches, setBranches] = useState<ReactElement[]>([]);
const handleBranchChange = (newBranch: number) => {
setCurrentBranch(newBranch);
onBranchChange?.(newBranch);
};
const goToPrevious = () => {
const newBranch =
currentBranch > 0 ? currentBranch - 1 : branches.length - 1;
handleBranchChange(newBranch);
};
const goToNext = () => {
const newBranch =
currentBranch < branches.length - 1 ? currentBranch + 1 : 0;
handleBranchChange(newBranch);
};
const contextValue: MessageBranchContextType = {
currentBranch,
totalBranches: branches.length,
goToPrevious,
goToNext,
branches,
setBranches,
};
return (
<MessageBranchContext.Provider value={contextValue}>
<div
className={cn("grid w-full gap-2 [&>div]:pb-0", className)}
{...props}
/>
</MessageBranchContext.Provider>
);
};
export type MessageBranchContentProps = HTMLAttributes<HTMLDivElement>;
export const MessageBranchContent = ({
children,
...props
}: MessageBranchContentProps) => {
const { currentBranch, setBranches, branches } = useMessageBranch();
const childrenArray = Array.isArray(children) ? children : [children];
// Use useEffect to update branches when they change
useEffect(() => {
if (branches.length !== childrenArray.length) {
setBranches(childrenArray);
}
}, [childrenArray, branches, setBranches]);
return childrenArray.map((branch, index) => (
<div
className={cn(
"grid gap-2 overflow-hidden [&>div]:pb-0",
index === currentBranch ? "block" : "hidden",
)}
key={branch.key}
{...props}
>
{branch}
</div>
));
};
export type MessageBranchSelectorProps = HTMLAttributes<HTMLDivElement> & {
from: UIMessage["role"];
};
export const MessageBranchSelector = ({
className,
from: _from,
...props
}: MessageBranchSelectorProps) => {
const { totalBranches } = useMessageBranch();
// Don't render if there's only one branch
if (totalBranches <= 1) {
return null;
}
return (
<ButtonGroup
className={cn(
"[&>*:not(:first-child)]:rounded-l-md [&>*:not(:last-child)]:rounded-r-md",
className,
)}
orientation="horizontal"
{...props}
/>
);
};
export type MessageBranchPreviousProps = ComponentProps<typeof Button>;
export const MessageBranchPrevious = ({
children,
...props
}: MessageBranchPreviousProps) => {
const { goToPrevious, totalBranches } = useMessageBranch();
return (
<Button
aria-label="Previous branch"
disabled={totalBranches <= 1}
onClick={goToPrevious}
size="icon-sm"
type="button"
variant="ghost"
{...props}
>
{children ?? <ChevronLeftIcon size={14} />}
</Button>
);
};
export type MessageBranchNextProps = ComponentProps<typeof Button>;
export const MessageBranchNext = ({
children,
...props
}: MessageBranchNextProps) => {
const { goToNext, totalBranches } = useMessageBranch();
return (
<Button
aria-label="Next branch"
disabled={totalBranches <= 1}
onClick={goToNext}
size="icon-sm"
type="button"
variant="ghost"
{...props}
>
{children ?? <ChevronRightIcon size={14} />}
</Button>
);
};
export type MessageBranchPageProps = HTMLAttributes<HTMLSpanElement>;
export const MessageBranchPage = ({
className,
...props
}: MessageBranchPageProps) => {
const { currentBranch, totalBranches } = useMessageBranch();
return (
<ButtonGroupText
className={cn(
"border-none bg-transparent text-neutral-500 shadow-none dark:text-neutral-400",
className,
)}
{...props}
>
{currentBranch + 1} of {totalBranches}
</ButtonGroupText>
);
};
export type MessageResponseProps = ComponentProps<typeof Streamdown>;
export const MessageResponse = memo(
({ className, ...props }: MessageResponseProps) => (
<Streamdown
className={cn(
"size-full [&>*:first-child]:mt-0 [&>*:last-child]:mb-0",
className,
)}
plugins={{ code, mermaid, math, cjk }}
{...props}
/>
),
(prevProps, nextProps) => prevProps.children === nextProps.children,
);
MessageResponse.displayName = "MessageResponse";
export type MessageToolbarProps = ComponentProps<"div">;
export const MessageToolbar = ({
className,
children,
...props
}: MessageToolbarProps) => (
<div
className={cn(
"mt-4 flex w-full items-center justify-between gap-4",
className,
)}
{...props}
>
{children}
</div>
);

View File

@@ -77,7 +77,7 @@ export function OverflowText(props: Props) {
"block min-w-0 overflow-hidden text-ellipsis whitespace-nowrap",
)}
>
<Text variant={variant} as="span" className={className} {...restProps}>
<Text variant={variant} className={className} {...restProps}>
{value}
</Text>
</span>

View File

@@ -6,19 +6,17 @@ import {
MicrophoneIcon,
StopIcon,
} from "@phosphor-icons/react";
import { ChangeEvent, useCallback } from "react";
import { RecordingIndicator } from "./components/RecordingIndicator";
import { useChatInput } from "./useChatInput";
import { useVoiceRecording } from "./useVoiceRecording";
export interface Props {
onSend: (message: string) => void | Promise<void>;
onSend: (message: string) => void;
disabled?: boolean;
isStreaming?: boolean;
onStop?: () => void;
placeholder?: string;
className?: string;
inputId?: string;
}
export function ChatInput({
@@ -28,14 +26,14 @@ export function ChatInput({
onStop,
placeholder = "Type your message...",
className,
inputId = "chat-input",
}: Props) {
const inputId = "chat-input";
const {
value,
setValue,
handleKeyDown: baseHandleKeyDown,
handleSubmit,
handleChange: baseHandleChange,
handleChange,
hasMultipleLines,
} = useChatInput({
onSend,
@@ -62,15 +60,6 @@ export function ChatInput({
inputId,
});
// Block text changes when recording
const handleChange = useCallback(
(e: ChangeEvent<HTMLTextAreaElement>) => {
if (isRecording) return;
baseHandleChange(e);
},
[isRecording, baseHandleChange],
);
return (
<form onSubmit={handleSubmit} className={cn("relative flex-1", className)}>
<div className="relative">

View File

@@ -21,7 +21,6 @@ export function useChatInput({
}: Args) {
const [value, setValue] = useState("");
const [hasMultipleLines, setHasMultipleLines] = useState(false);
const [isSending, setIsSending] = useState(false);
useEffect(
function focusOnMount() {
@@ -101,40 +100,34 @@ export function useChatInput({
}
}, [value, maxRows, inputId]);
async function handleSend() {
if (disabled || isSending || !value.trim()) return;
setIsSending(true);
try {
await onSend(value.trim());
setValue("");
setHasMultipleLines(false);
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
const wrapper = document.getElementById(
`${inputId}-wrapper`,
) as HTMLDivElement;
if (textarea) {
textarea.style.height = "auto";
}
if (wrapper) {
wrapper.style.height = "";
wrapper.style.maxHeight = "";
}
} finally {
setIsSending(false);
const handleSend = () => {
if (disabled || !value.trim()) return;
onSend(value.trim());
setValue("");
setHasMultipleLines(false);
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
const wrapper = document.getElementById(
`${inputId}-wrapper`,
) as HTMLDivElement;
if (textarea) {
textarea.style.height = "auto";
}
}
if (wrapper) {
wrapper.style.height = "";
wrapper.style.maxHeight = "";
}
};
function handleKeyDown(event: KeyboardEvent<HTMLTextAreaElement>) {
if (event.key === "Enter" && !event.shiftKey) {
event.preventDefault();
void handleSend();
handleSend();
}
}
function handleSubmit(e: FormEvent<HTMLFormElement>) {
e.preventDefault();
void handleSend();
handleSend();
}
function handleChange(e: ChangeEvent<HTMLTextAreaElement>) {
@@ -149,6 +142,5 @@ export function useChatInput({
handleSubmit,
handleChange,
hasMultipleLines,
isSending,
};
}

View File

@@ -214,33 +214,17 @@ export function useVoiceRecording({
const handleKeyDown = useCallback(
(event: KeyboardEvent<HTMLTextAreaElement>) => {
// Allow space to toggle recording (start when empty, stop when recording)
if (event.key === " " && !isTranscribing) {
if (isRecordingRef.current) {
// Stop recording on space
event.preventDefault();
stopRecording();
return;
} else if (!value.trim()) {
// Start recording on space when input is empty
event.preventDefault();
void startRecording();
return;
}
}
// Block all key events when recording (except space handled above)
if (isRecordingRef.current) {
if (event.key === " " && !value.trim() && !isTranscribing) {
event.preventDefault();
toggleRecording();
return;
}
baseHandleKeyDown(event);
},
[value, isTranscribing, stopRecording, startRecording, baseHandleKeyDown],
[value, isTranscribing, toggleRecording, baseHandleKeyDown],
);
const showMicButton = isSupported;
// Don't include isRecording in disabled state - we need key events to work
// Text input is blocked via handleKeyDown instead
const isInputDisabled = disabled || isStreaming || isTranscribing;
// Cleanup on unmount

View File

@@ -102,18 +102,6 @@ export function ChatMessage({
}
}
function handleClarificationAnswers(answers: Record<string, string>) {
if (onSendMessage) {
const contextMessage = Object.entries(answers)
.map(([keyword, answer]) => `${keyword}: ${answer}`)
.join("\n");
onSendMessage(
`I have the answers to your questions:\n\n${contextMessage}\n\nPlease proceed with creating the agent.`,
);
}
}
const handleCopy = useCallback(
async function handleCopy() {
if (message.type !== "message") return;
@@ -162,6 +150,22 @@ export function ChatMessage({
.slice(index + 1)
.some((m) => m.type === "message" && m.role === "user");
const handleClarificationAnswers = (answers: Record<string, string>) => {
if (onSendMessage) {
// Iterate over questions (preserves original order) instead of answers
const contextMessage = message.questions
.map((q) => {
const answer = answers[q.keyword] || "";
return `> ${q.question}\n\n${answer}`;
})
.join("\n\n");
onSendMessage(
`**Here are my answers:**\n\n${contextMessage}\n\nPlease proceed with creating the agent.`,
);
}
};
return (
<ClarificationQuestionsWidget
questions={message.questions}
@@ -346,6 +350,7 @@ export function ChatMessage({
toolId={message.toolId}
toolName={message.toolName}
result={message.result}
onSendMessage={onSendMessage}
/>
</div>
);

View File

@@ -3,7 +3,7 @@
import { getGetWorkspaceDownloadFileByIdUrl } from "@/app/api/__generated__/endpoints/workspace/workspace";
import { cn } from "@/lib/utils";
import { EyeSlash } from "@phosphor-icons/react";
import React from "react";
import React, { useState } from "react";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
@@ -48,7 +48,9 @@ interface InputProps extends React.InputHTMLAttributes<HTMLInputElement> {
*/
function resolveWorkspaceUrl(src: string): string {
if (src.startsWith("workspace://")) {
const fileId = src.replace("workspace://", "");
// Strip MIME type fragment if present (e.g., workspace://abc123#video/mp4 → abc123)
const withoutPrefix = src.replace("workspace://", "");
const fileId = withoutPrefix.split("#")[0];
// Use the generated API URL helper to get the correct path
const apiPath = getGetWorkspaceDownloadFileByIdUrl(fileId);
// Route through the Next.js proxy (same pattern as customMutator for client-side)
@@ -65,13 +67,49 @@ function isWorkspaceImage(src: string | undefined): boolean {
return src?.includes("/workspace/files/") ?? false;
}
/**
* Renders a workspace video with controls and an optional "AI cannot see" badge.
*/
function WorkspaceVideo({
src,
aiCannotSee,
}: {
src: string;
aiCannotSee: boolean;
}) {
return (
<span className="relative my-2 inline-block">
<video
controls
className="h-auto max-w-full rounded-md border border-zinc-200"
preload="metadata"
>
<source src={src} />
Your browser does not support the video tag.
</video>
{aiCannotSee && (
<span
className="absolute bottom-2 right-2 flex items-center gap-1 rounded bg-black/70 px-2 py-1 text-xs text-white"
title="The AI cannot see this video"
>
<EyeSlash size={14} />
<span>AI cannot see this video</span>
</span>
)}
</span>
);
}
/**
* Custom image component that shows an indicator when the AI cannot see the image.
* Also handles the "video:" alt-text prefix convention to render <video> elements.
* For workspace files with unknown types, falls back to <video> if <img> fails.
* Note: src is already transformed by urlTransform, so workspace:// is now /api/workspace/...
*/
function MarkdownImage(props: Record<string, unknown>) {
const src = props.src as string | undefined;
const alt = props.alt as string | undefined;
const [imgFailed, setImgFailed] = useState(false);
const aiCannotSee = isWorkspaceImage(src);
@@ -84,6 +122,18 @@ function MarkdownImage(props: Record<string, unknown>) {
);
}
// Detect video: prefix in alt text (set by formatOutputValue in helpers.ts)
if (alt?.startsWith("video:")) {
return <WorkspaceVideo src={src} aiCannotSee={aiCannotSee} />;
}
// If the <img> failed to load and this is a workspace file, try as video.
// This handles generic output keys like "file_out" where the MIME type
// isn't known from the key name alone.
if (imgFailed && aiCannotSee) {
return <WorkspaceVideo src={src} aiCannotSee={aiCannotSee} />;
}
return (
<span className="relative my-2 inline-block">
{/* eslint-disable-next-line @next/next/no-img-element */}
@@ -92,6 +142,9 @@ function MarkdownImage(props: Record<string, unknown>) {
alt={alt || "Image"}
className="h-auto max-w-full rounded-md border border-zinc-200"
loading="lazy"
onError={() => {
if (aiCannotSee) setImgFailed(true);
}}
/>
{aiCannotSee && (
<span

View File

@@ -73,6 +73,7 @@ export function MessageList({
key={index}
message={message}
prevMessage={messages[index - 1]}
onSendMessage={onSendMessage}
/>
);
}

View File

@@ -5,11 +5,13 @@ import { shouldSkipAgentOutput } from "../../helpers";
export interface LastToolResponseProps {
message: ChatMessageData;
prevMessage: ChatMessageData | undefined;
onSendMessage?: (content: string) => void;
}
export function LastToolResponse({
message,
prevMessage,
onSendMessage,
}: LastToolResponseProps) {
if (message.type !== "tool_response") return null;
@@ -21,6 +23,7 @@ export function LastToolResponse({
toolId={message.toolId}
toolName={message.toolName}
result={message.result}
onSendMessage={onSendMessage}
/>
</div>
);

View File

@@ -1,6 +1,8 @@
import { Progress } from "@/components/atoms/Progress/Progress";
import { cn } from "@/lib/utils";
import { useEffect, useRef, useState } from "react";
import { AIChatBubble } from "../AIChatBubble/AIChatBubble";
import { useAsymptoticProgress } from "../ToolCallMessage/useAsymptoticProgress";
export interface ThinkingMessageProps {
className?: string;
@@ -11,18 +13,19 @@ export function ThinkingMessage({ className }: ThinkingMessageProps) {
const [showCoffeeMessage, setShowCoffeeMessage] = useState(false);
const timerRef = useRef<NodeJS.Timeout | null>(null);
const coffeeTimerRef = useRef<NodeJS.Timeout | null>(null);
const progress = useAsymptoticProgress(showCoffeeMessage);
useEffect(() => {
if (timerRef.current === null) {
timerRef.current = setTimeout(() => {
setShowSlowLoader(true);
}, 8000);
}, 3000);
}
if (coffeeTimerRef.current === null) {
coffeeTimerRef.current = setTimeout(() => {
setShowCoffeeMessage(true);
}, 10000);
}, 8000);
}
return () => {
@@ -49,9 +52,18 @@ export function ThinkingMessage({ className }: ThinkingMessageProps) {
<AIChatBubble>
<div className="transition-all duration-500 ease-in-out">
{showCoffeeMessage ? (
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
This could take a few minutes, grab a coffee
</span>
<div className="flex flex-col items-center gap-3">
<div className="flex w-full max-w-[280px] flex-col gap-1.5">
<div className="flex items-center justify-between text-xs text-neutral-500">
<span>Working on it...</span>
<span>{Math.round(progress)}%</span>
</div>
<Progress value={progress} className="h-2 w-full" />
</div>
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
This could take a few minutes, grab a coffee
</span>
</div>
) : showSlowLoader ? (
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
Taking a bit more time...

View File

@@ -0,0 +1,50 @@
import { useEffect, useRef, useState } from "react";
/**
* Hook that returns a progress value that starts fast and slows down,
* asymptotically approaching but never reaching the max value.
*
* Uses a half-life formula: progress = max * (1 - 0.5^(time/halfLife))
* This creates the "game loading bar" effect where:
* - 50% is reached at halfLifeSeconds
* - 75% is reached at 2 * halfLifeSeconds
* - 87.5% is reached at 3 * halfLifeSeconds
* - and so on...
*
* @param isActive - Whether the progress should be animating
* @param halfLifeSeconds - Time in seconds to reach 50% progress (default: 30)
* @param maxProgress - Maximum progress value to approach (default: 100)
* @param intervalMs - Update interval in milliseconds (default: 100)
* @returns Current progress value (0-maxProgress)
*/
export function useAsymptoticProgress(
isActive: boolean,
halfLifeSeconds = 30,
maxProgress = 100,
intervalMs = 100,
) {
const [progress, setProgress] = useState(0);
const elapsedTimeRef = useRef(0);
useEffect(() => {
if (!isActive) {
setProgress(0);
elapsedTimeRef.current = 0;
return;
}
const interval = setInterval(() => {
elapsedTimeRef.current += intervalMs / 1000;
// Half-life approach: progress = max * (1 - 0.5^(time/halfLife))
// At t=halfLife: 50%, at t=2*halfLife: 75%, at t=3*halfLife: 87.5%, etc.
const newProgress =
maxProgress *
(1 - Math.pow(0.5, elapsedTimeRef.current / halfLifeSeconds));
setProgress(newProgress);
}, intervalMs);
return () => clearInterval(interval);
}, [isActive, halfLifeSeconds, maxProgress, intervalMs]);
return progress;
}
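
For orientation, a minimal usage sketch of the hook above (the component, its prop, and the import path are illustrative, not part of the diff):

// Hypothetical consumer: only useAsymptoticProgress is taken from this diff.
import { useAsymptoticProgress } from "./useAsymptoticProgress";

export function SlowTaskIndicator({ isWaiting }: { isWaiting: boolean }) {
  // With the default 30s half-life this reads ~50% at 30s, ~75% at 60s,
  // ~87.5% at 90s, approaching but never reaching 100%.
  const progress = useAsymptoticProgress(isWaiting);

  if (!isWaiting) return null;
  return <progress value={progress} max={100} />;
}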

View File

@@ -0,0 +1,128 @@
"use client";
import { useGetV2GetLibraryAgent } from "@/app/api/__generated__/endpoints/library/library";
import { GraphExecutionJobInfo } from "@/app/api/__generated__/models/graphExecutionJobInfo";
import { GraphExecutionMeta } from "@/app/api/__generated__/models/graphExecutionMeta";
import { RunAgentModal } from "@/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/RunAgentModal";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import {
CheckCircleIcon,
PencilLineIcon,
PlayIcon,
} from "@phosphor-icons/react";
import { AIChatBubble } from "../AIChatBubble/AIChatBubble";
interface Props {
agentName: string;
libraryAgentId: string;
onSendMessage?: (content: string) => void;
}
export function AgentCreatedPrompt({
agentName,
libraryAgentId,
onSendMessage,
}: Props) {
// Fetch library agent eagerly so modal is ready when user clicks
const { data: libraryAgentResponse, isLoading } = useGetV2GetLibraryAgent(
libraryAgentId,
{
query: {
enabled: !!libraryAgentId,
},
},
);
const libraryAgent =
libraryAgentResponse?.status === 200 ? libraryAgentResponse.data : null;
function handleRunWithPlaceholders() {
onSendMessage?.(
`Run the agent "${agentName}" with placeholder/example values so I can test it.`,
);
}
function handleRunCreated(execution: GraphExecutionMeta) {
onSendMessage?.(
`I've started the agent "${agentName}". The execution ID is ${execution.id}. Please monitor its progress and let me know when it completes.`,
);
}
function handleScheduleCreated(schedule: GraphExecutionJobInfo) {
const scheduleInfo = schedule.cron
? `with cron schedule "${schedule.cron}"`
: "to run on the specified schedule";
onSendMessage?.(
`I've scheduled the agent "${agentName}" ${scheduleInfo}. The schedule ID is ${schedule.id}.`,
);
}
return (
<AIChatBubble>
<div className="flex flex-col gap-4">
<div className="flex items-center gap-2">
<div className="flex h-8 w-8 items-center justify-center rounded-full bg-green-100">
<CheckCircleIcon
size={18}
weight="fill"
className="text-green-600"
/>
</div>
<div>
<Text variant="body-medium" className="text-neutral-900">
Agent Created Successfully
</Text>
<Text variant="small" className="text-neutral-500">
&quot;{agentName}&quot; is ready to test
</Text>
</div>
</div>
<div className="flex flex-col gap-2">
<Text variant="small-medium" className="text-neutral-700">
Ready to test?
</Text>
<div className="flex flex-wrap gap-2">
<Button
variant="outline"
size="small"
onClick={handleRunWithPlaceholders}
className="gap-2"
>
<PlayIcon size={16} />
Run with example values
</Button>
{libraryAgent ? (
<RunAgentModal
triggerSlot={
<Button variant="outline" size="small" className="gap-2">
<PencilLineIcon size={16} />
Run with my inputs
</Button>
}
agent={libraryAgent}
onRunCreated={handleRunCreated}
onScheduleCreated={handleScheduleCreated}
/>
) : (
<Button
variant="outline"
size="small"
loading={isLoading}
disabled
className="gap-2"
>
<PencilLineIcon size={16} />
Run with my inputs
</Button>
)}
</div>
<Text variant="small" className="text-neutral-500">
or just ask me
</Text>
</div>
</div>
</AIChatBubble>
);
}

View File

@@ -2,11 +2,13 @@ import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import type { ToolResult } from "@/types/chat";
import { WarningCircleIcon } from "@phosphor-icons/react";
import { AgentCreatedPrompt } from "./AgentCreatedPrompt";
import { AIChatBubble } from "../AIChatBubble/AIChatBubble";
import { MarkdownContent } from "../MarkdownContent/MarkdownContent";
import {
formatToolResponse,
getErrorMessage,
isAgentSavedResponse,
isErrorResponse,
} from "./helpers";
@@ -16,6 +18,7 @@ export interface ToolResponseMessageProps {
result?: ToolResult;
success?: boolean;
className?: string;
onSendMessage?: (content: string) => void;
}
export function ToolResponseMessage({
@@ -24,6 +27,7 @@ export function ToolResponseMessage({
result,
success: _success,
className,
onSendMessage,
}: ToolResponseMessageProps) {
if (isErrorResponse(result)) {
const errorMessage = getErrorMessage(result);
@@ -43,6 +47,18 @@ export function ToolResponseMessage({
);
}
// Check for agent_saved response - show special prompt
const agentSavedData = isAgentSavedResponse(result);
if (agentSavedData.isSaved) {
return (
<AgentCreatedPrompt
agentName={agentSavedData.agentName}
libraryAgentId={agentSavedData.libraryAgentId}
onSendMessage={onSendMessage}
/>
);
}
const formattedText = formatToolResponse(result, toolName);
return (

View File

@@ -6,6 +6,43 @@ function stripInternalReasoning(content: string): string {
.trim();
}
export interface AgentSavedData {
isSaved: boolean;
agentName: string;
agentId: string;
libraryAgentId: string;
libraryAgentLink: string;
}
export function isAgentSavedResponse(result: unknown): AgentSavedData {
if (typeof result !== "object" || result === null) {
return {
isSaved: false,
agentName: "",
agentId: "",
libraryAgentId: "",
libraryAgentLink: "",
};
}
const response = result as Record<string, unknown>;
if (response.type === "agent_saved") {
return {
isSaved: true,
agentName: (response.agent_name as string) || "Agent",
agentId: (response.agent_id as string) || "",
libraryAgentId: (response.library_agent_id as string) || "",
libraryAgentLink: (response.library_agent_link as string) || "",
};
}
return {
isSaved: false,
agentName: "",
agentId: "",
libraryAgentId: "",
libraryAgentLink: "",
};
}
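// Illustrative payload (field values made up) that isAgentSavedResponse treats as
// an "agent saved" result; ToolResponseMessage then renders AgentCreatedPrompt
// instead of the plain formatted tool response text:
//   {
//     type: "agent_saved",
//     agent_name: "My Agent",
//     agent_id: "graph-123",
//     library_agent_id: "lib-456",
//     library_agent_link: "/library/agents/lib-456",
//   }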
export function isErrorResponse(result: unknown): boolean {
if (typeof result === "string") {
const lower = result.toLowerCase();
@@ -39,69 +76,101 @@ export function getErrorMessage(result: unknown): string {
/**
* Check if a value is a workspace file reference.
* Format: workspace://{fileId} or workspace://{fileId}#{mimeType}
*/
function isWorkspaceRef(value: unknown): value is string {
return typeof value === "string" && value.startsWith("workspace://");
}
/**
* Check if a workspace reference appears to be an image based on common patterns.
* Since workspace refs don't have extensions, we check the context or assume image
* for certain block types.
*
* TODO: Replace keyword matching with MIME type encoded in workspace ref.
* e.g., workspace://abc123#image/png or workspace://abc123#video/mp4
* This would let frontend render correctly without fragile keyword matching.
* Extract MIME type from a workspace reference fragment.
* e.g., "workspace://abc123#video/mp4" → "video/mp4"
* Returns undefined if no fragment is present.
*/
function isLikelyImageRef(value: string, outputKey?: string): boolean {
if (!isWorkspaceRef(value)) return false;
// Check output key name for video-related hints (these are NOT images)
const videoKeywords = ["video", "mp4", "mov", "avi", "webm", "movie", "clip"];
if (outputKey) {
const lowerKey = outputKey.toLowerCase();
if (videoKeywords.some((kw) => lowerKey.includes(kw))) {
return false;
}
}
// Check output key name for image-related hints
const imageKeywords = [
"image",
"img",
"photo",
"picture",
"thumbnail",
"avatar",
"icon",
"screenshot",
];
if (outputKey) {
const lowerKey = outputKey.toLowerCase();
if (imageKeywords.some((kw) => lowerKey.includes(kw))) {
return true;
}
}
// Default to treating workspace refs as potential images
// since that's the most common case for generated content
return true;
function getWorkspaceMimeType(value: string): string | undefined {
const hashIndex = value.indexOf("#");
if (hashIndex === -1) return undefined;
return value.slice(hashIndex + 1) || undefined;
}
/**
* Format a single output value, converting workspace refs to markdown images.
* Determine the media category of a workspace ref or data URI.
* Uses the MIME type fragment on workspace refs when available,
* falls back to output key keyword matching for older refs without it.
*/
function formatOutputValue(value: unknown, outputKey?: string): string {
if (isWorkspaceRef(value) && isLikelyImageRef(value, outputKey)) {
// Format as markdown image
return `![${outputKey || "Generated image"}](${value})`;
function getMediaCategory(
value: string,
outputKey?: string,
): "video" | "image" | "audio" | "unknown" {
// Data URIs carry their own MIME type
if (value.startsWith("data:video/")) return "video";
if (value.startsWith("data:image/")) return "image";
if (value.startsWith("data:audio/")) return "audio";
// Workspace refs: prefer MIME type fragment
if (isWorkspaceRef(value)) {
const mime = getWorkspaceMimeType(value);
if (mime) {
if (mime.startsWith("video/")) return "video";
if (mime.startsWith("image/")) return "image";
if (mime.startsWith("audio/")) return "audio";
return "unknown";
}
// Fallback: keyword matching on output key for older refs without fragment
if (outputKey) {
const lowerKey = outputKey.toLowerCase();
const videoKeywords = [
"video",
"mp4",
"mov",
"avi",
"webm",
"movie",
"clip",
];
if (videoKeywords.some((kw) => lowerKey.includes(kw))) return "video";
const imageKeywords = [
"image",
"img",
"photo",
"picture",
"thumbnail",
"avatar",
"icon",
"screenshot",
];
if (imageKeywords.some((kw) => lowerKey.includes(kw))) return "image";
}
// Default to image for backward compatibility
return "image";
}
return "unknown";
}
/**
* Format a single output value, converting workspace refs to markdown images/videos.
* Videos use a "video:" alt-text prefix so the MarkdownContent renderer can
* distinguish them from images and render a <video> element.
*/
function formatOutputValue(value: unknown, outputKey?: string): string {
if (typeof value === "string") {
// Check for data URIs (images)
if (value.startsWith("data:image/")) {
const category = getMediaCategory(value, outputKey);
if (category === "video") {
// Format with "video:" prefix so MarkdownContent renders <video>
return `![video:${outputKey || "Video"}](${value})`;
}
if (category === "image") {
return `![${outputKey || "Generated image"}](${value})`;
}
// For audio, unknown workspace refs, data URIs, etc. - return as-is
return value;
}
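
For orientation, a rough sketch of the mapping formatOutputValue is expected to produce (file IDs and output keys here are illustrative):

// With a MIME fragment, the fragment decides the rendering:
//   formatOutputValue("workspace://abc123#video/mp4", "file_out")
//     → "![video:file_out](workspace://abc123#video/mp4)"
//   formatOutputValue("workspace://abc123#image/png", "file_out")
//     → "![file_out](workspace://abc123#image/png)"
// Without a fragment, the output key name is the fallback hint, defaulting to image:
//   formatOutputValue("workspace://abc123", "clip_video")
//     → "![video:clip_video](workspace://abc123)"
//   formatOutputValue("workspace://abc123", "summary")
//     → "![summary](workspace://abc123)"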

View File

@@ -26,6 +26,7 @@ export const providerIcons: Partial<
nvidia: fallbackIcon,
discord: FaDiscord,
d_id: fallbackIcon,
elevenlabs: fallbackIcon,
google_maps: FaGoogle,
jina: fallbackIcon,
ideogram: fallbackIcon,

View File

@@ -47,7 +47,7 @@ export function Navbar() {
const actualLoggedInLinks = [
{ name: "Home", href: homeHref },
...(isChatEnabled === true ? [{ name: "Tasks", href: "/library" }] : []),
...(isChatEnabled === true ? [{ name: "Agents", href: "/library" }] : []),
...loggedInLinks,
];
@@ -62,7 +62,7 @@ export function Navbar() {
<PreviewBanner branchName={previewBranchName} />
) : null}
<nav
className="inline-flex w-full items-center border border-none bg-[#FAFAFA] p-3 backdrop-blur-[26px]"
className="border-zinc-[#EFEFF0] inline-flex w-full items-center border border-[#EFEFF0] bg-[#F3F4F6]/20 p-3 backdrop-blur-[26px]"
style={{ height: NAVBAR_HEIGHT_PX }}
>
{/* Left section */}

View File

@@ -1,83 +0,0 @@
import { Slot } from "@radix-ui/react-slot";
import { cva, type VariantProps } from "class-variance-authority";
import { cn } from "@/lib/utils";
import { Separator } from "@/components/ui/separator";
const buttonGroupVariants = cva(
"flex w-fit items-stretch has-[>[data-slot=button-group]]:gap-2 [&>*]:focus-visible:relative [&>*]:focus-visible:z-10 has-[select[aria-hidden=true]:last-child]:[&>[data-slot=select-trigger]:last-of-type]:rounded-r-md [&>[data-slot=select-trigger]:not([class*='w-'])]:w-fit [&>input]:flex-1",
{
variants: {
orientation: {
horizontal:
"[&>*:not(:first-child)]:rounded-l-none [&>*:not(:first-child)]:border-l-0 [&>*:not(:last-child)]:rounded-r-none",
vertical:
"flex-col [&>*:not(:first-child)]:rounded-t-none [&>*:not(:first-child)]:border-t-0 [&>*:not(:last-child)]:rounded-b-none",
},
},
defaultVariants: {
orientation: "horizontal",
},
},
);
function ButtonGroup({
className,
orientation,
...props
}: React.ComponentProps<"div"> & VariantProps<typeof buttonGroupVariants>) {
return (
<div
role="group"
data-slot="button-group"
data-orientation={orientation}
className={cn(buttonGroupVariants({ orientation }), className)}
{...props}
/>
);
}
function ButtonGroupText({
className,
asChild = false,
...props
}: React.ComponentProps<"div"> & {
asChild?: boolean;
}) {
const Comp = asChild ? Slot : "div";
return (
<Comp
className={cn(
"shadow-xs flex items-center gap-2 rounded-md border border-neutral-200 bg-neutral-100 px-4 text-sm font-medium dark:border-neutral-800 dark:bg-neutral-800 [&_svg:not([class*='size-'])]:size-4 [&_svg]:pointer-events-none",
className,
)}
{...props}
/>
);
}
function ButtonGroupSeparator({
className,
orientation = "vertical",
...props
}: React.ComponentProps<typeof Separator>) {
return (
<Separator
data-slot="button-group-separator"
orientation={orientation}
className={cn(
"relative !m-0 self-stretch bg-neutral-200 data-[orientation=vertical]:h-auto dark:bg-neutral-800",
className,
)}
{...props}
/>
);
}
export {
ButtonGroup,
ButtonGroupSeparator,
ButtonGroupText,
buttonGroupVariants,
};

View File

@@ -1,59 +0,0 @@
import * as React from "react";
import { Slot } from "@radix-ui/react-slot";
import { cva, type VariantProps } from "class-variance-authority";
import { cn } from "@/lib/utils";
const buttonVariants = cva(
"inline-flex items-center justify-center gap-2 whitespace-nowrap rounded-md text-sm font-medium transition-colors focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-neutral-950 disabled:pointer-events-none disabled:opacity-50 [&_svg]:pointer-events-none [&_svg]:size-4 [&_svg]:shrink-0 dark:focus-visible:ring-neutral-300",
{
variants: {
variant: {
default:
"bg-neutral-900 text-neutral-50 shadow hover:bg-neutral-900/90 dark:bg-neutral-50 dark:text-neutral-900 dark:hover:bg-neutral-50/90",
destructive:
"bg-red-500 text-neutral-50 shadow-sm hover:bg-red-500/90 dark:bg-red-900 dark:text-neutral-50 dark:hover:bg-red-900/90",
outline:
"border border-neutral-200 bg-white shadow-sm hover:bg-neutral-100 hover:text-neutral-900 dark:border-neutral-800 dark:bg-neutral-950 dark:hover:bg-neutral-800 dark:hover:text-neutral-50",
secondary:
"bg-neutral-100 text-neutral-900 shadow-sm hover:bg-neutral-100/80 dark:bg-neutral-800 dark:text-neutral-50 dark:hover:bg-neutral-800/80",
ghost:
"hover:bg-neutral-100 hover:text-neutral-900 dark:hover:bg-neutral-800 dark:hover:text-neutral-50",
link: "text-neutral-900 underline-offset-4 hover:underline dark:text-neutral-50",
},
size: {
default: "h-9 px-4 py-2",
sm: "h-8 rounded-md px-3 text-xs",
lg: "h-10 rounded-md px-8",
icon: "h-9 w-9",
"icon-sm": "h-8 w-8",
},
},
defaultVariants: {
variant: "default",
size: "default",
},
},
);
export interface ButtonProps
extends React.ButtonHTMLAttributes<HTMLButtonElement>,
VariantProps<typeof buttonVariants> {
asChild?: boolean;
}
const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
({ className, variant, size, asChild = false, ...props }, ref) => {
const Comp = asChild ? Slot : "button";
return (
<Comp
className={cn(buttonVariants({ variant, size, className }))}
ref={ref}
{...props}
/>
);
},
);
Button.displayName = "Button";
export { Button, buttonVariants };

View File

@@ -1,22 +0,0 @@
import * as React from "react";
import { cn } from "@/lib/utils";
const Input = React.forwardRef<HTMLInputElement, React.ComponentProps<"input">>(
({ className, type, ...props }, ref) => {
return (
<input
type={type}
className={cn(
"flex h-9 w-full rounded-md border border-neutral-200 bg-transparent px-3 py-1 text-base shadow-sm transition-colors file:border-0 file:bg-transparent file:text-sm file:font-medium file:text-neutral-950 placeholder:text-neutral-500 focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-neutral-950 disabled:cursor-not-allowed disabled:opacity-50 dark:border-neutral-800 dark:file:text-neutral-50 dark:placeholder:text-neutral-400 dark:focus-visible:ring-neutral-300 md:text-sm",
className,
)}
ref={ref}
{...props}
/>
);
},
);
Input.displayName = "Input";
export { Input };

View File

@@ -1,31 +0,0 @@
"use client";
import * as React from "react";
import * as SeparatorPrimitive from "@radix-ui/react-separator";
import { cn } from "@/lib/utils";
const Separator = React.forwardRef<
React.ElementRef<typeof SeparatorPrimitive.Root>,
React.ComponentPropsWithoutRef<typeof SeparatorPrimitive.Root>
>(
(
{ className, orientation = "horizontal", decorative = true, ...props },
ref,
) => (
<SeparatorPrimitive.Root
ref={ref}
decorative={decorative}
orientation={orientation}
className={cn(
"shrink-0 bg-neutral-200 dark:bg-neutral-800",
orientation === "horizontal" ? "h-[1px] w-full" : "h-full w-[1px]",
className,
)}
{...props}
/>
),
);
Separator.displayName = SeparatorPrimitive.Root.displayName;
export { Separator };

View File

@@ -1,143 +0,0 @@
"use client";
import * as React from "react";
import * as SheetPrimitive from "@radix-ui/react-dialog";
import { cva, type VariantProps } from "class-variance-authority";
import { X } from "lucide-react";
import { cn } from "@/lib/utils";
const Sheet = SheetPrimitive.Root;
const SheetTrigger = SheetPrimitive.Trigger;
const SheetClose = SheetPrimitive.Close;
const SheetPortal = SheetPrimitive.Portal;
const SheetOverlay = React.forwardRef<
React.ElementRef<typeof SheetPrimitive.Overlay>,
React.ComponentPropsWithoutRef<typeof SheetPrimitive.Overlay>
>(({ className, ...props }, ref) => (
<SheetPrimitive.Overlay
className={cn(
"fixed inset-0 z-50 bg-black/80 data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0",
className,
)}
{...props}
ref={ref}
/>
));
SheetOverlay.displayName = SheetPrimitive.Overlay.displayName;
const sheetVariants = cva(
"fixed z-50 gap-4 bg-white p-6 shadow-lg transition ease-in-out data-[state=closed]:duration-300 data-[state=open]:duration-500 data-[state=open]:animate-in data-[state=closed]:animate-out dark:bg-neutral-950",
{
variants: {
side: {
top: "inset-x-0 top-0 border-b data-[state=closed]:slide-out-to-top data-[state=open]:slide-in-from-top",
bottom:
"inset-x-0 bottom-0 border-t data-[state=closed]:slide-out-to-bottom data-[state=open]:slide-in-from-bottom",
left: "inset-y-0 left-0 h-full w-3/4 border-r data-[state=closed]:slide-out-to-left data-[state=open]:slide-in-from-left sm:max-w-sm",
right:
"inset-y-0 right-0 h-full w-3/4 border-l data-[state=closed]:slide-out-to-right data-[state=open]:slide-in-from-right sm:max-w-sm",
},
},
defaultVariants: {
side: "right",
},
},
);
interface SheetContentProps
extends React.ComponentPropsWithoutRef<typeof SheetPrimitive.Content>,
VariantProps<typeof sheetVariants> {}
const SheetContent = React.forwardRef<
React.ElementRef<typeof SheetPrimitive.Content>,
SheetContentProps
>(({ side = "right", className, children, ...props }, ref) => (
<SheetPortal>
<SheetOverlay />
<SheetPrimitive.Content
ref={ref}
className={cn(sheetVariants({ side }), className)}
{...props}
>
<SheetPrimitive.Close className="absolute right-4 top-4 rounded-sm opacity-70 ring-offset-white transition-opacity data-[state=open]:bg-neutral-100 hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-neutral-950 focus:ring-offset-2 disabled:pointer-events-none dark:ring-offset-neutral-950 dark:data-[state=open]:bg-neutral-800 dark:focus:ring-neutral-300">
<X className="h-4 w-4" />
<span className="sr-only">Close</span>
</SheetPrimitive.Close>
{children}
</SheetPrimitive.Content>
</SheetPortal>
));
SheetContent.displayName = SheetPrimitive.Content.displayName;
const SheetHeader = ({
className,
...props
}: React.HTMLAttributes<HTMLDivElement>) => (
<div
className={cn(
"flex flex-col space-y-2 text-center sm:text-left",
className,
)}
{...props}
/>
);
SheetHeader.displayName = "SheetHeader";
const SheetFooter = ({
className,
...props
}: React.HTMLAttributes<HTMLDivElement>) => (
<div
className={cn(
"flex flex-col-reverse sm:flex-row sm:justify-end sm:space-x-2",
className,
)}
{...props}
/>
);
SheetFooter.displayName = "SheetFooter";
const SheetTitle = React.forwardRef<
React.ElementRef<typeof SheetPrimitive.Title>,
React.ComponentPropsWithoutRef<typeof SheetPrimitive.Title>
>(({ className, ...props }, ref) => (
<SheetPrimitive.Title
ref={ref}
className={cn(
"text-lg font-semibold text-neutral-950 dark:text-neutral-50",
className,
)}
{...props}
/>
));
SheetTitle.displayName = SheetPrimitive.Title.displayName;
const SheetDescription = React.forwardRef<
React.ElementRef<typeof SheetPrimitive.Description>,
React.ComponentPropsWithoutRef<typeof SheetPrimitive.Description>
>(({ className, ...props }, ref) => (
<SheetPrimitive.Description
ref={ref}
className={cn("text-sm text-neutral-500 dark:text-neutral-400", className)}
{...props}
/>
));
SheetDescription.displayName = SheetPrimitive.Description.displayName;
export {
Sheet,
SheetPortal,
SheetOverlay,
SheetTrigger,
SheetClose,
SheetContent,
SheetHeader,
SheetFooter,
SheetTitle,
SheetDescription,
};

View File

@@ -1,778 +0,0 @@
"use client";
import { Slot } from "@radix-ui/react-slot";
import { cva, type VariantProps } from "class-variance-authority";
import * as React from "react";
import { Button } from "@/components/ui/button";
import { Input } from "@/components/ui/input";
import { Separator } from "@/components/ui/separator";
import {
Sheet,
SheetContent,
SheetDescription,
SheetHeader,
SheetTitle,
} from "@/components/ui/sheet";
import { Skeleton } from "@/components/ui/skeleton";
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/ui/tooltip";
import { useIsMobile } from "@/hooks/use-mobile";
import { cn } from "@/lib/utils";
import { SidebarSimpleIcon } from "@phosphor-icons/react";
const SIDEBAR_COOKIE_NAME = "sidebar_state";
const SIDEBAR_COOKIE_MAX_AGE = 60 * 60 * 24 * 7;
const SIDEBAR_WIDTH = "20rem";
const SIDEBAR_WIDTH_MOBILE = "20rem";
const SIDEBAR_WIDTH_ICON = "3rem";
const SIDEBAR_KEYBOARD_SHORTCUT = "b";
type SidebarContextProps = {
state: "expanded" | "collapsed";
open: boolean;
setOpen: (open: boolean) => void;
openMobile: boolean;
setOpenMobile: (open: boolean) => void;
isMobile: boolean;
toggleSidebar: () => void;
};
const SidebarContext = React.createContext<SidebarContextProps | null>(null);
function useSidebar() {
const context = React.useContext(SidebarContext);
if (!context) {
throw new Error("useSidebar must be used within a SidebarProvider.");
}
return context;
}
const SidebarProvider = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"div"> & {
defaultOpen?: boolean;
open?: boolean;
onOpenChange?: (open: boolean) => void;
}
>(
(
{
defaultOpen = true,
open: openProp,
onOpenChange: setOpenProp,
className,
style,
children,
...props
},
ref,
) => {
const isMobile = useIsMobile();
const [openMobile, setOpenMobile] = React.useState(false);
// This is the internal state of the sidebar.
// We use openProp and setOpenProp for control from outside the component.
const [_open, _setOpen] = React.useState(defaultOpen);
const open = openProp ?? _open;
const setOpen = React.useCallback(
(value: boolean | ((value: boolean) => boolean)) => {
const openState = typeof value === "function" ? value(open) : value;
if (setOpenProp) {
setOpenProp(openState);
} else {
_setOpen(openState);
}
// This sets the cookie to keep the sidebar state.
document.cookie = `${SIDEBAR_COOKIE_NAME}=${openState}; path=/; max-age=${SIDEBAR_COOKIE_MAX_AGE}`;
},
[setOpenProp, open],
);
// Helper to toggle the sidebar.
const toggleSidebar = React.useCallback(() => {
return isMobile
? setOpenMobile((open) => !open)
: setOpen((open) => !open);
}, [isMobile, setOpen, setOpenMobile]);
// Adds a keyboard shortcut to toggle the sidebar.
React.useEffect(() => {
const handleKeyDown = (event: KeyboardEvent) => {
if (
event.key === SIDEBAR_KEYBOARD_SHORTCUT &&
(event.metaKey || event.ctrlKey)
) {
event.preventDefault();
toggleSidebar();
}
};
window.addEventListener("keydown", handleKeyDown);
return () => window.removeEventListener("keydown", handleKeyDown);
}, [toggleSidebar]);
// We add a state so that we can do data-state="expanded" or "collapsed".
// This makes it easier to style the sidebar with Tailwind classes.
const state = open ? "expanded" : "collapsed";
const contextValue = React.useMemo<SidebarContextProps>(
() => ({
state,
open,
setOpen,
isMobile,
openMobile,
setOpenMobile,
toggleSidebar,
}),
[
state,
open,
setOpen,
isMobile,
openMobile,
setOpenMobile,
toggleSidebar,
],
);
return (
<SidebarContext.Provider value={contextValue}>
<TooltipProvider delayDuration={0}>
<div
style={
{
"--sidebar-width": SIDEBAR_WIDTH,
"--sidebar-width-icon": SIDEBAR_WIDTH_ICON,
...style,
} as React.CSSProperties
}
className={cn(
"group/sidebar-wrapper flex min-h-svh w-full has-[[data-variant=inset]]:bg-sidebar",
className,
)}
ref={ref}
{...props}
>
{children}
</div>
</TooltipProvider>
</SidebarContext.Provider>
);
},
);
SidebarProvider.displayName = "SidebarProvider";
const Sidebar = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"div"> & {
side?: "left" | "right";
variant?: "sidebar" | "floating" | "inset";
collapsible?: "offcanvas" | "icon" | "none";
}
>(
(
{
side = "left",
variant = "sidebar",
collapsible = "offcanvas",
className,
children,
...props
},
ref,
) => {
const { isMobile, state, openMobile, setOpenMobile } = useSidebar();
if (collapsible === "none") {
return (
<div
className={cn(
"flex h-full w-[--sidebar-width] flex-col bg-sidebar text-sidebar-foreground",
className,
)}
ref={ref}
{...props}
>
{children}
</div>
);
}
if (isMobile) {
return (
<Sheet open={openMobile} onOpenChange={setOpenMobile} {...props}>
<SheetContent
data-sidebar="sidebar"
data-mobile="true"
className="w-[--sidebar-width] bg-sidebar p-0 text-sidebar-foreground [&>button]:hidden"
style={
{
"--sidebar-width": SIDEBAR_WIDTH_MOBILE,
} as React.CSSProperties
}
side={side}
>
<SheetHeader className="sr-only">
<SheetTitle>Sidebar</SheetTitle>
<SheetDescription>Displays the mobile sidebar.</SheetDescription>
</SheetHeader>
<div className="flex h-full w-full flex-col">{children}</div>
</SheetContent>
</Sheet>
);
}
return (
<div
ref={ref}
className="group peer hidden text-sidebar-foreground md:block"
data-state={state}
data-collapsible={state === "collapsed" ? collapsible : ""}
data-variant={variant}
data-side={side}
>
{/* This is what handles the sidebar gap on desktop */}
<div
className={cn(
"relative w-[--sidebar-width] bg-transparent transition-[width] duration-200 ease-linear",
"group-data-[collapsible=offcanvas]:w-0",
"group-data-[side=right]:rotate-180",
variant === "floating" || variant === "inset"
? "group-data-[collapsible=icon]:w-[calc(var(--sidebar-width-icon)_+_theme(spacing.4))]"
: "group-data-[collapsible=icon]:w-[--sidebar-width-icon]",
)}
/>
<div
className={cn(
"fixed inset-y-0 z-10 hidden h-svh w-[--sidebar-width] transition-[left,right,width] duration-200 ease-linear md:flex",
side === "left"
? "left-0 group-data-[collapsible=offcanvas]:left-[calc(var(--sidebar-width)*-1)]"
: "right-0 group-data-[collapsible=offcanvas]:right-[calc(var(--sidebar-width)*-1)]",
// Adjust the padding for floating and inset variants.
variant === "floating" || variant === "inset"
? "p-2 group-data-[collapsible=icon]:w-[calc(var(--sidebar-width-icon)_+_theme(spacing.4)_+2px)]"
: "group-data-[collapsible=icon]:w-[--sidebar-width-icon] group-data-[side=left]:border-r group-data-[side=right]:border-l",
className,
)}
{...props}
>
<div
data-sidebar="sidebar"
className="flex h-full w-full flex-col bg-sidebar group-data-[variant=floating]:rounded-lg group-data-[variant=floating]:border group-data-[variant=floating]:border-sidebar-border group-data-[variant=floating]:shadow"
>
{children}
</div>
</div>
</div>
);
},
);
Sidebar.displayName = "Sidebar";
const SidebarTrigger = React.forwardRef<
React.ElementRef<typeof Button>,
React.ComponentProps<typeof Button>
>(({ onClick }, ref) => {
const { toggleSidebar } = useSidebar();
return (
<Button
ref={ref}
data-sidebar="trigger"
variant="ghost"
onClick={(event) => {
onClick?.(event);
toggleSidebar();
}}
>
<SidebarSimpleIcon className="!size-5" />
<span className="sr-only">Toggle Sidebar</span>
</Button>
);
});
SidebarTrigger.displayName = "SidebarTrigger";
const SidebarRail = React.forwardRef<
HTMLButtonElement,
React.ComponentProps<"button">
>(({ className, ...props }, ref) => {
const { toggleSidebar } = useSidebar();
return (
<button
ref={ref}
data-sidebar="rail"
aria-label="Toggle Sidebar"
tabIndex={-1}
onClick={toggleSidebar}
title="Toggle Sidebar"
className={cn(
"absolute inset-y-0 z-20 hidden w-4 -translate-x-1/2 transition-all ease-linear after:absolute after:inset-y-0 after:left-1/2 after:w-[2px] group-data-[side=left]:-right-4 group-data-[side=right]:left-0 hover:after:bg-sidebar-border sm:flex",
"[[data-side=left]_&]:cursor-w-resize [[data-side=right]_&]:cursor-e-resize",
"[[data-side=left][data-state=collapsed]_&]:cursor-e-resize [[data-side=right][data-state=collapsed]_&]:cursor-w-resize",
"group-data-[collapsible=offcanvas]:translate-x-0 group-data-[collapsible=offcanvas]:after:left-full group-data-[collapsible=offcanvas]:hover:bg-sidebar",
"[[data-side=left][data-collapsible=offcanvas]_&]:-right-2",
"[[data-side=right][data-collapsible=offcanvas]_&]:-left-2",
className,
)}
{...props}
/>
);
});
SidebarRail.displayName = "SidebarRail";
const SidebarInset = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"main">
>(({ className, ...props }, ref) => {
return (
<main
ref={ref}
className={cn(
"relative flex w-full flex-1 flex-col bg-white dark:bg-neutral-950",
"md:peer-data-[variant=inset]:m-2 md:peer-data-[state=collapsed]:peer-data-[variant=inset]:ml-2 md:peer-data-[variant=inset]:ml-0 md:peer-data-[variant=inset]:rounded-xl md:peer-data-[variant=inset]:shadow",
className,
)}
{...props}
/>
);
});
SidebarInset.displayName = "SidebarInset";
const SidebarInput = React.forwardRef<
React.ElementRef<typeof Input>,
React.ComponentProps<typeof Input>
>(({ className, ...props }, ref) => {
return (
<Input
ref={ref}
data-sidebar="input"
className={cn(
"h-8 w-full bg-white shadow-none focus-visible:ring-2 focus-visible:ring-sidebar-ring dark:bg-neutral-950",
className,
)}
{...props}
/>
);
});
SidebarInput.displayName = "SidebarInput";
const SidebarHeader = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"div">
>(({ className, ...props }, ref) => {
return (
<div
ref={ref}
data-sidebar="header"
className={cn("flex flex-col gap-2 p-2", className)}
{...props}
/>
);
});
SidebarHeader.displayName = "SidebarHeader";
const SidebarFooter = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"div">
>(({ className, ...props }, ref) => {
return (
<div
ref={ref}
data-sidebar="footer"
className={cn("flex flex-col gap-2 p-2", className)}
{...props}
/>
);
});
SidebarFooter.displayName = "SidebarFooter";
const SidebarSeparator = React.forwardRef<
React.ElementRef<typeof Separator>,
React.ComponentProps<typeof Separator>
>(({ className, ...props }, ref) => {
return (
<Separator
ref={ref}
data-sidebar="separator"
className={cn("mx-2 w-auto bg-sidebar-border", className)}
{...props}
/>
);
});
SidebarSeparator.displayName = "SidebarSeparator";
const SidebarContent = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"div">
>(({ className, ...props }, ref) => {
return (
<div
ref={ref}
data-sidebar="content"
className={cn(
"flex min-h-0 flex-1 flex-col gap-2 overflow-auto group-data-[collapsible=icon]:overflow-hidden",
className,
)}
{...props}
/>
);
});
SidebarContent.displayName = "SidebarContent";
const SidebarGroup = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"div">
>(({ className, ...props }, ref) => {
return (
<div
ref={ref}
data-sidebar="group"
className={cn("relative flex w-full min-w-0 flex-col p-2", className)}
{...props}
/>
);
});
SidebarGroup.displayName = "SidebarGroup";
const SidebarGroupLabel = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"div"> & { asChild?: boolean }
>(({ className, asChild = false, ...props }, ref) => {
const Comp = asChild ? Slot : "div";
return (
<Comp
ref={ref}
data-sidebar="group-label"
className={cn(
"flex h-8 shrink-0 items-center rounded-md px-2 text-xs font-medium text-sidebar-foreground/70 outline-none ring-sidebar-ring transition-[margin,opacity] duration-200 ease-linear focus-visible:ring-2 [&>svg]:size-4 [&>svg]:shrink-0",
"group-data-[collapsible=icon]:-mt-8 group-data-[collapsible=icon]:opacity-0",
className,
)}
{...props}
/>
);
});
SidebarGroupLabel.displayName = "SidebarGroupLabel";
const SidebarGroupAction = React.forwardRef<
HTMLButtonElement,
React.ComponentProps<"button"> & { asChild?: boolean }
>(({ className, asChild = false, ...props }, ref) => {
const Comp = asChild ? Slot : "button";
return (
<Comp
ref={ref}
data-sidebar="group-action"
className={cn(
"absolute right-3 top-3.5 flex aspect-square w-5 items-center justify-center rounded-md p-0 text-sidebar-foreground outline-none ring-sidebar-ring transition-transform hover:bg-sidebar-accent hover:text-sidebar-accent-foreground focus-visible:ring-2 [&>svg]:size-4 [&>svg]:shrink-0",
// Increases the hit area of the button on mobile.
"after:absolute after:-inset-2 after:md:hidden",
"group-data-[collapsible=icon]:hidden",
className,
)}
{...props}
/>
);
});
SidebarGroupAction.displayName = "SidebarGroupAction";
const SidebarGroupContent = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"div">
>(({ className, ...props }, ref) => (
<div
ref={ref}
data-sidebar="group-content"
className={cn("w-full text-sm", className)}
{...props}
/>
));
SidebarGroupContent.displayName = "SidebarGroupContent";
const SidebarMenu = React.forwardRef<
HTMLUListElement,
React.ComponentProps<"ul">
>(({ className, ...props }, ref) => (
<ul
ref={ref}
data-sidebar="menu"
className={cn("flex w-full min-w-0 flex-col gap-1", className)}
{...props}
/>
));
SidebarMenu.displayName = "SidebarMenu";
const SidebarMenuItem = React.forwardRef<
HTMLLIElement,
React.ComponentProps<"li">
>(({ className, ...props }, ref) => (
<li
ref={ref}
data-sidebar="menu-item"
className={cn("group/menu-item relative", className)}
{...props}
/>
));
SidebarMenuItem.displayName = "SidebarMenuItem";
const sidebarMenuButtonVariants = cva(
"peer/menu-button flex w-full items-center gap-2 overflow-hidden rounded-md p-2 text-left text-sm outline-none ring-sidebar-ring transition-[width,height,padding] hover:bg-sidebar-accent hover:text-sidebar-accent-foreground focus-visible:ring-2 active:bg-sidebar-accent active:text-sidebar-accent-foreground disabled:pointer-events-none disabled:opacity-50 group-has-[[data-sidebar=menu-action]]/menu-item:pr-8 aria-disabled:pointer-events-none aria-disabled:opacity-50 data-[active=true]:bg-sidebar-accent data-[active=true]:font-medium data-[active=true]:text-sidebar-accent-foreground data-[state=open]:hover:bg-sidebar-accent data-[state=open]:hover:text-sidebar-accent-foreground group-data-[collapsible=icon]:!size-8 group-data-[collapsible=icon]:!p-2 [&>span:last-child]:truncate [&>svg]:size-4 [&>svg]:shrink-0",
{
variants: {
variant: {
default: "hover:bg-sidebar-accent hover:text-sidebar-accent-foreground",
outline:
"bg-white shadow-[0_0_0_1px_hsl(var(--sidebar-border))] hover:bg-sidebar-accent hover:text-sidebar-accent-foreground hover:shadow-[0_0_0_1px_hsl(var(--sidebar-accent))] dark:bg-neutral-950",
},
size: {
default: "h-8 text-sm",
sm: "h-7 text-xs",
lg: "h-12 text-sm group-data-[collapsible=icon]:!p-0",
},
},
defaultVariants: {
variant: "default",
size: "default",
},
},
);
const SidebarMenuButton = React.forwardRef<
HTMLButtonElement,
React.ComponentProps<"button"> & {
asChild?: boolean;
isActive?: boolean;
tooltip?: string | React.ComponentProps<typeof TooltipContent>;
} & VariantProps<typeof sidebarMenuButtonVariants>
>(
(
{
asChild = false,
isActive = false,
variant = "default",
size = "default",
tooltip,
className,
...props
},
ref,
) => {
const Comp = asChild ? Slot : "button";
const { isMobile, state } = useSidebar();
const button = (
<Comp
ref={ref}
data-sidebar="menu-button"
data-size={size}
data-active={isActive}
className={cn(sidebarMenuButtonVariants({ variant, size }), className)}
{...props}
/>
);
if (!tooltip) {
return button;
}
if (typeof tooltip === "string") {
tooltip = {
children: tooltip,
};
}
return (
<Tooltip>
<TooltipTrigger asChild>{button}</TooltipTrigger>
<TooltipContent
side="right"
align="center"
hidden={state !== "collapsed" || isMobile}
{...tooltip}
/>
</Tooltip>
);
},
);
SidebarMenuButton.displayName = "SidebarMenuButton";
const SidebarMenuAction = React.forwardRef<
HTMLButtonElement,
React.ComponentProps<"button"> & {
asChild?: boolean;
showOnHover?: boolean;
}
>(({ className, asChild = false, showOnHover = false, ...props }, ref) => {
const Comp = asChild ? Slot : "button";
return (
<Comp
ref={ref}
data-sidebar="menu-action"
className={cn(
"absolute right-1 top-1.5 flex aspect-square w-5 items-center justify-center rounded-md p-0 text-sidebar-foreground outline-none ring-sidebar-ring transition-transform peer-hover/menu-button:text-sidebar-accent-foreground hover:bg-sidebar-accent hover:text-sidebar-accent-foreground focus-visible:ring-2 [&>svg]:size-4 [&>svg]:shrink-0",
// Increases the hit area of the button on mobile.
"after:absolute after:-inset-2 after:md:hidden",
"peer-data-[size=sm]/menu-button:top-1",
"peer-data-[size=default]/menu-button:top-1.5",
"peer-data-[size=lg]/menu-button:top-2.5",
"group-data-[collapsible=icon]:hidden",
showOnHover &&
"group-focus-within/menu-item:opacity-100 group-hover/menu-item:opacity-100 data-[state=open]:opacity-100 peer-data-[active=true]/menu-button:text-sidebar-accent-foreground md:opacity-0",
className,
)}
{...props}
/>
);
});
SidebarMenuAction.displayName = "SidebarMenuAction";
const SidebarMenuBadge = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"div">
>(({ className, ...props }, ref) => (
<div
ref={ref}
data-sidebar="menu-badge"
className={cn(
"pointer-events-none absolute right-1 flex h-5 min-w-5 select-none items-center justify-center rounded-md px-1 text-xs font-medium tabular-nums text-sidebar-foreground",
"peer-hover/menu-button:text-sidebar-accent-foreground peer-data-[active=true]/menu-button:text-sidebar-accent-foreground",
"peer-data-[size=sm]/menu-button:top-1",
"peer-data-[size=default]/menu-button:top-1.5",
"peer-data-[size=lg]/menu-button:top-2.5",
"group-data-[collapsible=icon]:hidden",
className,
)}
{...props}
/>
));
SidebarMenuBadge.displayName = "SidebarMenuBadge";
const SidebarMenuSkeleton = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"div"> & {
showIcon?: boolean;
}
>(({ className, showIcon = false, ...props }, ref) => {
// Random width between 50 to 90%.
const width = React.useMemo(() => {
return `${Math.floor(Math.random() * 40) + 50}%`;
}, []);
return (
<div
ref={ref}
data-sidebar="menu-skeleton"
className={cn("flex h-8 items-center gap-2 rounded-md px-2", className)}
{...props}
>
{showIcon && (
<Skeleton
className="size-4 rounded-md"
data-sidebar="menu-skeleton-icon"
/>
)}
<Skeleton
className="h-4 max-w-[--skeleton-width] flex-1"
data-sidebar="menu-skeleton-text"
style={
{
"--skeleton-width": width,
} as React.CSSProperties
}
/>
</div>
);
});
SidebarMenuSkeleton.displayName = "SidebarMenuSkeleton";
const SidebarMenuSub = React.forwardRef<
HTMLUListElement,
React.ComponentProps<"ul">
>(({ className, ...props }, ref) => (
<ul
ref={ref}
data-sidebar="menu-sub"
className={cn(
"mx-3.5 flex min-w-0 translate-x-px flex-col gap-1 border-l border-sidebar-border px-2.5 py-0.5",
"group-data-[collapsible=icon]:hidden",
className,
)}
{...props}
/>
));
SidebarMenuSub.displayName = "SidebarMenuSub";
const SidebarMenuSubItem = React.forwardRef<
HTMLLIElement,
React.ComponentProps<"li">
>(({ ...props }, ref) => <li ref={ref} {...props} />);
SidebarMenuSubItem.displayName = "SidebarMenuSubItem";
const SidebarMenuSubButton = React.forwardRef<
HTMLAnchorElement,
React.ComponentProps<"a"> & {
asChild?: boolean;
size?: "sm" | "md";
isActive?: boolean;
}
>(({ asChild = false, size = "md", isActive, className, ...props }, ref) => {
const Comp = asChild ? Slot : "a";
return (
<Comp
ref={ref}
data-sidebar="menu-sub-button"
data-size={size}
data-active={isActive}
className={cn(
"flex h-7 min-w-0 -translate-x-px items-center gap-2 overflow-hidden rounded-md px-2 text-sidebar-foreground outline-none ring-sidebar-ring aria-disabled:pointer-events-none aria-disabled:opacity-50 hover:bg-sidebar-accent hover:text-sidebar-accent-foreground focus-visible:ring-2 active:bg-sidebar-accent active:text-sidebar-accent-foreground disabled:pointer-events-none disabled:opacity-50 [&>span:last-child]:truncate [&>svg]:size-4 [&>svg]:shrink-0 [&>svg]:text-sidebar-accent-foreground",
"data-[active=true]:bg-sidebar-accent data-[active=true]:text-sidebar-accent-foreground",
size === "sm" && "text-xs",
size === "md" && "text-sm",
"group-data-[collapsible=icon]:hidden",
className,
)}
{...props}
/>
);
});
SidebarMenuSubButton.displayName = "SidebarMenuSubButton";
export {
Sidebar,
SidebarContent,
SidebarFooter,
SidebarGroup,
SidebarGroupAction,
SidebarGroupContent,
SidebarGroupLabel,
SidebarHeader,
SidebarInput,
SidebarInset,
SidebarMenu,
SidebarMenuAction,
SidebarMenuBadge,
SidebarMenuButton,
SidebarMenuItem,
SidebarMenuSkeleton,
SidebarMenuSub,
SidebarMenuSubButton,
SidebarMenuSubItem,
SidebarProvider,
SidebarRail,
SidebarSeparator,
SidebarTrigger,
useSidebar,
};

View File

@@ -1,18 +0,0 @@
import { cn } from "@/lib/utils";
function Skeleton({
className,
...props
}: React.HTMLAttributes<HTMLDivElement>) {
return (
<div
className={cn(
"animate-pulse rounded-md bg-neutral-900/10 dark:bg-neutral-50/10",
className,
)}
{...props}
/>
);
}
export { Skeleton };

View File

@@ -1,32 +0,0 @@
"use client";
import * as React from "react";
import * as TooltipPrimitive from "@radix-ui/react-tooltip";
import { cn } from "@/lib/utils";
const TooltipProvider = TooltipPrimitive.Provider;
const Tooltip = TooltipPrimitive.Root;
const TooltipTrigger = TooltipPrimitive.Trigger;
const TooltipContent = React.forwardRef<
React.ElementRef<typeof TooltipPrimitive.Content>,
React.ComponentPropsWithoutRef<typeof TooltipPrimitive.Content>
>(({ className, sideOffset = 4, ...props }, ref) => (
<TooltipPrimitive.Portal>
<TooltipPrimitive.Content
ref={ref}
sideOffset={sideOffset}
className={cn(
"z-50 origin-[--radix-tooltip-content-transform-origin] overflow-hidden rounded-md bg-neutral-900 px-3 py-1.5 text-xs text-neutral-50 animate-in fade-in-0 zoom-in-95 data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=closed]:zoom-out-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2 dark:bg-neutral-50 dark:text-neutral-900",
className,
)}
{...props}
/>
</TooltipPrimitive.Portal>
));
TooltipContent.displayName = TooltipPrimitive.Content.displayName;
export { Tooltip, TooltipTrigger, TooltipContent, TooltipProvider };

View File

@@ -1,21 +0,0 @@
import * as React from "react";
const MOBILE_BREAKPOINT = 768;
export function useIsMobile() {
const [isMobile, setIsMobile] = React.useState<boolean | undefined>(
undefined,
);
React.useEffect(() => {
const mql = window.matchMedia(`(max-width: ${MOBILE_BREAKPOINT - 1}px)`);
const onChange = () => {
setIsMobile(window.innerWidth < MOBILE_BREAKPOINT);
};
mql.addEventListener("change", onChange);
setIsMobile(window.innerWidth < MOBILE_BREAKPOINT);
return () => mql.removeEventListener("change", onChange);
}, []);
return !!isMobile;
}

View File

@@ -33,7 +33,7 @@ const defaultFlags = {
[Flag.AGENT_FAVORITING]: false,
[Flag.MARKETPLACE_SEARCH_TERMS]: DEFAULT_SEARCH_TERMS,
[Flag.ENABLE_PLATFORM_PAYMENT]: false,
[Flag.CHAT]: true,
[Flag.CHAT]: false,
};
type FlagValues = typeof defaultFlags;

View File

@@ -65,16 +65,6 @@ const config = {
"600": "#282828",
"700": "#272727",
},
sidebar: {
DEFAULT: "hsl(var(--sidebar-background))",
foreground: "hsl(var(--sidebar-foreground))",
primary: "hsl(var(--sidebar-primary))",
"primary-foreground": "hsl(var(--sidebar-primary-foreground))",
accent: "hsl(var(--sidebar-accent))",
"accent-foreground": "hsl(var(--sidebar-accent-foreground))",
border: "hsl(var(--sidebar-border))",
ring: "hsl(var(--sidebar-ring))",
},
},
spacing: {
"0": "0rem",

View File

@@ -192,6 +192,7 @@ Below is a comprehensive list of all available blocks, categorized by their prim
| [Get Current Time](block-integrations/text.md#get-current-time) | This block outputs the current time |
| [Match Text Pattern](block-integrations/text.md#match-text-pattern) | Matches text against a regex pattern and forwards data to positive or negative output based on the match |
| [Text Decoder](block-integrations/text.md#text-decoder) | Decodes a string containing escape sequences into actual text |
| [Text Encoder](block-integrations/text.md#text-encoder) | Encodes a string by converting special characters into escape sequences |
| [Text Replace](block-integrations/text.md#text-replace) | This block is used to replace a text with a new text |
| [Text Split](block-integrations/text.md#text-split) | This block is used to split a text into a list of strings |
| [Word Character Count](block-integrations/text.md#word-character-count) | Counts the number of words and characters in a given text |
@@ -232,6 +233,7 @@ Below is a comprehensive list of all available blocks, categorized by their prim
| [Stagehand Extract](block-integrations/stagehand/blocks.md#stagehand-extract) | Extract structured data from a webpage |
| [Stagehand Observe](block-integrations/stagehand/blocks.md#stagehand-observe) | Find suggested actions for your workflows |
| [Unreal Text To Speech](block-integrations/llm.md#unreal-text-to-speech) | Converts text to speech using the Unreal Speech API |
| [Video Narration](block-integrations/video/narration.md#video-narration) | Generate AI narration and add to video |
## Search and Information Retrieval
@@ -471,9 +473,13 @@ Below is a comprehensive list of all available blocks, categorized by their prim
| Block Name | Description |
|------------|-------------|
| [Add Audio To Video](block-integrations/multimedia.md#add-audio-to-video) | Block to attach an audio file to a video file using moviepy |
| [Loop Video](block-integrations/multimedia.md#loop-video) | Block to loop a video to a given duration or number of repeats |
| [Media Duration](block-integrations/multimedia.md#media-duration) | Block to get the duration of a media file |
| [Add Audio To Video](block-integrations/video/add_audio.md#add-audio-to-video) | Block to attach an audio file to a video file using moviepy |
| [Loop Video](block-integrations/video/loop.md#loop-video) | Block to loop a video to a given duration or number of repeats |
| [Media Duration](block-integrations/video/duration.md#media-duration) | Block to get the duration of a media file |
| [Video Clip](block-integrations/video/clip.md#video-clip) | Extract a time segment from a video |
| [Video Concat](block-integrations/video/concat.md#video-concat) | Merge multiple video clips into one continuous video |
| [Video Download](block-integrations/video/download.md#video-download) | Download video from URL (YouTube, Vimeo, news sites, direct links) |
| [Video Text Overlay](block-integrations/video/text_overlay.md#video-text-overlay) | Add text overlay/caption to video |
## Productivity

Some files were not shown because too many files have changed in this diff.