Compare commits

55 Commits

Author SHA1 Message Date
Lluis Agusti
e862a1241a chore: docker 2 2026-02-10 00:00:43 +08:00
Lluis Agusti
67d51d80c6 chore: docker 2026-02-09 23:32:13 +08:00
Lluis Agusti
d380c7f9d2 Merge remote-tracking branch 'origin/dev' into abhi/check-ai-sdk-ui 2026-02-09 23:04:51 +08:00
Lluis Agusti
0c82af2a3d chore: more fixes 2026-02-09 22:47:25 +08:00
Reinier van der Leer
6467f6734f debug(backend/chat): Add timing logging to chat stream generation mechanism (#12019)
[SECRT-1912: Investigate & eliminate chat session start
latency](https://linear.app/autogpt/issue/SECRT-1912)

### Changes 🏗️

- Add timing logs to `backend.api.features.chat` in `routes.py`,
`service.py`, and `stream_registry.py`
- Remove unneeded DB join in `create_chat_session`
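
A minimal, self-contained sketch of the timing-log pattern this PR adds (the `[TIMING]` prefix and `json_fields` structure follow the diffs further down in this compare; the measured step and logger name here are illustrative, not the actual implementation):

```python
import asyncio
import logging
import time

logger = logging.getLogger("backend.api.features.chat")


async def timed_step(session_id: str) -> None:
    # Measure one step and emit a structured "[TIMING]" log line,
    # mirroring the pattern added in service.py and stream_registry.py.
    start = time.monotonic()
    await asyncio.sleep(0.05)  # stand-in for the real work (e.g. a Redis/DB fetch)
    duration_ms = (time.monotonic() - start) * 1000
    logger.info(
        f"[TIMING] step took {duration_ms:.1f}ms",
        extra={
            "json_fields": {
                "component": "ChatService",
                "session_id": session_id,
                "duration_ms": duration_ms,
            }
        },
    )


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    asyncio.run(timed_step("session-123"))
```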

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI checks
2026-02-09 14:05:29 +00:00
Otto
5a30d11416 refactor(copilot): Code cleanup and deduplication (#11950)
## Summary

Code cleanup of the AI Copilot codebase - rebased onto latest dev.

## Changes

### New Files
- `backend/util/validation.py` - UUID validation helpers
- `backend/api/features/chat/tools/helpers.py` - Shared tool utilities

### Credential Matching Consolidation  
- Added shared utilities to `utils.py`
- Refactored `run_block._check_block_credentials()` with discriminator
support
- Extracted `_resolve_discriminated_credentials()` for multi-provider
handling

### Routes Cleanup
- Extracted `_create_stream_generator()` and `SSE_RESPONSE_HEADERS`

### Tool Files Cleanup
- Updated `run_agent.py` and `run_block.py` to use shared helpers

**WIP** - This PR will be updated incrementally.
2026-02-09 13:43:55 +00:00
Lluis Agusti
8fe88046fd Merge remote-tracking branch 'origin/dev' into abhi/check-ai-sdk-ui 2026-02-09 21:39:04 +08:00
Lluis Agusti
3121911bd1 chore: re-arrange 2026-02-09 21:37:25 +08:00
Lluis Agusti
76bad41ca6 chore: further changes 2026-02-09 21:24:16 +08:00
Lluis Agusti
20d680d8ee chore: further refinements 2026-02-09 21:19:37 +08:00
Lluis Agusti
fba027b7a4 feat: refine copilot tools styles 2026-02-09 18:44:25 +08:00
Bently
1f4105e8f9 fix(frontend): Handle object values in FileInput component (#11948)
Fixes
[#11800](https://github.com/Significant-Gravitas/AutoGPT/issues/11800)

## Problem
The FileInput component crashed with `TypeError: e.startsWith is not a
function` when the value was an object (from external API) instead of a
string.

## Example Input Object
When using the external API
(`/external-api/v1/graphs/{id}/execute/{version}`), file inputs can be
passed as objects:

```json
{
  "node_input": {
    "input_image": {
      "name": "image.jpeg",
      "type": "image/jpeg",
      "size": 131147,
      "data": "/9j/4QAW..."
    }
  }
}
```

## Changes
- Updated `getFileLabelFromValue()` to handle object format: `{ name,
type, size, data }`
- Added type guards for string vs object values
- Graceful fallback for edge cases (null, undefined, empty object)

## Test cases verified
- Object with name: returns filename
- Object with type only: extracts and formats MIME type
- String data URI: parses correctly
- String file path: extracts extension
- Edge cases: returns "File" fallback
2026-02-09 10:25:08 +00:00
Bently
caf9ff34e6 fix(backend): Handle stale RabbitMQ channels on connection drop (#11929)
### Changes 🏗️

Fixes
[**AUTOGPT-SERVER-1TN**](https://autoagpt.sentry.io/issues/?query=AUTOGPT-SERVER-1TN)
(~39K events since Feb 2025) and related connection issues
**6JC/6JD/6JE/6JF** (~6K combined).

#### Problem

When the RabbitMQ TCP connection drops (network blip, server restart,
etc.):

1. `connect_robust` (aio_pika) automatically reconnects the underlying
AMQP connection
2. But `AsyncRabbitMQ._channel` still references the **old dead
channel**
3. `is_ready` checks `not self._channel.is_closed` — but the channel
object doesn't know the transport is gone
4. `publish_message` tries to use the stale channel →
`ChannelInvalidStateError: No active transport in channel`
5. `@func_retry` retries 5 times, but each retry hits the same stale
channel (it passes `is_ready`)

This means every connection drop generates errors until the process is
restarted.

#### Fix

**New `_ensure_channel()` helper** that resets stale channels before
reconnecting, so `connect()` creates a fresh one instead of
short-circuiting on `is_connected`.

**Explicit `ChannelInvalidStateError` handling in `publish_message`:**
1. First attempt uses `_ensure_channel()` (handles normal staleness)
2. If publish throws `ChannelInvalidStateError`, does a full reconnect
(resets both `_channel` and `_connection`) and retries once
3. `@func_retry` provides additional retry resilience on top

**Simplified `get_channel()`** to use the same resilient helper.

**1 file changed, 62 insertions, 24 deletions.**
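
A minimal, self-contained sketch of the recovery pattern described above, assuming aio_pika; the class and method names follow the PR description, while the surrounding details (constructor, default exchange, retry decorator) are illustrative rather than the actual implementation:

```python
import aio_pika
from aio_pika.abc import AbstractChannel, AbstractRobustConnection
from aiormq.exceptions import ChannelInvalidStateError


class AsyncRabbitMQ:
    def __init__(self, url: str) -> None:
        self._url = url
        self._connection: AbstractRobustConnection | None = None
        self._channel: AbstractChannel | None = None

    async def connect(self) -> None:
        # connect_robust reconnects the AMQP connection automatically,
        # but the channel object must be recreated by us.
        if self._connection is None or self._connection.is_closed:
            self._connection = await aio_pika.connect_robust(self._url)
        if self._channel is None or self._channel.is_closed:
            self._channel = await self._connection.channel()

    async def _ensure_channel(self) -> AbstractChannel:
        # Drop a stale channel so connect() creates a fresh one instead of
        # short-circuiting on an "open" connection with a dead channel.
        if self._channel is not None and self._channel.is_closed:
            self._channel = None
        await self.connect()
        assert self._channel is not None
        return self._channel

    async def publish_message(self, routing_key: str, body: bytes) -> None:
        channel = await self._ensure_channel()
        try:
            await channel.default_exchange.publish(
                aio_pika.Message(body=body), routing_key=routing_key
            )
        except ChannelInvalidStateError:
            # Transport died underneath the channel: full reset, then retry once.
            self._channel = None
            self._connection = None
            channel = await self._ensure_channel()
            await channel.default_exchange.publish(
                aio_pika.Message(body=body), routing_key=routing_key
            )
```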

#### Impact
- Eliminates ~39K `ChannelInvalidStateError` Sentry events
- RabbitMQ operations self-heal after connection drops without process
restart
- Related transport EOF errors (6JC/6JD/6JE/6JF) should also reduce
2026-02-09 10:24:08 +00:00
Lluis Agusti
a350d6d1b7 Merge remote-tracking branch 'origin/dev' into abhi/check-ai-sdk-ui 2026-02-09 16:37:36 +08:00
abhi1992002
ee6eef66cf feat(frontend): Refactor RunBlock components for improved output handling
- Removed the `getBlockLabel` function from `helpers.tsx` to streamline code.
- Introduced `getAccordionMeta` function in `helpers.tsx` to enhance output metadata retrieval for RunBlock.
- Updated `RunBlock.tsx` to utilize the new `getAccordionMeta` function, improving the display logic for different output types.
- Added new components (`BlockOutputCard`, `SetupRequirementsCard`, `ErrorCard`) to encapsulate output rendering, enhancing code organization and readability.

These changes improve the clarity and maintainability of the RunBlock component, providing a better user experience through more structured output handling.
2026-02-09 11:44:31 +05:30
abhi1992002
1181dd9fae feat(frontend): Enhance RunAgent and ViewAgentOutput components
- Added `getAccordionMeta` function to `RunAgent/helpers.tsx` for improved output handling.
- Refactored `RunAgent.tsx` to utilize the new `getAccordionMeta` function, streamlining the component's logic.
- Introduced media rendering capabilities in `ViewAgentOutput.tsx` for workspace references, including support for images, audio, and video.
- Enhanced output display logic to handle various data types more effectively.

These changes improve the user experience by providing clearer status updates and better media handling in the application.
2026-02-09 10:54:11 +05:30
abhi1992002
c474c51cd0 refactor(frontend): Update styles and components in HorizontalScroll and ChatSidebar
- Changed gradient background colors in HorizontalScroll to use 'background' instead of 'white'.
- Replaced SpinnerGapIcon with LoadingSpinner in ChatSidebar for improved loading indication.
- Introduced BlockCard component in FindBlocks for better block representation.
- Integrated HorizontalScroll in FindBlocksTool to enhance block navigation.

These changes improve UI consistency and enhance user experience in the application.
2026-02-09 09:34:22 +05:30
abhi1992002
5df3422f8f fix backend format 2026-02-09 08:49:29 +05:30
Abhimanyu Yadav
8546bc9a3d Merge branch 'dev' into abhi/check-ai-sdk-ui 2026-02-09 08:48:01 +05:30
Abhimanyu Yadav
20ed8749d6 Merge branch 'dev' into abhi/check-ai-sdk-ui 2026-02-07 08:58:07 +05:30
abhi1992002
1c9680b6f2 feat(chat): implement session stream resumption endpoint
- Refactored the existing GET endpoint to allow resuming an active chat session stream without requiring a new message.
- Updated the backend logic to check for an active task and return the appropriate SSE stream or a 204 No Content response if no task is running.
- Modified the frontend to support the new resume functionality, enhancing user experience by allowing seamless continuation of chat sessions.
- Updated OpenAPI documentation to reflect changes in endpoint behavior and parameters.
2026-02-06 13:07:32 +05:30
abhi1992002
251d26a643 feat(chat): introduce step lifecycle events for LLM API calls
- Added `StreamStartStep` and `StreamFinishStep` classes to manage the lifecycle of individual LLM API calls within a message.
- Updated `stream_chat_completion` to yield step events, enhancing the ability to visually separate multiple LLM calls.
- Refactored the handling of start and finish events to accommodate the new step lifecycle, improving state management during streaming.
- Adjusted the `stream_registry` to recognize and process the new step events.
2026-02-06 11:50:20 +05:30
abhi1992002
090c576b3e fix lint on backend 2026-02-06 10:17:22 +05:30
abhi1992002
4b036bfe22 feat(copilot): add loading state to chat components
- Introduced `isLoadingSession` prop to manage loading states in `ChatContainer` and `ChatMessagesContainer`.
- Updated `useCopilotPage` to handle session loading state and improve user experience during session creation.
- Refactored session management logic to streamline message hydration and session handling.
- Enhanced UI feedback with loading indicators when messages are being fetched or sessions are being created.
2026-02-06 09:48:35 +05:30
Lluis Agusti
62edd73020 chore: further fixes 2026-02-06 00:23:43 +08:00
Lluis Agusti
5a878e0af0 chore: update styles + add mobile drawer 2026-02-06 00:07:08 +08:00
Lluis Agusti
321733360f chore: refactor hook 2026-02-05 22:43:28 +08:00
Lluis Agusti
1f2fc1ba6f Merge remote-tracking branch 'origin/dev' into abhi/check-ai-sdk-ui 2026-02-05 22:38:07 +08:00
Lluis Agusti
3805995b09 Merge remote-tracking branch 'origin/dev' into abhi/check-ai-sdk-ui 2026-02-05 18:44:10 +08:00
abhi1992002
e317a9c18a feat(chat): Add tool response schema endpoint for OpenAPI code generation
- Introduced a new endpoint `/api/chat/schema/tool-responses` to expose tool response models for frontend code generation.
- Defined a `ToolResponseUnion` type that aggregates various response models, enhancing type safety and clarity in API responses.
- Updated OpenAPI schema to include detailed descriptions and response structures for the new endpoint.
- Added `AgentDetailsResponse` and other related schemas to improve agent information handling.
2026-02-05 16:10:09 +05:30
abhi1992002
b45e1bc79c feat(chat): Add SSE format conversion method to StreamStart response model
- Implemented `to_sse` method in `StreamStart` class to convert response data into SSE format, excluding non-protocol fields.
- Removed redundant inputId declaration in ChatInput component for cleaner code.
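
A self-contained sketch of that `to_sse` behaviour (it mirrors the `response_model.py` diff further down; the base model here is simplified and the field set is illustrative):

```python
import json
from enum import Enum

from pydantic import BaseModel, Field


class ResponseType(str, Enum):
    START = "start"


class StreamStart(BaseModel):
    type: ResponseType = ResponseType.START
    messageId: str
    # Internal-only field: used for SSE reconnection, excluded from the wire format.
    taskId: str | None = Field(default=None)

    def to_sse(self) -> str:
        # Emit only the AI SDK protocol fields; taskId stays backend-internal.
        data = {"type": self.type.value, "messageId": self.messageId}
        return f"data: {json.dumps(data)}\n\n"


print(StreamStart(messageId="msg-1", taskId="task-1").to_sse())
# data: {"type": "start", "messageId": "msg-1"}
```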
2026-02-04 16:35:33 +05:30
abhi1992002
6fce1f6084 Enhance chat session management in copilot-2 by implementing session creation and hydration logic. Refactor ChatContainer and EmptySession components to streamline user interactions and improve UI responsiveness. Update ChatInput to handle message sending with loading states, ensuring a smoother user experience. 2026-02-04 11:44:04 +05:30
Abhimanyu Yadav
df21b96fed Merge branch 'dev' into abhi/check-ai-sdk-ui 2026-02-04 09:35:46 +05:30
abhi1992002
2502fd6391 Refactor tools in copilot-2 to utilize generated response types for improved type safety and clarity. Updated FindBlocks, FindAgents, CreateAgent, EditAgent, and RunAgent tools to leverage new API response models, enhancing maintainability and reducing redundancy in output handling. 2026-02-04 09:30:30 +05:30
abhi1992002
640b894405 Integrate CopilotChatActionsProvider into ChatContainer and enhance RunAgent and RunBlock tools with ChatCredentialsSetup for improved credential management and user interaction. 2026-02-03 14:38:19 +05:30
abhi1992002
ea9f289647 Update ToolAccordion and MessageContent components for improved layout and responsiveness 2026-02-03 14:17:24 +05:30
abhi1992002
d3018cc8ea Add RunBlock, RunAgent, and ViewAgentOutput tools to ChatMessagesContainer for expanded functionality 2026-02-03 13:57:30 +05:30
abhi1992002
b06868f453 Refactor FindAgents and SearchDocs tools to use ToolAccordion for improved UI/UX
- Replaced custom expandable sections with ToolAccordion component in both FindAgents and SearchDocs tools.
- Simplified state management by removing unnecessary useState and useReducedMotion hooks.
- Enhanced accessibility and readability of agent and document search results with clearer descriptions and structured layouts.
2026-02-03 13:37:31 +05:30
abhi1992002
7772c71a15 add SearchDocsTool integration in ChatMessagesContainer for enhanced document search functionality 2026-02-03 13:19:35 +05:30
abhi1992002
8c381faa06 add find agent tool in copilot-2 2026-02-03 13:04:04 +05:30
abhi1992002
d2a1abe3f8 basic animation 2026-02-03 12:18:28 +05:30
abhi1992002
15464786c3 fix lint 2026-02-03 12:10:31 +05:30
abhi1992002
1b0e1f6e72 Update ChatSidebar component to enhance spinner icon styling 2026-02-03 12:09:31 +05:30
abhi1992002
6730293036 another ui/ux polishing in chat sidebar 2026-02-03 12:07:12 +05:30
abhi1992002
432bda5c70 add finishing touch in sidebar 2026-02-03 11:53:43 +05:30
abhi1992002
e434b59003 basic sidebar 2026-02-03 11:23:09 +05:30
abhi1992002
31ec5f5c17 Add Chat input 2026-02-03 10:52:22 +05:30
abhi1992002
6e0fbdea3c refactor(components): enhance FindBlocksTool and MorphingTextAnimation
- Updated the `FindBlocksTool` to utilize the new `MorphingTextAnimation` for improved visual feedback.
- Refactored `MorphingTextAnimation` to accept a `text` prop, simplifying its usage and enhancing flexibility.
- Improved the rendering logic in `ChatMessagesContainer` to ensure proper key assignment for dynamic elements.

These changes aim to enhance the user experience by providing better visual transitions and cleaner component interactions.
2026-02-02 12:23:43 +05:30
abhi1992002
b5d6853223 refactor(chat): enhance chat components and improve message handling
- Simplified the `handleMessageSubmit` function in the chat page for better readability.
- Refactored the `ChatMessagesContainer` to improve message rendering logic, including the addition of the `FindBlocksTool` for tool call outputs.
- Updated the `ChatSidebar` component for better organization and clarity in props definition.
- Introduced a new `MorphingTextAnimation` component to enhance visual feedback during message transitions.
- Removed the obsolete `chat-store.ts` file to streamline the codebase.

These changes aim to improve the overall functionality and user experience of the chat interface.
2026-02-02 12:23:30 +05:30
abhi1992002
afb74a8ff1 fix session changing issue 2026-02-02 09:47:36 +05:30
abhi1992002
4c9957dc26 arranging messages code 2026-02-02 09:37:11 +05:30
abhi1992002
26add35418 feat(frontend): update dependencies and enhance chat page functionality
- Added new dependencies for Streamdown components to improve rendering capabilities.
- Updated the chat page layout to utilize new conversation components, enhancing user experience.
- Refactored message handling to streamline input submission and improve message rendering logic.

These changes aim to enhance the overall functionality and usability of the chat interface.
2026-01-30 16:04:56 +05:30
abhi1992002
c6e5f83de8 feat(chat): update chat page layout and enhance message handling
- Refactored the chat page to utilize a new `ChatSidebar` component for better organization and user experience.
- Improved message handling by simplifying session creation logic and ensuring proper state management.
- Updated UI elements for consistency, including button labels and input handling.
- Enhanced message rendering to support tool call outputs, improving the chat interaction flow.

These changes aim to streamline the chat interface and improve overall usability.
2026-01-30 15:02:33 +05:30
abhi1992002
c3a126e705 feat(chat): implement message ID reuse for tool call continuations
- Added `_continuation_message_id` parameter to `stream_chat_completion` to allow reuse of message IDs for tool call follow-ups.
- Modified message yielding logic to prevent duplicate messages when reusing IDs.
- Ensured that the message start is only yielded for the initial call, improving message handling during continuations.

This change enhances the chat completion flow by maintaining message integrity and reducing redundancy in message handling.
2026-01-30 14:57:19 +05:30
abhi1992002
73d8323fe4 basic message handling 2026-01-29 18:11:42 +05:30
158 changed files with 11893 additions and 8945 deletions

View File

@@ -45,10 +45,7 @@ async def create_chat_session(
successfulAgentRuns=SafeJson({}),
successfulAgentSchedules=SafeJson({}),
)
return await PrismaChatSession.prisma().create(
data=data,
include={"Messages": True},
)
return await PrismaChatSession.prisma().create(data=data)
async def update_chat_session(

View File

@@ -18,6 +18,10 @@ class ResponseType(str, Enum):
START = "start"
FINISH = "finish"
# Step lifecycle (one LLM API call within a message)
START_STEP = "start-step"
FINISH_STEP = "finish-step"
# Text streaming
TEXT_START = "text-start"
TEXT_DELTA = "text-delta"
@@ -57,6 +61,16 @@ class StreamStart(StreamBaseResponse):
description="Task ID for SSE reconnection. Clients can reconnect using GET /tasks/{taskId}/stream",
)
def to_sse(self) -> str:
"""Convert to SSE format, excluding non-protocol fields like taskId."""
import json
data: dict[str, Any] = {
"type": self.type.value,
"messageId": self.messageId,
}
return f"data: {json.dumps(data)}\n\n"
class StreamFinish(StreamBaseResponse):
"""End of message/stream."""
@@ -64,6 +78,26 @@ class StreamFinish(StreamBaseResponse):
type: ResponseType = ResponseType.FINISH
class StreamStartStep(StreamBaseResponse):
"""Start of a step (one LLM API call within a message).
The AI SDK uses this to add a step-start boundary to message.parts,
enabling visual separation between multiple LLM calls in a single message.
"""
type: ResponseType = ResponseType.START_STEP
class StreamFinishStep(StreamBaseResponse):
"""End of a step (one LLM API call within a message).
The AI SDK uses this to reset activeTextParts and activeReasoningParts,
so the next LLM call in a tool-call continuation starts with clean state.
"""
type: ResponseType = ResponseType.FINISH_STEP
# ========== Text Streaming ==========
@@ -117,7 +151,7 @@ class StreamToolOutputAvailable(StreamBaseResponse):
type: ResponseType = ResponseType.TOOL_OUTPUT_AVAILABLE
toolCallId: str = Field(..., description="Tool call ID this responds to")
output: str | dict[str, Any] = Field(..., description="Tool execution output")
# Additional fields for internal use (not part of AI SDK spec but useful)
# Keep these for internal backend use
toolName: str | None = Field(
default=None, description="Name of the tool that was executed"
)
@@ -125,6 +159,17 @@ class StreamToolOutputAvailable(StreamBaseResponse):
default=True, description="Whether the tool execution succeeded"
)
def to_sse(self) -> str:
"""Convert to SSE format, excluding non-spec fields."""
import json
data = {
"type": self.type.value,
"toolCallId": self.toolCallId,
"output": self.output,
}
return f"data: {json.dumps(data)}\n\n"
# ========== Other ==========

View File

@@ -6,7 +6,7 @@ from collections.abc import AsyncGenerator
from typing import Annotated
from autogpt_libs import auth
from fastapi import APIRouter, Depends, Header, HTTPException, Query, Security
from fastapi import APIRouter, Depends, Header, HTTPException, Query, Response, Security
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
@@ -17,7 +17,29 @@ from . import stream_registry
from .completion_handler import process_operation_failure, process_operation_success
from .config import ChatConfig
from .model import ChatSession, create_chat_session, get_chat_session, get_user_sessions
from .response_model import StreamFinish, StreamHeartbeat, StreamStart
from .response_model import StreamFinish, StreamHeartbeat
from .tools.models import (
AgentDetailsResponse,
AgentOutputResponse,
AgentPreviewResponse,
AgentSavedResponse,
AgentsFoundResponse,
BlockListResponse,
BlockOutputResponse,
ClarificationNeededResponse,
DocPageResponse,
DocSearchResultsResponse,
ErrorResponse,
ExecutionStartedResponse,
InputValidationErrorResponse,
NeedLoginResponse,
NoResultsResponse,
OperationInProgressResponse,
OperationPendingResponse,
OperationStartedResponse,
SetupRequirementsResponse,
UnderstandingUpdatedResponse,
)
config = ChatConfig()
@@ -284,10 +306,6 @@ async def stream_chat_post(
# Background task that runs the AI generation independently of SSE connection
async def run_ai_generation():
try:
# Emit a start event with task_id for reconnection
start_chunk = StreamStart(messageId=task_id, taskId=task_id)
await stream_registry.publish_chunk(task_id, start_chunk)
async for chunk in chat_service.stream_chat_completion(
session_id,
request.message,
@@ -295,6 +313,7 @@ async def stream_chat_post(
user_id=user_id,
session=session, # Pass pre-fetched session to avoid double-fetch
context=request.context,
_task_id=task_id, # Pass task_id so service emits start with taskId for reconnection
):
# Write to Redis (subscribers will receive via XREAD)
await stream_registry.publish_chunk(task_id, chunk)
@@ -374,63 +393,69 @@ async def stream_chat_post(
@router.get(
"/sessions/{session_id}/stream",
)
async def stream_chat_get(
async def resume_session_stream(
session_id: str,
message: Annotated[str, Query(min_length=1, max_length=10000)],
user_id: str | None = Depends(auth.get_user_id),
is_user_message: bool = Query(default=True),
):
"""
Stream chat responses for a session (GET - legacy endpoint).
Resume an active stream for a session.
Streams the AI/completion responses in real time over Server-Sent Events (SSE), including:
- Text fragments as they are generated
- Tool call UI elements (if invoked)
- Tool execution results
Called by the AI SDK's ``useChat(resume: true)`` on page load.
Checks for an active (in-progress) task on the session and either replays
the full SSE stream or returns 204 No Content if nothing is running.
Args:
session_id: The chat session identifier to associate with the streamed messages.
message: The user's new message to process.
session_id: The chat session identifier.
user_id: Optional authenticated user ID.
is_user_message: Whether the message is a user message.
Returns:
StreamingResponse: SSE-formatted response chunks.
Returns:
StreamingResponse (SSE) when an active stream exists,
or 204 No Content when there is nothing to resume.
"""
session = await _validate_and_get_session(session_id, user_id)
import asyncio
active_task, _last_id = await stream_registry.get_active_task_for_session(
session_id, user_id
)
if not active_task:
return Response(status_code=204)
subscriber_queue = await stream_registry.subscribe_to_task(
task_id=active_task.task_id,
user_id=user_id,
last_message_id="0-0", # Full replay so useChat rebuilds the message
)
if subscriber_queue is None:
return Response(status_code=204)
async def event_generator() -> AsyncGenerator[str, None]:
chunk_count = 0
first_chunk_type: str | None = None
async for chunk in chat_service.stream_chat_completion(
session_id,
message,
is_user_message=is_user_message,
user_id=user_id,
session=session, # Pass pre-fetched session to avoid double-fetch
):
if chunk_count < 3:
logger.info(
"Chat stream chunk",
extra={
"session_id": session_id,
"chunk_type": str(chunk.type),
},
try:
while True:
try:
chunk = await asyncio.wait_for(subscriber_queue.get(), timeout=30.0)
yield chunk.to_sse()
if isinstance(chunk, StreamFinish):
break
except asyncio.TimeoutError:
yield StreamHeartbeat().to_sse()
except GeneratorExit:
pass
except Exception as e:
logger.error(f"Error in resume stream for session {session_id}: {e}")
finally:
try:
await stream_registry.unsubscribe_from_task(
active_task.task_id, subscriber_queue
)
if not first_chunk_type:
first_chunk_type = str(chunk.type)
chunk_count += 1
yield chunk.to_sse()
logger.info(
"Chat stream completed",
extra={
"session_id": session_id,
"chunk_count": chunk_count,
"first_chunk_type": first_chunk_type,
},
)
# AI SDK protocol termination
yield "data: [DONE]\n\n"
except Exception as unsub_err:
logger.error(
f"Error unsubscribing from task {active_task.task_id}: {unsub_err}",
exc_info=True,
)
yield "data: [DONE]\n\n"
return StreamingResponse(
event_generator(),
@@ -438,8 +463,8 @@ async def stream_chat_get(
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no", # Disable nginx buffering
"x-vercel-ai-ui-message-stream": "v1", # AI SDK protocol header
"X-Accel-Buffering": "no",
"x-vercel-ai-ui-message-stream": "v1",
},
)
@@ -751,3 +776,42 @@ async def health_check() -> dict:
"service": "chat",
"version": "0.1.0",
}
# ========== Schema Export (for OpenAPI / Orval codegen) ==========
ToolResponseUnion = (
AgentsFoundResponse
| NoResultsResponse
| AgentDetailsResponse
| SetupRequirementsResponse
| ExecutionStartedResponse
| NeedLoginResponse
| ErrorResponse
| InputValidationErrorResponse
| AgentOutputResponse
| UnderstandingUpdatedResponse
| AgentPreviewResponse
| AgentSavedResponse
| ClarificationNeededResponse
| BlockListResponse
| BlockOutputResponse
| DocSearchResultsResponse
| DocPageResponse
| OperationStartedResponse
| OperationPendingResponse
| OperationInProgressResponse
)
@router.get(
"/schema/tool-responses",
response_model=ToolResponseUnion,
include_in_schema=True,
summary="[Dummy] Tool response type export for codegen",
description="This endpoint is not meant to be called. It exists solely to "
"expose tool response models in the OpenAPI schema for frontend codegen.",
)
async def _tool_response_schema() -> ToolResponseUnion: # type: ignore[return]
"""Never called at runtime. Exists only so Orval generates TS types."""
raise HTTPException(status_code=501, detail="Schema-only endpoint")

View File

@@ -52,8 +52,10 @@ from .response_model import (
StreamBaseResponse,
StreamError,
StreamFinish,
StreamFinishStep,
StreamHeartbeat,
StreamStart,
StreamStartStep,
StreamTextDelta,
StreamTextEnd,
StreamTextStart,
@@ -351,6 +353,10 @@ async def stream_chat_completion(
retry_count: int = 0,
session: ChatSession | None = None,
context: dict[str, str] | None = None, # {url: str, content: str}
_continuation_message_id: (
str | None
) = None, # Internal: reuse message ID for tool call continuations
_task_id: str | None = None, # Internal: task ID for SSE reconnection support
) -> AsyncGenerator[StreamBaseResponse, None]:
"""Main entry point for streaming chat completions with database handling.
@@ -371,21 +377,45 @@ async def stream_chat_completion(
ValueError: If max_context_messages is exceeded
"""
completion_start = time.monotonic()
# Build log metadata for structured logging
log_meta = {"component": "ChatService", "session_id": session_id}
if user_id:
log_meta["user_id"] = user_id
logger.info(
f"Streaming chat completion for session {session_id} for message {message} and user id {user_id}. Message is user message: {is_user_message}"
f"[TIMING] stream_chat_completion STARTED, session={session_id}, user={user_id}, "
f"message_len={len(message) if message else 0}, is_user={is_user_message}",
extra={
"json_fields": {
**log_meta,
"message_len": len(message) if message else 0,
"is_user_message": is_user_message,
}
},
)
# Only fetch from Redis if session not provided (initial call)
if session is None:
fetch_start = time.monotonic()
session = await get_chat_session(session_id, user_id)
fetch_time = (time.monotonic() - fetch_start) * 1000
logger.info(
f"Fetched session from Redis: {session.session_id if session else 'None'}, "
f"message_count={len(session.messages) if session else 0}"
f"[TIMING] get_chat_session took {fetch_time:.1f}ms, "
f"n_messages={len(session.messages) if session else 0}",
extra={
"json_fields": {
**log_meta,
"duration_ms": fetch_time,
"n_messages": len(session.messages) if session else 0,
}
},
)
else:
logger.info(
f"Using provided session object: {session.session_id}, "
f"message_count={len(session.messages)}"
f"[TIMING] Using provided session, messages={len(session.messages)}",
extra={"json_fields": {**log_meta, "n_messages": len(session.messages)}},
)
if not session:
@@ -406,17 +436,25 @@ async def stream_chat_completion(
# Track user message in PostHog
if is_user_message:
posthog_start = time.monotonic()
track_user_message(
user_id=user_id,
session_id=session_id,
message_length=len(message),
)
posthog_time = (time.monotonic() - posthog_start) * 1000
logger.info(
f"[TIMING] track_user_message took {posthog_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": posthog_time}},
)
logger.info(
f"Upserting session: {session.session_id} with user id {session.user_id}, "
f"message_count={len(session.messages)}"
)
upsert_start = time.monotonic()
session = await upsert_chat_session(session)
upsert_time = (time.monotonic() - upsert_start) * 1000
logger.info(
f"[TIMING] upsert_chat_session took {upsert_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": upsert_time}},
)
assert session, "Session not found"
# Generate title for new sessions on first user message (non-blocking)
@@ -454,7 +492,13 @@ async def stream_chat_completion(
asyncio.create_task(_update_title())
# Build system prompt with business understanding
prompt_start = time.monotonic()
system_prompt, understanding = await _build_system_prompt(user_id)
prompt_time = (time.monotonic() - prompt_start) * 1000
logger.info(
f"[TIMING] _build_system_prompt took {prompt_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": prompt_time}},
)
# Initialize variables for streaming
assistant_response = ChatMessage(
@@ -479,13 +523,27 @@ async def stream_chat_completion(
# Generate unique IDs for AI SDK protocol
import uuid as uuid_module
message_id = str(uuid_module.uuid4())
is_continuation = _continuation_message_id is not None
message_id = _continuation_message_id or str(uuid_module.uuid4())
text_block_id = str(uuid_module.uuid4())
# Yield message start
yield StreamStart(messageId=message_id)
# Only yield message start for the initial call, not for continuations.
setup_time = (time.monotonic() - completion_start) * 1000
logger.info(
f"[TIMING] Setup complete, yielding StreamStart at {setup_time:.1f}ms",
extra={"json_fields": {**log_meta, "setup_time_ms": setup_time}},
)
if not is_continuation:
yield StreamStart(messageId=message_id, taskId=_task_id)
# Emit start-step before each LLM call (AI SDK uses this to add step boundaries)
yield StreamStartStep()
try:
logger.info(
"[TIMING] Calling _stream_chat_chunks",
extra={"json_fields": log_meta},
)
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
@@ -585,6 +643,10 @@ async def stream_chat_completion(
)
yield chunk
elif isinstance(chunk, StreamFinish):
if has_done_tool_call:
# Tool calls happened — close the step but don't send message-level finish.
# The continuation will open a new step, and finish will come at the end.
yield StreamFinishStep()
if not has_done_tool_call:
# Emit text-end before finish if we received text but haven't closed it
if has_received_text and not text_streaming_ended:
@@ -616,6 +678,8 @@ async def stream_chat_completion(
has_saved_assistant_message = True
has_yielded_end = True
# Emit finish-step before finish (resets AI SDK text/reasoning state)
yield StreamFinishStep()
yield chunk
elif isinstance(chunk, StreamError):
has_yielded_error = True
@@ -700,6 +764,7 @@ async def stream_chat_completion(
error_response = StreamError(errorText=error_message)
yield error_response
if not has_yielded_end:
yield StreamFinishStep()
yield StreamFinish()
return
@@ -714,6 +779,8 @@ async def stream_chat_completion(
retry_count=retry_count + 1,
session=session,
context=context,
_continuation_message_id=message_id, # Reuse message ID since start was already sent
_task_id=_task_id,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
@@ -783,6 +850,8 @@ async def stream_chat_completion(
session=session, # Pass session object to avoid Redis refetch
context=context,
tool_call_response=str(tool_response_messages),
_continuation_message_id=message_id, # Reuse message ID to avoid duplicates
_task_id=_task_id,
):
yield chunk
@@ -893,9 +962,21 @@ async def _stream_chat_chunks(
SSE formatted JSON response objects
"""
import time as time_module
stream_chunks_start = time_module.perf_counter()
model = config.model
logger.info("Starting pure chat stream")
# Build log metadata for structured logging
log_meta = {"component": "ChatService", "session_id": session.session_id}
if session.user_id:
log_meta["user_id"] = session.user_id
logger.info(
f"[TIMING] _stream_chat_chunks STARTED, session={session.session_id}, "
f"user={session.user_id}, n_messages={len(session.messages)}",
extra={"json_fields": {**log_meta, "n_messages": len(session.messages)}},
)
messages = session.to_openai_messages()
if system_prompt:
@@ -906,12 +987,18 @@ async def _stream_chat_chunks(
messages = [system_message] + messages
# Apply context window management
context_start = time_module.perf_counter()
context_result = await _manage_context_window(
messages=messages,
model=model,
api_key=config.api_key,
base_url=config.base_url,
)
context_time = (time_module.perf_counter() - context_start) * 1000
logger.info(
f"[TIMING] _manage_context_window took {context_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": context_time}},
)
if context_result.error:
if "System prompt dropped" in context_result.error:
@@ -946,9 +1033,19 @@ async def _stream_chat_chunks(
while retry_count <= MAX_RETRIES:
try:
elapsed = (time_module.perf_counter() - stream_chunks_start) * 1000
retry_info = (
f" (retry {retry_count}/{MAX_RETRIES})" if retry_count > 0 else ""
)
logger.info(
f"Creating OpenAI chat completion stream..."
f"{f' (retry {retry_count}/{MAX_RETRIES})' if retry_count > 0 else ''}"
f"[TIMING] Creating OpenAI stream at {elapsed:.1f}ms{retry_info}",
extra={
"json_fields": {
**log_meta,
"elapsed_ms": elapsed,
"retry_count": retry_count,
}
},
)
# Build extra_body for OpenRouter tracing and PostHog analytics
@@ -965,6 +1062,7 @@ async def _stream_chat_chunks(
:128
] # OpenRouter limit
api_call_start = time_module.perf_counter()
stream = await client.chat.completions.create(
model=model,
messages=cast(list[ChatCompletionMessageParam], messages),
@@ -974,6 +1072,11 @@ async def _stream_chat_chunks(
stream_options=ChatCompletionStreamOptionsParam(include_usage=True),
extra_body=extra_body,
)
api_init_time = (time_module.perf_counter() - api_call_start) * 1000
logger.info(
f"[TIMING] OpenAI stream object returned in {api_init_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": api_init_time}},
)
# Variables to accumulate tool calls
tool_calls: list[dict[str, Any]] = []
@@ -984,10 +1087,13 @@ async def _stream_chat_chunks(
# Track if we've started the text block
text_started = False
first_content_chunk = True
chunk_count = 0
# Process the stream
chunk: ChatCompletionChunk
async for chunk in stream:
chunk_count += 1
if chunk.usage:
yield StreamUsage(
promptTokens=chunk.usage.prompt_tokens,
@@ -1010,6 +1116,23 @@ async def _stream_chat_chunks(
if not text_started and text_block_id:
yield StreamTextStart(id=text_block_id)
text_started = True
# Log timing for first content chunk
if first_content_chunk:
first_content_chunk = False
ttfc = (
time_module.perf_counter() - api_call_start
) * 1000
logger.info(
f"[TIMING] FIRST CONTENT CHUNK at {ttfc:.1f}ms "
f"(since API call), n_chunks={chunk_count}",
extra={
"json_fields": {
**log_meta,
"time_to_first_chunk_ms": ttfc,
"n_chunks": chunk_count,
}
},
)
# Stream the text delta
text_response = StreamTextDelta(
id=text_block_id or "",
@@ -1066,7 +1189,21 @@ async def _stream_chat_chunks(
toolName=tool_calls[idx]["function"]["name"],
)
emitted_start_for_idx.add(idx)
logger.info(f"Stream complete. Finish reason: {finish_reason}")
stream_duration = time_module.perf_counter() - api_call_start
logger.info(
f"[TIMING] OpenAI stream COMPLETE, finish_reason={finish_reason}, "
f"duration={stream_duration:.2f}s, "
f"n_chunks={chunk_count}, n_tool_calls={len(tool_calls)}",
extra={
"json_fields": {
**log_meta,
"stream_duration_ms": stream_duration * 1000,
"finish_reason": finish_reason,
"n_chunks": chunk_count,
"n_tool_calls": len(tool_calls),
}
},
)
# Yield all accumulated tool calls after the stream is complete
# This ensures all tool call arguments have been fully received
@@ -1086,6 +1223,12 @@ async def _stream_chat_chunks(
# Re-raise to trigger retry logic in the parent function
raise
total_time = (time_module.perf_counter() - stream_chunks_start) * 1000
logger.info(
f"[TIMING] _stream_chat_chunks COMPLETED in {total_time/1000:.1f}s; "
f"session={session.session_id}, user={session.user_id}",
extra={"json_fields": {**log_meta, "total_time_ms": total_time}},
)
yield StreamFinish()
return
except Exception as e:
@@ -1565,6 +1708,7 @@ async def _execute_long_running_tool_with_streaming(
task_id,
StreamError(errorText=str(e)),
)
await stream_registry.publish_chunk(task_id, StreamFinishStep())
await stream_registry.publish_chunk(task_id, StreamFinish())
await _update_pending_operation(
@@ -1822,6 +1966,7 @@ async def _generate_llm_continuation_with_streaming(
# Publish start event
await stream_registry.publish_chunk(task_id, StreamStart(messageId=message_id))
await stream_registry.publish_chunk(task_id, StreamStartStep())
await stream_registry.publish_chunk(task_id, StreamTextStart(id=text_block_id))
# Stream the response
@@ -1845,6 +1990,7 @@ async def _generate_llm_continuation_with_streaming(
# Publish end events
await stream_registry.publish_chunk(task_id, StreamTextEnd(id=text_block_id))
await stream_registry.publish_chunk(task_id, StreamFinishStep())
if assistant_content:
# Reload session from DB to avoid race condition with user messages
@@ -1886,4 +2032,5 @@ async def _generate_llm_continuation_with_streaming(
task_id,
StreamError(errorText=f"Failed to generate response: {e}"),
)
await stream_registry.publish_chunk(task_id, StreamFinishStep())
await stream_registry.publish_chunk(task_id, StreamFinish())

View File

@@ -104,6 +104,24 @@ async def create_task(
Returns:
The created ActiveTask instance (metadata only)
"""
import time
start_time = time.perf_counter()
# Build log metadata for structured logging
log_meta = {
"component": "StreamRegistry",
"task_id": task_id,
"session_id": session_id,
}
if user_id:
log_meta["user_id"] = user_id
logger.info(
f"[TIMING] create_task STARTED, task={task_id}, session={session_id}, user={user_id}",
extra={"json_fields": log_meta},
)
task = ActiveTask(
task_id=task_id,
session_id=session_id,
@@ -114,10 +132,18 @@ async def create_task(
)
# Store metadata in Redis
redis_start = time.perf_counter()
redis = await get_redis_async()
redis_time = (time.perf_counter() - redis_start) * 1000
logger.info(
f"[TIMING] get_redis_async took {redis_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": redis_time}},
)
meta_key = _get_task_meta_key(task_id)
op_key = _get_operation_mapping_key(operation_id)
hset_start = time.perf_counter()
await redis.hset( # type: ignore[misc]
meta_key,
mapping={
@@ -131,12 +157,22 @@ async def create_task(
"created_at": task.created_at.isoformat(),
},
)
hset_time = (time.perf_counter() - hset_start) * 1000
logger.info(
f"[TIMING] redis.hset took {hset_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": hset_time}},
)
await redis.expire(meta_key, config.stream_ttl)
# Create operation_id -> task_id mapping for webhook lookups
await redis.set(op_key, task_id, ex=config.stream_ttl)
logger.debug(f"Created task {task_id} for session {session_id}")
total_time = (time.perf_counter() - start_time) * 1000
logger.info(
f"[TIMING] create_task COMPLETED in {total_time:.1f}ms; task={task_id}, session={session_id}",
extra={"json_fields": {**log_meta, "total_time_ms": total_time}},
)
return task
@@ -156,26 +192,60 @@ async def publish_chunk(
Returns:
The Redis Stream message ID
"""
import time
start_time = time.perf_counter()
chunk_type = type(chunk).__name__
chunk_json = chunk.model_dump_json()
message_id = "0-0"
# Build log metadata
log_meta = {
"component": "StreamRegistry",
"task_id": task_id,
"chunk_type": chunk_type,
}
try:
redis = await get_redis_async()
stream_key = _get_task_stream_key(task_id)
# Write to Redis Stream for persistence and real-time delivery
xadd_start = time.perf_counter()
raw_id = await redis.xadd(
stream_key,
{"data": chunk_json},
maxlen=config.stream_max_length,
)
xadd_time = (time.perf_counter() - xadd_start) * 1000
message_id = raw_id if isinstance(raw_id, str) else raw_id.decode()
# Set TTL on stream to match task metadata TTL
await redis.expire(stream_key, config.stream_ttl)
total_time = (time.perf_counter() - start_time) * 1000
# Only log timing for significant chunks or slow operations
if (
chunk_type
in ("StreamStart", "StreamFinish", "StreamTextStart", "StreamTextEnd")
or total_time > 50
):
logger.info(
f"[TIMING] publish_chunk {chunk_type} in {total_time:.1f}ms (xadd={xadd_time:.1f}ms)",
extra={
"json_fields": {
**log_meta,
"total_time_ms": total_time,
"xadd_time_ms": xadd_time,
"message_id": message_id,
}
},
)
except Exception as e:
elapsed = (time.perf_counter() - start_time) * 1000
logger.error(
f"Failed to publish chunk for task {task_id}: {e}",
f"[TIMING] Failed to publish chunk {chunk_type} after {elapsed:.1f}ms: {e}",
extra={"json_fields": {**log_meta, "elapsed_ms": elapsed, "error": str(e)}},
exc_info=True,
)
@@ -200,24 +270,61 @@ async def subscribe_to_task(
An asyncio Queue that will receive stream chunks, or None if task not found
or user doesn't have access
"""
import time
start_time = time.perf_counter()
# Build log metadata
log_meta = {"component": "StreamRegistry", "task_id": task_id}
if user_id:
log_meta["user_id"] = user_id
logger.info(
f"[TIMING] subscribe_to_task STARTED, task={task_id}, user={user_id}, last_msg={last_message_id}",
extra={"json_fields": {**log_meta, "last_message_id": last_message_id}},
)
redis_start = time.perf_counter()
redis = await get_redis_async()
meta_key = _get_task_meta_key(task_id)
meta: dict[Any, Any] = await redis.hgetall(meta_key) # type: ignore[misc]
hgetall_time = (time.perf_counter() - redis_start) * 1000
logger.info(
f"[TIMING] Redis hgetall took {hgetall_time:.1f}ms",
extra={"json_fields": {**log_meta, "duration_ms": hgetall_time}},
)
if not meta:
logger.debug(f"Task {task_id} not found in Redis")
elapsed = (time.perf_counter() - start_time) * 1000
logger.info(
f"[TIMING] Task not found in Redis after {elapsed:.1f}ms",
extra={
"json_fields": {
**log_meta,
"elapsed_ms": elapsed,
"reason": "task_not_found",
}
},
)
return None
# Note: Redis client uses decode_responses=True, so keys are strings
task_status = meta.get("status", "")
task_user_id = meta.get("user_id", "") or None
log_meta["session_id"] = meta.get("session_id", "")
# Validate ownership - if task has an owner, requester must match
if task_user_id:
if user_id != task_user_id:
logger.warning(
f"User {user_id} denied access to task {task_id} "
f"owned by {task_user_id}"
f"[TIMING] Access denied: user {user_id} tried to access task owned by {task_user_id}",
extra={
"json_fields": {
**log_meta,
"task_owner": task_user_id,
"reason": "access_denied",
}
},
)
return None
@@ -225,7 +332,19 @@ async def subscribe_to_task(
stream_key = _get_task_stream_key(task_id)
# Step 1: Replay messages from Redis Stream
xread_start = time.perf_counter()
messages = await redis.xread({stream_key: last_message_id}, block=0, count=1000)
xread_time = (time.perf_counter() - xread_start) * 1000
logger.info(
f"[TIMING] Redis xread (replay) took {xread_time:.1f}ms, status={task_status}",
extra={
"json_fields": {
**log_meta,
"duration_ms": xread_time,
"task_status": task_status,
}
},
)
replayed_count = 0
replay_last_id = last_message_id
@@ -244,19 +363,48 @@ async def subscribe_to_task(
except Exception as e:
logger.warning(f"Failed to replay message: {e}")
logger.debug(f"Task {task_id}: replayed {replayed_count} messages")
logger.info(
f"[TIMING] Replayed {replayed_count} messages, last_id={replay_last_id}",
extra={
"json_fields": {
**log_meta,
"n_messages_replayed": replayed_count,
"replay_last_id": replay_last_id,
}
},
)
# Step 2: If task is still running, start stream listener for live updates
if task_status == "running":
logger.info(
"[TIMING] Task still running, starting _stream_listener",
extra={"json_fields": {**log_meta, "task_status": task_status}},
)
listener_task = asyncio.create_task(
_stream_listener(task_id, subscriber_queue, replay_last_id)
_stream_listener(task_id, subscriber_queue, replay_last_id, log_meta)
)
# Track listener task for cleanup on unsubscribe
_listener_tasks[id(subscriber_queue)] = (task_id, listener_task)
else:
# Task is completed/failed - add finish marker
logger.info(
f"[TIMING] Task already {task_status}, adding StreamFinish",
extra={"json_fields": {**log_meta, "task_status": task_status}},
)
await subscriber_queue.put(StreamFinish())
total_time = (time.perf_counter() - start_time) * 1000
logger.info(
f"[TIMING] subscribe_to_task COMPLETED in {total_time:.1f}ms; task={task_id}, "
f"n_messages_replayed={replayed_count}",
extra={
"json_fields": {
**log_meta,
"total_time_ms": total_time,
"n_messages_replayed": replayed_count,
}
},
)
return subscriber_queue
@@ -264,6 +412,7 @@ async def _stream_listener(
task_id: str,
subscriber_queue: asyncio.Queue[StreamBaseResponse],
last_replayed_id: str,
log_meta: dict | None = None,
) -> None:
"""Listen to Redis Stream for new messages using blocking XREAD.
@@ -274,10 +423,27 @@ async def _stream_listener(
task_id: Task ID to listen for
subscriber_queue: Queue to deliver messages to
last_replayed_id: Last message ID from replay (continue from here)
log_meta: Structured logging metadata
"""
import time
start_time = time.perf_counter()
# Use provided log_meta or build minimal one
if log_meta is None:
log_meta = {"component": "StreamRegistry", "task_id": task_id}
logger.info(
f"[TIMING] _stream_listener STARTED, task={task_id}, last_id={last_replayed_id}",
extra={"json_fields": {**log_meta, "last_replayed_id": last_replayed_id}},
)
queue_id = id(subscriber_queue)
# Track the last successfully delivered message ID for recovery hints
last_delivered_id = last_replayed_id
messages_delivered = 0
first_message_time = None
xread_count = 0
try:
redis = await get_redis_async()
@@ -287,9 +453,39 @@ async def _stream_listener(
while True:
# Block for up to 30 seconds waiting for new messages
# This allows periodic checking if task is still running
xread_start = time.perf_counter()
xread_count += 1
messages = await redis.xread(
{stream_key: current_id}, block=30000, count=100
)
xread_time = (time.perf_counter() - xread_start) * 1000
if messages:
msg_count = sum(len(msgs) for _, msgs in messages)
logger.info(
f"[TIMING] xread #{xread_count} returned {msg_count} messages in {xread_time:.1f}ms",
extra={
"json_fields": {
**log_meta,
"xread_count": xread_count,
"n_messages": msg_count,
"duration_ms": xread_time,
}
},
)
elif xread_time > 1000:
# Only log timeouts (30s blocking)
logger.info(
f"[TIMING] xread #{xread_count} timeout after {xread_time:.1f}ms",
extra={
"json_fields": {
**log_meta,
"xread_count": xread_count,
"duration_ms": xread_time,
"reason": "timeout",
}
},
)
if not messages:
# Timeout - check if task is still running
@@ -326,10 +522,30 @@ async def _stream_listener(
)
# Update last delivered ID on successful delivery
last_delivered_id = current_id
messages_delivered += 1
if first_message_time is None:
first_message_time = time.perf_counter()
elapsed = (first_message_time - start_time) * 1000
logger.info(
f"[TIMING] FIRST live message at {elapsed:.1f}ms, type={type(chunk).__name__}",
extra={
"json_fields": {
**log_meta,
"elapsed_ms": elapsed,
"chunk_type": type(chunk).__name__,
}
},
)
except asyncio.TimeoutError:
logger.warning(
f"Subscriber queue full for task {task_id}, "
f"message delivery timed out after {QUEUE_PUT_TIMEOUT}s"
f"[TIMING] Subscriber queue full, delivery timed out after {QUEUE_PUT_TIMEOUT}s",
extra={
"json_fields": {
**log_meta,
"timeout_s": QUEUE_PUT_TIMEOUT,
"reason": "queue_full",
}
},
)
# Send overflow error with recovery info
try:
@@ -351,15 +567,44 @@ async def _stream_listener(
# Stop listening on finish
if isinstance(chunk, StreamFinish):
total_time = (time.perf_counter() - start_time) * 1000
logger.info(
f"[TIMING] StreamFinish received in {total_time/1000:.1f}s; delivered={messages_delivered}",
extra={
"json_fields": {
**log_meta,
"total_time_ms": total_time,
"messages_delivered": messages_delivered,
}
},
)
return
except Exception as e:
logger.warning(f"Error processing stream message: {e}")
logger.warning(
f"Error processing stream message: {e}",
extra={"json_fields": {**log_meta, "error": str(e)}},
)
except asyncio.CancelledError:
logger.debug(f"Stream listener cancelled for task {task_id}")
elapsed = (time.perf_counter() - start_time) * 1000
logger.info(
f"[TIMING] _stream_listener CANCELLED after {elapsed:.1f}ms, delivered={messages_delivered}",
extra={
"json_fields": {
**log_meta,
"elapsed_ms": elapsed,
"messages_delivered": messages_delivered,
"reason": "cancelled",
}
},
)
raise # Re-raise to propagate cancellation
except Exception as e:
logger.error(f"Stream listener error for task {task_id}: {e}")
elapsed = (time.perf_counter() - start_time) * 1000
logger.error(
f"[TIMING] _stream_listener ERROR after {elapsed:.1f}ms: {e}",
extra={"json_fields": {**log_meta, "elapsed_ms": elapsed, "error": str(e)}},
)
# On error, send finish to unblock subscriber
try:
await asyncio.wait_for(
@@ -368,10 +613,24 @@ async def _stream_listener(
)
except (asyncio.TimeoutError, asyncio.QueueFull):
logger.warning(
f"Could not deliver finish event for task {task_id} after error"
"Could not deliver finish event after error",
extra={"json_fields": log_meta},
)
finally:
# Clean up listener task mapping on exit
total_time = (time.perf_counter() - start_time) * 1000
logger.info(
f"[TIMING] _stream_listener FINISHED in {total_time/1000:.1f}s; task={task_id}, "
f"delivered={messages_delivered}, xread_count={xread_count}",
extra={
"json_fields": {
**log_meta,
"total_time_ms": total_time,
"messages_delivered": messages_delivered,
"xread_count": xread_count,
}
},
)
_listener_tasks.pop(queue_id, None)
@@ -598,8 +857,10 @@ def _reconstruct_chunk(chunk_data: dict) -> StreamBaseResponse | None:
ResponseType,
StreamError,
StreamFinish,
StreamFinishStep,
StreamHeartbeat,
StreamStart,
StreamStartStep,
StreamTextDelta,
StreamTextEnd,
StreamTextStart,
@@ -613,6 +874,8 @@ def _reconstruct_chunk(chunk_data: dict) -> StreamBaseResponse | None:
type_to_class: dict[str, type[StreamBaseResponse]] = {
ResponseType.START.value: StreamStart,
ResponseType.FINISH.value: StreamFinish,
ResponseType.START_STEP.value: StreamStartStep,
ResponseType.FINISH_STEP.value: StreamFinishStep,
ResponseType.TEXT_START.value: StreamTextStart,
ResponseType.TEXT_DELTA.value: StreamTextDelta,
ResponseType.TEXT_END.value: StreamTextEnd,

View File

@@ -0,0 +1,29 @@
"""Shared helpers for chat tools."""
from typing import Any
def get_inputs_from_schema(
input_schema: dict[str, Any],
exclude_fields: set[str] | None = None,
) -> list[dict[str, Any]]:
"""Extract input field info from JSON schema."""
if not isinstance(input_schema, dict):
return []
exclude = exclude_fields or set()
properties = input_schema.get("properties", {})
required = set(input_schema.get("required", []))
return [
{
"name": name,
"title": schema.get("title", name),
"type": schema.get("type", "string"),
"description": schema.get("description", ""),
"required": name in required,
"default": schema.get("default"),
}
for name, schema in properties.items()
if name not in exclude
]
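
An illustrative call to the helper above (the schema is made up for the example, not taken from the repo):

```python
schema = {
    "properties": {
        "query": {"title": "Query", "type": "string", "description": "Search text"},
        "credentials": {"title": "Credentials", "type": "object"},
    },
    "required": ["query"],
}

print(get_inputs_from_schema(schema, exclude_fields={"credentials"}))
# [{'name': 'query', 'title': 'Query', 'type': 'string',
#   'description': 'Search text', 'required': True, 'default': None}]
```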

View File

@@ -24,6 +24,7 @@ from backend.util.timezone_utils import (
)
from .base import BaseTool
from .helpers import get_inputs_from_schema
from .models import (
AgentDetails,
AgentDetailsResponse,
@@ -261,7 +262,7 @@ class RunAgentTool(BaseTool):
),
requirements={
"credentials": requirements_creds_list,
"inputs": self._get_inputs_list(graph.input_schema),
"inputs": get_inputs_from_schema(graph.input_schema),
"execution_modes": self._get_execution_modes(graph),
},
),
@@ -369,22 +370,6 @@ class RunAgentTool(BaseTool):
session_id=session_id,
)
def _get_inputs_list(self, input_schema: dict[str, Any]) -> list[dict[str, Any]]:
"""Extract inputs list from schema."""
inputs_list = []
if isinstance(input_schema, dict) and "properties" in input_schema:
for field_name, field_schema in input_schema["properties"].items():
inputs_list.append(
{
"name": field_name,
"title": field_schema.get("title", field_name),
"type": field_schema.get("type", "string"),
"description": field_schema.get("description", ""),
"required": field_name in input_schema.get("required", []),
}
)
return inputs_list
def _get_execution_modes(self, graph: GraphModel) -> list[str]:
"""Get available execution modes for the graph."""
trigger_info = graph.trigger_setup_info
@@ -398,7 +383,7 @@ class RunAgentTool(BaseTool):
suffix: str,
) -> str:
"""Build a message describing available inputs for an agent."""
inputs_list = self._get_inputs_list(graph.input_schema)
inputs_list = get_inputs_from_schema(graph.input_schema)
required_names = [i["name"] for i in inputs_list if i["required"]]
optional_names = [i["name"] for i in inputs_list if not i["required"]]

View File

@@ -12,14 +12,15 @@ from backend.api.features.chat.tools.find_block import (
COPILOT_EXCLUDED_BLOCK_IDS,
COPILOT_EXCLUDED_BLOCK_TYPES,
)
from backend.data.block import get_block
from backend.data.block import AnyBlockSchema, get_block
from backend.data.execution import ExecutionContext
from backend.data.model import CredentialsMetaInput
from backend.data.model import CredentialsFieldInfo, CredentialsMetaInput
from backend.data.workspace import get_or_create_workspace
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.util.exceptions import BlockError
from .base import BaseTool
from .helpers import get_inputs_from_schema
from .models import (
BlockOutputResponse,
ErrorResponse,
@@ -28,7 +29,10 @@ from .models import (
ToolResponseBase,
UserReadiness,
)
from .utils import build_missing_credentials_from_field_info
from .utils import (
build_missing_credentials_from_field_info,
match_credentials_to_requirements,
)
logger = logging.getLogger(__name__)
@@ -77,91 +81,6 @@ class RunBlockTool(BaseTool):
def requires_auth(self) -> bool:
return True
async def _check_block_credentials(
self,
user_id: str,
block: Any,
input_data: dict[str, Any] | None = None,
) -> tuple[dict[str, CredentialsMetaInput], list[CredentialsMetaInput]]:
"""
Check if user has required credentials for a block.
Args:
user_id: User ID
block: Block to check credentials for
input_data: Input data for the block (used to determine provider via discriminator)
Returns:
tuple[matched_credentials, missing_credentials]
"""
matched_credentials: dict[str, CredentialsMetaInput] = {}
missing_credentials: list[CredentialsMetaInput] = []
input_data = input_data or {}
# Get credential field info from block's input schema
credentials_fields_info = block.input_schema.get_credentials_fields_info()
if not credentials_fields_info:
return matched_credentials, missing_credentials
# Get user's available credentials
creds_manager = IntegrationCredentialsManager()
available_creds = await creds_manager.store.get_all_creds(user_id)
for field_name, field_info in credentials_fields_info.items():
effective_field_info = field_info
if field_info.discriminator and field_info.discriminator_mapping:
# Get discriminator from input, falling back to schema default
discriminator_value = input_data.get(field_info.discriminator)
if discriminator_value is None:
field = block.input_schema.model_fields.get(
field_info.discriminator
)
if field and field.default is not PydanticUndefined:
discriminator_value = field.default
if (
discriminator_value
and discriminator_value in field_info.discriminator_mapping
):
effective_field_info = field_info.discriminate(discriminator_value)
logger.debug(
f"Discriminated provider for {field_name}: "
f"{discriminator_value} -> {effective_field_info.provider}"
)
matching_cred = next(
(
cred
for cred in available_creds
if cred.provider in effective_field_info.provider
and cred.type in effective_field_info.supported_types
),
None,
)
if matching_cred:
matched_credentials[field_name] = CredentialsMetaInput(
id=matching_cred.id,
provider=matching_cred.provider, # type: ignore
type=matching_cred.type,
title=matching_cred.title,
)
else:
# Create a placeholder for the missing credential
provider = next(iter(effective_field_info.provider), "unknown")
cred_type = next(iter(effective_field_info.supported_types), "api_key")
missing_credentials.append(
CredentialsMetaInput(
id=field_name,
provider=provider, # type: ignore
type=cred_type, # type: ignore
title=field_name.replace("_", " ").title(),
)
)
return matched_credentials, missing_credentials
async def _execute(
self,
user_id: str | None,
@@ -232,8 +151,8 @@ class RunBlockTool(BaseTool):
logger.info(f"Executing block {block.name} ({block_id}) for user {user_id}")
creds_manager = IntegrationCredentialsManager()
matched_credentials, missing_credentials = await self._check_block_credentials(
user_id, block, input_data
matched_credentials, missing_credentials = (
await self._resolve_block_credentials(user_id, block, input_data)
)
if missing_credentials:
@@ -362,29 +281,75 @@ class RunBlockTool(BaseTool):
session_id=session_id,
)
def _get_inputs_list(self, block: Any) -> list[dict[str, Any]]:
async def _resolve_block_credentials(
self,
user_id: str,
block: AnyBlockSchema,
input_data: dict[str, Any] | None = None,
) -> tuple[dict[str, CredentialsMetaInput], list[CredentialsMetaInput]]:
"""
Resolve credentials for a block by matching the user's available credentials.
Args:
user_id: User ID
block: Block to resolve credentials for
input_data: Input data for the block (used to determine provider via discriminator)
Returns:
tuple of (matched_credentials, missing_credentials) - matched credentials
are used for block execution, missing ones indicate setup requirements.
"""
input_data = input_data or {}
requirements = self._resolve_discriminated_credentials(block, input_data)
if not requirements:
return {}, []
return await match_credentials_to_requirements(user_id, requirements)
def _get_inputs_list(self, block: AnyBlockSchema) -> list[dict[str, Any]]:
"""Extract non-credential inputs from block schema."""
inputs_list = []
schema = block.input_schema.jsonschema()
properties = schema.get("properties", {})
required_fields = set(schema.get("required", []))
# Get credential field names to exclude
credentials_fields = set(block.input_schema.get_credentials_fields().keys())
return get_inputs_from_schema(schema, exclude_fields=credentials_fields)
for field_name, field_schema in properties.items():
# Skip credential fields
if field_name in credentials_fields:
continue
def _resolve_discriminated_credentials(
self,
block: AnyBlockSchema,
input_data: dict[str, Any],
) -> dict[str, CredentialsFieldInfo]:
"""Resolve credential requirements, applying discriminator logic where needed."""
credentials_fields_info = block.input_schema.get_credentials_fields_info()
if not credentials_fields_info:
return {}
inputs_list.append(
{
"name": field_name,
"title": field_schema.get("title", field_name),
"type": field_schema.get("type", "string"),
"description": field_schema.get("description", ""),
"required": field_name in required_fields,
}
)
resolved: dict[str, CredentialsFieldInfo] = {}
return inputs_list
for field_name, field_info in credentials_fields_info.items():
effective_field_info = field_info
if field_info.discriminator and field_info.discriminator_mapping:
discriminator_value = input_data.get(field_info.discriminator)
if discriminator_value is None:
field = block.input_schema.model_fields.get(
field_info.discriminator
)
if field and field.default is not PydanticUndefined:
discriminator_value = field.default
if (
discriminator_value
and discriminator_value in field_info.discriminator_mapping
):
effective_field_info = field_info.discriminate(discriminator_value)
# For host-scoped credentials, add the discriminator value
# (e.g., URL) so _credential_is_for_host can match it
effective_field_info.discriminator_values.add(discriminator_value)
logger.debug(
f"Discriminated provider for {field_name}: "
f"{discriminator_value} -> {effective_field_info.provider}"
)
resolved[field_name] = effective_field_info
return resolved
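
To make the discriminator pass above concrete, here is a minimal, self-contained sketch of the same selection logic using plain dicts instead of the repo's `CredentialsFieldInfo` class; the `model` field name, the model IDs, and the provider names are hypothetical and only illustrate the mapping.

```python
# Illustrative only: shows how a discriminator value in input_data narrows a
# multi-provider credentials field to a single provider before matching.
# (Plain dicts stand in for CredentialsFieldInfo; names below are made up.)

field_info = {
    "provider": {"openai", "anthropic"},   # providers the field accepts
    "discriminator": "model",              # input field that decides the provider
    "discriminator_mapping": {
        "gpt-4o": "openai",
        "claude-3-5-sonnet": "anthropic",
    },
}

def discriminate(info: dict, input_data: dict) -> set[str]:
    """Return the effective provider set after applying the discriminator."""
    value = input_data.get(info["discriminator"])
    if value and value in info["discriminator_mapping"]:
        return {info["discriminator_mapping"][value]}
    return info["provider"]  # no usable discriminator: keep all supported providers

assert discriminate(field_info, {"model": "claude-3-5-sonnet"}) == {"anthropic"}
assert discriminate(field_info, {}) == {"openai", "anthropic"}
```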

View File

@@ -8,6 +8,7 @@ from backend.api.features.library import model as library_model
from backend.api.features.store import db as store_db
from backend.data.graph import GraphModel
from backend.data.model import (
Credentials,
CredentialsFieldInfo,
CredentialsMetaInput,
HostScopedCredentials,
@@ -223,6 +224,99 @@ async def get_or_create_library_agent(
return library_agents[0]
async def match_credentials_to_requirements(
user_id: str,
requirements: dict[str, CredentialsFieldInfo],
) -> tuple[dict[str, CredentialsMetaInput], list[CredentialsMetaInput]]:
"""
Match the user's credentials against a dictionary of credential requirements.
This is the core matching logic shared by both graph and block credential matching.
"""
matched: dict[str, CredentialsMetaInput] = {}
missing: list[CredentialsMetaInput] = []
if not requirements:
return matched, missing
available_creds = await get_user_credentials(user_id)
for field_name, field_info in requirements.items():
matching_cred = find_matching_credential(available_creds, field_info)
if matching_cred:
try:
matched[field_name] = create_credential_meta_from_match(matching_cred)
except Exception as e:
logger.error(
f"Failed to create CredentialsMetaInput for field '{field_name}': "
f"provider={matching_cred.provider}, type={matching_cred.type}, "
f"credential_id={matching_cred.id}",
exc_info=True,
)
provider = next(iter(field_info.provider), "unknown")
cred_type = next(iter(field_info.supported_types), "api_key")
missing.append(
CredentialsMetaInput(
id=field_name,
provider=provider, # type: ignore
type=cred_type, # type: ignore
title=f"{field_name} (validation failed: {e})",
)
)
else:
provider = next(iter(field_info.provider), "unknown")
cred_type = next(iter(field_info.supported_types), "api_key")
missing.append(
CredentialsMetaInput(
id=field_name,
provider=provider, # type: ignore
type=cred_type, # type: ignore
title=field_name.replace("_", " ").title(),
)
)
return matched, missing
async def get_user_credentials(user_id: str) -> list[Credentials]:
"""Get all available credentials for a user."""
creds_manager = IntegrationCredentialsManager()
return await creds_manager.store.get_all_creds(user_id)
def find_matching_credential(
available_creds: list[Credentials],
field_info: CredentialsFieldInfo,
) -> Credentials | None:
"""Find a credential that matches the required provider, type, scopes, and host."""
for cred in available_creds:
if cred.provider not in field_info.provider:
continue
if cred.type not in field_info.supported_types:
continue
if cred.type == "oauth2" and not _credential_has_required_scopes(
cred, field_info
):
continue
if cred.type == "host_scoped" and not _credential_is_for_host(cred, field_info):
continue
return cred
return None
def create_credential_meta_from_match(
matching_cred: Credentials,
) -> CredentialsMetaInput:
"""Create a CredentialsMetaInput from a matched credential."""
return CredentialsMetaInput(
id=matching_cred.id,
provider=matching_cred.provider, # type: ignore
type=matching_cred.type,
title=matching_cred.title,
)
async def match_user_credentials_to_graph(
user_id: str,
graph: GraphModel,
@@ -331,8 +425,6 @@ def _credential_has_required_scopes(
# If no scopes are required, any credential matches
if not requirements.required_scopes:
return True
# Check that credential scopes are a superset of required scopes
return set(credential.scopes).issuperset(requirements.required_scopes)
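
A hedged usage sketch of the shared matcher added above: the import path is inferred from the PR description (`backend/api/features/chat/tools/utils.py`), and the `block` argument is assumed to expose the same `get_credentials_fields_info()` accessor used by `RunBlockTool`. Factoring the matching into `get_user_credentials` / `find_matching_credential` / `create_credential_meta_from_match` keeps the graph and block paths on one code path, so scope and host-scoped checks behave identically for both.

```python
# Sketch only; the module path and the block's schema accessor are assumptions.
from backend.api.features.chat.tools.utils import match_credentials_to_requirements

async def user_is_ready_for_block(user_id: str, block) -> bool:
    # Requirements come from the block's input schema, as in RunBlockTool above
    # (minus the discriminator pass, which is block-specific).
    requirements = block.input_schema.get_credentials_fields_info()
    matched, missing = await match_credentials_to_requirements(user_id, requirements)
    if missing:
        for cred in missing:
            # Placeholders carry provider/type/title so the UI can prompt setup.
            print(f"missing {cred.provider} ({cred.type}) credential for '{cred.id}'")
        return False
    return True
```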

View File

@@ -8,7 +8,6 @@ Includes BM25 reranking for improved lexical relevance.
import logging
import re
import time
from dataclasses import dataclass
from typing import Any, Literal
@@ -363,11 +362,7 @@ async def unified_hybrid_search(
LIMIT {limit_param} OFFSET {offset_param}
"""
try:
results = await query_raw_with_schema(sql_query, *params)
except Exception as e:
await _log_vector_error_diagnostics(e)
raise
results = await query_raw_with_schema(sql_query, *params)
total = results[0]["total_count"] if results else 0
# Apply BM25 reranking
@@ -691,11 +686,7 @@ async def hybrid_search(
LIMIT {limit_param} OFFSET {offset_param}
"""
try:
results = await query_raw_with_schema(sql_query, *params)
except Exception as e:
await _log_vector_error_diagnostics(e)
raise
results = await query_raw_with_schema(sql_query, *params)
total = results[0]["total_count"] if results else 0
@@ -727,87 +718,6 @@ async def hybrid_search_simple(
return await hybrid_search(query=query, page=page, page_size=page_size)
# ============================================================================
# Diagnostics
# ============================================================================
# Rate limit: only log vector error diagnostics once per this interval
_VECTOR_DIAG_INTERVAL_SECONDS = 60
_last_vector_diag_time: float = 0
async def _log_vector_error_diagnostics(error: Exception) -> None:
"""Log diagnostic info when 'type vector does not exist' error occurs.
Note: Diagnostic queries use query_raw_with_schema which may run on a different
pooled connection than the one that failed. Session-level search_path can differ,
so these diagnostics show cluster-wide state, not necessarily the failed session.
Includes rate limiting to avoid log spam - only logs once per minute.
Caller should re-raise the error after calling this function.
"""
global _last_vector_diag_time
# Check if this is the vector type error
error_str = str(error).lower()
if not (
"type" in error_str and "vector" in error_str and "does not exist" in error_str
):
return
# Rate limit: only log once per interval
now = time.time()
if now - _last_vector_diag_time < _VECTOR_DIAG_INTERVAL_SECONDS:
return
_last_vector_diag_time = now
try:
diagnostics: dict[str, object] = {}
try:
search_path_result = await query_raw_with_schema("SHOW search_path")
diagnostics["search_path"] = search_path_result
except Exception as e:
diagnostics["search_path"] = f"Error: {e}"
try:
schema_result = await query_raw_with_schema("SELECT current_schema()")
diagnostics["current_schema"] = schema_result
except Exception as e:
diagnostics["current_schema"] = f"Error: {e}"
try:
user_result = await query_raw_with_schema(
"SELECT current_user, session_user, current_database()"
)
diagnostics["user_info"] = user_result
except Exception as e:
diagnostics["user_info"] = f"Error: {e}"
try:
# Check pgvector extension installation (cluster-wide, stable info)
ext_result = await query_raw_with_schema(
"SELECT extname, extversion, nspname as schema "
"FROM pg_extension e "
"JOIN pg_namespace n ON e.extnamespace = n.oid "
"WHERE extname = 'vector'"
)
diagnostics["pgvector_extension"] = ext_result
except Exception as e:
diagnostics["pgvector_extension"] = f"Error: {e}"
logger.error(
f"Vector type error diagnostics:\n"
f" Error: {error}\n"
f" search_path: {diagnostics.get('search_path')}\n"
f" current_schema: {diagnostics.get('current_schema')}\n"
f" user_info: {diagnostics.get('user_info')}\n"
f" pgvector_extension: {diagnostics.get('pgvector_extension')}"
)
except Exception as diag_error:
logger.error(f"Failed to collect vector error diagnostics: {diag_error}")
# Backward compatibility alias - HybridSearchWeights maps to StoreAgentSearchWeights
# for existing code that expects the popularity parameter
HybridSearchWeights = StoreAgentSearchWeights

View File

@@ -1,3 +1,4 @@
import asyncio
import logging
from abc import ABC, abstractmethod
from enum import Enum
@@ -225,6 +226,10 @@ class SyncRabbitMQ(RabbitMQBase):
class AsyncRabbitMQ(RabbitMQBase):
"""Asynchronous RabbitMQ client"""
def __init__(self, config: RabbitMQConfig):
super().__init__(config)
self._reconnect_lock: asyncio.Lock | None = None
@property
def is_connected(self) -> bool:
return bool(self._connection and not self._connection.is_closed)
@@ -235,7 +240,17 @@ class AsyncRabbitMQ(RabbitMQBase):
@conn_retry("AsyncRabbitMQ", "Acquiring async connection")
async def connect(self):
if self.is_connected:
if self.is_connected and self._channel and not self._channel.is_closed:
return
if (
self.is_connected
and self._connection
and (self._channel is None or self._channel.is_closed)
):
self._channel = await self._connection.channel()
await self._channel.set_qos(prefetch_count=1)
await self.declare_infrastructure()
return
self._connection = await aio_pika.connect_robust(
@@ -291,24 +306,46 @@ class AsyncRabbitMQ(RabbitMQBase):
exchange, routing_key=queue.routing_key or queue.name
)
@func_retry
async def publish_message(
@property
def _lock(self) -> asyncio.Lock:
if self._reconnect_lock is None:
self._reconnect_lock = asyncio.Lock()
return self._reconnect_lock
async def _ensure_channel(self) -> aio_pika.abc.AbstractChannel:
"""Get a valid channel, reconnecting if the current one is stale.
Uses a lock to prevent concurrent reconnection attempts from racing.
"""
if self.is_ready:
return self._channel # type: ignore # is_ready guarantees non-None
async with self._lock:
# Double-check after acquiring lock
if self.is_ready:
return self._channel # type: ignore
self._channel = None
await self.connect()
if self._channel is None:
raise RuntimeError("Channel should be established after connect")
return self._channel
async def _publish_once(
self,
routing_key: str,
message: str,
exchange: Optional[Exchange] = None,
persistent: bool = True,
) -> None:
if not self.is_ready:
await self.connect()
if self._channel is None:
raise RuntimeError("Channel should be established after connect")
channel = await self._ensure_channel()
if exchange:
exchange_obj = await self._channel.get_exchange(exchange.name)
exchange_obj = await channel.get_exchange(exchange.name)
else:
exchange_obj = self._channel.default_exchange
exchange_obj = channel.default_exchange
await exchange_obj.publish(
aio_pika.Message(
@@ -322,9 +359,23 @@ class AsyncRabbitMQ(RabbitMQBase):
routing_key=routing_key,
)
@func_retry
async def publish_message(
self,
routing_key: str,
message: str,
exchange: Optional[Exchange] = None,
persistent: bool = True,
) -> None:
try:
await self._publish_once(routing_key, message, exchange, persistent)
except aio_pika.exceptions.ChannelInvalidStateError:
logger.warning(
"RabbitMQ channel invalid, forcing reconnect and retrying publish"
)
async with self._lock:
self._channel = None
await self._publish_once(routing_key, message, exchange, persistent)
async def get_channel(self) -> aio_pika.abc.AbstractChannel:
if not self.is_ready:
await self.connect()
if self._channel is None:
raise RuntimeError("Channel should be established after connect")
return self._channel
return await self._ensure_channel()
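
A short, hedged usage sketch of the reconnect behaviour added above; the routing key and payload are placeholders, and the config construction is elided.

```python
# Sketch only: a publish after a broker-side channel drop still succeeds,
# because publish_message() catches ChannelInvalidStateError, clears the stale
# channel under the reconnect lock, and retries once on a fresh channel.

async def publish_two(events: "AsyncRabbitMQ") -> None:
    await events.connect()
    await events.publish_message(routing_key="chat.events", message='{"seq": 1}')
    # ... broker drops the channel here (e.g. missed heartbeats) ...
    await events.publish_message(routing_key="chat.events", message='{"seq": 2}')

# asyncio.run(publish_two(AsyncRabbitMQ(config)))  # config: a RabbitMQConfig instance
```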

View File

@@ -25,6 +25,10 @@ RUN if [ -f .env.production ]; then \
cp .env.default .env; \
fi
RUN pnpm run generate:api
# Disable source-map generation in Docker builds to halve webpack memory usage.
# Source maps are only useful when SENTRY_AUTH_TOKEN is set (Vercel deploys);
# the Docker image never uploads them, so generating them just wastes RAM.
ENV NEXT_PUBLIC_SOURCEMAPS="false"
# In CI, we want NEXT_PUBLIC_PW_TEST=true during build so Next.js inlines it
RUN if [ "$NEXT_PUBLIC_PW_TEST" = "true" ]; then NEXT_PUBLIC_PW_TEST=true NODE_OPTIONS="--max-old-space-size=4096" pnpm build; else NODE_OPTIONS="--max-old-space-size=4096" pnpm build; fi

View File

@@ -1,8 +1,12 @@
import { withSentryConfig } from "@sentry/nextjs";
// Allow Docker builds to skip source-map generation (halves memory usage).
// Defaults to true so Vercel/local builds are unaffected.
const enableSourceMaps = process.env.NEXT_PUBLIC_SOURCEMAPS !== "false";
/** @type {import('next').NextConfig} */
const nextConfig = {
productionBrowserSourceMaps: true,
productionBrowserSourceMaps: enableSourceMaps,
// Externalize OpenTelemetry packages to fix Turbopack HMR issues
serverExternalPackages: [
"@opentelemetry/instrumentation",
@@ -96,7 +100,7 @@ export default isDevelopmentBuild
// This helps Sentry with sourcemaps... https://docs.sentry.io/platforms/javascript/guides/nextjs/sourcemaps/
sourcemaps: {
disable: false,
disable: !enableSourceMaps,
assets: [".next/**/*.js", ".next/**/*.js.map"],
ignore: ["**/node_modules/**"],
deleteSourcemapsAfterUpload: false, // Source is public anyway :)

View File

@@ -30,6 +30,7 @@
"defaults"
],
"dependencies": {
"@ai-sdk/react": "3.0.61",
"@faker-js/faker": "10.0.0",
"@hookform/resolvers": "5.2.2",
"@next/third-parties": "15.4.6",
@@ -60,6 +61,10 @@
"@rjsf/utils": "6.1.2",
"@rjsf/validator-ajv8": "6.1.2",
"@sentry/nextjs": "10.27.0",
"@streamdown/cjk": "1.0.1",
"@streamdown/code": "1.0.1",
"@streamdown/math": "1.0.1",
"@streamdown/mermaid": "1.0.1",
"@supabase/ssr": "0.7.0",
"@supabase/supabase-js": "2.78.0",
"@tanstack/react-query": "5.90.6",
@@ -68,6 +73,7 @@
"@vercel/analytics": "1.5.0",
"@vercel/speed-insights": "1.2.0",
"@xyflow/react": "12.9.2",
"ai": "6.0.59",
"boring-avatars": "1.11.2",
"class-variance-authority": "0.7.1",
"clsx": "2.1.1",
@@ -112,9 +118,11 @@
"remark-math": "6.0.0",
"shepherd.js": "14.5.1",
"sonner": "2.0.7",
"streamdown": "2.1.0",
"tailwind-merge": "2.6.0",
"tailwind-scrollbar": "3.1.0",
"tailwindcss-animate": "1.0.7",
"use-stick-to-bottom": "1.1.2",
"uuid": "11.1.0",
"vaul": "1.1.2",
"zod": "3.25.76",

File diff suppressed because it is too large

View File

@@ -70,10 +70,10 @@ export const HorizontalScroll: React.FC<HorizontalScrollAreaProps> = ({
{children}
</div>
{canScrollLeft && (
<div className="pointer-events-none absolute inset-y-0 left-0 w-8 bg-gradient-to-r from-white via-white/80 to-white/0" />
<div className="pointer-events-none absolute inset-y-0 left-0 w-8 bg-gradient-to-r from-background via-background/80 to-background/0" />
)}
{canScrollRight && (
<div className="pointer-events-none absolute inset-y-0 right-0 w-8 bg-gradient-to-l from-white via-white/80 to-white/0" />
<div className="pointer-events-none absolute inset-y-0 right-0 w-8 bg-gradient-to-l from-background via-background/80 to-background/0" />
)}
{canScrollLeft && (
<button

View File

@@ -0,0 +1,74 @@
"use client";
import { ChatInput } from "@/app/(platform)/copilot/components/ChatInput/ChatInput";
import { UIDataTypes, UIMessage, UITools } from "ai";
import { LayoutGroup, motion } from "framer-motion";
import { ChatMessagesContainer } from "../ChatMessagesContainer/ChatMessagesContainer";
import { CopilotChatActionsProvider } from "../CopilotChatActionsProvider/CopilotChatActionsProvider";
import { EmptySession } from "../EmptySession/EmptySession";
export interface ChatContainerProps {
messages: UIMessage<unknown, UIDataTypes, UITools>[];
status: string;
error: Error | undefined;
sessionId: string | null;
isLoadingSession: boolean;
isCreatingSession: boolean;
onCreateSession: () => void | Promise<string>;
onSend: (message: string) => void | Promise<void>;
onStop: () => void;
}
export const ChatContainer = ({
messages,
status,
error,
sessionId,
isLoadingSession,
isCreatingSession,
onCreateSession,
onSend,
onStop,
}: ChatContainerProps) => {
const inputLayoutId = "copilot-2-chat-input";
return (
<CopilotChatActionsProvider onSend={onSend}>
<LayoutGroup id="copilot-2-chat-layout">
<div className="flex h-full min-h-0 w-full flex-col bg-[#f8f8f9] px-2 lg:px-0">
{sessionId ? (
<div className="mx-auto flex h-full min-h-0 w-full max-w-3xl flex-col">
<ChatMessagesContainer
messages={messages}
status={status}
error={error}
isLoading={isLoadingSession}
/>
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.3 }}
className="relative px-3 pb-2 pt-2"
>
<div className="pointer-events-none absolute left-0 right-0 top-[-18px] z-10 h-6 bg-gradient-to-b from-transparent to-[#f8f8f9]" />
<ChatInput
inputId="chat-input-session"
onSend={onSend}
disabled={status === "streaming"}
isStreaming={status === "streaming"}
onStop={onStop}
placeholder="What else can I help with?"
/>
</motion.div>
</div>
) : (
<EmptySession
inputLayoutId={inputLayoutId}
isCreatingSession={isCreatingSession}
onCreateSession={onCreateSession}
onSend={onSend}
/>
)}
</div>
</LayoutGroup>
</CopilotChatActionsProvider>
);
};

View File

@@ -6,17 +6,19 @@ import {
MicrophoneIcon,
StopIcon,
} from "@phosphor-icons/react";
import { ChangeEvent, useCallback } from "react";
import { RecordingIndicator } from "./components/RecordingIndicator";
import { useChatInput } from "./useChatInput";
import { useVoiceRecording } from "./useVoiceRecording";
export interface Props {
onSend: (message: string) => void;
onSend: (message: string) => void | Promise<void>;
disabled?: boolean;
isStreaming?: boolean;
onStop?: () => void;
placeholder?: string;
className?: string;
inputId?: string;
}
export function ChatInput({
@@ -26,14 +28,14 @@ export function ChatInput({
onStop,
placeholder = "Type your message...",
className,
inputId = "chat-input",
}: Props) {
const inputId = "chat-input";
const {
value,
setValue,
handleKeyDown: baseHandleKeyDown,
handleSubmit,
handleChange,
handleChange: baseHandleChange,
hasMultipleLines,
} = useChatInput({
onSend,
@@ -60,6 +62,15 @@ export function ChatInput({
inputId,
});
// Block text changes when recording
const handleChange = useCallback(
(e: ChangeEvent<HTMLTextAreaElement>) => {
if (isRecording) return;
baseHandleChange(e);
},
[isRecording, baseHandleChange],
);
return (
<form onSubmit={handleSubmit} className={cn("relative flex-1", className)}>
<div className="relative">

View File

@@ -21,6 +21,7 @@ export function useChatInput({
}: Args) {
const [value, setValue] = useState("");
const [hasMultipleLines, setHasMultipleLines] = useState(false);
const [isSending, setIsSending] = useState(false);
useEffect(
function focusOnMount() {
@@ -100,34 +101,40 @@ export function useChatInput({
}
}, [value, maxRows, inputId]);
const handleSend = () => {
if (disabled || !value.trim()) return;
onSend(value.trim());
setValue("");
setHasMultipleLines(false);
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
const wrapper = document.getElementById(
`${inputId}-wrapper`,
) as HTMLDivElement;
if (textarea) {
textarea.style.height = "auto";
async function handleSend() {
if (disabled || isSending || !value.trim()) return;
setIsSending(true);
try {
await onSend(value.trim());
setValue("");
setHasMultipleLines(false);
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
const wrapper = document.getElementById(
`${inputId}-wrapper`,
) as HTMLDivElement;
if (textarea) {
textarea.style.height = "auto";
}
if (wrapper) {
wrapper.style.height = "";
wrapper.style.maxHeight = "";
}
} finally {
setIsSending(false);
}
if (wrapper) {
wrapper.style.height = "";
wrapper.style.maxHeight = "";
}
};
}
function handleKeyDown(event: KeyboardEvent<HTMLTextAreaElement>) {
if (event.key === "Enter" && !event.shiftKey) {
event.preventDefault();
handleSend();
void handleSend();
}
}
function handleSubmit(e: FormEvent<HTMLFormElement>) {
e.preventDefault();
handleSend();
void handleSend();
}
function handleChange(e: ChangeEvent<HTMLTextAreaElement>) {
@@ -142,5 +149,6 @@ export function useChatInput({
handleSubmit,
handleChange,
hasMultipleLines,
isSending,
};
}

View File

@@ -38,9 +38,13 @@ export function useVoiceRecording({
const streamRef = useRef<MediaStream | null>(null);
const isRecordingRef = useRef(false);
const isSupported =
typeof window !== "undefined" &&
!!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia);
const [isSupported, setIsSupported] = useState(false);
useEffect(() => {
setIsSupported(
!!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia),
);
}, []);
const clearTimer = useCallback(() => {
if (timerRef.current) {
@@ -214,17 +218,33 @@ export function useVoiceRecording({
const handleKeyDown = useCallback(
(event: KeyboardEvent<HTMLTextAreaElement>) => {
if (event.key === " " && !value.trim() && !isTranscribing) {
// Allow space to toggle recording (start when empty, stop when recording)
if (event.key === " " && !isTranscribing) {
if (isRecordingRef.current) {
// Stop recording on space
event.preventDefault();
stopRecording();
return;
} else if (!value.trim()) {
// Start recording on space when input is empty
event.preventDefault();
void startRecording();
return;
}
}
// Block all key events when recording (except space handled above)
if (isRecordingRef.current) {
event.preventDefault();
toggleRecording();
return;
}
baseHandleKeyDown(event);
},
[value, isTranscribing, toggleRecording, baseHandleKeyDown],
[value, isTranscribing, stopRecording, startRecording, baseHandleKeyDown],
);
const showMicButton = isSupported;
// Don't include isRecording in disabled state - we need key events to work
// Text input is blocked via handleKeyDown instead
const isInputDisabled = disabled || isStreaming || isTranscribing;
// Cleanup on unmount

View File

@@ -0,0 +1,274 @@
import { getGetWorkspaceDownloadFileByIdUrl } from "@/app/api/__generated__/endpoints/workspace/workspace";
import {
Conversation,
ConversationContent,
ConversationScrollButton,
} from "@/components/ai-elements/conversation";
import {
Message,
MessageContent,
MessageResponse,
} from "@/components/ai-elements/message";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { ToolUIPart, UIDataTypes, UIMessage, UITools } from "ai";
import { useEffect, useState } from "react";
import { CreateAgentTool } from "../../tools/CreateAgent/CreateAgent";
import { EditAgentTool } from "../../tools/EditAgent/EditAgent";
import { FindAgentsTool } from "../../tools/FindAgents/FindAgents";
import { FindBlocksTool } from "../../tools/FindBlocks/FindBlocks";
import { RunAgentTool } from "../../tools/RunAgent/RunAgent";
import { RunBlockTool } from "../../tools/RunBlock/RunBlock";
import { SearchDocsTool } from "../../tools/SearchDocs/SearchDocs";
import { ViewAgentOutputTool } from "../../tools/ViewAgentOutput/ViewAgentOutput";
// ---------------------------------------------------------------------------
// Workspace media support
// ---------------------------------------------------------------------------
/**
* Resolve workspace:// URLs in markdown text to proxy download URLs.
* Detects MIME type from the hash fragment (e.g. workspace://id#video/mp4)
* and prefixes the alt text with "video:" so the custom img component can
* render a <video> element instead.
*/
function resolveWorkspaceUrls(text: string): string {
return text.replace(
/!\[([^\]]*)\]\(workspace:\/\/([^)#\s]+)(?:#([^)\s]*))?\)/g,
(_match, alt: string, fileId: string, mimeHint?: string) => {
const apiPath = getGetWorkspaceDownloadFileByIdUrl(fileId);
const url = `/api/proxy${apiPath}`;
if (mimeHint?.startsWith("video/")) {
return `![video:${alt || "Video"}](${url})`;
}
return `![${alt || "Image"}](${url})`;
},
);
}
/**
* Custom img component for Streamdown that renders <video> elements
* for workspace video files (detected via "video:" alt-text prefix).
* Falls back to <video> when an <img> fails to load for workspace files.
*/
function WorkspaceMediaImage(props: React.JSX.IntrinsicElements["img"]) {
const { src, alt, ...rest } = props;
const [imgFailed, setImgFailed] = useState(false);
const isWorkspace = src?.includes("/workspace/files/") ?? false;
if (!src) return null;
if (alt?.startsWith("video:") || (imgFailed && isWorkspace)) {
return (
<span className="my-2 inline-block">
<video
controls
className="h-auto max-w-full rounded-md border border-zinc-200"
preload="metadata"
>
<source src={src} />
Your browser does not support the video tag.
</video>
</span>
);
}
return (
// eslint-disable-next-line @next/next/no-img-element
<img
src={src}
alt={alt || "Image"}
className="h-auto max-w-full rounded-md border border-zinc-200"
loading="lazy"
onError={() => {
if (isWorkspace) setImgFailed(true);
}}
{...rest}
/>
);
}
/** Stable components override for Streamdown (avoids re-creating on every render). */
const STREAMDOWN_COMPONENTS = { img: WorkspaceMediaImage };
const THINKING_PHRASES = [
"Thinking...",
"Considering this...",
"Working through this...",
"Analyzing your request...",
"Reasoning...",
"Looking into it...",
"Processing your request...",
"Mulling this over...",
"Piecing it together...",
"On it...",
];
function getRandomPhrase() {
return THINKING_PHRASES[Math.floor(Math.random() * THINKING_PHRASES.length)];
}
interface ChatMessagesContainerProps {
messages: UIMessage<unknown, UIDataTypes, UITools>[];
status: string;
error: Error | undefined;
isLoading: boolean;
}
export const ChatMessagesContainer = ({
messages,
status,
error,
isLoading,
}: ChatMessagesContainerProps) => {
const [thinkingPhrase, setThinkingPhrase] = useState(getRandomPhrase);
useEffect(() => {
if (status === "submitted") {
setThinkingPhrase(getRandomPhrase());
}
}, [status]);
const lastMessage = messages[messages.length - 1];
const lastAssistantHasVisibleContent =
lastMessage?.role === "assistant" &&
lastMessage.parts.some(
(p) =>
(p.type === "text" && p.text.trim().length > 0) ||
p.type.startsWith("tool-"),
);
const showThinking =
status === "submitted" ||
(status === "streaming" && !lastAssistantHasVisibleContent);
return (
<Conversation className="min-h-0 flex-1">
<ConversationContent className="gap-6 px-3 py-6">
{isLoading && messages.length === 0 && (
<div className="flex flex-1 items-center justify-center">
<LoadingSpinner size="large" className="text-neutral-400" />
</div>
)}
{messages.map((message, messageIndex) => {
const isLastAssistant =
messageIndex === messages.length - 1 &&
message.role === "assistant";
const messageHasVisibleContent = message.parts.some(
(p) =>
(p.type === "text" && p.text.trim().length > 0) ||
p.type.startsWith("tool-"),
);
return (
<Message from={message.role} key={message.id}>
<MessageContent
className={
"text-[1rem] leading-relaxed " +
"group-[.is-user]:rounded-xl group-[.is-user]:bg-purple-100 group-[.is-user]:px-3 group-[.is-user]:py-2.5 group-[.is-user]:text-slate-900 group-[.is-user]:[border-bottom-right-radius:0] " +
"group-[.is-assistant]:bg-transparent group-[.is-assistant]:text-slate-900"
}
>
{message.parts.map((part, i) => {
switch (part.type) {
case "text":
return (
<MessageResponse
key={`${message.id}-${i}`}
components={STREAMDOWN_COMPONENTS}
>
{resolveWorkspaceUrls(part.text)}
</MessageResponse>
);
case "tool-find_block":
return (
<FindBlocksTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-find_agent":
case "tool-find_library_agent":
return (
<FindAgentsTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-search_docs":
case "tool-get_doc_page":
return (
<SearchDocsTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-run_block":
return (
<RunBlockTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-run_agent":
case "tool-schedule_agent":
return (
<RunAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-create_agent":
return (
<CreateAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-edit_agent":
return (
<EditAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-view_agent_output":
return (
<ViewAgentOutputTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
default:
return null;
}
})}
{isLastAssistant &&
!messageHasVisibleContent &&
showThinking && (
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
{thinkingPhrase}
</span>
)}
</MessageContent>
</Message>
);
})}
{showThinking && lastMessage?.role !== "assistant" && (
<Message from="assistant">
<MessageContent className="text-[1rem] leading-relaxed">
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
{thinkingPhrase}
</span>
</MessageContent>
</Message>
)}
{error && (
<div className="rounded-lg bg-red-50 p-3 text-red-600">
Error: {error.message}
</div>
)}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
);
};
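
For reference, the workspace:// rewrite performed by `resolveWorkspaceUrls` above, re-expressed as a standalone Python sketch so the input/output shape is easy to see. The download path is a stand-in for whatever `getGetWorkspaceDownloadFileByIdUrl` generates; only the `/api/proxy` prefix and the `/workspace/files/` segment are taken from the diff.

```python
import re

# Illustrative re-implementation of the TypeScript resolveWorkspaceUrls();
# the exact download path below is an assumption.
def resolve_workspace_urls(text: str) -> str:
    def repl(m: re.Match) -> str:
        alt, file_id, mime = m.group(1), m.group(2), m.group(3)
        url = f"/api/proxy/api/workspace/files/{file_id}/download"  # assumed shape
        if mime and mime.startswith("video/"):
            return f"![video:{alt or 'Video'}]({url})"
        return f"![{alt or 'Image'}]({url})"

    return re.sub(
        r"!\[([^\]]*)\]\(workspace://([^)#\s]+)(?:#([^)\s]*))?\)", repl, text
    )

print(resolve_workspace_urls("![chart](workspace://abc123#video/mp4)"))
# -> ![video:chart](/api/proxy/api/workspace/files/abc123/download)
```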

View File

@@ -0,0 +1,188 @@
"use client";
import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat";
import { Button } from "@/components/atoms/Button/Button";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { Text } from "@/components/atoms/Text/Text";
import {
Sidebar,
SidebarContent,
SidebarFooter,
SidebarHeader,
SidebarTrigger,
useSidebar,
} from "@/components/ui/sidebar";
import { cn } from "@/lib/utils";
import { PlusCircleIcon, PlusIcon } from "@phosphor-icons/react";
import { motion } from "framer-motion";
import { parseAsString, useQueryState } from "nuqs";
export function ChatSidebar() {
const { state } = useSidebar();
const isCollapsed = state === "collapsed";
const [sessionId, setSessionId] = useQueryState("sessionId", parseAsString);
const { data: sessionsResponse, isLoading: isLoadingSessions } =
useGetV2ListSessions({ limit: 50 });
const sessions =
sessionsResponse?.status === 200 ? sessionsResponse.data.sessions : [];
function handleNewChat() {
setSessionId(null);
}
function handleSelectSession(id: string) {
setSessionId(id);
}
function formatDate(dateString: string) {
const date = new Date(dateString);
const now = new Date();
const diffMs = now.getTime() - date.getTime();
const diffDays = Math.floor(diffMs / (1000 * 60 * 60 * 24));
if (diffDays === 0) return "Today";
if (diffDays === 1) return "Yesterday";
if (diffDays < 7) return `${diffDays} days ago`;
const day = date.getDate();
const ordinal =
day % 10 === 1 && day !== 11
? "st"
: day % 10 === 2 && day !== 12
? "nd"
: day % 10 === 3 && day !== 13
? "rd"
: "th";
const month = date.toLocaleDateString("en-US", { month: "short" });
const year = date.getFullYear();
return `${day}${ordinal} ${month} ${year}`;
}
return (
<Sidebar
variant="inset"
collapsible="icon"
className="!top-[50px] !h-[calc(100vh-50px)] border-r border-zinc-100 px-0"
>
{isCollapsed && (
<SidebarHeader
className={cn(
"flex",
isCollapsed
? "flex-row items-center justify-between gap-y-4 md:flex-col md:items-start md:justify-start"
: "flex-row items-center justify-between",
)}
>
<motion.div
key={isCollapsed ? "header-collapsed" : "header-expanded"}
className="flex flex-col items-center gap-3 pt-4"
initial={{ opacity: 0, filter: "blur(3px)" }}
animate={{ opacity: 1, filter: "blur(0px)" }}
transition={{ type: "spring", bounce: 0.2 }}
>
<div className="flex flex-col items-center gap-2">
<SidebarTrigger />
<Button
variant="ghost"
onClick={handleNewChat}
style={{ minWidth: "auto", width: "auto" }}
>
<PlusCircleIcon className="!size-5" />
<span className="sr-only">New Chat</span>
</Button>
</div>
</motion.div>
</SidebarHeader>
)}
<SidebarContent className="gap-4 overflow-y-auto px-4 py-4 [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden">
{!isCollapsed && (
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.2, delay: 0.1 }}
className="flex items-center justify-between px-3"
>
<Text variant="h3" size="body-medium">
Your chats
</Text>
<div className="relative left-6">
<SidebarTrigger />
</div>
</motion.div>
)}
{!isCollapsed && (
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.2, delay: 0.15 }}
className="mt-4 flex flex-col gap-1"
>
{isLoadingSessions ? (
<div className="flex items-center justify-center py-4">
<LoadingSpinner size="small" className="text-neutral-400" />
</div>
) : sessions.length === 0 ? (
<p className="py-4 text-center text-sm text-neutral-500">
No conversations yet
</p>
) : (
sessions.map((session) => (
<button
key={session.id}
onClick={() => handleSelectSession(session.id)}
className={cn(
"w-full rounded-lg px-3 py-2.5 text-left transition-colors",
session.id === sessionId
? "bg-zinc-100"
: "hover:bg-zinc-50",
)}
>
<div className="flex min-w-0 max-w-full flex-col overflow-hidden">
<div className="min-w-0 max-w-full">
<Text
variant="body"
className={cn(
"truncate font-normal",
session.id === sessionId
? "text-zinc-600"
: "text-zinc-800",
)}
>
{session.title || `Untitled chat`}
</Text>
</div>
<Text variant="small" className="text-neutral-400">
{formatDate(session.updated_at)}
</Text>
</div>
</button>
))
)}
</motion.div>
)}
</SidebarContent>
{!isCollapsed && sessionId && (
<SidebarFooter className="shrink-0 bg-zinc-50 p-3 pb-1 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.2, delay: 0.2 }}
>
<Button
variant="primary"
size="small"
onClick={handleNewChat}
className="w-full"
leftIcon={<PlusIcon className="h-4 w-4" weight="bold" />}
>
New Chat
</Button>
</motion.div>
</SidebarFooter>
)}
</Sidebar>
);
}

View File

@@ -0,0 +1,16 @@
"use client";
import { CopilotChatActionsContext } from "./useCopilotChatActions";
interface Props {
onSend: (message: string) => void | Promise<void>;
children: React.ReactNode;
}
export function CopilotChatActionsProvider({ onSend, children }: Props) {
return (
<CopilotChatActionsContext.Provider value={{ onSend }}>
{children}
</CopilotChatActionsContext.Provider>
);
}

View File

@@ -0,0 +1,23 @@
"use client";
import { createContext, useContext } from "react";
interface CopilotChatActions {
onSend: (message: string) => void | Promise<void>;
}
const CopilotChatActionsContext = createContext<CopilotChatActions | null>(
null,
);
export function useCopilotChatActions(): CopilotChatActions {
const ctx = useContext(CopilotChatActionsContext);
if (!ctx) {
throw new Error(
"useCopilotChatActions must be used within CopilotChatActionsProvider",
);
}
return ctx;
}
export { CopilotChatActionsContext };

View File

@@ -1,99 +0,0 @@
"use client";
import { ChatLoader } from "@/components/contextual/Chat/components/ChatLoader/ChatLoader";
import { Text } from "@/components/atoms/Text/Text";
import { NAVBAR_HEIGHT_PX } from "@/lib/constants";
import type { ReactNode } from "react";
import { DesktopSidebar } from "./components/DesktopSidebar/DesktopSidebar";
import { MobileDrawer } from "./components/MobileDrawer/MobileDrawer";
import { MobileHeader } from "./components/MobileHeader/MobileHeader";
import { useCopilotShell } from "./useCopilotShell";
interface Props {
children: ReactNode;
}
export function CopilotShell({ children }: Props) {
const {
isMobile,
isDrawerOpen,
isLoading,
isCreatingSession,
isLoggedIn,
hasActiveSession,
sessions,
currentSessionId,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleNewChatClick,
handleSessionClick,
hasNextPage,
isFetchingNextPage,
fetchNextPage,
} = useCopilotShell();
if (!isLoggedIn) {
return (
<div className="flex h-full items-center justify-center">
<ChatLoader />
</div>
);
}
return (
<div
className="flex overflow-hidden bg-[#EFEFF0]"
style={{ height: `calc(100vh - ${NAVBAR_HEIGHT_PX}px)` }}
>
{!isMobile && (
<DesktopSidebar
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={handleSessionClick}
onFetchNextPage={fetchNextPage}
onNewChat={handleNewChatClick}
hasActiveSession={Boolean(hasActiveSession)}
/>
)}
<div className="relative flex min-h-0 flex-1 flex-col">
{isMobile && <MobileHeader onOpenDrawer={handleOpenDrawer} />}
<div className="flex min-h-0 flex-1 flex-col">
{isCreatingSession ? (
<div className="flex h-full flex-1 flex-col items-center justify-center bg-[#f8f8f9]">
<div className="flex flex-col items-center gap-4">
<ChatLoader />
<Text variant="body" className="text-zinc-500">
Creating your chat...
</Text>
</div>
</div>
) : (
children
)}
</div>
</div>
{isMobile && (
<MobileDrawer
isOpen={isDrawerOpen}
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={handleSessionClick}
onFetchNextPage={fetchNextPage}
onNewChat={handleNewChatClick}
onClose={handleCloseDrawer}
onOpenChange={handleDrawerOpenChange}
hasActiveSession={Boolean(hasActiveSession)}
/>
)}
</div>
);
}

View File

@@ -1,70 +0,0 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { Plus } from "@phosphor-icons/react";
import { SessionsList } from "../SessionsList/SessionsList";
interface Props {
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
onNewChat: () => void;
hasActiveSession: boolean;
}
export function DesktopSidebar({
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
onNewChat,
hasActiveSession,
}: Props) {
return (
<aside className="flex h-full w-80 flex-col border-r border-zinc-100 bg-zinc-50">
<div className="shrink-0 px-6 py-4">
<Text variant="h3" size="body-medium">
Your chats
</Text>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
<SessionsList
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={onSelectSession}
onFetchNextPage={onFetchNextPage}
/>
</div>
{hasActiveSession && (
<div className="shrink-0 bg-zinc-50 p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<Plus width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
)}
</aside>
);
}

View File

@@ -1,91 +0,0 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { PlusIcon, X } from "@phosphor-icons/react";
import { Drawer } from "vaul";
import { SessionsList } from "../SessionsList/SessionsList";
interface Props {
isOpen: boolean;
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
onNewChat: () => void;
onClose: () => void;
onOpenChange: (open: boolean) => void;
hasActiveSession: boolean;
}
export function MobileDrawer({
isOpen,
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
onNewChat,
onClose,
onOpenChange,
hasActiveSession,
}: Props) {
return (
<Drawer.Root open={isOpen} onOpenChange={onOpenChange} direction="left">
<Drawer.Portal>
<Drawer.Overlay className="fixed inset-0 z-[60] bg-black/10 backdrop-blur-sm" />
<Drawer.Content className="fixed left-0 top-0 z-[70] flex h-full w-80 flex-col border-r border-zinc-200 bg-zinc-50">
<div className="shrink-0 border-b border-zinc-200 p-4">
<div className="flex items-center justify-between">
<Drawer.Title className="text-lg font-semibold text-zinc-800">
Your chats
</Drawer.Title>
<Button
variant="icon"
size="icon"
aria-label="Close sessions"
onClick={onClose}
>
<X width="1.25rem" height="1.25rem" />
</Button>
</div>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
<SessionsList
sessions={sessions}
currentSessionId={currentSessionId}
isLoading={isLoading}
hasNextPage={hasNextPage}
isFetchingNextPage={isFetchingNextPage}
onSelectSession={onSelectSession}
onFetchNextPage={onFetchNextPage}
/>
</div>
{hasActiveSession && (
<div className="shrink-0 bg-white p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<PlusIcon width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
)}
</Drawer.Content>
</Drawer.Portal>
</Drawer.Root>
);
}

View File

@@ -1,24 +0,0 @@
import { useState } from "react";
export function useMobileDrawer() {
const [isDrawerOpen, setIsDrawerOpen] = useState(false);
const handleOpenDrawer = () => {
setIsDrawerOpen(true);
};
const handleCloseDrawer = () => {
setIsDrawerOpen(false);
};
const handleDrawerOpenChange = (open: boolean) => {
setIsDrawerOpen(open);
};
return {
isDrawerOpen,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
};
}

View File

@@ -1,80 +0,0 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Skeleton } from "@/components/__legacy__/ui/skeleton";
import { Text } from "@/components/atoms/Text/Text";
import { InfiniteList } from "@/components/molecules/InfiniteList/InfiniteList";
import { cn } from "@/lib/utils";
import { getSessionTitle } from "../../helpers";
interface Props {
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
hasNextPage: boolean;
isFetchingNextPage: boolean;
onSelectSession: (sessionId: string) => void;
onFetchNextPage: () => void;
}
export function SessionsList({
sessions,
currentSessionId,
isLoading,
hasNextPage,
isFetchingNextPage,
onSelectSession,
onFetchNextPage,
}: Props) {
if (isLoading) {
return (
<div className="space-y-1">
{Array.from({ length: 5 }).map((_, i) => (
<div key={i} className="rounded-lg px-3 py-2.5">
<Skeleton className="h-5 w-full" />
</div>
))}
</div>
);
}
if (sessions.length === 0) {
return (
<div className="flex h-full items-center justify-center">
<Text variant="body" className="text-zinc-500">
You don&apos;t have previous chats
</Text>
</div>
);
}
return (
<InfiniteList
items={sessions}
hasMore={hasNextPage}
isFetchingMore={isFetchingNextPage}
onEndReached={onFetchNextPage}
className="space-y-1"
renderItem={(session) => {
const isActive = session.id === currentSessionId;
return (
<button
onClick={() => onSelectSession(session.id)}
className={cn(
"w-full rounded-lg px-3 py-2.5 text-left transition-colors",
isActive ? "bg-zinc-100" : "hover:bg-zinc-50",
)}
>
<Text
variant="body"
className={cn(
"font-normal",
isActive ? "text-zinc-600" : "text-zinc-800",
)}
>
{getSessionTitle(session)}
</Text>
</button>
);
}}
/>
);
}

View File

@@ -1,91 +0,0 @@
import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat";
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { okData } from "@/app/api/helpers";
import { useEffect, useState } from "react";
const PAGE_SIZE = 50;
export interface UseSessionsPaginationArgs {
enabled: boolean;
}
export function useSessionsPagination({ enabled }: UseSessionsPaginationArgs) {
const [offset, setOffset] = useState(0);
const [accumulatedSessions, setAccumulatedSessions] = useState<
SessionSummaryResponse[]
>([]);
const [totalCount, setTotalCount] = useState<number | null>(null);
const { data, isLoading, isFetching, isError } = useGetV2ListSessions(
{ limit: PAGE_SIZE, offset },
{
query: {
enabled: enabled && offset >= 0,
},
},
);
useEffect(() => {
const responseData = okData(data);
if (responseData) {
const newSessions = responseData.sessions;
const total = responseData.total;
setTotalCount(total);
if (offset === 0) {
setAccumulatedSessions(newSessions);
} else {
setAccumulatedSessions((prev) => [...prev, ...newSessions]);
}
} else if (!enabled) {
setAccumulatedSessions([]);
setTotalCount(null);
}
}, [data, offset, enabled]);
const hasNextPage =
totalCount !== null && accumulatedSessions.length < totalCount;
const areAllSessionsLoaded =
totalCount !== null &&
accumulatedSessions.length >= totalCount &&
!isFetching &&
!isLoading;
useEffect(() => {
if (
hasNextPage &&
!isFetching &&
!isLoading &&
!isError &&
totalCount !== null
) {
setOffset((prev) => prev + PAGE_SIZE);
}
}, [hasNextPage, isFetching, isLoading, isError, totalCount]);
const fetchNextPage = () => {
if (hasNextPage && !isFetching) {
setOffset((prev) => prev + PAGE_SIZE);
}
};
const reset = () => {
// Only reset the offset - keep existing sessions visible during refetch
// The effect will replace sessions when new data arrives at offset 0
setOffset(0);
};
return {
sessions: accumulatedSessions,
isLoading,
isFetching,
hasNextPage,
areAllSessionsLoaded,
totalCount,
fetchNextPage,
reset,
};
}

View File

@@ -1,106 +0,0 @@
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { format, formatDistanceToNow, isToday } from "date-fns";
export function convertSessionDetailToSummary(session: SessionDetailResponse) {
return {
id: session.id,
created_at: session.created_at,
updated_at: session.updated_at,
title: undefined,
};
}
export function filterVisibleSessions(sessions: SessionSummaryResponse[]) {
const fiveMinutesAgo = Date.now() - 5 * 60 * 1000;
return sessions.filter((session) => {
const hasBeenUpdated = session.updated_at !== session.created_at;
if (hasBeenUpdated) return true;
const isRecentlyCreated =
new Date(session.created_at).getTime() > fiveMinutesAgo;
return isRecentlyCreated;
});
}
export function getSessionTitle(session: SessionSummaryResponse) {
if (session.title) return session.title;
const isNewSession = session.updated_at === session.created_at;
if (isNewSession) {
const createdDate = new Date(session.created_at);
if (isToday(createdDate)) {
return "Today";
}
return format(createdDate, "MMM d, yyyy");
}
return "Untitled Chat";
}
export function getSessionUpdatedLabel(session: SessionSummaryResponse) {
if (!session.updated_at) return "";
return formatDistanceToNow(new Date(session.updated_at), { addSuffix: true });
}
export function mergeCurrentSessionIntoList(
accumulatedSessions: SessionSummaryResponse[],
currentSessionId: string | null,
currentSessionData: SessionDetailResponse | null | undefined,
recentlyCreatedSessions?: Map<string, SessionSummaryResponse>,
) {
const filteredSessions: SessionSummaryResponse[] = [];
const addedIds = new Set<string>();
if (accumulatedSessions.length > 0) {
const visibleSessions = filterVisibleSessions(accumulatedSessions);
if (currentSessionId) {
const currentInAll = accumulatedSessions.find(
(s) => s.id === currentSessionId,
);
if (currentInAll) {
const isInVisible = visibleSessions.some(
(s) => s.id === currentSessionId,
);
if (!isInVisible) {
filteredSessions.push(currentInAll);
addedIds.add(currentInAll.id);
}
}
}
for (const session of visibleSessions) {
if (!addedIds.has(session.id)) {
filteredSessions.push(session);
addedIds.add(session.id);
}
}
}
if (currentSessionId && currentSessionData) {
if (!addedIds.has(currentSessionId)) {
const summarySession = convertSessionDetailToSummary(currentSessionData);
filteredSessions.unshift(summarySession);
addedIds.add(currentSessionId);
}
}
if (recentlyCreatedSessions) {
for (const [sessionId, sessionData] of recentlyCreatedSessions) {
if (!addedIds.has(sessionId)) {
filteredSessions.unshift(sessionData);
addedIds.add(sessionId);
}
}
}
return filteredSessions;
}
export function getCurrentSessionId(searchParams: URLSearchParams) {
return searchParams.get("sessionId");
}

View File

@@ -1,124 +0,0 @@
"use client";
import {
getGetV2GetSessionQueryKey,
getGetV2ListSessionsQueryKey,
useGetV2GetSession,
} from "@/app/api/__generated__/endpoints/chat/chat";
import { okData } from "@/app/api/helpers";
import { useChatStore } from "@/components/contextual/Chat/chat-store";
import { useBreakpoint } from "@/lib/hooks/useBreakpoint";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { useQueryClient } from "@tanstack/react-query";
import { usePathname, useSearchParams } from "next/navigation";
import { useCopilotStore } from "../../copilot-page-store";
import { useCopilotSessionId } from "../../useCopilotSessionId";
import { useMobileDrawer } from "./components/MobileDrawer/useMobileDrawer";
import { getCurrentSessionId } from "./helpers";
import { useShellSessionList } from "./useShellSessionList";
export function useCopilotShell() {
const pathname = usePathname();
const searchParams = useSearchParams();
const queryClient = useQueryClient();
const breakpoint = useBreakpoint();
const { isLoggedIn } = useSupabase();
const isMobile =
breakpoint === "base" || breakpoint === "sm" || breakpoint === "md";
const { urlSessionId, setUrlSessionId } = useCopilotSessionId();
const isOnHomepage = pathname === "/copilot";
const paramSessionId = searchParams.get("sessionId");
const {
isDrawerOpen,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
} = useMobileDrawer();
const paginationEnabled = !isMobile || isDrawerOpen || !!paramSessionId;
const currentSessionId = getCurrentSessionId(searchParams);
const { data: currentSessionData } = useGetV2GetSession(
currentSessionId || "",
{
query: {
enabled: !!currentSessionId,
select: okData,
},
},
);
const {
sessions,
isLoading,
isSessionsFetching,
hasNextPage,
fetchNextPage,
resetPagination,
recentlyCreatedSessionsRef,
} = useShellSessionList({
paginationEnabled,
currentSessionId,
currentSessionData,
isOnHomepage,
paramSessionId,
});
const stopStream = useChatStore((s) => s.stopStream);
const isCreatingSession = useCopilotStore((s) => s.isCreatingSession);
function handleSessionClick(sessionId: string) {
if (sessionId === currentSessionId) return;
// Stop current stream - SSE reconnection allows resuming later
if (currentSessionId) {
stopStream(currentSessionId);
}
if (recentlyCreatedSessionsRef.current.has(sessionId)) {
queryClient.invalidateQueries({
queryKey: getGetV2GetSessionQueryKey(sessionId),
});
}
setUrlSessionId(sessionId, { shallow: false });
if (isMobile) handleCloseDrawer();
}
function handleNewChatClick() {
// Stop current stream - SSE reconnection allows resuming later
if (currentSessionId) {
stopStream(currentSessionId);
}
resetPagination();
queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
setUrlSessionId(null, { shallow: false });
if (isMobile) handleCloseDrawer();
}
return {
isMobile,
isDrawerOpen,
isLoggedIn,
hasActiveSession:
Boolean(currentSessionId) && (!isOnHomepage || Boolean(paramSessionId)),
isLoading: isLoading || isCreatingSession,
isCreatingSession,
sessions,
currentSessionId: urlSessionId,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleNewChatClick,
handleSessionClick,
hasNextPage,
isFetchingNextPage: isSessionsFetching,
fetchNextPage,
};
}

View File

@@ -1,113 +0,0 @@
import { getGetV2ListSessionsQueryKey } from "@/app/api/__generated__/endpoints/chat/chat";
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { useChatStore } from "@/components/contextual/Chat/chat-store";
import { useQueryClient } from "@tanstack/react-query";
import { useEffect, useMemo, useRef } from "react";
import { useSessionsPagination } from "./components/SessionsList/useSessionsPagination";
import {
convertSessionDetailToSummary,
filterVisibleSessions,
mergeCurrentSessionIntoList,
} from "./helpers";
interface UseShellSessionListArgs {
paginationEnabled: boolean;
currentSessionId: string | null;
currentSessionData: SessionDetailResponse | null | undefined;
isOnHomepage: boolean;
paramSessionId: string | null;
}
export function useShellSessionList({
paginationEnabled,
currentSessionId,
currentSessionData,
isOnHomepage,
paramSessionId,
}: UseShellSessionListArgs) {
const queryClient = useQueryClient();
const onStreamComplete = useChatStore((s) => s.onStreamComplete);
const {
sessions: accumulatedSessions,
isLoading: isSessionsLoading,
isFetching: isSessionsFetching,
hasNextPage,
fetchNextPage,
reset: resetPagination,
} = useSessionsPagination({
enabled: paginationEnabled,
});
const recentlyCreatedSessionsRef = useRef<
Map<string, SessionSummaryResponse>
>(new Map());
useEffect(() => {
if (isOnHomepage && !paramSessionId) {
queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
}
}, [isOnHomepage, paramSessionId, queryClient]);
useEffect(() => {
if (currentSessionId && currentSessionData) {
const isNewSession =
currentSessionData.updated_at === currentSessionData.created_at;
const isNotInAccumulated = !accumulatedSessions.some(
(s) => s.id === currentSessionId,
);
if (isNewSession || isNotInAccumulated) {
const summary = convertSessionDetailToSummary(currentSessionData);
recentlyCreatedSessionsRef.current.set(currentSessionId, summary);
}
}
}, [currentSessionId, currentSessionData, accumulatedSessions]);
useEffect(() => {
for (const sessionId of recentlyCreatedSessionsRef.current.keys()) {
if (accumulatedSessions.some((s) => s.id === sessionId)) {
recentlyCreatedSessionsRef.current.delete(sessionId);
}
}
}, [accumulatedSessions]);
useEffect(() => {
const unsubscribe = onStreamComplete(() => {
queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
});
return unsubscribe;
}, [onStreamComplete, queryClient]);
const sessions = useMemo(
() =>
mergeCurrentSessionIntoList(
accumulatedSessions,
currentSessionId,
currentSessionData,
recentlyCreatedSessionsRef.current,
),
[accumulatedSessions, currentSessionId, currentSessionData],
);
const visibleSessions = useMemo(
() => filterVisibleSessions(sessions),
[sessions],
);
const isLoading = isSessionsLoading && accumulatedSessions.length === 0;
return {
sessions: visibleSessions,
isLoading,
isSessionsFetching,
hasNextPage,
fetchNextPage,
resetPagination,
recentlyCreatedSessionsRef,
};
}

View File

@@ -0,0 +1,111 @@
"use client";
import { ChatInput } from "@/app/(platform)/copilot/components/ChatInput/ChatInput";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { SpinnerGapIcon } from "@phosphor-icons/react";
import { motion } from "framer-motion";
import { useEffect, useState } from "react";
import {
getGreetingName,
getInputPlaceholder,
getQuickActions,
} from "./helpers";
interface Props {
inputLayoutId: string;
isCreatingSession: boolean;
onCreateSession: () => void | Promise<string>;
onSend: (message: string) => void | Promise<void>;
}
export function EmptySession({
inputLayoutId,
isCreatingSession,
onSend,
}: Props) {
const { user } = useSupabase();
const greetingName = getGreetingName(user);
const quickActions = getQuickActions();
const [loadingAction, setLoadingAction] = useState<string | null>(null);
const [inputPlaceholder, setInputPlaceholder] = useState(
getInputPlaceholder(),
);
useEffect(() => {
setInputPlaceholder(getInputPlaceholder(window.innerWidth));
}, [window.innerWidth]);
async function handleQuickActionClick(action: string) {
if (isCreatingSession || loadingAction) return;
setLoadingAction(action);
try {
await onSend(action);
} finally {
setLoadingAction(null);
}
}
return (
<div className="flex h-full flex-1 items-center justify-center overflow-y-auto bg-[#f8f8f9] px-0 py-5 md:px-6 md:py-10">
<motion.div
className="w-full max-w-3xl text-center"
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.3 }}
>
<div className="mx-auto max-w-3xl">
<Text variant="h3" className="mb-1 !text-[1.375rem] text-zinc-700">
Hey, <span className="text-violet-600">{greetingName}</span>
</Text>
<Text variant="h3" className="mb-8 !font-normal">
Tell me about your work, I&apos;ll find what to automate.
</Text>
<div className="mb-6">
<motion.div
layoutId={inputLayoutId}
transition={{ type: "spring", bounce: 0.2, duration: 0.65 }}
className="w-full px-2"
>
<ChatInput
inputId="chat-input-empty"
onSend={onSend}
disabled={isCreatingSession}
placeholder={inputPlaceholder}
className="w-full"
/>
</motion.div>
</div>
</div>
<div className="flex flex-wrap items-center justify-center gap-3 overflow-x-auto [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden">
{quickActions.map((action) => (
<Button
key={action}
type="button"
variant="outline"
size="small"
onClick={() => void handleQuickActionClick(action)}
disabled={isCreatingSession || loadingAction !== null}
aria-busy={loadingAction === action}
leftIcon={
loadingAction === action ? (
<SpinnerGapIcon
className="h-4 w-4 animate-spin"
weight="bold"
/>
) : null
}
className="h-auto shrink-0 border-zinc-300 px-3 py-2 text-[.9rem] text-zinc-600"
>
{action}
</Button>
))}
</div>
</motion.div>
</div>
);
}

View File

@@ -1,6 +1,26 @@
import type { User } from "@supabase/supabase-js";
import { User } from "@supabase/supabase-js";
export function getGreetingName(user?: User | null): string {
export function getInputPlaceholder(width?: number) {
if (!width) return "What's your role and what eats up most of your day?";
if (width < 500) {
return "I'm a chef and I hate...";
}
if (width <= 1080) {
return "What's your role and what eats up most of your day?";
}
return "What's your role and what eats up most of your day? e.g. 'I'm a recruiter and I hate...'";
}
export function getQuickActions() {
return [
"I don't know where to start, just ask me stuff",
"I do the same thing every week and it's killing me",
"Help me find where I'm wasting my time",
];
}
export function getGreetingName(user?: User | null) {
if (!user) return "there";
const metadata = user.user_metadata as Record<string, unknown> | undefined;
const fullName = metadata?.full_name;
@@ -16,30 +36,3 @@ export function getGreetingName(user?: User | null): string {
}
return "there";
}
export function buildCopilotChatUrl(prompt: string): string {
const trimmed = prompt.trim();
if (!trimmed) return "/copilot/chat";
const encoded = encodeURIComponent(trimmed);
return `/copilot/chat?prompt=${encoded}`;
}
export function getQuickActions(): string[] {
return [
"I don't know where to start, just ask me stuff",
"I do the same thing every week and it's killing me",
"Help me find where I'm wasting my time",
];
}
export function getInputPlaceholder(width?: number) {
if (!width) return "What's your role and what eats up most of your day?";
if (width < 500) {
return "I'm a chef and I hate...";
}
if (width <= 1080) {
return "What's your role and what eats up most of your day?";
}
return "What's your role and what eats up most of your day? e.g. 'I'm a recruiter and I hate...'";
}

View File

@@ -0,0 +1,140 @@
import type { SessionSummaryResponse } from "@/app/api/__generated__/models/sessionSummaryResponse";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { PlusIcon, SpinnerGapIcon, X } from "@phosphor-icons/react";
import { Drawer } from "vaul";
interface Props {
isOpen: boolean;
sessions: SessionSummaryResponse[];
currentSessionId: string | null;
isLoading: boolean;
onSelectSession: (sessionId: string) => void;
onNewChat: () => void;
onClose: () => void;
onOpenChange: (open: boolean) => void;
}
function formatDate(dateString: string) {
const date = new Date(dateString);
const now = new Date();
const diffMs = now.getTime() - date.getTime();
const diffDays = Math.floor(diffMs / (1000 * 60 * 60 * 24));
if (diffDays === 0) return "Today";
if (diffDays === 1) return "Yesterday";
if (diffDays < 7) return `${diffDays} days ago`;
const day = date.getDate();
const ordinal =
day % 10 === 1 && day !== 11
? "st"
: day % 10 === 2 && day !== 12
? "nd"
: day % 10 === 3 && day !== 13
? "rd"
: "th";
const month = date.toLocaleDateString("en-US", { month: "short" });
const year = date.getFullYear();
return `${day}${ordinal} ${month} ${year}`;
}
export function MobileDrawer({
isOpen,
sessions,
currentSessionId,
isLoading,
onSelectSession,
onNewChat,
onClose,
onOpenChange,
}: Props) {
return (
<Drawer.Root open={isOpen} onOpenChange={onOpenChange} direction="left">
<Drawer.Portal>
<Drawer.Overlay className="fixed inset-0 z-[60] bg-black/10 backdrop-blur-sm" />
<Drawer.Content className="fixed left-0 top-0 z-[70] flex h-full w-80 flex-col border-r border-zinc-200 bg-zinc-50">
<div className="shrink-0 border-b border-zinc-200 px-4 py-2">
<div className="flex items-center justify-between">
<Drawer.Title className="text-lg font-semibold text-zinc-800">
Your chats
</Drawer.Title>
<Button
variant="icon"
size="icon"
aria-label="Close sessions"
onClick={onClose}
>
<X width="1rem" height="1rem" />
</Button>
</div>
</div>
<div
className={cn(
"flex min-h-0 flex-1 flex-col gap-1 overflow-y-auto px-3 py-3",
scrollbarStyles,
)}
>
{isLoading ? (
<div className="flex items-center justify-center py-4">
<SpinnerGapIcon className="h-5 w-5 animate-spin text-neutral-400" />
</div>
) : sessions.length === 0 ? (
<p className="py-4 text-center text-sm text-neutral-500">
No conversations yet
</p>
) : (
sessions.map((session) => (
<button
key={session.id}
onClick={() => onSelectSession(session.id)}
className={cn(
"w-full rounded-lg px-3 py-2.5 text-left transition-colors",
session.id === currentSessionId
? "bg-zinc-100"
: "hover:bg-zinc-50",
)}
>
<div className="flex min-w-0 max-w-full flex-col overflow-hidden">
<div className="min-w-0 max-w-full">
<Text
variant="body"
className={cn(
"truncate font-normal",
session.id === currentSessionId
? "text-zinc-600"
: "text-zinc-800",
)}
>
{session.title || "Untitled chat"}
</Text>
</div>
<Text variant="small" className="text-neutral-400">
{formatDate(session.updated_at)}
</Text>
</div>
</button>
))
)}
</div>
{currentSessionId && (
<div className="shrink-0 bg-white p-3 shadow-[0_-4px_6px_-1px_rgba(0,0,0,0.05)]">
<Button
variant="primary"
size="small"
onClick={onNewChat}
className="w-full"
leftIcon={<PlusIcon width="1rem" height="1rem" />}
>
New Chat
</Button>
</div>
)}
</Drawer.Content>
</Drawer.Portal>
</Drawer.Root>
);
}

View File

@@ -0,0 +1,54 @@
import { cn } from "@/lib/utils";
import { AnimatePresence, motion } from "framer-motion";
interface Props {
text: string;
className?: string;
}
export function MorphingTextAnimation({ text, className }: Props) {
const letters = text.split("");
return (
<div className={cn(className)}>
<AnimatePresence mode="popLayout" initial={false}>
<motion.div key={text} className="whitespace-nowrap">
<motion.span className="inline-flex overflow-hidden">
{letters.map((char, index) => (
<motion.span
key={`${text}-${index}`}
initial={{
opacity: 0,
y: 8,
rotateX: "80deg",
filter: "blur(6px)",
}}
animate={{
opacity: 1,
y: 0,
rotateX: "0deg",
filter: "blur(0px)",
}}
exit={{
opacity: 0,
y: -8,
rotateX: "-80deg",
filter: "blur(6px)",
}}
style={{ willChange: "transform" }}
transition={{
delay: 0.015 * index,
type: "spring",
bounce: 0.5,
}}
className="inline-block"
>
{char === " " ? "\u00A0" : char}
</motion.span>
))}
</motion.span>
</motion.div>
</AnimatePresence>
</div>
);
}

View File

@@ -0,0 +1,69 @@
.loader {
position: relative;
animation: rotate 1s infinite;
}
.loader::before,
.loader::after {
border-radius: 50%;
content: "";
display: block;
/* 40% of container size */
height: 40%;
width: 40%;
}
.loader::before {
animation: ball1 1s infinite;
background-color: #a1a1aa; /* zinc-400 */
box-shadow: calc(var(--spacing)) 0 0 #18181b; /* zinc-900 */
margin-bottom: calc(var(--gap));
}
.loader::after {
animation: ball2 1s infinite;
background-color: #18181b; /* zinc-900 */
box-shadow: calc(var(--spacing)) 0 0 #a1a1aa; /* zinc-400 */
}
@keyframes rotate {
0% {
transform: rotate(0deg) scale(0.8);
}
50% {
transform: rotate(360deg) scale(1.2);
}
100% {
transform: rotate(720deg) scale(0.8);
}
}
@keyframes ball1 {
0% {
box-shadow: calc(var(--spacing)) 0 0 #18181b;
}
50% {
box-shadow: 0 0 0 #18181b;
margin-bottom: 0;
transform: translate(calc(var(--spacing) / 2), calc(var(--spacing) / 2));
}
100% {
box-shadow: calc(var(--spacing)) 0 0 #18181b;
margin-bottom: calc(var(--gap));
}
}
@keyframes ball2 {
0% {
box-shadow: calc(var(--spacing)) 0 0 #a1a1aa;
}
50% {
box-shadow: 0 0 0 #a1a1aa;
margin-top: calc(var(--ball-size) * -1);
transform: translate(calc(var(--spacing) / 2), calc(var(--spacing) / 2));
}
100% {
box-shadow: calc(var(--spacing)) 0 0 #a1a1aa;
margin-top: 0;
}
}

View File

@@ -0,0 +1,28 @@
import { cn } from "@/lib/utils";
import styles from "./OrbitLoader.module.css";
interface Props {
size?: number;
className?: string;
}
export function OrbitLoader({ size = 24, className }: Props) {
const ballSize = Math.round(size * 0.4);
const spacing = Math.round(size * 0.6);
const gap = Math.round(size * 0.2);
return (
<div
className={cn(styles.loader, className)}
style={
{
width: size,
height: size,
"--ball-size": `${ballSize}px`,
"--spacing": `${spacing}px`,
"--gap": `${gap}px`,
} as React.CSSProperties
}
/>
);
}

View File

@@ -0,0 +1,26 @@
import { cn } from "@/lib/utils";
interface Props {
value: number;
label?: string;
className?: string;
}
export function ProgressBar({ value, label, className }: Props) {
const clamped = Math.min(100, Math.max(0, value));
return (
<div className={cn("flex flex-col gap-1.5", className)}>
<div className="flex items-center justify-between text-xs text-neutral-500">
<span>{label ?? "Working on it..."}</span>
<span>{Math.round(clamped)}%</span>
</div>
<div className="h-2 w-full overflow-hidden rounded-full bg-neutral-200">
<div
className="h-full rounded-full bg-neutral-900 transition-[width] duration-300 ease-out"
style={{ width: `${clamped}%` }}
/>
</div>
</div>
);
}

View File

@@ -0,0 +1,34 @@
.loader {
position: relative;
display: inline-block;
flex-shrink: 0;
}
.loader::before,
.loader::after {
content: "";
box-sizing: border-box;
width: 100%;
height: 100%;
border-radius: 50%;
background: currentColor;
position: absolute;
left: 0;
top: 0;
animation: ripple 2s linear infinite;
}
.loader::after {
animation-delay: 1s;
}
@keyframes ripple {
0% {
transform: scale(0);
opacity: 1;
}
100% {
transform: scale(1);
opacity: 0;
}
}

View File

@@ -0,0 +1,16 @@
import { cn } from "@/lib/utils";
import styles from "./PulseLoader.module.css";
interface Props {
size?: number;
className?: string;
}
export function PulseLoader({ size = 24, className }: Props) {
return (
<div
className={cn(styles.loader, className)}
style={{ width: size, height: size }}
/>
);
}

View File

@@ -0,0 +1,57 @@
.loader {
position: relative;
display: inline-block;
flex-shrink: 0;
transform: rotateZ(45deg);
perspective: 1000px;
border-radius: 50%;
color: currentColor;
}
.loader::before,
.loader::after {
content: "";
display: block;
position: absolute;
top: 0;
left: 0;
width: inherit;
height: inherit;
border-radius: 50%;
transform: rotateX(70deg);
animation: spin 1s linear infinite;
}
.loader::after {
color: var(--spinner-accent, #a855f7);
transform: rotateY(70deg);
animation-delay: 0.4s;
}
@keyframes spin {
0%,
100% {
box-shadow: 0.2em 0 0 0 currentColor;
}
12% {
box-shadow: 0.2em 0.2em 0 0 currentColor;
}
25% {
box-shadow: 0 0.2em 0 0 currentColor;
}
37% {
box-shadow: -0.2em 0.2em 0 0 currentColor;
}
50% {
box-shadow: -0.2em 0 0 0 currentColor;
}
62% {
box-shadow: -0.2em -0.2em 0 0 currentColor;
}
75% {
box-shadow: 0 -0.2em 0 0 currentColor;
}
87% {
box-shadow: 0.2em -0.2em 0 0 currentColor;
}
}

View File

@@ -0,0 +1,16 @@
import { cn } from "@/lib/utils";
import styles from "./SpinnerLoader.module.css";
interface Props {
size?: number;
className?: string;
}
export function SpinnerLoader({ size = 24, className }: Props) {
return (
<div
className={cn(styles.loader, className)}
style={{ width: size, height: size }}
/>
);
}

View File

@@ -0,0 +1,235 @@
import { Link } from "@/components/atoms/Link/Link";
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
/* ------------------------------------------------------------------ */
/* Layout */
/* ------------------------------------------------------------------ */
export function ContentGrid({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return <div className={cn("grid gap-2", className)}>{children}</div>;
}
/* ------------------------------------------------------------------ */
/* Card */
/* ------------------------------------------------------------------ */
export function ContentCard({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<div
className={cn(
"rounded-lg bg-gradient-to-r from-purple-500/30 to-blue-500/30 p-[1px]",
className,
)}
>
<div className="rounded-lg bg-neutral-100 p-3">{children}</div>
</div>
);
}
/** Flex row with a left content area (`children`) and an optional right-side `action`. */
export function ContentCardHeader({
children,
action,
className,
}: {
children: React.ReactNode;
action?: React.ReactNode;
className?: string;
}) {
return (
<div className={cn("flex items-start justify-between gap-2", className)}>
<div className="min-w-0">{children}</div>
{action}
</div>
);
}
export function ContentCardTitle({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<Text
variant="body-medium"
className={cn("truncate text-zinc-800", className)}
>
{children}
</Text>
);
}
export function ContentCardSubtitle({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<Text
variant="small"
className={cn("mt-0.5 truncate font-mono text-zinc-800", className)}
>
{children}
</Text>
);
}
export function ContentCardDescription({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<Text variant="body" className={cn("mt-2 text-zinc-800", className)}>
{children}
</Text>
);
}
/* ------------------------------------------------------------------ */
/* Text */
/* ------------------------------------------------------------------ */
export function ContentMessage({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<Text variant="body" className={cn("text-zinc-800", className)}>
{children}
</Text>
);
}
export function ContentHint({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<Text variant="small" className={cn("text-neutral-500", className)}>
{children}
</Text>
);
}
/* ------------------------------------------------------------------ */
/* Code / data */
/* ------------------------------------------------------------------ */
export function ContentCodeBlock({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<pre
className={cn(
"whitespace-pre-wrap rounded-lg border bg-black p-3 text-xs text-neutral-200",
className,
)}
>
{children}
</pre>
);
}
/* ------------------------------------------------------------------ */
/* Inline elements */
/* ------------------------------------------------------------------ */
export function ContentBadge({
children,
className,
}: {
children: React.ReactNode;
className?: string;
}) {
return (
<Text
variant="small"
as="span"
className={cn(
"shrink-0 rounded-full border bg-muted px-2 py-0.5 text-[11px] text-zinc-800",
className,
)}
>
{children}
</Text>
);
}
export function ContentLink({
href,
children,
className,
...rest
}: Omit<React.ComponentProps<typeof Link>, "className"> & {
className?: string;
}) {
return (
<Link
variant="primary"
isExternal
href={href}
className={cn("shrink-0 text-xs text-purple-500", className)}
{...rest}
>
{children}
</Link>
);
}
/* ------------------------------------------------------------------ */
/* Lists */
/* ------------------------------------------------------------------ */
export function ContentSuggestionsList({
items,
max = 5,
className,
}: {
items: string[];
max?: number;
className?: string;
}) {
if (items.length === 0) return null;
return (
<ul
className={cn(
"mt-2 list-disc space-y-1 pl-5 font-sans text-[0.75rem] leading-[1.125rem] text-zinc-800",
className,
)}
>
{items.slice(0, max).map((s) => (
<li key={s}>{s}</li>
))}
</ul>
);
}
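
A brief usage sketch (illustrative only, not part of the diff) showing how the card primitives above compose: ContentCardHeader takes the title area as children and an optional right-side action, as described in its comment. The agent name, id, and URL below are placeholder values.

// Hypothetical composition of the primitives defined above; assumes they are in scope.
function ExampleAgentCard() {
return (
<ContentCard>
<ContentCardHeader
action={<ContentLink href="https://example.com/agent">Open</ContentLink>}
>
<ContentCardTitle>Example agent</ContentCardTitle>
<ContentCardSubtitle>agent_123</ContentCardSubtitle>
</ContentCardHeader>
<ContentCardDescription>
Placeholder description of what the agent does.
</ContentCardDescription>
</ContentCard>
);
}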

View File

@@ -0,0 +1,102 @@
"use client";
import { cn } from "@/lib/utils";
import { CaretDownIcon } from "@phosphor-icons/react";
import { AnimatePresence, motion, useReducedMotion } from "framer-motion";
import { useId } from "react";
import { useToolAccordion } from "./useToolAccordion";
interface Props {
icon: React.ReactNode;
title: React.ReactNode;
titleClassName?: string;
description?: React.ReactNode;
children: React.ReactNode;
className?: string;
defaultExpanded?: boolean;
expanded?: boolean;
onExpandedChange?: (expanded: boolean) => void;
}
export function ToolAccordion({
icon,
title,
titleClassName,
description,
children,
className,
defaultExpanded,
expanded,
onExpandedChange,
}: Props) {
const shouldReduceMotion = useReducedMotion();
const contentId = useId();
const { isExpanded, toggle } = useToolAccordion({
expanded,
defaultExpanded,
onExpandedChange,
});
return (
<div
className={cn(
"mt-2 w-full rounded-lg border border-slate-200 bg-slate-100 px-3 py-2",
className,
)}
>
<button
type="button"
aria-expanded={isExpanded}
aria-controls={contentId}
onClick={toggle}
className="flex w-full items-center justify-between gap-3 py-1 text-left"
>
<div className="flex min-w-0 items-center gap-3">
<span className="flex shrink-0 items-center text-gray-800">
{icon}
</span>
<div className="min-w-0">
<p
className={cn(
"truncate text-sm font-medium text-gray-800",
titleClassName,
)}
>
{title}
</p>
{description && (
<p className="truncate text-xs text-slate-800">{description}</p>
)}
</div>
</div>
<CaretDownIcon
className={cn(
"h-4 w-4 shrink-0 text-slate-500 transition-transform",
isExpanded && "rotate-180",
)}
weight="bold"
/>
</button>
<AnimatePresence initial={false}>
{isExpanded && (
<motion.div
id={contentId}
initial={{ height: 0, opacity: 0, filter: "blur(10px)" }}
animate={{ height: "auto", opacity: 1, filter: "blur(0px)" }}
exit={{ height: 0, opacity: 0, filter: "blur(10px)" }}
transition={
shouldReduceMotion
? { duration: 0 }
: { type: "spring", bounce: 0.35, duration: 0.55 }
}
className="overflow-hidden"
style={{ willChange: "height, opacity, filter" }}
>
<div className="pb-2 pt-3">{children}</div>
</motion.div>
)}
</AnimatePresence>
</div>
);
}

View File

@@ -0,0 +1,32 @@
import { useState } from "react";
interface UseToolAccordionOptions {
expanded?: boolean;
defaultExpanded?: boolean;
onExpandedChange?: (expanded: boolean) => void;
}
interface UseToolAccordionResult {
isExpanded: boolean;
toggle: () => void;
}
export function useToolAccordion({
expanded,
defaultExpanded = false,
onExpandedChange,
}: UseToolAccordionOptions): UseToolAccordionResult {
const [uncontrolledExpanded, setUncontrolledExpanded] =
useState(defaultExpanded);
const isControlled = typeof expanded === "boolean";
const isExpanded = isControlled ? expanded : uncontrolledExpanded;
function toggle() {
const next = !isExpanded;
if (!isControlled) setUncontrolledExpanded(next);
onExpandedChange?.(next);
}
return { isExpanded, toggle };
}

View File

@@ -1,56 +0,0 @@
"use client";
import { create } from "zustand";
interface CopilotStoreState {
isStreaming: boolean;
isSwitchingSession: boolean;
isCreatingSession: boolean;
isInterruptModalOpen: boolean;
pendingAction: (() => void) | null;
}
interface CopilotStoreActions {
setIsStreaming: (isStreaming: boolean) => void;
setIsSwitchingSession: (isSwitchingSession: boolean) => void;
setIsCreatingSession: (isCreating: boolean) => void;
openInterruptModal: (onConfirm: () => void) => void;
confirmInterrupt: () => void;
cancelInterrupt: () => void;
}
type CopilotStore = CopilotStoreState & CopilotStoreActions;
export const useCopilotStore = create<CopilotStore>((set, get) => ({
isStreaming: false,
isSwitchingSession: false,
isCreatingSession: false,
isInterruptModalOpen: false,
pendingAction: null,
setIsStreaming(isStreaming) {
set({ isStreaming });
},
setIsSwitchingSession(isSwitchingSession) {
set({ isSwitchingSession });
},
setIsCreatingSession(isCreatingSession) {
set({ isCreatingSession });
},
openInterruptModal(onConfirm) {
set({ isInterruptModalOpen: true, pendingAction: onConfirm });
},
confirmInterrupt() {
const { pendingAction } = get();
set({ isInterruptModalOpen: false, pendingAction: null });
if (pendingAction) pendingAction();
},
cancelInterrupt() {
set({ isInterruptModalOpen: false, pendingAction: null });
},
}));

View File

@@ -0,0 +1,128 @@
import type { UIMessage, UIDataTypes, UITools } from "ai";
interface SessionChatMessage {
role: string;
content: string | null;
tool_call_id: string | null;
tool_calls: unknown[] | null;
}
function coerceSessionChatMessages(
rawMessages: unknown[],
): SessionChatMessage[] {
return rawMessages
.map((m) => {
if (!m || typeof m !== "object") return null;
const msg = m as Record<string, unknown>;
const role = typeof msg.role === "string" ? msg.role : null;
if (!role) return null;
return {
role,
content:
typeof msg.content === "string"
? msg.content
: msg.content == null
? null
: String(msg.content),
tool_call_id:
typeof msg.tool_call_id === "string"
? msg.tool_call_id
: msg.tool_call_id == null
? null
: String(msg.tool_call_id),
tool_calls: Array.isArray(msg.tool_calls) ? msg.tool_calls : null,
};
})
.filter((m): m is SessionChatMessage => m !== null);
}
function safeJsonParse(value: string): unknown {
try {
return JSON.parse(value) as unknown;
} catch {
return value;
}
}
function toToolInput(rawArguments: unknown): unknown {
if (typeof rawArguments === "string") {
const trimmed = rawArguments.trim();
return trimmed ? safeJsonParse(trimmed) : {};
}
if (rawArguments && typeof rawArguments === "object") return rawArguments;
return {};
}
export function convertChatSessionMessagesToUiMessages(
sessionId: string,
rawMessages: unknown[],
): UIMessage<unknown, UIDataTypes, UITools>[] {
const messages = coerceSessionChatMessages(rawMessages);
const toolOutputsByCallId = new Map<string, unknown>();
for (const msg of messages) {
if (msg.role !== "tool") continue;
if (!msg.tool_call_id) continue;
if (msg.content == null) continue;
toolOutputsByCallId.set(msg.tool_call_id, msg.content);
}
const uiMessages: UIMessage<unknown, UIDataTypes, UITools>[] = [];
messages.forEach((msg, index) => {
if (msg.role === "tool") return;
if (msg.role !== "user" && msg.role !== "assistant") return;
const parts: UIMessage<unknown, UIDataTypes, UITools>["parts"] = [];
if (typeof msg.content === "string" && msg.content.trim()) {
parts.push({ type: "text", text: msg.content, state: "done" });
}
if (msg.role === "assistant" && Array.isArray(msg.tool_calls)) {
for (const rawToolCall of msg.tool_calls) {
if (!rawToolCall || typeof rawToolCall !== "object") continue;
const toolCall = rawToolCall as {
id?: unknown;
function?: { name?: unknown; arguments?: unknown };
};
const toolCallId = String(toolCall.id ?? "").trim();
const toolName = String(toolCall.function?.name ?? "").trim();
if (!toolCallId || !toolName) continue;
const input = toToolInput(toolCall.function?.arguments);
const output = toolOutputsByCallId.get(toolCallId);
if (output !== undefined) {
parts.push({
type: `tool-${toolName}`,
toolCallId,
state: "output-available",
input,
output: typeof output === "string" ? safeJsonParse(output) : output,
});
} else {
parts.push({
type: `tool-${toolName}`,
toolCallId,
state: "input-available",
input,
});
}
}
}
if (parts.length === 0) return;
uiMessages.push({
id: `${sessionId}-${index}`,
role: msg.role,
parts,
});
});
return uiMessages;
}

View File

@@ -7,4 +7,4 @@ export function useCopilotSessionId() {
);
return { urlSessionId, setUrlSessionId };
}
}

View File

@@ -5,17 +5,16 @@ import { useEffect, useRef, useState } from "react";
* asymptotically approaching but never reaching the max value.
*
* Uses a half-life formula: progress = max * (1 - 0.5^(time/halfLife))
* This creates the "game loading bar" effect where:
* This creates a "loading bar" effect where:
* - 50% is reached at halfLifeSeconds
* - 75% is reached at 2 * halfLifeSeconds
* - 87.5% is reached at 3 * halfLifeSeconds
* - and so on...
*
* @param isActive - Whether the progress should be animating
* @param halfLifeSeconds - Time in seconds to reach 50% progress (default: 30)
* @param maxProgress - Maximum progress value to approach (default: 100)
* @param intervalMs - Update interval in milliseconds (default: 100)
* @returns Current progress value (0-maxProgress)
* @returns Current progress value (0–maxProgress)
*/
export function useAsymptoticProgress(
isActive: boolean,
@@ -35,8 +34,6 @@ export function useAsymptoticProgress(
const interval = setInterval(() => {
elapsedTimeRef.current += intervalMs / 1000;
// Half-life approach: progress = max * (1 - 0.5^(time/halfLife))
// At t=halfLife: 50%, at t=2*halfLife: 75%, at t=3*halfLife: 87.5%, etc.
const newProgress =
maxProgress *
(1 - Math.pow(0.5, elapsedTimeRef.current / halfLifeSeconds));
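
A minimal standalone sketch (not part of the diff) of the half-life curve documented in the JSDoc above, assuming the stated defaults of halfLifeSeconds = 30 and maxProgress = 100:

// Sketch only: computes the asymptotic progress value at a given elapsed time.
// Defaults mirror the documented ones (halfLifeSeconds = 30, maxProgress = 100).
function asymptoticProgressAt(
elapsedSeconds: number,
halfLifeSeconds = 30,
maxProgress = 100,
): number {
// progress = max * (1 - 0.5^(time / halfLife))
return maxProgress * (1 - Math.pow(0.5, elapsedSeconds / halfLifeSeconds));
}
// asymptoticProgressAt(30) === 50, asymptoticProgressAt(60) === 75,
// asymptoticProgressAt(90) === 87.5; the value approaches but never reaches 100.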

View File

@@ -1,13 +0,0 @@
"use client";
import { FeatureFlagPage } from "@/services/feature-flags/FeatureFlagPage";
import { Flag } from "@/services/feature-flags/use-get-flag";
import { type ReactNode } from "react";
import { CopilotShell } from "./components/CopilotShell/CopilotShell";
export default function CopilotLayout({ children }: { children: ReactNode }) {
return (
<FeatureFlagPage flag={Flag.CHAT} whenDisabled="/library">
<CopilotShell>{children}</CopilotShell>
</FeatureFlagPage>
);
}

View File

@@ -1,149 +1,69 @@
"use client";
import { Button } from "@/components/atoms/Button/Button";
import { Skeleton } from "@/components/atoms/Skeleton/Skeleton";
import { Text } from "@/components/atoms/Text/Text";
import { Chat } from "@/components/contextual/Chat/Chat";
import { ChatInput } from "@/components/contextual/Chat/components/ChatInput/ChatInput";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import { useEffect, useState } from "react";
import { useCopilotStore } from "./copilot-page-store";
import { getInputPlaceholder } from "./helpers";
import { SidebarProvider } from "@/components/ui/sidebar";
import { ChatContainer } from "./components/ChatContainer/ChatContainer";
import { ChatSidebar } from "./components/ChatSidebar/ChatSidebar";
import { MobileDrawer } from "./components/MobileDrawer/MobileDrawer";
import { MobileHeader } from "./components/MobileHeader/MobileHeader";
import { useCopilotPage } from "./useCopilotPage";
export default function CopilotPage() {
const { state, handlers } = useCopilotPage();
const isInterruptModalOpen = useCopilotStore((s) => s.isInterruptModalOpen);
const confirmInterrupt = useCopilotStore((s) => s.confirmInterrupt);
const cancelInterrupt = useCopilotStore((s) => s.cancelInterrupt);
const [inputPlaceholder, setInputPlaceholder] = useState(
getInputPlaceholder(),
);
useEffect(() => {
const handleResize = () => {
setInputPlaceholder(getInputPlaceholder(window.innerWidth));
};
handleResize();
window.addEventListener("resize", handleResize);
return () => window.removeEventListener("resize", handleResize);
}, []);
const { greetingName, quickActions, isLoading, hasSession, initialPrompt } =
state;
export default function Page() {
const {
handleQuickAction,
startChatWithPrompt,
handleSessionNotFound,
handleStreamingChange,
} = handlers;
if (hasSession) {
return (
<div className="flex h-full flex-col">
<Chat
className="flex-1"
initialPrompt={initialPrompt}
onSessionNotFound={handleSessionNotFound}
onStreamingChange={handleStreamingChange}
/>
<Dialog
title="Interrupt current chat?"
styling={{ maxWidth: 300, width: "100%" }}
controlled={{
isOpen: isInterruptModalOpen,
set: (open) => {
if (!open) cancelInterrupt();
},
}}
onClose={cancelInterrupt}
>
<Dialog.Content>
<div className="flex flex-col gap-4">
<Text variant="body">
The current chat response will be interrupted. Are you sure you
want to continue?
</Text>
<Dialog.Footer>
<Button
type="button"
variant="outline"
onClick={cancelInterrupt}
>
Cancel
</Button>
<Button
type="button"
variant="primary"
onClick={confirmInterrupt}
>
Continue
</Button>
</Dialog.Footer>
</div>
</Dialog.Content>
</Dialog>
</div>
);
}
sessionId,
messages,
status,
error,
stop,
isLoadingSession,
isCreatingSession,
createSession,
onSend,
// Mobile drawer
isMobile,
isDrawerOpen,
sessions,
isLoadingSessions,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleSelectSession,
handleNewChat,
} = useCopilotPage();
return (
<div className="flex h-full flex-1 items-center justify-center overflow-y-auto bg-[#f8f8f9] px-3 py-5 md:px-6 md:py-10">
<div className="w-full text-center">
{isLoading ? (
<div className="mx-auto max-w-2xl">
<Skeleton className="mx-auto mb-3 h-8 w-64" />
<Skeleton className="mx-auto mb-8 h-6 w-80" />
<div className="mb-8">
<Skeleton className="mx-auto h-14 w-full rounded-lg" />
</div>
<div className="flex flex-wrap items-center justify-center gap-3">
{Array.from({ length: 4 }).map((_, i) => (
<Skeleton key={i} className="h-9 w-48 rounded-md" />
))}
</div>
</div>
) : (
<>
<div className="mx-auto max-w-3xl">
<Text
variant="h3"
className="mb-1 !text-[1.375rem] text-zinc-700"
>
Hey, <span className="text-violet-600">{greetingName}</span>
</Text>
<Text variant="h3" className="mb-8 !font-normal">
Tell me about your work, I&apos;ll find what to automate.
</Text>
<div className="mb-6">
<ChatInput
onSend={startChatWithPrompt}
placeholder={inputPlaceholder}
/>
</div>
</div>
<div className="flex flex-wrap items-center justify-center gap-3 overflow-x-auto [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden">
{quickActions.map((action) => (
<Button
key={action}
type="button"
variant="outline"
size="small"
onClick={() => handleQuickAction(action)}
className="h-auto shrink-0 border-zinc-300 px-3 py-2 text-[.9rem] text-zinc-600"
>
{action}
</Button>
))}
</div>
</>
)}
<SidebarProvider
defaultOpen={true}
className="h-[calc(100vh-72px)] min-h-0"
>
{!isMobile && <ChatSidebar />}
<div className="relative flex h-full w-full flex-col overflow-hidden bg-[#f8f8f9] px-0">
{isMobile && <MobileHeader onOpenDrawer={handleOpenDrawer} />}
<div className="flex-1 overflow-hidden">
<ChatContainer
messages={messages}
status={status}
error={error}
sessionId={sessionId}
isLoadingSession={isLoadingSession}
isCreatingSession={isCreatingSession}
onCreateSession={createSession}
onSend={onSend}
onStop={stop}
/>
</div>
</div>
</div>
{isMobile && (
<MobileDrawer
isOpen={isDrawerOpen}
sessions={sessions}
currentSessionId={sessionId}
isLoading={isLoadingSessions}
onSelectSession={handleSelectSession}
onNewChat={handleNewChat}
onClose={handleCloseDrawer}
onOpenChange={handleDrawerOpenChange}
/>
)}
</SidebarProvider>
);
}

File diff suppressed because it is too large.

View File

@@ -0,0 +1,237 @@
"use client";
import { WarningDiamondIcon } from "@phosphor-icons/react";
import type { ToolUIPart } from "ai";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader";
import { ProgressBar } from "../../components/ProgressBar/ProgressBar";
import {
ContentCardDescription,
ContentCodeBlock,
ContentGrid,
ContentHint,
ContentLink,
ContentMessage,
} from "../../components/ToolAccordion/AccordionContent";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useAsymptoticProgress } from "../../hooks/useAsymptoticProgress";
import {
ClarificationQuestionsCard,
ClarifyingQuestion,
} from "./components/ClarificationQuestionsCard";
import {
AccordionIcon,
formatMaybeJson,
getAnimationText,
getCreateAgentToolOutput,
isAgentPreviewOutput,
isAgentSavedOutput,
isClarificationNeededOutput,
isErrorOutput,
isOperationInProgressOutput,
isOperationPendingOutput,
isOperationStartedOutput,
ToolIcon,
truncateText,
type CreateAgentToolOutput,
} from "./helpers";
export interface CreateAgentToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: CreateAgentToolPart;
}
function getAccordionMeta(output: CreateAgentToolOutput): {
icon: React.ReactNode;
title: React.ReactNode;
titleClassName?: string;
description?: string;
} {
const icon = <AccordionIcon />;
if (isAgentSavedOutput(output)) {
return { icon, title: output.agent_name };
}
if (isAgentPreviewOutput(output)) {
return {
icon,
title: output.agent_name,
description: `${output.node_count} block${output.node_count === 1 ? "" : "s"}`,
};
}
if (isClarificationNeededOutput(output)) {
const questions = output.questions ?? [];
return {
icon,
title: "Needs clarification",
description: `${questions.length} question${questions.length === 1 ? "" : "s"}`,
};
}
if (
isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output)
) {
return {
icon: <OrbitLoader size={32} />,
title: "Creating agent, this may take a few minutes. Sit back and relax.",
};
}
return {
icon: (
<WarningDiamondIcon size={32} weight="light" className="text-red-500" />
),
title: "Error",
titleClassName: "text-red-500",
};
}
export function CreateAgentTool({ part }: Props) {
const text = getAnimationText(part);
const { onSend } = useCopilotChatActions();
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getCreateAgentToolOutput(part);
const isError =
part.state === "output-error" || (!!output && isErrorOutput(output));
const isOperating =
!!output &&
(isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output));
const progress = useAsymptoticProgress(isOperating);
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output) ||
isAgentPreviewOutput(output) ||
isAgentSavedOutput(output) ||
isClarificationNeededOutput(output) ||
isErrorOutput(output));
function handleClarificationAnswers(answers: Record<string, string>) {
const questions =
output && isClarificationNeededOutput(output)
? (output.questions ?? [])
: [];
const contextMessage = questions
.map((q) => {
const answer = answers[q.keyword] || "";
return `> ${q.question}\n\n${answer}`;
})
.join("\n\n");
onSend(
`**Here are my answers:**\n\n${contextMessage}\n\nPlease proceed with creating the agent.`,
);
}
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion
{...getAccordionMeta(output)}
defaultExpanded={isOperating || isClarificationNeededOutput(output)}
>
{isOperating && (
<ContentGrid>
<ProgressBar value={progress} className="max-w-[280px]" />
<ContentHint>
This could take a few minutes, grab a coffee
</ContentHint>
</ContentGrid>
)}
{isAgentSavedOutput(output) && (
<ContentGrid>
<ContentMessage>{output.message}</ContentMessage>
<div className="flex flex-wrap gap-2">
<ContentLink href={output.library_agent_link}>
Open in library
</ContentLink>
<ContentLink href={output.agent_page_link}>
Open in builder
</ContentLink>
</div>
<ContentCodeBlock>
{truncateText(
formatMaybeJson({ agent_id: output.agent_id }),
800,
)}
</ContentCodeBlock>
</ContentGrid>
)}
{isAgentPreviewOutput(output) && (
<ContentGrid>
<ContentMessage>{output.message}</ContentMessage>
{output.description?.trim() && (
<ContentCardDescription>
{output.description}
</ContentCardDescription>
)}
<ContentCodeBlock>
{truncateText(formatMaybeJson(output.agent_json), 1600)}
</ContentCodeBlock>
</ContentGrid>
)}
{isClarificationNeededOutput(output) && (
<ClarificationQuestionsCard
questions={(output.questions ?? []).map((q) => {
const item: ClarifyingQuestion = {
question: q.question,
keyword: q.keyword,
};
const example =
typeof q.example === "string" && q.example.trim()
? q.example.trim()
: null;
if (example) item.example = example;
return item;
})}
message={output.message}
onSubmitAnswers={handleClarificationAnswers}
/>
)}
{isErrorOutput(output) && (
<ContentGrid>
<ContentMessage>{output.message}</ContentMessage>
{output.error && (
<ContentCodeBlock>
{formatMaybeJson(output.error)}
</ContentCodeBlock>
)}
{output.details && (
<ContentCodeBlock>
{formatMaybeJson(output.details)}
</ContentCodeBlock>
)}
</ContentGrid>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -6,7 +6,7 @@ import { Input } from "@/components/atoms/Input/Input";
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import { CheckCircleIcon, QuestionIcon } from "@phosphor-icons/react";
import { useState, useEffect, useRef } from "react";
import { useEffect, useRef, useState } from "react";
export interface ClarifyingQuestion {
question: string;
@@ -24,12 +24,7 @@ interface Props {
className?: string;
}
function getStorageKey(sessionId?: string): string | null {
if (!sessionId) return null;
return `clarification_answers_${sessionId}`;
}
export function ClarificationQuestionsWidget({
export function ClarificationQuestionsCard({
questions,
message,
sessionId,
@@ -241,3 +236,8 @@ export function ClarificationQuestionsWidget({
</div>
);
}
function getStorageKey(sessionId?: string): string | null {
if (!sessionId) return null;
return `clarification_answers_${sessionId}`;
}

View File

@@ -0,0 +1,186 @@
import type { AgentPreviewResponse } from "@/app/api/__generated__/models/agentPreviewResponse";
import type { AgentSavedResponse } from "@/app/api/__generated__/models/agentSavedResponse";
import type { ClarificationNeededResponse } from "@/app/api/__generated__/models/clarificationNeededResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { OperationInProgressResponse } from "@/app/api/__generated__/models/operationInProgressResponse";
import type { OperationPendingResponse } from "@/app/api/__generated__/models/operationPendingResponse";
import type { OperationStartedResponse } from "@/app/api/__generated__/models/operationStartedResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import {
PlusCircleIcon,
PlusIcon,
WarningDiamondIcon,
} from "@phosphor-icons/react";
import type { ToolUIPart } from "ai";
import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader";
export type CreateAgentToolOutput =
| OperationStartedResponse
| OperationPendingResponse
| OperationInProgressResponse
| AgentPreviewResponse
| AgentSavedResponse
| ClarificationNeededResponse
| ErrorResponse;
function parseOutput(output: unknown): CreateAgentToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.operation_started ||
type === ResponseType.operation_pending ||
type === ResponseType.operation_in_progress ||
type === ResponseType.agent_preview ||
type === ResponseType.agent_saved ||
type === ResponseType.clarification_needed ||
type === ResponseType.error
) {
return output as CreateAgentToolOutput;
}
if ("operation_id" in output && "tool_name" in output)
return output as OperationStartedResponse | OperationPendingResponse;
if ("tool_call_id" in output) return output as OperationInProgressResponse;
if ("agent_json" in output && "agent_name" in output)
return output as AgentPreviewResponse;
if ("agent_id" in output && "library_agent_id" in output)
return output as AgentSavedResponse;
if ("questions" in output) return output as ClarificationNeededResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getCreateAgentToolOutput(
part: unknown,
): CreateAgentToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function isOperationStartedOutput(
output: CreateAgentToolOutput,
): output is OperationStartedResponse {
return (
output.type === ResponseType.operation_started ||
("operation_id" in output && "tool_name" in output)
);
}
export function isOperationPendingOutput(
output: CreateAgentToolOutput,
): output is OperationPendingResponse {
return output.type === ResponseType.operation_pending;
}
export function isOperationInProgressOutput(
output: CreateAgentToolOutput,
): output is OperationInProgressResponse {
return (
output.type === ResponseType.operation_in_progress ||
"tool_call_id" in output
);
}
export function isAgentPreviewOutput(
output: CreateAgentToolOutput,
): output is AgentPreviewResponse {
return output.type === ResponseType.agent_preview || "agent_json" in output;
}
export function isAgentSavedOutput(
output: CreateAgentToolOutput,
): output is AgentSavedResponse {
return (
output.type === ResponseType.agent_saved || "agent_page_link" in output
);
}
export function isClarificationNeededOutput(
output: CreateAgentToolOutput,
): output is ClarificationNeededResponse {
return (
output.type === ResponseType.clarification_needed || "questions" in output
);
}
export function isErrorOutput(
output: CreateAgentToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
switch (part.state) {
case "input-streaming":
case "input-available":
return "Creating a new agent";
case "output-available": {
const output = parseOutput(part.output);
if (!output) return "Creating a new agent";
if (isOperationStartedOutput(output)) return "Agent creation started";
if (isOperationPendingOutput(output)) return "Agent creation in progress";
if (isOperationInProgressOutput(output))
return "Agent creation already in progress";
if (isAgentSavedOutput(output)) return `Saved "${output.agent_name}"`;
if (isAgentPreviewOutput(output)) return `Preview "${output.agent_name}"`;
if (isClarificationNeededOutput(output)) return "Needs clarification";
return "Error creating agent";
}
case "output-error":
return "Error creating agent";
default:
return "Creating a new agent";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
if (isError) {
return (
<WarningDiamondIcon size={14} weight="regular" className="text-red-500" />
);
}
if (isStreaming) {
return <OrbitLoader size={24} />;
}
return <PlusIcon size={14} weight="regular" className="text-neutral-400" />;
}
export function AccordionIcon() {
return <PlusCircleIcon size={32} weight="light" />;
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}
export function truncateText(text: string, maxChars: number): string {
const trimmed = text.trim();
if (trimmed.length <= maxChars) return trimmed;
return `${trimmed.slice(0, maxChars).trimEnd()}…`;
}

View File

@@ -0,0 +1,234 @@
"use client";
import { WarningDiamondIcon } from "@phosphor-icons/react";
import type { ToolUIPart } from "ai";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader";
import { ProgressBar } from "../../components/ProgressBar/ProgressBar";
import {
ContentCardDescription,
ContentCodeBlock,
ContentGrid,
ContentHint,
ContentLink,
ContentMessage,
} from "../../components/ToolAccordion/AccordionContent";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useAsymptoticProgress } from "../../hooks/useAsymptoticProgress";
import {
ClarificationQuestionsCard,
ClarifyingQuestion,
} from "../CreateAgent/components/ClarificationQuestionsCard";
import {
AccordionIcon,
formatMaybeJson,
getAnimationText,
getEditAgentToolOutput,
isAgentPreviewOutput,
isAgentSavedOutput,
isClarificationNeededOutput,
isErrorOutput,
isOperationInProgressOutput,
isOperationPendingOutput,
isOperationStartedOutput,
ToolIcon,
truncateText,
type EditAgentToolOutput,
} from "./helpers";
export interface EditAgentToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: EditAgentToolPart;
}
function getAccordionMeta(output: EditAgentToolOutput): {
icon: React.ReactNode;
title: string;
titleClassName?: string;
description?: string;
} {
const icon = <AccordionIcon />;
if (isAgentSavedOutput(output)) {
return { icon, title: output.agent_name };
}
if (isAgentPreviewOutput(output)) {
return {
icon,
title: output.agent_name,
description: `${output.node_count} block${output.node_count === 1 ? "" : "s"}`,
};
}
if (isClarificationNeededOutput(output)) {
const questions = output.questions ?? [];
return {
icon,
title: "Needs clarification",
description: `${questions.length} question${questions.length === 1 ? "" : "s"}`,
};
}
if (
isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output)
) {
return { icon: <OrbitLoader size={32} />, title: "Editing agent" };
}
return {
icon: (
<WarningDiamondIcon size={32} weight="light" className="text-red-500" />
),
title: "Error",
titleClassName: "text-red-500",
};
}
export function EditAgentTool({ part }: Props) {
const text = getAnimationText(part);
const { onSend } = useCopilotChatActions();
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getEditAgentToolOutput(part);
const isError =
part.state === "output-error" || (!!output && isErrorOutput(output));
const isOperating =
!!output &&
(isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output));
const progress = useAsymptoticProgress(isOperating);
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output) ||
isAgentPreviewOutput(output) ||
isAgentSavedOutput(output) ||
isClarificationNeededOutput(output) ||
isErrorOutput(output));
function handleClarificationAnswers(answers: Record<string, string>) {
const questions =
output && isClarificationNeededOutput(output)
? (output.questions ?? [])
: [];
const contextMessage = questions
.map((q) => {
const answer = answers[q.keyword] || "";
return `> ${q.question}\n\n${answer}`;
})
.join("\n\n");
onSend(
`**Here are my answers:**\n\n${contextMessage}\n\nPlease proceed with editing the agent.`,
);
}
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion
{...getAccordionMeta(output)}
defaultExpanded={isOperating || isClarificationNeededOutput(output)}
>
{isOperating && (
<ContentGrid>
<ProgressBar value={progress} className="max-w-[280px]" />
<ContentHint>
This could take a few minutes, grab a coffee
</ContentHint>
</ContentGrid>
)}
{isAgentSavedOutput(output) && (
<ContentGrid>
<ContentMessage>{output.message}</ContentMessage>
<div className="flex flex-wrap gap-2">
<ContentLink href={output.library_agent_link}>
Open in library
</ContentLink>
<ContentLink href={output.agent_page_link}>
Open in builder
</ContentLink>
</div>
<ContentCodeBlock>
{truncateText(
formatMaybeJson({ agent_id: output.agent_id }),
800,
)}
</ContentCodeBlock>
</ContentGrid>
)}
{isAgentPreviewOutput(output) && (
<ContentGrid>
<ContentMessage>{output.message}</ContentMessage>
{output.description?.trim() && (
<ContentCardDescription>
{output.description}
</ContentCardDescription>
)}
<ContentCodeBlock>
{truncateText(formatMaybeJson(output.agent_json), 1600)}
</ContentCodeBlock>
</ContentGrid>
)}
{isClarificationNeededOutput(output) && (
<ClarificationQuestionsCard
questions={(output.questions ?? []).map((q) => {
const item: ClarifyingQuestion = {
question: q.question,
keyword: q.keyword,
};
const example =
typeof q.example === "string" && q.example.trim()
? q.example.trim()
: null;
if (example) item.example = example;
return item;
})}
message={output.message}
onSubmitAnswers={handleClarificationAnswers}
/>
)}
{isErrorOutput(output) && (
<ContentGrid>
<ContentMessage>{output.message}</ContentMessage>
{output.error && (
<ContentCodeBlock>
{formatMaybeJson(output.error)}
</ContentCodeBlock>
)}
{output.details && (
<ContentCodeBlock>
{formatMaybeJson(output.details)}
</ContentCodeBlock>
)}
</ContentGrid>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -0,0 +1,188 @@
import type { AgentPreviewResponse } from "@/app/api/__generated__/models/agentPreviewResponse";
import type { AgentSavedResponse } from "@/app/api/__generated__/models/agentSavedResponse";
import type { ClarificationNeededResponse } from "@/app/api/__generated__/models/clarificationNeededResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { OperationInProgressResponse } from "@/app/api/__generated__/models/operationInProgressResponse";
import type { OperationPendingResponse } from "@/app/api/__generated__/models/operationPendingResponse";
import type { OperationStartedResponse } from "@/app/api/__generated__/models/operationStartedResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import {
NotePencilIcon,
PencilLineIcon,
WarningDiamondIcon,
} from "@phosphor-icons/react";
import type { ToolUIPart } from "ai";
import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader";
export type EditAgentToolOutput =
| OperationStartedResponse
| OperationPendingResponse
| OperationInProgressResponse
| AgentPreviewResponse
| AgentSavedResponse
| ClarificationNeededResponse
| ErrorResponse;
function parseOutput(output: unknown): EditAgentToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.operation_started ||
type === ResponseType.operation_pending ||
type === ResponseType.operation_in_progress ||
type === ResponseType.agent_preview ||
type === ResponseType.agent_saved ||
type === ResponseType.clarification_needed ||
type === ResponseType.error
) {
return output as EditAgentToolOutput;
}
if ("operation_id" in output && "tool_name" in output)
return output as OperationStartedResponse | OperationPendingResponse;
if ("tool_call_id" in output) return output as OperationInProgressResponse;
if ("agent_json" in output && "agent_name" in output)
return output as AgentPreviewResponse;
if ("agent_id" in output && "library_agent_id" in output)
return output as AgentSavedResponse;
if ("questions" in output) return output as ClarificationNeededResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getEditAgentToolOutput(
part: unknown,
): EditAgentToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function isOperationStartedOutput(
output: EditAgentToolOutput,
): output is OperationStartedResponse {
return (
output.type === ResponseType.operation_started ||
("operation_id" in output && "tool_name" in output)
);
}
export function isOperationPendingOutput(
output: EditAgentToolOutput,
): output is OperationPendingResponse {
return output.type === ResponseType.operation_pending;
}
export function isOperationInProgressOutput(
output: EditAgentToolOutput,
): output is OperationInProgressResponse {
return (
output.type === ResponseType.operation_in_progress ||
"tool_call_id" in output
);
}
export function isAgentPreviewOutput(
output: EditAgentToolOutput,
): output is AgentPreviewResponse {
return output.type === ResponseType.agent_preview || "agent_json" in output;
}
export function isAgentSavedOutput(
output: EditAgentToolOutput,
): output is AgentSavedResponse {
return (
output.type === ResponseType.agent_saved || "agent_page_link" in output
);
}
export function isClarificationNeededOutput(
output: EditAgentToolOutput,
): output is ClarificationNeededResponse {
return (
output.type === ResponseType.clarification_needed || "questions" in output
);
}
export function isErrorOutput(
output: EditAgentToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
switch (part.state) {
case "input-streaming":
case "input-available":
return "Editing the agent";
case "output-available": {
const output = parseOutput(part.output);
if (!output) return "Editing the agent";
if (isOperationStartedOutput(output)) return "Agent update started";
if (isOperationPendingOutput(output)) return "Agent update in progress";
if (isOperationInProgressOutput(output))
return "Agent update already in progress";
if (isAgentSavedOutput(output)) return `Saved "${output.agent_name}"`;
if (isAgentPreviewOutput(output)) return `Preview "${output.agent_name}"`;
if (isClarificationNeededOutput(output)) return "Needs clarification";
return "Error editing agent";
}
case "output-error":
return "Error editing agent";
default:
return "Editing the agent";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
if (isError) {
return (
<WarningDiamondIcon size={14} weight="regular" className="text-red-500" />
);
}
if (isStreaming) {
return <OrbitLoader size={24} />;
}
return (
<PencilLineIcon size={14} weight="regular" className="text-neutral-400" />
);
}
export function AccordionIcon() {
return <NotePencilIcon size={32} weight="light" />;
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}
export function truncateText(text: string, maxChars: number): string {
const trimmed = text.trim();
if (trimmed.length <= maxChars) return trimmed;
return `${trimmed.slice(0, maxChars).trimEnd()}…`;
}

View File

@@ -0,0 +1,127 @@
"use client";
import { ToolUIPart } from "ai";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import {
ContentBadge,
ContentCard,
ContentCardDescription,
ContentCardHeader,
ContentCardTitle,
ContentGrid,
ContentLink,
} from "../../components/ToolAccordion/AccordionContent";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import {
AccordionIcon,
getAgentHref,
getAnimationText,
getFindAgentsOutput,
getSourceLabelFromToolType,
isAgentsFoundOutput,
isErrorOutput,
ToolIcon,
} from "./helpers";
export interface FindAgentsToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: FindAgentsToolPart;
}
export function FindAgentsTool({ part }: Props) {
const text = getAnimationText(part);
const output = getFindAgentsOutput(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const isError =
part.state === "output-error" || (!!output && isErrorOutput(output));
const query =
typeof part.input === "object" && part.input !== null
? String((part.input as { query?: unknown }).query ?? "").trim()
: "";
const agentsFoundOutput =
part.state === "output-available" && output && isAgentsFoundOutput(output)
? output
: null;
const hasAgents =
!!agentsFoundOutput &&
agentsFoundOutput.agents.length > 0 &&
(typeof agentsFoundOutput.count !== "number" ||
agentsFoundOutput.count > 0);
const totalCount = agentsFoundOutput
? (agentsFoundOutput.count ?? agentsFoundOutput.agents.length)
: 0;
const { source } = getSourceLabelFromToolType(part.type);
const scopeText =
source === "library"
? "in your library"
: source === "marketplace"
? "in marketplace"
: "";
const accordionDescription = `Found ${totalCount}${scopeText ? ` ${scopeText}` : ""}${
query ? ` for "${query}"` : ""
}`;
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon
toolType={part.type}
isStreaming={isStreaming}
isError={isError}
/>
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasAgents && agentsFoundOutput && (
<ToolAccordion
icon={<AccordionIcon toolType={part.type} />}
title="Agent results"
description={accordionDescription}
>
<ContentGrid className="sm:grid-cols-2">
{agentsFoundOutput.agents.map((agent) => {
const href = getAgentHref(agent);
const agentSource =
agent.source === "library"
? "Library"
: agent.source === "marketplace"
? "Marketplace"
: null;
return (
<ContentCard key={agent.id}>
<ContentCardHeader
action={
href ? <ContentLink href={href}>Open</ContentLink> : null
}
>
<div className="flex items-center gap-2">
<ContentCardTitle>{agent.name}</ContentCardTitle>
{agentSource && (
<ContentBadge>{agentSource}</ContentBadge>
)}
</div>
<ContentCardDescription className="mt-1 line-clamp-2">
{agent.description}
</ContentCardDescription>
</ContentCardHeader>
</ContentCard>
);
})}
</ContentGrid>
</ToolAccordion>
)}
</div>
);
}

View File

@@ -0,0 +1,187 @@
import type { AgentInfo } from "@/app/api/__generated__/models/agentInfo";
import type { AgentsFoundResponse } from "@/app/api/__generated__/models/agentsFoundResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { NoResultsResponse } from "@/app/api/__generated__/models/noResultsResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import {
FolderOpenIcon,
MagnifyingGlassIcon,
SquaresFourIcon,
StorefrontIcon,
} from "@phosphor-icons/react";
import { ToolUIPart } from "ai";
export interface FindAgentInput {
query: string;
}
export type FindAgentsOutput =
| AgentsFoundResponse
| NoResultsResponse
| ErrorResponse;
export type FindAgentsToolType =
| "tool-find_agent"
| "tool-find_library_agent"
| (string & {});
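// Tool outputs can arrive either as JSON strings or as already-parsed objects.
// parseOutput first trusts an explicit `type` discriminator and only then falls
// back to duck-typing on well-known keys ("agents"/"count", "suggestions", "error").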
function parseOutput(output: unknown): FindAgentsOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.agents_found ||
type === ResponseType.no_results ||
type === ResponseType.error
) {
return output as FindAgentsOutput;
}
if ("agents" in output && "count" in output)
return output as AgentsFoundResponse;
if ("suggestions" in output && !("error" in output))
return output as NoResultsResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getFindAgentsOutput(part: unknown): FindAgentsOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function isAgentsFoundOutput(
output: FindAgentsOutput,
): output is AgentsFoundResponse {
return output.type === ResponseType.agents_found || "agents" in output;
}
export function isNoResultsOutput(
output: FindAgentsOutput,
): output is NoResultsResponse {
return (
output.type === ResponseType.no_results ||
("suggestions" in output && !("error" in output))
);
}
export function isErrorOutput(
output: FindAgentsOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getSourceLabelFromToolType(toolType?: FindAgentsToolType): {
source: "marketplace" | "library" | "unknown";
label: string;
} {
if (toolType === "tool-find_library_agent") {
return { source: "library", label: "Library" };
}
if (toolType === "tool-find_agent") {
return { source: "marketplace", label: "Marketplace" };
}
return { source: "unknown", label: "Agents" };
}
export function getAnimationText(part: {
type?: FindAgentsToolType;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
const { source } = getSourceLabelFromToolType(part.type);
const query = (part.input as FindAgentInput | undefined)?.query?.trim();
// Action phrase matching legacy ToolCallMessage
const actionPhrase =
source === "library"
? "Looking for library agents"
: "Looking for agents in the marketplace";
const queryText = query ? ` matching "${query}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `${actionPhrase}${queryText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) {
return `${actionPhrase}${queryText}`;
}
if (isNoResultsOutput(output)) {
return `No agents found${queryText}`;
}
if (isAgentsFoundOutput(output)) {
const count = output.count ?? output.agents?.length ?? 0;
return `Found ${count} agent${count === 1 ? "" : "s"}${queryText}`;
}
if (isErrorOutput(output)) {
return `Error finding agents${queryText}`;
}
return `${actionPhrase}${queryText}`;
}
case "output-error":
return `Error finding agents${queryText}`;
default:
return actionPhrase;
}
}
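// Library agents link by raw id; marketplace agents encode "creator/slug" in their id.
// Illustrative sketch (hypothetical id): getAgentHref({ source: "marketplace",
// id: "creator-handle/agent-slug", ... }) -> "/marketplace/agent/creator-handle/agent-slug".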
export function getAgentHref(agent: AgentInfo): string | null {
if (agent.source === "library") {
return `/library/agents/${encodeURIComponent(agent.id)}`;
}
const [creator, slug, ...rest] = agent.id.split("/");
if (!creator || !slug || rest.length > 0) return null;
return `/marketplace/agent/${encodeURIComponent(creator)}/${encodeURIComponent(slug)}`;
}
export function ToolIcon({
toolType,
isStreaming,
isError,
}: {
toolType?: FindAgentsToolType;
isStreaming?: boolean;
isError?: boolean;
}) {
const { source } = getSourceLabelFromToolType(toolType);
const IconComponent =
source === "library" ? MagnifyingGlassIcon : SquaresFourIcon;
return (
<IconComponent
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function AccordionIcon({ toolType }: { toolType?: FindAgentsToolType }) {
const { source } = getSourceLabelFromToolType(toolType);
const IconComponent = source === "library" ? FolderOpenIcon : StorefrontIcon;
return <IconComponent size={32} weight="light" />;
}

View File

@@ -0,0 +1,92 @@
"use client";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import {
ContentCard,
ContentCardDescription,
ContentCardTitle,
} from "../../components/ToolAccordion/AccordionContent";
import type { BlockListResponse } from "@/app/api/__generated__/models/blockListResponse";
import type { BlockInfoSummary } from "@/app/api/__generated__/models/blockInfoSummary";
import { ToolUIPart } from "ai";
import { HorizontalScroll } from "@/app/(platform)/build/components/NewControlPanel/NewBlockMenu/HorizontalScroll";
import {
AccordionIcon,
getAnimationText,
parseOutput,
ToolIcon,
} from "./helpers";
export interface FindBlockInput {
query: string;
}
export type FindBlockOutput = BlockListResponse;
export interface FindBlockToolPart {
type: string;
toolName?: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: FindBlockInput | unknown;
output?: string | FindBlockOutput | unknown;
title?: string;
}
interface Props {
part: FindBlockToolPart;
}
function BlockCard({ block }: { block: BlockInfoSummary }) {
return (
<ContentCard className="w-48 shrink-0">
<ContentCardTitle>{block.name}</ContentCardTitle>
<ContentCardDescription className="mt-1 line-clamp-2">
{block.description}
</ContentCardDescription>
</ContentCard>
);
}
export function FindBlocksTool({ part }: Props) {
const text = getAnimationText(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const isError = part.state === "output-error";
const parsed =
part.state === "output-available" ? parseOutput(part.output) : null;
const hasBlocks = !!parsed && parsed.blocks.length > 0;
const query = (part.input as FindBlockInput | undefined)?.query?.trim();
const accordionDescription = parsed
? `Found ${parsed.count} block${parsed.count === 1 ? "" : "s"}${query ? ` for "${query}"` : ""}`
: undefined;
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasBlocks && parsed && (
<ToolAccordion
icon={<AccordionIcon />}
title="Block results"
description={accordionDescription}
>
<HorizontalScroll dependencyList={[parsed.blocks.length]}>
{parsed.blocks.map((block) => (
<BlockCard key={block.id} block={block} />
))}
</HorizontalScroll>
</ToolAccordion>
)}
</div>
);
}

View File

@@ -0,0 +1,75 @@
import type { BlockListResponse } from "@/app/api/__generated__/models/blockListResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import { CubeIcon, PackageIcon } from "@phosphor-icons/react";
import { FindBlockInput, FindBlockToolPart } from "./FindBlocks";
export function parseOutput(output: unknown): BlockListResponse | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (type === ResponseType.block_list || "blocks" in output) {
return output as BlockListResponse;
}
}
return null;
}
export function getAnimationText(part: FindBlockToolPart): string {
const query = (part.input as FindBlockInput | undefined)?.query?.trim();
const queryText = query ? ` matching "${query}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Searching for blocks${queryText}`;
case "output-available": {
const parsed = parseOutput(part.output);
if (parsed) {
return `Found ${parsed.count} block${parsed.count === 1 ? "" : "s"}${queryText}`;
}
return `Searching for blocks${queryText}`;
}
case "output-error":
return `Error finding blocks${queryText}`;
default:
return "Searching for blocks";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<PackageIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function AccordionIcon() {
return <CubeIcon size={32} weight="light" />;
}

View File

@@ -0,0 +1,93 @@
"use client";
import type { ToolUIPart } from "ai";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { ContentMessage } from "../../components/ToolAccordion/AccordionContent";
import {
getAccordionMeta,
getAnimationText,
getRunAgentToolOutput,
isRunAgentAgentDetailsOutput,
isRunAgentErrorOutput,
isRunAgentExecutionStartedOutput,
isRunAgentNeedLoginOutput,
isRunAgentSetupRequirementsOutput,
ToolIcon,
} from "./helpers";
import { ExecutionStartedCard } from "./components/ExecutionStartedCard/ExecutionStartedCard";
import { AgentDetailsCard } from "./components/AgentDetailsCard/AgentDetailsCard";
import { SetupRequirementsCard } from "./components/SetupRequirementsCard/SetupRequirementsCard";
import { ErrorCard } from "./components/ErrorCard/ErrorCard";
export interface RunAgentToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: RunAgentToolPart;
}
export function RunAgentTool({ part }: Props) {
const text = getAnimationText(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getRunAgentToolOutput(part);
const isError =
part.state === "output-error" ||
(!!output && isRunAgentErrorOutput(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isRunAgentExecutionStartedOutput(output) ||
isRunAgentAgentDetailsOutput(output) ||
isRunAgentSetupRequirementsOutput(output) ||
isRunAgentNeedLoginOutput(output) ||
isRunAgentErrorOutput(output));
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion
{...getAccordionMeta(output)}
defaultExpanded={
isRunAgentExecutionStartedOutput(output) ||
isRunAgentSetupRequirementsOutput(output) ||
isRunAgentAgentDetailsOutput(output)
}
>
{isRunAgentExecutionStartedOutput(output) && (
<ExecutionStartedCard output={output} />
)}
{isRunAgentAgentDetailsOutput(output) && (
<AgentDetailsCard output={output} />
)}
{isRunAgentSetupRequirementsOutput(output) && (
<SetupRequirementsCard output={output} />
)}
{isRunAgentNeedLoginOutput(output) && (
<ContentMessage>{output.message}</ContentMessage>
)}
{isRunAgentErrorOutput(output) && <ErrorCard output={output} />}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -0,0 +1,116 @@
"use client";
import type { AgentDetailsResponse } from "@/app/api/__generated__/models/agentDetailsResponse";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { FormRenderer } from "@/components/renderers/InputRenderer/FormRenderer";
import { AnimatePresence, motion } from "framer-motion";
import { useState } from "react";
import { useCopilotChatActions } from "../../../../components/CopilotChatActionsProvider/useCopilotChatActions";
import { ContentMessage } from "../../../../components/ToolAccordion/AccordionContent";
import { buildInputSchema } from "./helpers";
interface Props {
output: AgentDetailsResponse;
}
export function AgentDetailsCard({ output }: Props) {
const { onSend } = useCopilotChatActions();
const [showInputForm, setShowInputForm] = useState(false);
const [inputValues, setInputValues] = useState<Record<string, unknown>>({});
function handleRunWithExamples() {
onSend(
`Run the agent "${output.agent.name}" with placeholder/example values so I can test it.`,
);
}
function handleRunWithInputs() {
const nonEmpty = Object.fromEntries(
Object.entries(inputValues).filter(
([, v]) => v !== undefined && v !== null && v !== "",
),
);
onSend(
`Run the agent "${output.agent.name}" with these inputs: ${JSON.stringify(nonEmpty, null, 2)}`,
);
setShowInputForm(false);
setInputValues({});
}
return (
<div className="grid gap-2">
<ContentMessage>
Run this agent with example values or your own inputs.
</ContentMessage>
<div className="flex gap-2 pt-4">
<Button size="small" className="w-fit" onClick={handleRunWithExamples}>
Run with example values
</Button>
<Button
variant="outline"
size="small"
className="w-fit"
onClick={() => setShowInputForm((prev) => !prev)}
>
Run with my inputs
</Button>
</div>
<AnimatePresence initial={false}>
{showInputForm && buildInputSchema(output.agent.inputs) && (
<motion.div
initial={{ height: 0, opacity: 0, filter: "blur(6px)" }}
animate={{ height: "auto", opacity: 1, filter: "blur(0px)" }}
exit={{ height: 0, opacity: 0, filter: "blur(6px)" }}
transition={{
height: { type: "spring", bounce: 0.15, duration: 0.5 },
opacity: { duration: 0.25 },
filter: { duration: 0.2 },
}}
className="overflow-hidden"
style={{ willChange: "height, opacity, filter" }}
>
<div className="mt-4 rounded-2xl border bg-background p-3 pt-4">
<Text variant="body-medium">Enter your inputs</Text>
<FormRenderer
jsonSchema={buildInputSchema(output.agent.inputs)!}
handleChange={(v) => setInputValues(v.formData ?? {})}
uiSchema={{
"ui:submitButtonOptions": { norender: true },
}}
initialValues={inputValues}
formContext={{
showHandles: false,
size: "small",
}}
/>
<div className="-mt-8 flex gap-2">
<Button
variant="primary"
size="small"
className="w-fit"
onClick={handleRunWithInputs}
>
Run
</Button>
<Button
variant="secondary"
size="small"
className="w-fit"
onClick={() => {
setShowInputForm(false);
setInputValues({});
}}
>
Cancel
</Button>
</div>
</div>
</motion.div>
)}
</AnimatePresence>
</div>
);
}

View File

@@ -0,0 +1,8 @@
import type { RJSFSchema } from "@rjsf/utils";
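// Minimal guard, not a validator: assumes `agent.inputs` is already a JSON-schema-like
// object and only bails out (returns null, hiding the form) when it has no keys.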
export function buildInputSchema(inputs: unknown): RJSFSchema | null {
if (!inputs || typeof inputs !== "object") return null;
const properties = inputs as RJSFSchema["properties"];
if (!properties || Object.keys(properties).length === 0) return null;
return inputs as RJSFSchema;
}

View File

@@ -0,0 +1,27 @@
"use client";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import {
ContentCodeBlock,
ContentGrid,
ContentMessage,
} from "../../../../components/ToolAccordion/AccordionContent";
import { formatMaybeJson } from "../../helpers";
interface Props {
output: ErrorResponse;
}
export function ErrorCard({ output }: Props) {
return (
<ContentGrid>
<ContentMessage>{output.message}</ContentMessage>
{output.error && (
<ContentCodeBlock>{formatMaybeJson(output.error)}</ContentCodeBlock>
)}
{output.details && (
<ContentCodeBlock>{formatMaybeJson(output.details)}</ContentCodeBlock>
)}
</ContentGrid>
);
}

View File

@@ -0,0 +1,39 @@
"use client";
import type { ExecutionStartedResponse } from "@/app/api/__generated__/models/executionStartedResponse";
import { Button } from "@/components/atoms/Button/Button";
import { useRouter } from "next/navigation";
import {
ContentCard,
ContentCardDescription,
ContentCardSubtitle,
ContentCardTitle,
ContentGrid,
} from "../../../../components/ToolAccordion/AccordionContent";
interface Props {
output: ExecutionStartedResponse;
}
export function ExecutionStartedCard({ output }: Props) {
const router = useRouter();
return (
<ContentGrid>
<ContentCard>
<ContentCardTitle>Execution started</ContentCardTitle>
<ContentCardSubtitle>{output.execution_id}</ContentCardSubtitle>
<ContentCardDescription>{output.message}</ContentCardDescription>
{output.library_agent_link && (
<Button
size="small"
className="mt-3"
onClick={() => router.push(output.library_agent_link!)}
>
View Execution
</Button>
)}
</ContentCard>
</ContentGrid>
);
}

View File

@@ -0,0 +1,105 @@
"use client";
import { useState } from "react";
import { CredentialsGroupedView } from "@/components/contextual/CredentialsInput/components/CredentialsGroupedView/CredentialsGroupedView";
import { Button } from "@/components/atoms/Button/Button";
import type { CredentialsMetaInput } from "@/lib/autogpt-server-api/types";
import type { SetupRequirementsResponse } from "@/app/api/__generated__/models/setupRequirementsResponse";
import { useCopilotChatActions } from "../../../../components/CopilotChatActionsProvider/useCopilotChatActions";
import {
ContentBadge,
ContentCardDescription,
ContentCardTitle,
ContentMessage,
} from "../../../../components/ToolAccordion/AccordionContent";
import { coerceCredentialFields, coerceExpectedInputs } from "./helpers";
interface Props {
output: SetupRequirementsResponse;
}
export function SetupRequirementsCard({ output }: Props) {
const { onSend } = useCopilotChatActions();
const [inputCredentials, setInputCredentials] = useState<
Record<string, CredentialsMetaInput | undefined>
>({});
const [hasSent, setHasSent] = useState(false);
const { credentialFields, requiredCredentials } = coerceCredentialFields(
output.setup_info.user_readiness?.missing_credentials,
);
const expectedInputs = coerceExpectedInputs(
(output.setup_info.requirements as Record<string, unknown>)?.inputs,
);
function handleCredentialChange(key: string, value?: CredentialsMetaInput) {
setInputCredentials((prev) => ({ ...prev, [key]: value }));
}
const isAllComplete =
credentialFields.length > 0 &&
[...requiredCredentials].every((key) => !!inputCredentials[key]);
function handleProceed() {
setHasSent(true);
onSend(
"I've configured the required credentials. Please check if everything is ready and proceed with running the agent.",
);
}
return (
<div className="grid gap-2">
<ContentMessage>{output.message}</ContentMessage>
{credentialFields.length > 0 && (
<div className="rounded-2xl border bg-background p-3">
<CredentialsGroupedView
credentialFields={credentialFields}
requiredCredentials={requiredCredentials}
inputCredentials={inputCredentials}
inputValues={{}}
onCredentialChange={handleCredentialChange}
/>
{isAllComplete && !hasSent && (
<Button
variant="primary"
size="small"
className="mt-3 w-full"
onClick={handleProceed}
>
Proceed
</Button>
)}
</div>
)}
{expectedInputs.length > 0 && (
<div className="rounded-2xl border bg-background p-3">
<ContentCardTitle className="text-xs">
Expected inputs
</ContentCardTitle>
<div className="mt-2 grid gap-2">
{expectedInputs.map((input) => (
<div key={input.name} className="rounded-xl border p-2">
<div className="flex items-center justify-between gap-2">
<ContentCardTitle className="text-xs">
{input.title}
</ContentCardTitle>
<ContentBadge>
{input.required ? "Required" : "Optional"}
</ContentBadge>
</div>
<ContentCardDescription className="mt-1">
{input.name} &bull; {input.type}
{input.description ? ` \u2022 ${input.description}` : ""}
</ContentCardDescription>
</div>
))}
</div>
</div>
)}
</div>
);
}

View File

@@ -0,0 +1,116 @@
import type { CredentialField } from "@/components/contextual/CredentialsInput/components/CredentialsGroupedView/helpers";
const VALID_CREDENTIAL_TYPES = new Set([
"api_key",
"oauth2",
"user_password",
"host_scoped",
]);
/**
* Transforms raw missing_credentials from SetupRequirementsResponse
* into CredentialField[] tuples compatible with CredentialsGroupedView.
*
* Each CredentialField is [key, schema] where schema matches
* BlockIOCredentialsSubSchema shape.
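*
* Illustrative example (hypothetical payload):
*   coerceCredentialFields({ github: { provider: "github", type: "api_key" } })
*   // -> credentialFields: [["github", { type: "object", properties: {},
*   //      credentials_provider: ["github"], credentials_types: ["api_key"] }]]
*   // -> requiredCredentials: Set { "github" }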
*/
export function coerceCredentialFields(rawMissingCredentials: unknown): {
credentialFields: CredentialField[];
requiredCredentials: Set<string>;
} {
const missing =
rawMissingCredentials && typeof rawMissingCredentials === "object"
? (rawMissingCredentials as Record<string, unknown>)
: {};
const credentialFields: CredentialField[] = [];
const requiredCredentials = new Set<string>();
Object.entries(missing).forEach(([key, value]) => {
if (!value || typeof value !== "object") return;
const cred = value as Record<string, unknown>;
const provider =
typeof cred.provider === "string" ? cred.provider.trim() : "";
if (!provider) return;
const types =
Array.isArray(cred.types) && cred.types.length > 0
? cred.types
: typeof cred.type === "string"
? [cred.type]
: [];
const credentialTypes = types
.map((t) => (typeof t === "string" ? t.trim() : ""))
.filter((t) => VALID_CREDENTIAL_TYPES.has(t));
if (credentialTypes.length === 0) return;
const scopes = Array.isArray(cred.scopes)
? cred.scopes.filter((s): s is string => typeof s === "string")
: undefined;
const schema = {
type: "object" as const,
properties: {},
credentials_provider: [provider],
credentials_types: credentialTypes,
credentials_scopes: scopes,
};
credentialFields.push([key, schema]);
requiredCredentials.add(key);
});
return { credentialFields, requiredCredentials };
}
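// Normalizes loosely-typed input descriptors from setup_info.requirements.inputs.
// Illustrative example: [{ name: "city", type: "string", required: true }] ->
// [{ name: "city", title: "city", type: "string", required: true }] (title falls back to name).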
export function coerceExpectedInputs(rawInputs: unknown): Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}> {
if (!Array.isArray(rawInputs)) return [];
const results: Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}> = [];
rawInputs.forEach((value, index) => {
if (!value || typeof value !== "object") return;
const input = value as Record<string, unknown>;
const name =
typeof input.name === "string" && input.name.trim()
? input.name.trim()
: `input-${index}`;
const title =
typeof input.title === "string" && input.title.trim()
? input.title.trim()
: name;
const type = typeof input.type === "string" ? input.type : "unknown";
const description =
typeof input.description === "string" && input.description.trim()
? input.description.trim()
: undefined;
const required = Boolean(input.required);
const item: {
name: string;
title: string;
type: string;
description?: string;
required: boolean;
} = { name, title, type, required };
if (description) item.description = description;
results.push(item);
});
return results;
}

View File

@@ -0,0 +1,248 @@
import type { AgentDetailsResponse } from "@/app/api/__generated__/models/agentDetailsResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { ExecutionStartedResponse } from "@/app/api/__generated__/models/executionStartedResponse";
import type { NeedLoginResponse } from "@/app/api/__generated__/models/needLoginResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import type { SetupRequirementsResponse } from "@/app/api/__generated__/models/setupRequirementsResponse";
import {
PlayIcon,
RocketLaunchIcon,
WarningDiamondIcon,
} from "@phosphor-icons/react";
import type { ToolUIPart } from "ai";
import { SpinnerLoader } from "../../components/SpinnerLoader/SpinnerLoader";
export interface RunAgentInput {
username_agent_slug?: string;
library_agent_id?: string;
inputs?: Record<string, unknown>;
use_defaults?: boolean;
schedule_name?: string;
cron?: string;
timezone?: string;
}
export type RunAgentToolOutput =
| SetupRequirementsResponse
| ExecutionStartedResponse
| AgentDetailsResponse
| NeedLoginResponse
| ErrorResponse;
const RUN_AGENT_OUTPUT_TYPES = new Set<string>([
ResponseType.setup_requirements,
ResponseType.execution_started,
ResponseType.agent_details,
ResponseType.need_login,
ResponseType.error,
]);
export function isRunAgentSetupRequirementsOutput(
output: RunAgentToolOutput,
): output is SetupRequirementsResponse {
return (
output.type === ResponseType.setup_requirements ||
("setup_info" in output && typeof output.setup_info === "object")
);
}
export function isRunAgentExecutionStartedOutput(
output: RunAgentToolOutput,
): output is ExecutionStartedResponse {
return (
output.type === ResponseType.execution_started || "execution_id" in output
);
}
export function isRunAgentAgentDetailsOutput(
output: RunAgentToolOutput,
): output is AgentDetailsResponse {
return output.type === ResponseType.agent_details || "agent" in output;
}
export function isRunAgentNeedLoginOutput(
output: RunAgentToolOutput,
): output is NeedLoginResponse {
return output.type === ResponseType.need_login;
}
export function isRunAgentErrorOutput(
output: RunAgentToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
function parseOutput(output: unknown): RunAgentToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (typeof type === "string" && RUN_AGENT_OUTPUT_TYPES.has(type)) {
return output as RunAgentToolOutput;
}
if ("execution_id" in output) return output as ExecutionStartedResponse;
if ("setup_info" in output) return output as SetupRequirementsResponse;
if ("agent" in output) return output as AgentDetailsResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
if (type === ResponseType.need_login) return output as NeedLoginResponse;
}
return null;
}
export function getRunAgentToolOutput(
part: unknown,
): RunAgentToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
function getAgentIdentifierText(
input: RunAgentInput | undefined,
): string | null {
if (!input) return null;
const slug = input.username_agent_slug?.trim();
if (slug) return slug;
const libraryId = input.library_agent_id?.trim();
if (libraryId) return `Library agent ${libraryId}`;
return null;
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
const input = part.input as RunAgentInput | undefined;
const agentIdentifier = getAgentIdentifierText(input);
const isSchedule = Boolean(
input?.schedule_name?.trim() || input?.cron?.trim(),
);
const actionPhrase = isSchedule
? "Scheduling the agent to run"
: "Running the agent";
const identifierText = agentIdentifier ? ` "${agentIdentifier}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `${actionPhrase}${identifierText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `${actionPhrase}${identifierText}`;
if (isRunAgentExecutionStartedOutput(output)) {
return `Started "${output.graph_name}"`;
}
if (isRunAgentAgentDetailsOutput(output)) {
return `Agent inputs needed for "${output.agent.name}"`;
}
if (isRunAgentSetupRequirementsOutput(output)) {
return `Setup needed for "${output.setup_info.agent_name}"`;
}
if (isRunAgentNeedLoginOutput(output))
return "Sign in required to run agent";
return "Error running agent";
}
case "output-error":
return "Error running agent";
default:
return actionPhrase;
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
if (isError) {
return (
<WarningDiamondIcon size={14} weight="regular" className="text-red-500" />
);
}
if (isStreaming) {
return <SpinnerLoader size={40} className="text-neutral-700" />;
}
return <PlayIcon size={14} weight="regular" className="text-neutral-400" />;
}
export function AccordionIcon() {
return <RocketLaunchIcon size={28} weight="light" />;
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}
export function getAccordionMeta(output: RunAgentToolOutput): {
icon: React.ReactNode;
title: string;
titleClassName?: string;
description?: string;
} {
const icon = <AccordionIcon />;
if (isRunAgentExecutionStartedOutput(output)) {
const statusText =
typeof output.status === "string" && output.status.trim()
? output.status.trim()
: "started";
return {
icon: <SpinnerLoader size={28} className="text-neutral-700" />,
title: output.graph_name,
description: `Status: ${statusText}`,
};
}
if (isRunAgentAgentDetailsOutput(output)) {
return {
icon,
title: output.agent.name,
description: "Inputs required",
};
}
if (isRunAgentSetupRequirementsOutput(output)) {
const missingCredsCount = Object.keys(
(output.setup_info.user_readiness?.missing_credentials ?? {}) as Record<
string,
unknown
>,
).length;
return {
icon,
title: output.setup_info.agent_name,
description:
missingCredsCount > 0
? `Missing ${missingCredsCount} credential${missingCredsCount === 1 ? "" : "s"}`
: output.message,
};
}
if (isRunAgentNeedLoginOutput(output)) {
return { icon, title: "Sign in required" };
}
return {
icon: (
<WarningDiamondIcon size={28} weight="light" className="text-red-500" />
),
title: "Error",
titleClassName: "text-red-500",
};
}

View File

@@ -0,0 +1,76 @@
"use client";
import type { ToolUIPart } from "ai";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { BlockOutputCard } from "./components/BlockOutputCard/BlockOutputCard";
import { ErrorCard } from "./components/ErrorCard/ErrorCard";
import { SetupRequirementsCard } from "./components/SetupRequirementsCard/SetupRequirementsCard";
import {
getAccordionMeta,
getAnimationText,
getRunBlockToolOutput,
isRunBlockBlockOutput,
isRunBlockErrorOutput,
isRunBlockSetupRequirementsOutput,
ToolIcon,
} from "./helpers";
export interface RunBlockToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: RunBlockToolPart;
}
export function RunBlockTool({ part }: Props) {
const text = getAnimationText(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getRunBlockToolOutput(part);
const isError =
part.state === "output-error" ||
(!!output && isRunBlockErrorOutput(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isRunBlockBlockOutput(output) ||
isRunBlockSetupRequirementsOutput(output) ||
isRunBlockErrorOutput(output));
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion
{...getAccordionMeta(output)}
defaultExpanded={
isRunBlockBlockOutput(output) ||
isRunBlockSetupRequirementsOutput(output)
}
>
{isRunBlockBlockOutput(output) && <BlockOutputCard output={output} />}
{isRunBlockSetupRequirementsOutput(output) && (
<SetupRequirementsCard output={output} />
)}
{isRunBlockErrorOutput(output) && <ErrorCard output={output} />}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -0,0 +1,133 @@
"use client";
import React, { useState } from "react";
import { getGetWorkspaceDownloadFileByIdUrl } from "@/app/api/__generated__/endpoints/workspace/workspace";
import { Button } from "@/components/atoms/Button/Button";
import type { BlockOutputResponse } from "@/app/api/__generated__/models/blockOutputResponse";
import {
globalRegistry,
OutputItem,
} from "@/components/contextual/OutputRenderers";
import type { OutputMetadata } from "@/components/contextual/OutputRenderers";
import {
ContentBadge,
ContentCard,
ContentCardTitle,
ContentGrid,
ContentMessage,
} from "../../../../components/ToolAccordion/AccordionContent";
interface Props {
output: BlockOutputResponse;
}
const COLLAPSED_LIMIT = 3;
function isWorkspaceRef(value: unknown): value is string {
return typeof value === "string" && value.startsWith("workspace://");
}
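// Workspace refs use the form "workspace://<fileId>#<mimeType>", where the fragment is an
// optional MIME hint. Illustrative example (hypothetical id):
//   "workspace://abc123#image/png" -> { value: "/api/proxy" + download URL for "abc123",
//                                       metadata: { mimeType: "image/png", type: "image" } }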
function resolveForRenderer(value: unknown): {
value: unknown;
metadata?: OutputMetadata;
} {
if (!isWorkspaceRef(value)) return { value };
const withoutPrefix = value.replace("workspace://", "");
const fileId = withoutPrefix.split("#")[0];
const apiPath = getGetWorkspaceDownloadFileByIdUrl(fileId);
const url = `/api/proxy${apiPath}`;
const hashIndex = value.indexOf("#");
const mimeHint =
hashIndex !== -1 ? value.slice(hashIndex + 1) || undefined : undefined;
const metadata: OutputMetadata = {};
if (mimeHint) {
metadata.mimeType = mimeHint;
if (mimeHint.startsWith("image/")) metadata.type = "image";
else if (mimeHint.startsWith("video/")) metadata.type = "video";
}
return { value: url, metadata };
}
function RenderOutputValue({ value }: { value: unknown }) {
const resolved = resolveForRenderer(value);
const renderer = globalRegistry.getRenderer(
resolved.value,
resolved.metadata,
);
if (renderer) {
return (
<OutputItem
value={resolved.value}
metadata={resolved.metadata}
renderer={renderer}
/>
);
}
// Fallback for audio workspace refs
if (
isWorkspaceRef(value) &&
resolved.metadata?.mimeType?.startsWith("audio/")
) {
return (
<audio controls src={String(resolved.value)} className="mt-2 w-full" />
);
}
return null;
}
function OutputKeySection({
outputKey,
items,
}: {
outputKey: string;
items: unknown[];
}) {
const [expanded, setExpanded] = useState(false);
const hasMoreItems = items.length > COLLAPSED_LIMIT;
const visibleItems = expanded ? items : items.slice(0, COLLAPSED_LIMIT);
return (
<ContentCard>
<div className="flex items-center justify-between gap-2">
<ContentCardTitle className="text-xs">{outputKey}</ContentCardTitle>
<ContentBadge>
{items.length} item{items.length === 1 ? "" : "s"}
</ContentBadge>
</div>
<div className="mt-2">
{visibleItems.map((item, i) => (
<RenderOutputValue key={i} value={item} />
))}
</div>
{hasMoreItems && (
<Button
variant="ghost"
size="small"
className="mt-1 h-auto px-0 py-0.5 text-[11px] text-muted-foreground"
onClick={() => setExpanded((prev) => !prev)}
>
{expanded ? "Show less" : `Show all ${items.length} items`}
</Button>
)}
</ContentCard>
);
}
export function BlockOutputCard({ output }: Props) {
return (
<ContentGrid>
<ContentMessage>{output.message}</ContentMessage>
{Object.entries(output.outputs ?? {}).map(([key, items]) => (
<OutputKeySection key={key} outputKey={key} items={items} />
))}
</ContentGrid>
);
}

View File

@@ -0,0 +1,27 @@
"use client";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import {
ContentCodeBlock,
ContentGrid,
ContentMessage,
} from "../../../../components/ToolAccordion/AccordionContent";
import { formatMaybeJson } from "../../helpers";
interface Props {
output: ErrorResponse;
}
export function ErrorCard({ output }: Props) {
return (
<ContentGrid>
<ContentMessage>{output.message}</ContentMessage>
{output.error && (
<ContentCodeBlock>{formatMaybeJson(output.error)}</ContentCodeBlock>
)}
{output.details && (
<ContentCodeBlock>{formatMaybeJson(output.details)}</ContentCodeBlock>
)}
</ContentGrid>
);
}

View File

@@ -0,0 +1,197 @@
"use client";
import type { SetupRequirementsResponse } from "@/app/api/__generated__/models/setupRequirementsResponse";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { CredentialsGroupedView } from "@/components/contextual/CredentialsInput/components/CredentialsGroupedView/CredentialsGroupedView";
import { FormRenderer } from "@/components/renderers/InputRenderer/FormRenderer";
import type { CredentialsMetaInput } from "@/lib/autogpt-server-api/types";
import { AnimatePresence, motion } from "framer-motion";
import { useState } from "react";
import { useCopilotChatActions } from "../../../../components/CopilotChatActionsProvider/useCopilotChatActions";
import {
ContentBadge,
ContentCardDescription,
ContentCardTitle,
ContentMessage,
} from "../../../../components/ToolAccordion/AccordionContent";
import {
buildExpectedInputsSchema,
coerceCredentialFields,
coerceExpectedInputs,
} from "./helpers";
interface Props {
output: SetupRequirementsResponse;
}
export function SetupRequirementsCard({ output }: Props) {
const { onSend } = useCopilotChatActions();
const [inputCredentials, setInputCredentials] = useState<
Record<string, CredentialsMetaInput | undefined>
>({});
const [hasSentCredentials, setHasSentCredentials] = useState(false);
const [showInputForm, setShowInputForm] = useState(false);
const [inputValues, setInputValues] = useState<Record<string, unknown>>({});
const { credentialFields, requiredCredentials } = coerceCredentialFields(
output.setup_info.user_readiness?.missing_credentials,
);
const expectedInputs = coerceExpectedInputs(
(output.setup_info.requirements as Record<string, unknown>)?.inputs,
);
const inputSchema = buildExpectedInputsSchema(expectedInputs);
function handleCredentialChange(key: string, value?: CredentialsMetaInput) {
setInputCredentials((prev) => ({ ...prev, [key]: value }));
}
const isAllCredentialsComplete =
credentialFields.length > 0 &&
[...requiredCredentials].every((key) => !!inputCredentials[key]);
function handleProceedCredentials() {
setHasSentCredentials(true);
onSend(
"I've configured the required credentials. Please re-run the block now.",
);
}
function handleRunWithInputs() {
const nonEmpty = Object.fromEntries(
Object.entries(inputValues).filter(
([, v]) => v !== undefined && v !== null && v !== "",
),
);
onSend(
`Run the block with these inputs: ${JSON.stringify(nonEmpty, null, 2)}`,
);
setShowInputForm(false);
setInputValues({});
}
return (
<div className="grid gap-2">
<ContentMessage>{output.message}</ContentMessage>
{credentialFields.length > 0 && (
<div className="rounded-2xl border bg-background p-3">
<CredentialsGroupedView
credentialFields={credentialFields}
requiredCredentials={requiredCredentials}
inputCredentials={inputCredentials}
inputValues={{}}
onCredentialChange={handleCredentialChange}
/>
{isAllCredentialsComplete && !hasSentCredentials && (
<Button
variant="primary"
size="small"
className="mt-3 w-full"
onClick={handleProceedCredentials}
>
Proceed
</Button>
)}
</div>
)}
{inputSchema && (
<div className="flex gap-2 pt-2">
<Button
variant="outline"
size="small"
className="w-fit"
onClick={() => setShowInputForm((prev) => !prev)}
>
{showInputForm ? "Hide inputs" : "Fill in inputs"}
</Button>
</div>
)}
<AnimatePresence initial={false}>
{showInputForm && inputSchema && (
<motion.div
initial={{ height: 0, opacity: 0, filter: "blur(6px)" }}
animate={{ height: "auto", opacity: 1, filter: "blur(0px)" }}
exit={{ height: 0, opacity: 0, filter: "blur(6px)" }}
transition={{
height: { type: "spring", bounce: 0.15, duration: 0.5 },
opacity: { duration: 0.25 },
filter: { duration: 0.2 },
}}
className="overflow-hidden"
style={{ willChange: "height, opacity, filter" }}
>
<div className="rounded-2xl border bg-background p-3 pt-4">
<Text variant="body-medium">Block inputs</Text>
<FormRenderer
jsonSchema={inputSchema}
handleChange={(v) => setInputValues(v.formData ?? {})}
uiSchema={{
"ui:submitButtonOptions": { norender: true },
}}
initialValues={inputValues}
formContext={{
showHandles: false,
size: "small",
}}
/>
<div className="-mt-8 flex gap-2">
<Button
variant="primary"
size="small"
className="w-fit"
onClick={handleRunWithInputs}
>
Run
</Button>
<Button
variant="secondary"
size="small"
className="w-fit"
onClick={() => {
setShowInputForm(false);
setInputValues({});
}}
>
Cancel
</Button>
</div>
</div>
</motion.div>
)}
</AnimatePresence>
{expectedInputs.length > 0 && !inputSchema && (
<div className="rounded-2xl border bg-background p-3">
<ContentCardTitle className="text-xs">
Expected inputs
</ContentCardTitle>
<div className="mt-2 grid gap-2">
{expectedInputs.map((input) => (
<div key={input.name} className="rounded-xl border p-2">
<div className="flex items-center justify-between gap-2">
<ContentCardTitle className="text-xs">
{input.title}
</ContentCardTitle>
<ContentBadge>
{input.required ? "Required" : "Optional"}
</ContentBadge>
</div>
<ContentCardDescription className="mt-1">
{input.name} &bull; {input.type}
{input.description ? ` \u2022 ${input.description}` : ""}
</ContentCardDescription>
</div>
))}
</div>
</div>
)}
</div>
);
}

View File

@@ -0,0 +1,156 @@
import type { CredentialField } from "@/components/contextual/CredentialsInput/components/CredentialsGroupedView/helpers";
import type { RJSFSchema } from "@rjsf/utils";
const VALID_CREDENTIAL_TYPES = new Set([
"api_key",
"oauth2",
"user_password",
"host_scoped",
]);
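/**
 * Mirrors the run-agent helper: turns raw missing_credentials into [key, schema]
 * tuples that CredentialsGroupedView can render, keeping only entries with a
 * provider and at least one recognised credential type.
 */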
export function coerceCredentialFields(rawMissingCredentials: unknown): {
credentialFields: CredentialField[];
requiredCredentials: Set<string>;
} {
const missing =
rawMissingCredentials && typeof rawMissingCredentials === "object"
? (rawMissingCredentials as Record<string, unknown>)
: {};
const credentialFields: CredentialField[] = [];
const requiredCredentials = new Set<string>();
Object.entries(missing).forEach(([key, value]) => {
if (!value || typeof value !== "object") return;
const cred = value as Record<string, unknown>;
const provider =
typeof cred.provider === "string" ? cred.provider.trim() : "";
if (!provider) return;
const types =
Array.isArray(cred.types) && cred.types.length > 0
? cred.types
: typeof cred.type === "string"
? [cred.type]
: [];
const credentialTypes = types
.map((t) => (typeof t === "string" ? t.trim() : ""))
.filter((t) => VALID_CREDENTIAL_TYPES.has(t));
if (credentialTypes.length === 0) return;
const scopes = Array.isArray(cred.scopes)
? cred.scopes.filter((s): s is string => typeof s === "string")
: undefined;
const schema = {
type: "object" as const,
properties: {},
credentials_provider: [provider],
credentials_types: credentialTypes,
credentials_scopes: scopes,
};
credentialFields.push([key, schema]);
requiredCredentials.add(key);
});
return { credentialFields, requiredCredentials };
}
export function coerceExpectedInputs(rawInputs: unknown): Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}> {
if (!Array.isArray(rawInputs)) return [];
const results: Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}> = [];
rawInputs.forEach((value, index) => {
if (!value || typeof value !== "object") return;
const input = value as Record<string, unknown>;
const name =
typeof input.name === "string" && input.name.trim()
? input.name.trim()
: `input-${index}`;
const title =
typeof input.title === "string" && input.title.trim()
? input.title.trim()
: name;
const type = typeof input.type === "string" ? input.type : "unknown";
const description =
typeof input.description === "string" && input.description.trim()
? input.description.trim()
: undefined;
const required = Boolean(input.required);
const item: {
name: string;
title: string;
type: string;
description?: string;
required: boolean;
} = { name, title, type, required };
if (description) item.description = description;
results.push(item);
});
return results;
}
/**
* Build an RJSF schema from expected inputs so they can be rendered
* as a dynamic form via FormRenderer.
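*
* Illustrative example (hypothetical input):
*   buildExpectedInputsSchema([{ name: "city", title: "City", type: "str", required: true }])
*   // -> { type: "object", properties: { city: { type: "string", title: "City" } },
*   //      required: ["city"] }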
*/
export function buildExpectedInputsSchema(
expectedInputs: Array<{
name: string;
title: string;
type: string;
description?: string;
required: boolean;
}>,
): RJSFSchema | null {
if (expectedInputs.length === 0) return null;
const TYPE_MAP: Record<string, string> = {
string: "string",
str: "string",
text: "string",
number: "number",
int: "integer",
integer: "integer",
float: "number",
boolean: "boolean",
bool: "boolean",
};
const properties: Record<string, Record<string, unknown>> = {};
const required: string[] = [];
for (const input of expectedInputs) {
properties[input.name] = {
type: TYPE_MAP[input.type.toLowerCase()] ?? "string",
title: input.title,
...(input.description ? { description: input.description } : {}),
};
if (input.required) required.push(input.name);
}
return {
type: "object",
properties,
...(required.length > 0 ? { required } : {}),
};
}

View File

@@ -0,0 +1,185 @@
import type { BlockOutputResponse } from "@/app/api/__generated__/models/blockOutputResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import type { SetupRequirementsResponse } from "@/app/api/__generated__/models/setupRequirementsResponse";
import {
PlayCircleIcon,
PlayIcon,
WarningDiamondIcon,
} from "@phosphor-icons/react";
import type { ToolUIPart } from "ai";
import { SpinnerLoader } from "../../components/SpinnerLoader/SpinnerLoader";
export interface RunBlockInput {
block_id?: string;
input_data?: Record<string, unknown>;
}
export type RunBlockToolOutput =
| SetupRequirementsResponse
| BlockOutputResponse
| ErrorResponse;
const RUN_BLOCK_OUTPUT_TYPES = new Set<string>([
ResponseType.setup_requirements,
ResponseType.block_output,
ResponseType.error,
]);
export function isRunBlockSetupRequirementsOutput(
output: RunBlockToolOutput,
): output is SetupRequirementsResponse {
return (
output.type === ResponseType.setup_requirements ||
("setup_info" in output && typeof output.setup_info === "object")
);
}
export function isRunBlockBlockOutput(
output: RunBlockToolOutput,
): output is BlockOutputResponse {
return output.type === ResponseType.block_output || "block_id" in output;
}
export function isRunBlockErrorOutput(
output: RunBlockToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
function parseOutput(output: unknown): RunBlockToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (typeof type === "string" && RUN_BLOCK_OUTPUT_TYPES.has(type)) {
return output as RunBlockToolOutput;
}
if ("block_id" in output) return output as BlockOutputResponse;
if ("setup_info" in output) return output as SetupRequirementsResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getRunBlockToolOutput(
part: unknown,
): RunBlockToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
const input = part.input as RunBlockInput | undefined;
const blockId = input?.block_id?.trim();
const blockText = blockId ? ` "${blockId}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Running the block${blockText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `Running the block${blockText}`;
if (isRunBlockBlockOutput(output)) return `Ran "${output.block_name}"`;
if (isRunBlockSetupRequirementsOutput(output)) {
return `Setup needed for "${output.setup_info.agent_name}"`;
}
return "Error running block";
}
case "output-error":
return "Error running block";
default:
return "Running the block";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
if (isError) {
return (
<WarningDiamondIcon size={14} weight="regular" className="text-red-500" />
);
}
if (isStreaming) {
return <SpinnerLoader size={40} className="text-neutral-700" />;
}
return <PlayIcon size={14} weight="regular" className="text-neutral-400" />;
}
export function AccordionIcon() {
return <PlayCircleIcon size={32} weight="light" />;
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}
export function getAccordionMeta(output: RunBlockToolOutput): {
icon: React.ReactNode;
title: string;
titleClassName?: string;
description?: string;
} {
const icon = <AccordionIcon />;
if (isRunBlockBlockOutput(output)) {
const keys = Object.keys(output.outputs ?? {});
return {
icon: <SpinnerLoader size={32} className="text-neutral-700" />,
title: output.block_name,
description:
keys.length > 0
? `${keys.length} output key${keys.length === 1 ? "" : "s"}`
: output.message,
};
}
if (isRunBlockSetupRequirementsOutput(output)) {
const missingCredsCount = Object.keys(
(output.setup_info.user_readiness?.missing_credentials ?? {}) as Record<
string,
unknown
>,
).length;
return {
icon,
title: output.setup_info.agent_name,
description:
missingCredsCount > 0
? `Missing ${missingCredsCount} credential${missingCredsCount === 1 ? "" : "s"}`
: output.message,
};
}
return {
icon: (
<WarningDiamondIcon size={32} weight="light" className="text-red-500" />
),
title: "Error",
titleClassName: "text-red-500",
};
}

View File

@@ -0,0 +1,186 @@
"use client";
import type { ToolUIPart } from "ai";
import { useMemo } from "react";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import {
ContentCard,
ContentCardDescription,
ContentCardHeader,
ContentCardSubtitle,
ContentCardTitle,
ContentGrid,
ContentLink,
ContentMessage,
ContentSuggestionsList,
} from "../../components/ToolAccordion/AccordionContent";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import {
AccordionIcon,
getAnimationText,
getDocsToolOutput,
getDocsToolTitle,
getToolLabel,
isDocPageOutput,
isDocSearchResultsOutput,
isErrorOutput,
isNoResultsOutput,
toDocsUrl,
ToolIcon,
type DocsToolType,
} from "./helpers";
export interface DocsToolPart {
type: DocsToolType;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: DocsToolPart;
}
function truncate(text: string, maxChars: number): string {
const trimmed = text.trim();
if (trimmed.length <= maxChars) return trimmed;
return `${trimmed.slice(0, maxChars).trimEnd()}…`;
}
export function SearchDocsTool({ part }: Props) {
const output = getDocsToolOutput(part);
const text = getAnimationText(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const isError =
part.state === "output-error" || (!!output && isErrorOutput(output));
const normalized = useMemo(() => {
if (!output) return null;
const title = getDocsToolTitle(part.type, output);
const label = getToolLabel(part.type);
return { title, label };
}, [output, part.type]);
const isOutputAvailable = part.state === "output-available" && !!output;
const docSearchOutput =
isOutputAvailable && output && isDocSearchResultsOutput(output)
? output
: null;
const docPageOutput =
isOutputAvailable && output && isDocPageOutput(output) ? output : null;
const noResultsOutput =
isOutputAvailable && output && isNoResultsOutput(output) ? output : null;
const errorOutput =
isOutputAvailable && output && isErrorOutput(output) ? output : null;
const hasExpandableContent =
isOutputAvailable &&
((!!docSearchOutput && docSearchOutput.count > 0) ||
!!docPageOutput ||
!!noResultsOutput ||
!!errorOutput);
const accordionDescription =
hasExpandableContent && docSearchOutput
? `Found ${docSearchOutput.count} result${docSearchOutput.count === 1 ? "" : "s"} for "${docSearchOutput.query}"`
: hasExpandableContent && docPageOutput
? docPageOutput.path
: hasExpandableContent && (noResultsOutput || errorOutput)
? ((noResultsOutput ?? errorOutput)?.message ?? null)
: null;
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon
toolType={part.type}
isStreaming={isStreaming}
isError={isError}
/>
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && normalized && (
<ToolAccordion
icon={<AccordionIcon toolType={part.type} />}
title={normalized.title}
description={accordionDescription}
>
{docSearchOutput && (
<ContentGrid>
{docSearchOutput.results.map((r) => {
const href = r.doc_url ?? toDocsUrl(r.path);
return (
<ContentCard key={r.path}>
<ContentCardHeader
action={<ContentLink href={href}>Open</ContentLink>}
>
<ContentCardTitle>{r.title}</ContentCardTitle>
<ContentCardSubtitle>
{r.path}
{r.section ? ` • ${r.section}` : ""}
</ContentCardSubtitle>
<ContentCardDescription>
{truncate(r.snippet, 240)}
</ContentCardDescription>
</ContentCardHeader>
</ContentCard>
);
})}
</ContentGrid>
)}
{docPageOutput && (
<div>
<ContentCardHeader
action={
<ContentLink
href={
docPageOutput.doc_url ?? toDocsUrl(docPageOutput.path)
}
>
Open
</ContentLink>
}
>
<ContentCardTitle>{docPageOutput.title}</ContentCardTitle>
<ContentCardSubtitle>{docPageOutput.path}</ContentCardSubtitle>
</ContentCardHeader>
<ContentCardDescription className="whitespace-pre-wrap">
{truncate(docPageOutput.content, 800)}
</ContentCardDescription>
</div>
)}
{noResultsOutput && (
<div>
<ContentMessage>{noResultsOutput.message}</ContentMessage>
{noResultsOutput.suggestions &&
noResultsOutput.suggestions.length > 0 && (
<ContentSuggestionsList items={noResultsOutput.suggestions} />
)}
</div>
)}
{errorOutput && (
<div>
<ContentMessage>{errorOutput.message}</ContentMessage>
{errorOutput.error && (
<ContentCardDescription>
{errorOutput.error}
</ContentCardDescription>
)}
</div>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -0,0 +1,215 @@
import type { DocPageResponse } from "@/app/api/__generated__/models/docPageResponse";
import type { DocSearchResultsResponse } from "@/app/api/__generated__/models/docSearchResultsResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { NoResultsResponse } from "@/app/api/__generated__/models/noResultsResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import {
ArticleIcon,
FileMagnifyingGlassIcon,
FileTextIcon,
} from "@phosphor-icons/react";
import { ToolUIPart } from "ai";
export interface SearchDocsInput {
query: string;
}
export interface GetDocPageInput {
path: string;
}
export type DocsToolOutput =
| DocSearchResultsResponse
| DocPageResponse
| NoResultsResponse
| ErrorResponse;
export type DocsToolType = "tool-search_docs" | "tool-get_doc_page" | string;
export function getToolLabel(toolType: DocsToolType): string {
switch (toolType) {
case "tool-search_docs":
return "Docs";
case "tool-get_doc_page":
return "Docs page";
default:
return "Docs";
}
}
function parseOutput(output: unknown): DocsToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.doc_search_results ||
type === ResponseType.doc_page ||
type === ResponseType.no_results ||
type === ResponseType.error
) {
return output as DocsToolOutput;
}
if ("results" in output && "query" in output)
return output as DocSearchResultsResponse;
if ("content" in output && "path" in output)
return output as DocPageResponse;
if ("suggestions" in output && !("error" in output))
return output as NoResultsResponse;
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function getDocsToolOutput(part: unknown): DocsToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
export function isDocSearchResultsOutput(
output: DocsToolOutput,
): output is DocSearchResultsResponse {
return output.type === ResponseType.doc_search_results || "results" in output;
}
export function isDocPageOutput(
output: DocsToolOutput,
): output is DocPageResponse {
return output.type === ResponseType.doc_page || "content" in output;
}
export function isNoResultsOutput(
output: DocsToolOutput,
): output is NoResultsResponse {
return (
output.type === ResponseType.no_results ||
("suggestions" in output && !("error" in output))
);
}
export function isErrorOutput(output: DocsToolOutput): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getDocsToolTitle(
toolType: DocsToolType,
output: DocsToolOutput,
): string {
if (toolType === "tool-search_docs") {
if (isDocSearchResultsOutput(output)) return "Documentation results";
if (isNoResultsOutput(output)) return "No documentation found";
return "Documentation search error";
}
if (isDocPageOutput(output)) return "Documentation page";
if (isNoResultsOutput(output)) return "No documentation found";
return "Documentation page error";
}
export function getAnimationText(part: {
type: DocsToolType;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
switch (part.type) {
case "tool-search_docs": {
const query = (part.input as SearchDocsInput | undefined)?.query?.trim();
const queryText = query ? ` for "${query}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Searching documentation${queryText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `Searching documentation${queryText}`;
if (isDocSearchResultsOutput(output)) {
const count = output.count ?? output.results.length;
return `Found ${count} result${count === 1 ? "" : "s"}${queryText}`;
}
if (isNoResultsOutput(output)) {
return `No results found${queryText}`;
}
return `Error searching documentation${queryText}`;
}
case "output-error":
return `Error searching documentation${queryText}`;
default:
return "Searching documentation";
}
}
case "tool-get_doc_page": {
const path = (part.input as GetDocPageInput | undefined)?.path?.trim();
const pathText = path ? ` "${path}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Loading documentation page${pathText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `Loading documentation page${pathText}`;
if (isDocPageOutput(output)) return `Loaded "${output.title}"`;
if (isNoResultsOutput(output)) return "Documentation page not found";
return "Error loading documentation page";
}
case "output-error":
return "Error loading documentation page";
default:
return "Loading documentation page";
}
}
}
return "Processing";
}
export function ToolIcon({
toolType,
isStreaming,
isError,
}: {
toolType: DocsToolType;
isStreaming?: boolean;
isError?: boolean;
}) {
const IconComponent =
toolType === "tool-get_doc_page" ? FileTextIcon : FileMagnifyingGlassIcon;
return (
<IconComponent
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function AccordionIcon({ toolType }: { toolType: DocsToolType }) {
const IconComponent =
toolType === "tool-get_doc_page" ? ArticleIcon : FileTextIcon;
return <IconComponent size={32} weight="light" />;
}
export function toDocsUrl(path: string): string {
const urlPath = path.includes(".")
? path.slice(0, path.lastIndexOf("."))
: path;
return `https://docs.agpt.co/${urlPath}`;
}

View File

@@ -0,0 +1,261 @@
"use client";
import type { ToolUIPart } from "ai";
import React from "react";
import { getGetWorkspaceDownloadFileByIdUrl } from "@/app/api/__generated__/endpoints/workspace/workspace";
import {
globalRegistry,
OutputItem,
} from "@/components/contextual/OutputRenderers";
import type { OutputMetadata } from "@/components/contextual/OutputRenderers";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import {
ContentBadge,
ContentCard,
ContentCardHeader,
ContentCardSubtitle,
ContentCardTitle,
ContentCodeBlock,
ContentGrid,
ContentLink,
ContentMessage,
ContentSuggestionsList,
} from "../../components/ToolAccordion/AccordionContent";
import {
formatMaybeJson,
getAnimationText,
getViewAgentOutputToolOutput,
isAgentOutputResponse,
isErrorResponse,
isNoResultsResponse,
AccordionIcon,
ToolIcon,
type ViewAgentOutputToolOutput,
} from "./helpers";
export interface ViewAgentOutputToolPart {
type: string;
toolCallId: string;
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}
interface Props {
part: ViewAgentOutputToolPart;
}
function isWorkspaceRef(value: unknown): value is string {
return typeof value === "string" && value.startsWith("workspace://");
}
function resolveForRenderer(value: unknown): {
value: unknown;
metadata?: OutputMetadata;
} {
if (!isWorkspaceRef(value)) return { value };
const withoutPrefix = value.replace("workspace://", "");
const fileId = withoutPrefix.split("#")[0];
const apiPath = getGetWorkspaceDownloadFileByIdUrl(fileId);
const url = `/api/proxy${apiPath}`;
const hashIndex = value.indexOf("#");
const mimeHint =
hashIndex !== -1 ? value.slice(hashIndex + 1) || undefined : undefined;
const metadata: OutputMetadata = {};
if (mimeHint) {
metadata.mimeType = mimeHint;
if (mimeHint.startsWith("image/")) metadata.type = "image";
else if (mimeHint.startsWith("video/")) metadata.type = "video";
}
return { value: url, metadata };
}
function RenderOutputValue({ value }: { value: unknown }) {
const resolved = resolveForRenderer(value);
const renderer = globalRegistry.getRenderer(
resolved.value,
resolved.metadata,
);
if (renderer) {
return (
<OutputItem
value={resolved.value}
metadata={resolved.metadata}
renderer={renderer}
/>
);
}
// Fallback for audio workspace refs
if (
isWorkspaceRef(value) &&
resolved.metadata?.mimeType?.startsWith("audio/")
) {
return (
<audio controls src={String(resolved.value)} className="mt-2 w-full" />
);
}
return null;
}
function getAccordionMeta(output: ViewAgentOutputToolOutput): {
icon: React.ReactNode;
title: string;
description?: string;
} {
const icon = <AccordionIcon />;
if (isAgentOutputResponse(output)) {
const status = output.execution?.status;
return {
icon,
title: output.agent_name,
description: status ? `Status: ${status}` : output.message,
};
}
if (isNoResultsResponse(output)) {
return { icon, title: "No results" };
}
return { icon, title: "Error" };
}
export function ViewAgentOutputTool({ part }: Props) {
const text = getAnimationText(part);
const isStreaming =
part.state === "input-streaming" || part.state === "input-available";
const output = getViewAgentOutputToolOutput(part);
const isError =
part.state === "output-error" || (!!output && isErrorResponse(output));
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
(isAgentOutputResponse(output) ||
isNoResultsResponse(output) ||
isErrorResponse(output));
return (
<div className="py-2">
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<ToolIcon isStreaming={isStreaming} isError={isError} />
<MorphingTextAnimation
text={text}
className={isError ? "text-red-500" : undefined}
/>
</div>
{hasExpandableContent && output && (
<ToolAccordion {...getAccordionMeta(output)}>
{isAgentOutputResponse(output) && (
<ContentGrid>
<ContentCardHeader
className="gap-3"
action={
output.library_agent_link ? (
<ContentLink href={output.library_agent_link}>
Open
</ContentLink>
) : null
}
>
<ContentMessage>{output.message}</ContentMessage>
</ContentCardHeader>
{output.execution ? (
<ContentGrid>
<ContentCard>
<ContentCardTitle className="text-xs">
Execution
</ContentCardTitle>
<ContentCardSubtitle className="mt-1">
{output.execution.execution_id}
</ContentCardSubtitle>
<ContentCardSubtitle className="mt-1">
Status: {output.execution.status}
</ContentCardSubtitle>
</ContentCard>
{output.execution.inputs_summary && (
<ContentCard>
<ContentCardTitle className="text-xs">
Inputs summary
</ContentCardTitle>
<div className="mt-2">
<RenderOutputValue
value={output.execution.inputs_summary}
/>
</div>
</ContentCard>
)}
{Object.entries(output.execution.outputs ?? {}).map(
([key, items]) => (
<ContentCard key={key}>
<div className="flex items-center justify-between gap-2">
<ContentCardTitle className="text-xs">
{key}
</ContentCardTitle>
<ContentBadge>
{items.length} item
{items.length === 1 ? "" : "s"}
</ContentBadge>
</div>
<div className="mt-2">
{items.slice(0, 3).map((item, i) => (
<RenderOutputValue key={i} value={item} />
))}
</div>
</ContentCard>
),
)}
</ContentGrid>
) : (
<ContentCard>
<ContentMessage>No execution selected.</ContentMessage>
<ContentCardSubtitle className="mt-1">
Try asking for a specific run or execution_id.
</ContentCardSubtitle>
</ContentCard>
)}
</ContentGrid>
)}
{isNoResultsResponse(output) && (
<ContentGrid>
<ContentMessage>{output.message}</ContentMessage>
{output.suggestions && output.suggestions.length > 0 && (
<ContentSuggestionsList
items={output.suggestions}
className="mt-1"
/>
)}
</ContentGrid>
)}
{isErrorResponse(output) && (
<ContentGrid>
<ContentMessage>{output.message}</ContentMessage>
{output.error && (
<ContentCodeBlock>
{formatMaybeJson(output.error)}
</ContentCodeBlock>
)}
{output.details && (
<ContentCodeBlock>
{formatMaybeJson(output.details)}
</ContentCodeBlock>
)}
</ContentGrid>
)}
</ToolAccordion>
)}
</div>
);
}

View File

@@ -0,0 +1,158 @@
import type { AgentOutputResponse } from "@/app/api/__generated__/models/agentOutputResponse";
import type { ErrorResponse } from "@/app/api/__generated__/models/errorResponse";
import type { NoResultsResponse } from "@/app/api/__generated__/models/noResultsResponse";
import { ResponseType } from "@/app/api/__generated__/models/responseType";
import { EyeIcon, MonitorIcon } from "@phosphor-icons/react";
import type { ToolUIPart } from "ai";
export interface ViewAgentOutputInput {
agent_name?: string;
library_agent_id?: string;
store_slug?: string;
execution_id?: string;
run_time?: string;
}
export type ViewAgentOutputToolOutput =
| AgentOutputResponse
| NoResultsResponse
| ErrorResponse;
function parseOutput(output: unknown): ViewAgentOutputToolOutput | null {
if (!output) return null;
if (typeof output === "string") {
const trimmed = output.trim();
if (!trimmed) return null;
try {
return parseOutput(JSON.parse(trimmed) as unknown);
} catch {
return null;
}
}
if (typeof output === "object") {
const type = (output as { type?: unknown }).type;
if (
type === ResponseType.agent_output ||
type === ResponseType.no_results ||
type === ResponseType.error
) {
return output as ViewAgentOutputToolOutput;
}
if ("agent_id" in output && "agent_name" in output) {
return output as AgentOutputResponse;
}
if ("suggestions" in output && !("error" in output)) {
return output as NoResultsResponse;
}
if ("error" in output || "details" in output)
return output as ErrorResponse;
}
return null;
}
export function isAgentOutputResponse(
output: ViewAgentOutputToolOutput,
): output is AgentOutputResponse {
return output.type === ResponseType.agent_output || "agent_id" in output;
}
export function isNoResultsResponse(
output: ViewAgentOutputToolOutput,
): output is NoResultsResponse {
return (
output.type === ResponseType.no_results ||
("suggestions" in output && !("error" in output))
);
}
export function isErrorResponse(
output: ViewAgentOutputToolOutput,
): output is ErrorResponse {
return output.type === ResponseType.error || "error" in output;
}
export function getViewAgentOutputToolOutput(
part: unknown,
): ViewAgentOutputToolOutput | null {
if (!part || typeof part !== "object") return null;
return parseOutput((part as { output?: unknown }).output);
}
function getAgentIdentifierText(
input: ViewAgentOutputInput | undefined,
): string | null {
if (!input) return null;
const libraryId = input.library_agent_id?.trim();
if (libraryId) return `Library agent ${libraryId}`;
const slug = input.store_slug?.trim();
if (slug) return slug;
const name = input.agent_name?.trim();
if (name) return name;
return null;
}
export function getAnimationText(part: {
state: ToolUIPart["state"];
input?: unknown;
output?: unknown;
}): string {
const input = part.input as ViewAgentOutputInput | undefined;
const agent = getAgentIdentifierText(input);
const agentText = agent ? ` "${agent}"` : "";
switch (part.state) {
case "input-streaming":
case "input-available":
return `Retrieving agent output${agentText}`;
case "output-available": {
const output = parseOutput(part.output);
if (!output) return `Retrieving agent output${agentText}`;
if (isAgentOutputResponse(output)) {
if (output.execution)
return `Retrieved output (${output.execution.status})`;
return "Retrieved agent output";
}
if (isNoResultsResponse(output)) return "No outputs found";
return "Error loading agent output";
}
case "output-error":
return "Error loading agent output";
default:
return "Retrieving agent output";
}
}
export function ToolIcon({
isStreaming,
isError,
}: {
isStreaming?: boolean;
isError?: boolean;
}) {
return (
<EyeIcon
size={14}
weight="regular"
className={
isError
? "text-red-500"
: isStreaming
? "text-neutral-500"
: "text-neutral-400"
}
/>
);
}
export function AccordionIcon() {
return <MonitorIcon size={32} weight="light" />;
}
export function formatMaybeJson(value: unknown): string {
if (typeof value === "string") return value;
try {
return JSON.stringify(value, null, 2);
} catch {
return String(value);
}
}

View File

@@ -0,0 +1,108 @@
import {
getGetV2GetSessionQueryKey,
getGetV2ListSessionsQueryKey,
useGetV2GetSession,
usePostV2CreateSession,
} from "@/app/api/__generated__/endpoints/chat/chat";
import { toast } from "@/components/molecules/Toast/use-toast";
import { useQueryClient } from "@tanstack/react-query";
import * as Sentry from "@sentry/nextjs";
import { parseAsString, useQueryState } from "nuqs";
import { useEffect, useMemo, useRef } from "react";
import { convertChatSessionMessagesToUiMessages } from "./helpers/convertChatSessionToUiMessages";
export function useChatSession() {
const [sessionId, setSessionId] = useQueryState("sessionId", parseAsString);
const queryClient = useQueryClient();
const sessionQuery = useGetV2GetSession(sessionId ?? "", {
query: {
enabled: !!sessionId,
staleTime: Infinity,
refetchOnWindowFocus: false,
refetchOnReconnect: false,
},
});
// When the user navigates away from a session, invalidate its query cache.
// useChat destroys its Chat instance on id change, so messages are lost.
// Invalidating ensures the next visit fetches fresh data from the API
// instead of hydrating from stale cache that's missing recent messages.
const prevSessionIdRef = useRef(sessionId);
useEffect(() => {
const prev = prevSessionIdRef.current;
prevSessionIdRef.current = sessionId;
if (prev && prev !== sessionId) {
queryClient.invalidateQueries({
queryKey: getGetV2GetSessionQueryKey(prev),
});
}
}, [sessionId, queryClient]);
// Memoize so the effect in useCopilotPage doesn't infinite-loop on a new
// array reference every render. Re-derives only when query data changes.
const hydratedMessages = useMemo(() => {
if (sessionQuery.data?.status !== 200 || !sessionId) return undefined;
return convertChatSessionMessagesToUiMessages(
sessionId,
sessionQuery.data.data.messages ?? [],
);
}, [sessionQuery.data, sessionId]);
const { mutateAsync: createSessionMutation, isPending: isCreatingSession } =
usePostV2CreateSession({
mutation: {
onSuccess: (response) => {
if (response.status === 200 && response.data?.id) {
setSessionId(response.data.id);
queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
}
},
},
});
async function createSession() {
if (sessionId) return sessionId;
try {
const response = await createSessionMutation();
if (response.status !== 200 || !response.data?.id) {
const error = new Error("Failed to create session");
Sentry.captureException(error, {
extra: { status: response.status },
});
toast({
variant: "destructive",
title: "Could not start a new chat session",
description: "Please try again.",
});
throw error;
}
return response.data.id;
} catch (error) {
if (
error instanceof Error &&
error.message === "Failed to create session"
) {
throw error; // already handled above
}
Sentry.captureException(error);
toast({
variant: "destructive",
title: "Could not start a new chat session",
description: "Please try again.",
});
throw error;
}
}
return {
sessionId,
setSessionId,
hydratedMessages,
isLoadingSession: sessionQuery.isLoading,
createSession,
isCreatingSession,
};
}

View File

@@ -1,127 +1,134 @@
import {
getGetV2ListSessionsQueryKey,
postV2CreateSession,
} from "@/app/api/__generated__/endpoints/chat/chat";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { useOnboarding } from "@/providers/onboarding/onboarding-provider";
import { SessionKey, sessionStorage } from "@/services/storage/session-storage";
import * as Sentry from "@sentry/nextjs";
import { useQueryClient } from "@tanstack/react-query";
import { useRouter } from "next/navigation";
import { useEffect } from "react";
import { useCopilotStore } from "./copilot-page-store";
import { getGreetingName, getQuickActions } from "./helpers";
import { useCopilotSessionId } from "./useCopilotSessionId";
import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat";
import { useBreakpoint } from "@/lib/hooks/useBreakpoint";
import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";
import { useEffect, useMemo, useState } from "react";
import { useChatSession } from "./useChatSession";
export function useCopilotPage() {
const router = useRouter();
const queryClient = useQueryClient();
const { user, isLoggedIn, isUserLoading } = useSupabase();
const { toast } = useToast();
const { completeStep } = useOnboarding();
const [isDrawerOpen, setIsDrawerOpen] = useState(false);
const [pendingMessage, setPendingMessage] = useState<string | null>(null);
const { urlSessionId, setUrlSessionId } = useCopilotSessionId();
const setIsStreaming = useCopilotStore((s) => s.setIsStreaming);
const isCreating = useCopilotStore((s) => s.isCreatingSession);
const setIsCreating = useCopilotStore((s) => s.setIsCreatingSession);
const {
sessionId,
setSessionId,
hydratedMessages,
isLoadingSession,
createSession,
isCreatingSession,
} = useChatSession();
const greetingName = getGreetingName(user);
const quickActions = getQuickActions();
const breakpoint = useBreakpoint();
const isMobile =
breakpoint === "base" || breakpoint === "sm" || breakpoint === "md";
const hasSession = Boolean(urlSessionId);
const initialPrompt = urlSessionId
? getInitialPrompt(urlSessionId)
: undefined;
const transport = useMemo(
() =>
sessionId
? new DefaultChatTransport({
api: `/api/chat/sessions/${sessionId}/stream`,
prepareSendMessagesRequest: ({ messages }) => {
const last = messages[messages.length - 1];
return {
body: {
message: last.parts
?.map((p) => (p.type === "text" ? p.text : ""))
.join(""),
is_user_message: last.role === "user",
context: null,
},
};
},
})
: null,
[sessionId],
);
const { messages, sendMessage, stop, status, error, setMessages } = useChat({
id: sessionId ?? undefined,
transport: transport ?? undefined,
});
useEffect(() => {
if (isLoggedIn) completeStep("VISIT_COPILOT");
}, [completeStep, isLoggedIn]);
if (!hydratedMessages || hydratedMessages.length === 0) return;
setMessages((prev) => {
if (prev.length >= hydratedMessages.length) return prev;
return hydratedMessages;
});
}, [hydratedMessages, setMessages]);
async function startChatWithPrompt(prompt: string) {
if (!prompt?.trim()) return;
if (isCreating) return;
// Clear messages when session is null
useEffect(() => {
if (!sessionId) setMessages([]);
}, [sessionId, setMessages]);
const trimmedPrompt = prompt.trim();
setIsCreating(true);
useEffect(() => {
if (!sessionId || !pendingMessage) return;
const msg = pendingMessage;
setPendingMessage(null);
sendMessage({ text: msg });
}, [sessionId, pendingMessage, sendMessage]);
try {
const sessionResponse = await postV2CreateSession({
body: JSON.stringify({}),
});
async function onSend(message: string) {
const trimmed = message.trim();
if (!trimmed) return;
if (sessionResponse.status !== 200 || !sessionResponse.data?.id) {
throw new Error("Failed to create session");
}
const sessionId = sessionResponse.data.id;
setInitialPrompt(sessionId, trimmedPrompt);
await queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
await setUrlSessionId(sessionId, { shallow: true });
} catch (error) {
console.error("[CopilotPage] Failed to start chat:", error);
toast({ title: "Failed to start chat", variant: "destructive" });
Sentry.captureException(error);
} finally {
setIsCreating(false);
if (sessionId) {
sendMessage({ text: trimmed });
return;
}
setPendingMessage(trimmed);
await createSession();
}
function handleQuickAction(action: string) {
startChatWithPrompt(action);
const { data: sessionsResponse, isLoading: isLoadingSessions } =
useGetV2ListSessions({ limit: 50 });
const sessions =
sessionsResponse?.status === 200 ? sessionsResponse.data.sessions : [];
function handleOpenDrawer() {
setIsDrawerOpen(true);
}
function handleSessionNotFound() {
router.replace("/copilot");
function handleCloseDrawer() {
setIsDrawerOpen(false);
}
function handleStreamingChange(isStreamingValue: boolean) {
setIsStreaming(isStreamingValue);
function handleDrawerOpenChange(open: boolean) {
setIsDrawerOpen(open);
}
function handleSelectSession(id: string) {
setSessionId(id);
if (isMobile) setIsDrawerOpen(false);
}
function handleNewChat() {
setSessionId(null);
if (isMobile) setIsDrawerOpen(false);
}
return {
state: {
greetingName,
quickActions,
isLoading: isUserLoading,
hasSession,
initialPrompt,
},
handlers: {
handleQuickAction,
startChatWithPrompt,
handleSessionNotFound,
handleStreamingChange,
},
sessionId,
messages,
status,
error,
stop,
isLoadingSession,
isCreatingSession,
createSession,
onSend,
// Mobile drawer
isMobile,
isDrawerOpen,
sessions,
isLoadingSessions,
handleOpenDrawer,
handleCloseDrawer,
handleDrawerOpenChange,
handleSelectSession,
handleNewChat,
};
}
function getInitialPrompt(sessionId: string): string | undefined {
try {
const prompts = JSON.parse(
sessionStorage.get(SessionKey.CHAT_INITIAL_PROMPTS) || "{}",
);
return prompts[sessionId];
} catch {
return undefined;
}
}
function setInitialPrompt(sessionId: string, prompt: string): void {
try {
const prompts = JSON.parse(
sessionStorage.get(SessionKey.CHAT_INITIAL_PROMPTS) || "{}",
);
prompts[sessionId] = prompt;
sessionStorage.set(
SessionKey.CHAT_INITIAL_PROMPTS,
JSON.stringify(prompts),
);
} catch {
// Ignore storage errors
}
}

View File

@@ -88,39 +88,27 @@ export async function POST(
}
/**
* Legacy GET endpoint for backward compatibility
* Resume an active stream for a session.
*
* Called by the AI SDK's `useChat(resume: true)` on page load.
* Proxies to the backend which checks for an active stream and either
* replays it (200 + SSE) or returns 204 No Content.
*/
export async function GET(
request: NextRequest,
_request: NextRequest,
{ params }: { params: Promise<{ sessionId: string }> },
) {
const { sessionId } = await params;
const searchParams = request.nextUrl.searchParams;
const message = searchParams.get("message");
const isUserMessage = searchParams.get("is_user_message");
if (!message) {
return new Response("Missing message parameter", { status: 400 });
}
try {
// Get auth token from server-side session
const token = await getServerAuthToken();
// Build backend URL
const backendUrl = environment.getAGPTServerBaseUrl();
const streamUrl = new URL(
`/api/chat/sessions/${sessionId}/stream`,
backendUrl,
);
streamUrl.searchParams.set("message", message);
// Pass is_user_message parameter if provided
if (isUserMessage !== null) {
streamUrl.searchParams.set("is_user_message", isUserMessage);
}
// Forward request to backend with auth header
const headers: Record<string, string> = {
Accept: "text/event-stream",
"Cache-Control": "no-cache",
@@ -136,6 +124,11 @@ export async function GET(
headers,
});
// 204 = no active stream to resume
if (response.status === 204) {
return new Response(null, { status: 204 });
}
if (!response.ok) {
const error = await response.text();
return new Response(error, {
@@ -144,17 +137,17 @@ export async function GET(
});
}
// Return the SSE stream directly
return new Response(response.body, {
headers: {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache, no-transform",
Connection: "keep-alive",
"X-Accel-Buffering": "no",
"x-vercel-ai-ui-message-stream": "v1",
},
});
} catch (error) {
console.error("SSE proxy error:", error);
console.error("Resume stream proxy error:", error);
return new Response(
JSON.stringify({
error: "Failed to connect to chat service",

File diff suppressed because it is too large

View File

@@ -1,6 +1,7 @@
@tailwind base;
@tailwind components;
@tailwind utilities;
@source "../node_modules/streamdown/dist/*.js";
@layer base {
:root {
@@ -29,6 +30,14 @@
--chart-3: 197 37% 24%;
--chart-4: 43 74% 66%;
--chart-5: 27 87% 67%;
--sidebar-background: 0 0% 98%;
--sidebar-foreground: 240 5.3% 26.1%;
--sidebar-primary: 240 5.9% 10%;
--sidebar-primary-foreground: 0 0% 98%;
--sidebar-accent: 240 4.8% 95.9%;
--sidebar-accent-foreground: 240 5.9% 10%;
--sidebar-border: 220 13% 91%;
--sidebar-ring: 217.2 91.2% 59.8%;
}
.dark {
@@ -56,6 +65,14 @@
--chart-3: 30 80% 55%;
--chart-4: 280 65% 60%;
--chart-5: 340 75% 55%;
--sidebar-background: 240 5.9% 10%;
--sidebar-foreground: 240 4.8% 95.9%;
--sidebar-primary: 224.3 76.3% 48%;
--sidebar-primary-foreground: 0 0% 100%;
--sidebar-accent: 240 3.7% 15.9%;
--sidebar-accent-foreground: 240 4.8% 95.9%;
--sidebar-border: 240 3.7% 15.9%;
--sidebar-ring: 217.2 91.2% 59.8%;
}
* {

View File

@@ -0,0 +1,109 @@
"use client";
import { Button } from "@/components/ui/button";
import { scrollbarStyles } from "@/components/styles/scrollbars";
import { cn } from "@/lib/utils";
import { ArrowDownIcon } from "lucide-react";
import type { ComponentProps } from "react";
import { useCallback } from "react";
import { StickToBottom, useStickToBottomContext } from "use-stick-to-bottom";
export type ConversationProps = ComponentProps<typeof StickToBottom>;
export const Conversation = ({ className, ...props }: ConversationProps) => (
<StickToBottom
className={cn(
"relative flex-1 overflow-y-hidden",
scrollbarStyles,
className,
)}
initial="smooth"
resize="smooth"
role="log"
{...props}
/>
);
export type ConversationContentProps = ComponentProps<
typeof StickToBottom.Content
>;
export const ConversationContent = ({
className,
...props
}: ConversationContentProps) => (
<StickToBottom.Content
className={cn("flex flex-col gap-8 p-4", className)}
{...props}
/>
);
export type ConversationEmptyStateProps = ComponentProps<"div"> & {
title?: string;
description?: string;
icon?: React.ReactNode;
};
export const ConversationEmptyState = ({
className,
title = "No messages yet",
description = "Start a conversation to see messages here",
icon,
children,
...props
}: ConversationEmptyStateProps) => (
<div
className={cn(
"flex size-full flex-col items-center justify-center gap-3 p-8 text-center",
className,
)}
{...props}
>
{children ?? (
<>
{icon && (
<div className="text-neutral-500 dark:text-neutral-400">{icon}</div>
)}
<div className="space-y-1">
<h3 className="text-sm font-medium">{title}</h3>
{description && (
<p className="text-sm text-neutral-500 dark:text-neutral-400">
{description}
</p>
)}
</div>
</>
)}
</div>
);
export type ConversationScrollButtonProps = ComponentProps<typeof Button>;
export const ConversationScrollButton = ({
className,
...props
}: ConversationScrollButtonProps) => {
const { isAtBottom, scrollToBottom } = useStickToBottomContext();
const handleScrollToBottom = useCallback(() => {
scrollToBottom();
}, [scrollToBottom]);
return (
!isAtBottom && (
<Button
className={cn(
"absolute bottom-4 left-[50%] translate-x-[-50%] rounded-full dark:bg-white dark:dark:bg-neutral-950 dark:dark:hover:bg-neutral-800 dark:hover:bg-neutral-100",
className,
)}
onClick={handleScrollToBottom}
size="icon"
type="button"
variant="outline"
{...props}
>
<ArrowDownIcon className="size-4" />
</Button>
)
);
};

View File

@@ -0,0 +1,338 @@
"use client";
import { Button } from "@/components/ui/button";
import { ButtonGroup, ButtonGroupText } from "@/components/ui/button-group";
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/ui/tooltip";
import { cn } from "@/lib/utils";
import { cjk } from "@streamdown/cjk";
import { code } from "@streamdown/code";
import { math } from "@streamdown/math";
import { mermaid } from "@streamdown/mermaid";
import type { UIMessage } from "ai";
import { ChevronLeftIcon, ChevronRightIcon } from "lucide-react";
import type { ComponentProps, HTMLAttributes, ReactElement } from "react";
import { createContext, memo, useContext, useEffect, useState } from "react";
import { Streamdown } from "streamdown";
export type MessageProps = HTMLAttributes<HTMLDivElement> & {
from: UIMessage["role"];
};
export const Message = ({ className, from, ...props }: MessageProps) => (
<div
className={cn(
"group flex w-full max-w-[95%] flex-col gap-2",
from === "user" ? "is-user ml-auto justify-end" : "is-assistant",
className,
)}
{...props}
/>
);
export type MessageContentProps = HTMLAttributes<HTMLDivElement>;
export const MessageContent = ({
children,
className,
...props
}: MessageContentProps) => (
<div
className={cn(
"is-user:dark flex w-full min-w-0 max-w-full flex-col gap-2 overflow-hidden text-sm",
"group-[.is-user]:w-fit",
"group-[.is-user]:ml-auto group-[.is-user]:rounded-lg group-[.is-user]:bg-neutral-100 group-[.is-user]:px-4 group-[.is-user]:py-3 group-[.is-user]:text-neutral-950 dark:group-[.is-user]:bg-neutral-800 dark:group-[.is-user]:text-neutral-50",
"group-[.is-assistant]:text-neutral-950 dark:group-[.is-assistant]:text-neutral-50",
className,
)}
{...props}
>
{children}
</div>
);
export type MessageActionsProps = ComponentProps<"div">;
export const MessageActions = ({
className,
children,
...props
}: MessageActionsProps) => (
<div className={cn("flex items-center gap-1", className)} {...props}>
{children}
</div>
);
export type MessageActionProps = ComponentProps<typeof Button> & {
tooltip?: string;
label?: string;
};
export const MessageAction = ({
tooltip,
children,
label,
variant = "ghost",
size = "icon-sm",
...props
}: MessageActionProps) => {
const button = (
<Button size={size} type="button" variant={variant} {...props}>
{children}
<span className="sr-only">{label || tooltip}</span>
</Button>
);
if (tooltip) {
return (
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>{button}</TooltipTrigger>
<TooltipContent>
<p>{tooltip}</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>
);
}
return button;
};
interface MessageBranchContextType {
currentBranch: number;
totalBranches: number;
goToPrevious: () => void;
goToNext: () => void;
branches: ReactElement[];
setBranches: (branches: ReactElement[]) => void;
}
const MessageBranchContext = createContext<MessageBranchContextType | null>(
null,
);
const useMessageBranch = () => {
const context = useContext(MessageBranchContext);
if (!context) {
throw new Error("MessageBranch components must be used within");
}
return context;
};
export type MessageBranchProps = HTMLAttributes<HTMLDivElement> & {
defaultBranch?: number;
onBranchChange?: (branchIndex: number) => void;
};
export const MessageBranch = ({
defaultBranch = 0,
onBranchChange,
className,
...props
}: MessageBranchProps) => {
const [currentBranch, setCurrentBranch] = useState(defaultBranch);
const [branches, setBranches] = useState<ReactElement[]>([]);
const handleBranchChange = (newBranch: number) => {
setCurrentBranch(newBranch);
onBranchChange?.(newBranch);
};
const goToPrevious = () => {
const newBranch =
currentBranch > 0 ? currentBranch - 1 : branches.length - 1;
handleBranchChange(newBranch);
};
const goToNext = () => {
const newBranch =
currentBranch < branches.length - 1 ? currentBranch + 1 : 0;
handleBranchChange(newBranch);
};
const contextValue: MessageBranchContextType = {
currentBranch,
totalBranches: branches.length,
goToPrevious,
goToNext,
branches,
setBranches,
};
return (
<MessageBranchContext.Provider value={contextValue}>
<div
className={cn("grid w-full gap-2 [&>div]:pb-0", className)}
{...props}
/>
</MessageBranchContext.Provider>
);
};
export type MessageBranchContentProps = HTMLAttributes<HTMLDivElement>;
export const MessageBranchContent = ({
children,
...props
}: MessageBranchContentProps) => {
const { currentBranch, setBranches, branches } = useMessageBranch();
const childrenArray = Array.isArray(children) ? children : [children];
// Use useEffect to update branches when they change
useEffect(() => {
if (branches.length !== childrenArray.length) {
setBranches(childrenArray);
}
}, [childrenArray, branches, setBranches]);
return childrenArray.map((branch, index) => (
<div
className={cn(
"grid gap-2 overflow-hidden [&>div]:pb-0",
index === currentBranch ? "block" : "hidden",
)}
key={branch.key}
{...props}
>
{branch}
</div>
));
};
export type MessageBranchSelectorProps = HTMLAttributes<HTMLDivElement> & {
from: UIMessage["role"];
};
export const MessageBranchSelector = ({
className,
from: _from,
...props
}: MessageBranchSelectorProps) => {
const { totalBranches } = useMessageBranch();
// Don't render if there's only one branch
if (totalBranches <= 1) {
return null;
}
return (
<ButtonGroup
className={cn(
"[&>*:not(:first-child)]:rounded-l-md [&>*:not(:last-child)]:rounded-r-md",
className,
)}
orientation="horizontal"
{...props}
/>
);
};
export type MessageBranchPreviousProps = ComponentProps<typeof Button>;
export const MessageBranchPrevious = ({
children,
...props
}: MessageBranchPreviousProps) => {
const { goToPrevious, totalBranches } = useMessageBranch();
return (
<Button
aria-label="Previous branch"
disabled={totalBranches <= 1}
onClick={goToPrevious}
size="icon-sm"
type="button"
variant="ghost"
{...props}
>
{children ?? <ChevronLeftIcon size={14} />}
</Button>
);
};
export type MessageBranchNextProps = ComponentProps<typeof Button>;
export const MessageBranchNext = ({
children,
...props
}: MessageBranchNextProps) => {
const { goToNext, totalBranches } = useMessageBranch();
return (
<Button
aria-label="Next branch"
disabled={totalBranches <= 1}
onClick={goToNext}
size="icon-sm"
type="button"
variant="ghost"
{...props}
>
{children ?? <ChevronRightIcon size={14} />}
</Button>
);
};
export type MessageBranchPageProps = HTMLAttributes<HTMLSpanElement>;
export const MessageBranchPage = ({
className,
...props
}: MessageBranchPageProps) => {
const { currentBranch, totalBranches } = useMessageBranch();
return (
<ButtonGroupText
className={cn(
"border-none bg-transparent text-neutral-500 shadow-none dark:text-neutral-400",
className,
)}
{...props}
>
{currentBranch + 1} of {totalBranches}
</ButtonGroupText>
);
};
export type MessageResponseProps = ComponentProps<typeof Streamdown>;
export const MessageResponse = memo(
({ className, ...props }: MessageResponseProps) => (
<Streamdown
className={cn(
"size-full [&>*:first-child]:mt-0 [&>*:last-child]:mb-0 [&_pre]:!bg-white",
className,
)}
plugins={{ code, mermaid, math, cjk }}
{...props}
/>
),
(prevProps, nextProps) => prevProps.children === nextProps.children,
);
MessageResponse.displayName = "MessageResponse";
export type MessageToolbarProps = ComponentProps<"div">;
export const MessageToolbar = ({
className,
children,
...props
}: MessageToolbarProps) => (
<div
className={cn(
"mt-4 flex w-full items-center justify-between gap-4",
className,
)}
{...props}
>
{children}
</div>
);

View File

@@ -104,7 +104,31 @@ export function FileInput(props: Props) {
return false;
}
const getFileLabelFromValue = (val: string) => {
const getFileLabelFromValue = (val: unknown): string => {
// Handle object format from external API: { name, type, size, data }
if (val && typeof val === "object") {
const obj = val as Record<string, unknown>;
if (typeof obj.name === "string") {
return getFileLabel(
obj.name,
typeof obj.type === "string" ? obj.type : "",
);
}
if (typeof obj.type === "string") {
const mimeParts = obj.type.split("/");
if (mimeParts.length > 1) {
return `${mimeParts[1].toUpperCase()} file`;
}
return `${obj.type} file`;
}
return "File";
}
// Handle string values (data URIs or file paths)
if (typeof val !== "string") {
return "File";
}
if (val.startsWith("data:")) {
const matches = val.match(/^data:([^;]+);/);
if (matches?.[1]) {

View File

@@ -77,7 +77,7 @@ export function OverflowText(props: Props) {
"block min-w-0 overflow-hidden text-ellipsis whitespace-nowrap",
)}
>
<Text variant={variant} className={className} {...restProps}>
<Text variant={variant} as="span" className={className} {...restProps}>
{value}
</Text>
</span>

View File

@@ -1,4 +1,5 @@
import React from "react";
import { cn } from "@/lib/utils";
import { As, Variant, variantElementMap, variants } from "./helpers";
type CustomProps = {
@@ -22,7 +23,7 @@ export function Text({
}: TextProps) {
const variantClasses = variants[size || variant] || variants.body;
const Element = outerAs || variantElementMap[variant];
const combinedClassName = `${variantClasses} ${className}`.trim();
const combinedClassName = cn(variantClasses, className);
return React.createElement(
Element,

View File

@@ -1,114 +0,0 @@
"use client";
import { useCopilotSessionId } from "@/app/(platform)/copilot/useCopilotSessionId";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import { useEffect, useRef } from "react";
import { ChatContainer } from "./components/ChatContainer/ChatContainer";
import { ChatErrorState } from "./components/ChatErrorState/ChatErrorState";
import { useChat } from "./useChat";
export interface ChatProps {
className?: string;
initialPrompt?: string;
onSessionNotFound?: () => void;
onStreamingChange?: (isStreaming: boolean) => void;
}
export function Chat({
className,
initialPrompt,
onSessionNotFound,
onStreamingChange,
}: ChatProps) {
const { urlSessionId } = useCopilotSessionId();
const hasHandledNotFoundRef = useRef(false);
const {
session,
messages,
isLoading,
isCreating,
error,
isSessionNotFound,
sessionId,
createSession,
showLoader,
startPollingForOperation,
} = useChat({ urlSessionId });
// Extract active stream info for reconnection
const activeStream = (
session as {
active_stream?: {
task_id: string;
last_message_id: string;
operation_id: string;
tool_name: string;
};
}
)?.active_stream;
useEffect(() => {
if (!onSessionNotFound) return;
if (!urlSessionId) return;
if (!isSessionNotFound || isLoading || isCreating) return;
if (hasHandledNotFoundRef.current) return;
hasHandledNotFoundRef.current = true;
onSessionNotFound();
}, [
onSessionNotFound,
urlSessionId,
isSessionNotFound,
isLoading,
isCreating,
]);
const shouldShowLoader = showLoader && (isLoading || isCreating);
return (
<div className={cn("flex h-full flex-col", className)}>
{/* Main Content */}
<main className="flex min-h-0 w-full flex-1 flex-col overflow-hidden bg-[#f8f8f9]">
{/* Loading State */}
{shouldShowLoader && (
<div className="flex flex-1 items-center justify-center">
<div className="flex flex-col items-center gap-3">
<LoadingSpinner size="large" className="text-neutral-400" />
<Text variant="body" className="text-zinc-500">
Loading your chat...
</Text>
</div>
</div>
)}
{/* Error State */}
{error && !isLoading && (
<ChatErrorState error={error} onRetry={createSession} />
)}
{/* Session Content */}
{sessionId && !isLoading && !error && (
<ChatContainer
sessionId={sessionId}
initialMessages={messages}
initialPrompt={initialPrompt}
className="flex-1"
onStreamingChange={onStreamingChange}
onOperationStarted={startPollingForOperation}
activeStream={
activeStream
? {
taskId: activeStream.task_id,
lastMessageId: activeStream.last_message_id,
operationId: activeStream.operation_id,
toolName: activeStream.tool_name,
}
: undefined
}
/>
)}
</main>
</div>
);
}

View File

@@ -1,159 +0,0 @@
# SSE Reconnection Contract for Long-Running Operations
This document describes the client-side contract for handling SSE (Server-Sent Events) disconnections and reconnecting to long-running background tasks.
## Overview
When a user triggers a long-running operation (like agent generation), the backend:
1. Spawns a background task that survives SSE disconnections
2. Returns an `operation_started` response with a `task_id`
3. Stores stream messages in Redis Streams for replay
Clients can reconnect to the task stream at any time to receive missed messages.
## Client-Side Flow
### 1. Receiving Operation Started
When you receive an `operation_started` tool response:
```typescript
// The response includes a task_id for reconnection
{
type: "operation_started",
tool_name: "generate_agent",
operation_id: "uuid-...",
task_id: "task-uuid-...", // <-- Store this for reconnection
message: "Operation started. You can close this tab."
}
```
### 2. Storing Task Info
Use the chat store to track the active task:
```typescript
import { useChatStore } from "./chat-store";
// When operation_started is received:
useChatStore.getState().setActiveTask(sessionId, {
taskId: response.task_id,
operationId: response.operation_id,
toolName: response.tool_name,
lastMessageId: "0",
});
```
### 3. Reconnecting to a Task
To reconnect (e.g., after page refresh or tab reopen):
```typescript
const { reconnectToTask, getActiveTask } = useChatStore.getState();
// Check if there's an active task for this session
const activeTask = getActiveTask(sessionId);
if (activeTask) {
// Reconnect to the task stream
await reconnectToTask(
sessionId,
activeTask.taskId,
activeTask.lastMessageId, // Resume from last position
(chunk) => {
// Handle incoming chunks
console.log("Received chunk:", chunk);
},
);
}
```
### 4. Tracking Message Position
To enable precise replay, update the last message ID as chunks arrive:
```typescript
const { updateTaskLastMessageId } = useChatStore.getState();
function handleChunk(chunk: StreamChunk) {
// If chunk has an index/id, track it
if (chunk.idx !== undefined) {
updateTaskLastMessageId(sessionId, String(chunk.idx));
}
}
```
## API Endpoints
### Task Stream Reconnection
```
GET /api/chat/tasks/{taskId}/stream?last_message_id={idx}
```
- `taskId`: The task ID from `operation_started`
- `last_message_id`: Last received message index (default: "0" for full replay)
Returns: SSE stream of missed messages + live updates
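As an illustration only, a minimal sketch of consuming this endpoint with `fetch` and splitting out the SSE `data:` lines; the relative URL assumes the request goes through the app's API proxy, and auth handling is omitted:
```typescript
// Minimal sketch: read the reconnection stream and surface each SSE data line.
// The proxy path and lack of auth headers are assumptions, not the real setup.
async function readTaskStream(
  taskId: string,
  lastMessageId = "0",
  onData: (data: string) => void,
) {
  const res = await fetch(
    `/api/chat/tasks/${taskId}/stream?last_message_id=${lastMessageId}`,
    { headers: { Accept: "text/event-stream" } },
  );
  if (!res.ok || !res.body) throw new Error(`Stream failed: ${res.status}`);
  const reader = res.body.pipeThrough(new TextDecoderStream()).getReader();
  let buffer = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += value;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) {
      // SSE data lines look like: "data: {...}"
      if (line.startsWith("data:")) onData(line.slice(5).trim());
    }
  }
}
```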
## Chunk Types
The reconnected stream follows the same Vercel AI SDK protocol:
| Type | Description |
| ----------------------- | ----------------------- |
| `start` | Message lifecycle start |
| `text-delta` | Streaming text content |
| `text-end` | Text block completed |
| `tool-output-available` | Tool result available |
| `finish` | Stream completed |
| `error` | Error occurred |
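For illustration, a minimal dispatcher over these chunk types might look like the sketch below; the `delta` field on `text-delta` chunks is an assumption here, since the exact payload shapes are defined by the AI SDK protocol rather than this document.
```typescript
type UiStreamChunk = { type: string; delta?: string; [key: string]: unknown };

// Rough sketch only; real handling is done by useChat / the AI SDK transport.
function handleUiChunk(chunk: UiStreamChunk, appendText: (text: string) => void) {
  switch (chunk.type) {
    case "start":
      // A new assistant message has started
      break;
    case "text-delta":
      // Append streamed text to the message being built
      if (typeof chunk.delta === "string") appendText(chunk.delta);
      break;
    case "text-end":
      // The current text block is complete
      break;
    case "tool-output-available":
      // A tool result is ready to render
      break;
    case "finish":
      // Stream completed; clear any active-task tracking
      break;
    case "error":
      // Surface the error and fall back to polling the session
      break;
  }
}
```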
## Error Handling
If reconnection fails:
1. Check if task still exists (may have expired - default TTL: 1 hour)
2. Fall back to polling the session for final state
3. Show an appropriate UI message to the user (a sketch of this fallback follows below)
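A hedged sketch of that fallback sequence, reusing the chat store API described above; `refetchSession` and `notifyUser` are hypothetical stand-ins for the app's real session refetch and toast logic:
```typescript
import { useChatStore } from "./chat-store";

// Sketch only: resume an active task, or fall back when the task has expired.
async function resumeOrFallback(
  sessionId: string,
  refetchSession: (id: string) => Promise<void>, // hypothetical helper
  notifyUser: (message: string) => void, // hypothetical helper
) {
  const { getActiveTask, reconnectToTask, clearActiveTask } =
    useChatStore.getState();
  const task = getActiveTask(sessionId);
  if (!task) return;
  try {
    await reconnectToTask(sessionId, task.taskId, task.lastMessageId);
  } catch {
    // 1. The task likely expired (default TTL: 1 hour), so stop tracking it
    clearActiveTask(sessionId);
    // 2. Fall back to re-fetching the session for its final state
    await refetchSession(sessionId);
    // 3. Show an appropriate UI message
    notifyUser("This operation finished in the background.");
  }
}
```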
## Persistence Considerations
For robust reconnection across browser restarts:
```typescript
// Store in localStorage/sessionStorage
const ACTIVE_TASKS_KEY = "chat_active_tasks";
function persistActiveTask(sessionId: string, task: ActiveTaskInfo) {
const tasks = JSON.parse(localStorage.getItem(ACTIVE_TASKS_KEY) || "{}");
tasks[sessionId] = task;
localStorage.setItem(ACTIVE_TASKS_KEY, JSON.stringify(tasks));
}
function loadPersistedTasks(): Record<string, ActiveTaskInfo> {
return JSON.parse(localStorage.getItem(ACTIVE_TASKS_KEY) || "{}");
}
```
## Backend Configuration
The following backend settings affect reconnection behavior:
| Setting | Default | Description |
| ------------------- | ------- | ---------------------------------- |
| `stream_ttl` | 3600s | How long streams are kept in Redis |
| `stream_max_length` | 1000 | Max messages per stream |
## Testing
To test reconnection locally:
1. Start a long-running operation (e.g., agent generation)
2. Note the `task_id` from the `operation_started` response
3. Close the browser tab
4. Reopen and call `reconnectToTask` with the saved `task_id`
5. Verify that missed messages are replayed
See the main README for full local development setup.

View File

@@ -1,16 +0,0 @@
/**
* Constants for the chat system.
*
* Centralizes magic strings and values used across chat components.
*/
// LocalStorage keys
export const STORAGE_KEY_ACTIVE_TASKS = "chat_active_tasks";
// Redis Stream IDs
export const INITIAL_MESSAGE_ID = "0";
export const INITIAL_STREAM_ID = "0-0";
// TTL values (in milliseconds)
export const COMPLETED_STREAM_TTL_MS = 5 * 60 * 1000; // 5 minutes
export const ACTIVE_TASK_TTL_MS = 60 * 60 * 1000; // 1 hour

View File

@@ -1,501 +0,0 @@
"use client";
import { create } from "zustand";
import {
ACTIVE_TASK_TTL_MS,
COMPLETED_STREAM_TTL_MS,
INITIAL_STREAM_ID,
STORAGE_KEY_ACTIVE_TASKS,
} from "./chat-constants";
import type {
ActiveStream,
StreamChunk,
StreamCompleteCallback,
StreamResult,
StreamStatus,
} from "./chat-types";
import { executeStream, executeTaskReconnect } from "./stream-executor";
export interface ActiveTaskInfo {
taskId: string;
sessionId: string;
operationId: string;
toolName: string;
lastMessageId: string;
startedAt: number;
}
/** Load active tasks from localStorage */
function loadPersistedTasks(): Map<string, ActiveTaskInfo> {
if (typeof window === "undefined") return new Map();
try {
const stored = localStorage.getItem(STORAGE_KEY_ACTIVE_TASKS);
if (!stored) return new Map();
const parsed = JSON.parse(stored) as Record<string, ActiveTaskInfo>;
const now = Date.now();
const tasks = new Map<string, ActiveTaskInfo>();
// Filter out expired tasks
for (const [sessionId, task] of Object.entries(parsed)) {
if (now - task.startedAt < ACTIVE_TASK_TTL_MS) {
tasks.set(sessionId, task);
}
}
return tasks;
} catch {
return new Map();
}
}
/** Save active tasks to localStorage */
function persistTasks(tasks: Map<string, ActiveTaskInfo>): void {
if (typeof window === "undefined") return;
try {
const obj: Record<string, ActiveTaskInfo> = {};
for (const [sessionId, task] of tasks) {
obj[sessionId] = task;
}
localStorage.setItem(STORAGE_KEY_ACTIVE_TASKS, JSON.stringify(obj));
} catch {
// Ignore storage errors
}
}
interface ChatStoreState {
activeStreams: Map<string, ActiveStream>;
completedStreams: Map<string, StreamResult>;
activeSessions: Set<string>;
streamCompleteCallbacks: Set<StreamCompleteCallback>;
/** Active tasks for SSE reconnection - keyed by sessionId */
activeTasks: Map<string, ActiveTaskInfo>;
}
interface ChatStoreActions {
startStream: (
sessionId: string,
message: string,
isUserMessage: boolean,
context?: { url: string; content: string },
onChunk?: (chunk: StreamChunk) => void,
) => Promise<void>;
stopStream: (sessionId: string) => void;
subscribeToStream: (
sessionId: string,
onChunk: (chunk: StreamChunk) => void,
skipReplay?: boolean,
) => () => void;
getStreamStatus: (sessionId: string) => StreamStatus;
getCompletedStream: (sessionId: string) => StreamResult | undefined;
clearCompletedStream: (sessionId: string) => void;
isStreaming: (sessionId: string) => boolean;
registerActiveSession: (sessionId: string) => void;
unregisterActiveSession: (sessionId: string) => void;
isSessionActive: (sessionId: string) => boolean;
onStreamComplete: (callback: StreamCompleteCallback) => () => void;
/** Track active task for SSE reconnection */
setActiveTask: (
sessionId: string,
taskInfo: Omit<ActiveTaskInfo, "sessionId" | "startedAt">,
) => void;
/** Get active task for a session */
getActiveTask: (sessionId: string) => ActiveTaskInfo | undefined;
/** Clear active task when operation completes */
clearActiveTask: (sessionId: string) => void;
/** Reconnect to an existing task stream */
reconnectToTask: (
sessionId: string,
taskId: string,
lastMessageId?: string,
onChunk?: (chunk: StreamChunk) => void,
) => Promise<void>;
/** Update last message ID for a task (for tracking replay position) */
updateTaskLastMessageId: (sessionId: string, lastMessageId: string) => void;
}
type ChatStore = ChatStoreState & ChatStoreActions;
function notifyStreamComplete(
callbacks: Set<StreamCompleteCallback>,
sessionId: string,
) {
for (const callback of callbacks) {
try {
callback(sessionId);
} catch (err) {
console.warn("[ChatStore] Stream complete callback error:", err);
}
}
}
function cleanupExpiredStreams(
completedStreams: Map<string, StreamResult>,
): Map<string, StreamResult> {
const now = Date.now();
const cleaned = new Map(completedStreams);
for (const [sessionId, result] of cleaned) {
if (now - result.completedAt > COMPLETED_STREAM_TTL_MS) {
cleaned.delete(sessionId);
}
}
return cleaned;
}
/**
* Finalize a stream by moving it from activeStreams to completedStreams.
* Also handles cleanup and notifications.
*/
function finalizeStream(
sessionId: string,
stream: ActiveStream,
onChunk: ((chunk: StreamChunk) => void) | undefined,
get: () => ChatStoreState & ChatStoreActions,
set: (state: Partial<ChatStoreState>) => void,
): void {
if (onChunk) stream.onChunkCallbacks.delete(onChunk);
if (stream.status !== "streaming") {
const currentState = get();
const finalActiveStreams = new Map(currentState.activeStreams);
let finalCompletedStreams = new Map(currentState.completedStreams);
const storedStream = finalActiveStreams.get(sessionId);
if (storedStream === stream) {
const result: StreamResult = {
sessionId,
status: stream.status,
chunks: stream.chunks,
completedAt: Date.now(),
error: stream.error,
};
finalCompletedStreams.set(sessionId, result);
finalActiveStreams.delete(sessionId);
finalCompletedStreams = cleanupExpiredStreams(finalCompletedStreams);
set({
activeStreams: finalActiveStreams,
completedStreams: finalCompletedStreams,
});
if (stream.status === "completed" || stream.status === "error") {
notifyStreamComplete(currentState.streamCompleteCallbacks, sessionId);
}
}
}
}
/**
* Clean up an existing stream for a session and move it to completed streams.
* Returns updated maps for both active and completed streams.
*/
function cleanupExistingStream(
sessionId: string,
activeStreams: Map<string, ActiveStream>,
completedStreams: Map<string, StreamResult>,
callbacks: Set<StreamCompleteCallback>,
): {
activeStreams: Map<string, ActiveStream>;
completedStreams: Map<string, StreamResult>;
} {
const newActiveStreams = new Map(activeStreams);
let newCompletedStreams = new Map(completedStreams);
const existingStream = newActiveStreams.get(sessionId);
if (existingStream) {
existingStream.abortController.abort();
const normalizedStatus =
existingStream.status === "streaming"
? "completed"
: existingStream.status;
const result: StreamResult = {
sessionId,
status: normalizedStatus,
chunks: existingStream.chunks,
completedAt: Date.now(),
error: existingStream.error,
};
newCompletedStreams.set(sessionId, result);
newActiveStreams.delete(sessionId);
newCompletedStreams = cleanupExpiredStreams(newCompletedStreams);
if (normalizedStatus === "completed" || normalizedStatus === "error") {
notifyStreamComplete(callbacks, sessionId);
}
}
return {
activeStreams: newActiveStreams,
completedStreams: newCompletedStreams,
};
}
/**
* Create a new active stream with initial state.
*/
function createActiveStream(
sessionId: string,
onChunk?: (chunk: StreamChunk) => void,
): ActiveStream {
const abortController = new AbortController();
const initialCallbacks = new Set<(chunk: StreamChunk) => void>();
if (onChunk) initialCallbacks.add(onChunk);
return {
sessionId,
abortController,
status: "streaming",
startedAt: Date.now(),
chunks: [],
onChunkCallbacks: initialCallbacks,
};
}
export const useChatStore = create<ChatStore>((set, get) => ({
activeStreams: new Map(),
completedStreams: new Map(),
activeSessions: new Set(),
streamCompleteCallbacks: new Set(),
activeTasks: loadPersistedTasks(),
startStream: async function startStream(
sessionId,
message,
isUserMessage,
context,
onChunk,
) {
const state = get();
const callbacks = state.streamCompleteCallbacks;
// Clean up any existing stream for this session
const {
activeStreams: newActiveStreams,
completedStreams: newCompletedStreams,
} = cleanupExistingStream(
sessionId,
state.activeStreams,
state.completedStreams,
callbacks,
);
// Create new stream
const stream = createActiveStream(sessionId, onChunk);
newActiveStreams.set(sessionId, stream);
set({
activeStreams: newActiveStreams,
completedStreams: newCompletedStreams,
});
try {
await executeStream(stream, message, isUserMessage, context);
} finally {
finalizeStream(sessionId, stream, onChunk, get, set);
}
},
stopStream: function stopStream(sessionId) {
const state = get();
const stream = state.activeStreams.get(sessionId);
if (!stream) return;
stream.abortController.abort();
stream.status = "completed";
const newActiveStreams = new Map(state.activeStreams);
let newCompletedStreams = new Map(state.completedStreams);
const result: StreamResult = {
sessionId,
status: stream.status,
chunks: stream.chunks,
completedAt: Date.now(),
error: stream.error,
};
newCompletedStreams.set(sessionId, result);
newActiveStreams.delete(sessionId);
newCompletedStreams = cleanupExpiredStreams(newCompletedStreams);
set({
activeStreams: newActiveStreams,
completedStreams: newCompletedStreams,
});
notifyStreamComplete(state.streamCompleteCallbacks, sessionId);
},
subscribeToStream: function subscribeToStream(
sessionId,
onChunk,
skipReplay = false,
) {
const state = get();
const stream = state.activeStreams.get(sessionId);
if (stream) {
if (!skipReplay) {
for (const chunk of stream.chunks) {
onChunk(chunk);
}
}
stream.onChunkCallbacks.add(onChunk);
return function unsubscribe() {
stream.onChunkCallbacks.delete(onChunk);
};
}
return function noop() {};
},
getStreamStatus: function getStreamStatus(sessionId) {
const { activeStreams, completedStreams } = get();
const active = activeStreams.get(sessionId);
if (active) return active.status;
const completed = completedStreams.get(sessionId);
if (completed) return completed.status;
return "idle";
},
getCompletedStream: function getCompletedStream(sessionId) {
return get().completedStreams.get(sessionId);
},
clearCompletedStream: function clearCompletedStream(sessionId) {
const state = get();
if (!state.completedStreams.has(sessionId)) return;
const newCompletedStreams = new Map(state.completedStreams);
newCompletedStreams.delete(sessionId);
set({ completedStreams: newCompletedStreams });
},
isStreaming: function isStreaming(sessionId) {
const stream = get().activeStreams.get(sessionId);
return stream?.status === "streaming";
},
registerActiveSession: function registerActiveSession(sessionId) {
const state = get();
if (state.activeSessions.has(sessionId)) return;
const newActiveSessions = new Set(state.activeSessions);
newActiveSessions.add(sessionId);
set({ activeSessions: newActiveSessions });
},
unregisterActiveSession: function unregisterActiveSession(sessionId) {
const state = get();
if (!state.activeSessions.has(sessionId)) return;
const newActiveSessions = new Set(state.activeSessions);
newActiveSessions.delete(sessionId);
set({ activeSessions: newActiveSessions });
},
isSessionActive: function isSessionActive(sessionId) {
return get().activeSessions.has(sessionId);
},
onStreamComplete: function onStreamComplete(callback) {
const state = get();
const newCallbacks = new Set(state.streamCompleteCallbacks);
newCallbacks.add(callback);
set({ streamCompleteCallbacks: newCallbacks });
return function unsubscribe() {
const currentState = get();
const cleanedCallbacks = new Set(currentState.streamCompleteCallbacks);
cleanedCallbacks.delete(callback);
set({ streamCompleteCallbacks: cleanedCallbacks });
};
},
setActiveTask: function setActiveTask(sessionId, taskInfo) {
const state = get();
const newActiveTasks = new Map(state.activeTasks);
newActiveTasks.set(sessionId, {
...taskInfo,
sessionId,
startedAt: Date.now(),
});
set({ activeTasks: newActiveTasks });
persistTasks(newActiveTasks);
},
getActiveTask: function getActiveTask(sessionId) {
return get().activeTasks.get(sessionId);
},
clearActiveTask: function clearActiveTask(sessionId) {
const state = get();
if (!state.activeTasks.has(sessionId)) return;
const newActiveTasks = new Map(state.activeTasks);
newActiveTasks.delete(sessionId);
set({ activeTasks: newActiveTasks });
persistTasks(newActiveTasks);
},
reconnectToTask: async function reconnectToTask(
sessionId,
taskId,
lastMessageId = INITIAL_STREAM_ID,
onChunk,
) {
const state = get();
const callbacks = state.streamCompleteCallbacks;
// Clean up any existing stream for this session
const {
activeStreams: newActiveStreams,
completedStreams: newCompletedStreams,
} = cleanupExistingStream(
sessionId,
state.activeStreams,
state.completedStreams,
callbacks,
);
// Create new stream for reconnection
const stream = createActiveStream(sessionId, onChunk);
newActiveStreams.set(sessionId, stream);
set({
activeStreams: newActiveStreams,
completedStreams: newCompletedStreams,
});
try {
await executeTaskReconnect(stream, taskId, lastMessageId);
} finally {
finalizeStream(sessionId, stream, onChunk, get, set);
// Clear active task on completion
if (stream.status === "completed" || stream.status === "error") {
const taskState = get();
if (taskState.activeTasks.has(sessionId)) {
const newActiveTasks = new Map(taskState.activeTasks);
newActiveTasks.delete(sessionId);
set({ activeTasks: newActiveTasks });
persistTasks(newActiveTasks);
}
}
}
},
updateTaskLastMessageId: function updateTaskLastMessageId(
sessionId,
lastMessageId,
) {
const state = get();
const task = state.activeTasks.get(sessionId);
if (!task) return;
const newActiveTasks = new Map(state.activeTasks);
newActiveTasks.set(sessionId, {
...task,
lastMessageId,
});
set({ activeTasks: newActiveTasks });
persistTasks(newActiveTasks);
},
}));
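
A minimal sketch of how a component-level hook might consume this store; `useSessionStream` and the import paths are illustrative assumptions, not code from the repo:

```ts
import { useEffect } from "react";
// Illustrative import paths; adjust to wherever the store and chunk types live.
import { useChatStore } from "./useChatStore";
import type { StreamChunk } from "./types";

// Hypothetical hook: attaches a component to a session's stream, replaying any
// chunks that arrived before it mounted, and detaches again on unmount.
function useSessionStream(
  sessionId: string,
  onChunk: (chunk: StreamChunk) => void,
) {
  const subscribeToStream = useChatStore((s) => s.subscribeToStream);
  const registerActiveSession = useChatStore((s) => s.registerActiveSession);
  const unregisterActiveSession = useChatStore((s) => s.unregisterActiveSession);

  useEffect(() => {
    registerActiveSession(sessionId);
    const unsubscribe = subscribeToStream(sessionId, onChunk);
    return () => {
      unsubscribe();
      unregisterActiveSession(sessionId);
    };
  }, [
    sessionId,
    onChunk,
    subscribeToStream,
    registerActiveSession,
    unregisterActiveSession,
  ]);
}

// Starting a stream outside React (e.g. from an event handler) would go through
// the store instance directly:
// useChatStore.getState().startStream(sessionId, "Hello", true, undefined, onChunk);
```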


@@ -1,163 +0,0 @@
import type { ToolArguments, ToolResult } from "@/types/chat";
export type StreamStatus = "idle" | "streaming" | "completed" | "error";
export interface StreamChunk {
type:
| "stream_start"
| "text_chunk"
| "text_ended"
| "tool_call"
| "tool_call_start"
| "tool_response"
| "login_needed"
| "need_login"
| "credentials_needed"
| "error"
| "usage"
| "stream_end";
taskId?: string;
timestamp?: string;
content?: string;
message?: string;
code?: string;
details?: Record<string, unknown>;
tool_id?: string;
tool_name?: string;
arguments?: ToolArguments;
result?: ToolResult;
success?: boolean;
idx?: number;
session_id?: string;
agent_info?: {
graph_id: string;
name: string;
trigger_type: string;
};
provider?: string;
provider_name?: string;
credential_type?: string;
scopes?: string[];
title?: string;
[key: string]: unknown;
}
export type VercelStreamChunk =
| { type: "start"; messageId: string; taskId?: string }
| { type: "finish" }
| { type: "text-start"; id: string }
| { type: "text-delta"; id: string; delta: string }
| { type: "text-end"; id: string }
| { type: "tool-input-start"; toolCallId: string; toolName: string }
| {
type: "tool-input-available";
toolCallId: string;
toolName: string;
input: Record<string, unknown>;
}
| {
type: "tool-output-available";
toolCallId: string;
toolName?: string;
output: unknown;
success?: boolean;
}
| {
type: "usage";
promptTokens: number;
completionTokens: number;
totalTokens: number;
}
| {
type: "error";
errorText: string;
code?: string;
details?: Record<string, unknown>;
};
export interface ActiveStream {
sessionId: string;
abortController: AbortController;
status: StreamStatus;
startedAt: number;
chunks: StreamChunk[];
error?: Error;
onChunkCallbacks: Set<(chunk: StreamChunk) => void>;
}
export interface StreamResult {
sessionId: string;
status: StreamStatus;
chunks: StreamChunk[];
completedAt: number;
error?: Error;
}
export type StreamCompleteCallback = (sessionId: string) => void;
// Type guards for message types
/**
* Check if a message has a toolId property.
*/
export function hasToolId<T extends { type: string }>(
msg: T,
): msg is T & { toolId: string } {
return (
"toolId" in msg &&
typeof (msg as Record<string, unknown>).toolId === "string"
);
}
/**
* Check if a message has an operationId property.
*/
export function hasOperationId<T extends { type: string }>(
msg: T,
): msg is T & { operationId: string } {
return (
"operationId" in msg &&
typeof (msg as Record<string, unknown>).operationId === "string"
);
}
/**
* Check if a message has a toolCallId property.
*/
export function hasToolCallId<T extends { type: string }>(
msg: T,
): msg is T & { toolCallId: string } {
return (
"toolCallId" in msg &&
typeof (msg as Record<string, unknown>).toolCallId === "string"
);
}
/**
* Check if a message is an operation message type.
*/
export function isOperationMessage<T extends { type: string }>(
msg: T,
): msg is T & {
type: "operation_started" | "operation_pending" | "operation_in_progress";
} {
return (
msg.type === "operation_started" ||
msg.type === "operation_pending" ||
msg.type === "operation_in_progress"
);
}
/**
* Get the tool ID from a message if available.
* Checks toolId, operationId, and toolCallId properties.
*/
export function getToolIdFromMessage<T extends { type: string }>(
msg: T,
): string | undefined {
const record = msg as Record<string, unknown>;
if (typeof record.toolId === "string") return record.toolId;
if (typeof record.operationId === "string") return record.operationId;
if (typeof record.toolCallId === "string") return record.toolCallId;
return undefined;
}
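
Taken together, `VercelStreamChunk` and `StreamChunk` describe two views of the same stream: the AI SDK UI-style events arriving over the wire and the internal shape the chat store records and replays. As a rough illustration of that mapping (a sketch, not code from the repo; the function name and exact field choices are assumptions), an adapter might look like:

```ts
// Hypothetical adapter: maps AI SDK-style stream events onto the internal
// StreamChunk shape defined above. Field choices are assumptions.
function toStreamChunk(chunk: VercelStreamChunk): StreamChunk | null {
  switch (chunk.type) {
    case "start":
      return { type: "stream_start", taskId: chunk.taskId };
    case "text-delta":
      return { type: "text_chunk", content: chunk.delta };
    case "text-end":
      return { type: "text_ended" };
    case "tool-input-start":
      return {
        type: "tool_call_start",
        tool_id: chunk.toolCallId,
        tool_name: chunk.toolName,
      };
    case "tool-input-available":
      return {
        type: "tool_call",
        tool_id: chunk.toolCallId,
        tool_name: chunk.toolName,
        arguments: chunk.input as ToolArguments,
      };
    case "tool-output-available":
      return {
        type: "tool_response",
        tool_id: chunk.toolCallId,
        tool_name: chunk.toolName,
        result: chunk.output as ToolResult,
        success: chunk.success,
      };
    case "error":
      return {
        type: "error",
        message: chunk.errorText,
        code: chunk.code,
        details: chunk.details,
      };
    case "finish":
      return { type: "stream_end" };
    default:
      // "text-start" and "usage" carry nothing this sketch replays; drop them.
      return null;
  }
}
```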

Some files were not shown because too many files have changed in this diff.