Compare commits

..

6 Commits

Author SHA1 Message Date
Swifty
726542472a minimal fix to claude sdk 2026-03-03 22:31:59 +01:00
Nicholas Tindle
757ec1f064 feat(platform): Add file upload to copilot chat [SECRT-1788] (#12220)
## Summary

- Add file attachment support to copilot chat (documents, images,
spreadsheets, video, audio)
- Show upload progress with spinner overlays on file chips during upload
- Display attached files as styled pills in sent user messages using AI
SDK's native `FileUIPart`
- Backend upload endpoint with virus scanning (ClamAV), per-file size
limits, and per-user storage caps
- Enrich chat stream with file metadata so the LLM can access files via
`read_workspace_file`
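
The enrichment step appends an `[Attached files]` metadata block to the user message before it reaches the LLM. A minimal sketch of that formatting, with the line shape taken from this PR's route diff (the dict-based input here is illustrative — the real code iterates Prisma `UserWorkspaceFile` rows):

```python
def build_files_block(files: list[dict]) -> str:
    """Format attached-file metadata for appending to the user message.

    files: [{"name": ..., "mime_type": ..., "size_bytes": ..., "id": ...}]
    """
    lines = [
        f"- {f['name']} ({f['mime_type']}, {round(f['size_bytes'] / 1024, 1)} KB), file_id={f['id']}"
        for f in files
    ]
    return (
        "\n\n[Attached files]\n"
        + "\n".join(lines)
        + "\nUse read_workspace_file with the file_id to access file contents."
    )
```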

Resolves: [SECRT-1788](https://linear.app/autogpt/issue/SECRT-1788)

### Backend
| File | Change |
|------|--------|
| `chat/routes.py` | Accept `file_ids` in stream request, enrich user
message with file metadata |
| `workspace/routes.py` | New `POST /files/upload` and `GET
/storage/usage` endpoints |
| `executor/utils.py` | Thread `file_ids` through
`CoPilotExecutionEntry` and RabbitMQ |
| `settings.py` | Add `max_file_size_mb` and `max_workspace_storage_mb`
config |

### Frontend
| File | Change |
|------|--------|
| `AttachmentMenu.tsx` | **New** — `+` button with popover for file
category selection |
| `FileChips.tsx` | **New** — file preview chips with upload spinner
state |
| `MessageAttachments.tsx` | **New** — paperclip pills rendering
`FileUIPart` in chat bubbles |
| `upload/route.ts` | **New** — Next.js API proxy for multipart uploads
to backend |
| `ChatInput.tsx` | Integrate attachment menu, file chips, upload
progress |
| `useCopilotPage.ts` | Upload flow, `FileUIPart` construction,
transport `file_ids` extraction |
| `ChatMessagesContainer.tsx` | Render file parts as
`MessageAttachments` |
| `ChatContainer.tsx` / `EmptySession.tsx` | Thread `isUploadingFiles`
prop |
| `useChatInput.ts` | `canSendEmpty` option for file-only sends |
| `stream/route.ts` | Forward `file_ids` to backend |

## Test plan

- [x] Attach files via `+` button → file chips appear with X buttons
- [x] Remove a chip → file is removed from the list
- [x] Send message with files → chips show upload spinners → message
appears with file attachment pills
- [x] Upload failure → toast error, chips revert to editable (no phantom
message sent)
- [x] New session (empty form): same upload flow works
- [x] Messages without files render normally
- [x] Network tab: `file_ids` present in stream POST body

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Medium Risk**
> Adds authenticated file upload/storage-quota enforcement and threads
`file_ids` through the chat streaming path, which affects data handling
and storage behavior. Risk is mitigated by UUID/workspace scoping, size
limits, and virus scanning but still touches security- and
reliability-sensitive upload flows.
> 
> **Overview**
> Copilot chat now supports attaching files: the frontend adds
drag-and-drop and an attach button, shows selected files as removable
chips with an upload-in-progress state, and renders sent attachments
using AI SDK `FileUIPart` with download links.
> 
> On send, files are uploaded to the backend (with client-side limits
and failure handling) and the chat stream request includes `file_ids`;
the backend sanitizes/filters IDs, scopes them to the user’s workspace,
appends an `[Attached files]` metadata block to the user message for the
LLM, and forwards the sanitized IDs through `enqueue_copilot_turn`.
> 
> The backend adds `POST /workspace/files/upload` (filename
sanitization, per-file size limit, ClamAV scan, and per-user storage
quota with post-write rollback) plus `GET /workspace/storage/usage`,
introduces `max_workspace_storage_mb` config, optimizes workspace size
calculation, and fixes executor cleanup to avoid un-awaited coroutine
warnings; new route tests cover file ID validation and upload quota/scan
behaviors.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
8d3b95d046. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 20:23:27 +00:00
Ubbe
9442c648a4 fix(platform/copilot): bypass Vercel SSE proxy, refactor hook architecture (#12254)
## Summary

Reliability, architecture, and UX improvements for the CoPilot SSE
streaming pipeline.

### Frontend

- **SSE proxy bypass**: Connect directly to the Python backend for SSE
streams, avoiding the Next.js serverless proxy and its 800s Vercel
function timeout ceiling
- **Hook refactor**: Decompose the 490-line `useCopilotPage` monolith into focused domain modules:
  - `helpers.ts` — pure functions (`deduplicateMessages`, `resolveInProgressTools`)
  - `store.ts` — Zustand store for shared UI state (`sessionToDelete`, drawer open/close)
  - `useCopilotStream.ts` — SSE transport, `useChat` wrapper, reconnect/resume logic, stop+cancel
  - `useCopilotPage.ts` — thin orchestrator (~160 lines)
- **ChatMessagesContainer refactor**: Split the 525-line monolith into sub-components:
  - `helpers.ts` — pure text parsing (markers, workspace URLs)
  - `components/ThinkingIndicator.tsx` — ScaleLoader animation + cycling phrases with pulse
  - `components/MessagePartRenderer.tsx` — tool dispatch switch + workspace media
- **Stop UX fixes**:
  - Guard `isReconnecting` and the resume effect with `isUserStoppingRef` so the input unlocks immediately after an explicit stop (previously stuck until page refresh)
  - Inject the cancellation marker locally in `stop()` so "You manually stopped this chat" shows instantly
- **Thinking indicator polish**: Replace MorphingBlob SVG with
ScaleLoader (16px), fix initial dark circle flash via
`animation-fill-mode: backwards`, smooth `animate-pulse` text instead of
shimmer gradient
- **ChatSidebar consolidation**: Reads `sessionToDelete` from Zustand
store instead of duplicating delete state/mutation locally
- **Auth error handling**: `getAuthHeaders()` throws on failure instead
of silently returning empty headers; 401 errors show user-facing toast
- **Stale closure fix**: Use refs for reconnect guards to avoid stale
closures during rapid reconnect cycles
- **Session switch resume**: Clear `hasResumedRef` on session switch so
returning to a session with an active stream auto-reconnects
- **Target session cache invalidation**: Invalidate the target session's
React Query cache on switch so `active_stream` is accurate for resume
- **Dedup hardening**: Content-fingerprint dedup resets on non-assistant
messages, preventing legitimate repeated responses from being dropped
- **Marker prefixes**: Hex-suffixed markers (`[__COPILOT_ERROR_f7a1__]`)
to prevent LLM false-positives
- **Code style**: Remove unnecessary `useCallback` wrappers per project
convention, replace unsafe `as` cast with runtime type guard
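
The hex-suffixed marker idea can be sketched with a small matcher (a minimal model — the real parsing lives in the frontend `helpers.ts`, and the exact marker set and suffix handling may differ):

```python
import re

# Markers now carry a fixed hex suffix, so ordinary bracketed text emitted
# by the LLM (e.g. "[ERROR]") can no longer be mistaken for a control marker.
MARKER_RE = re.compile(r"\[__COPILOT_ERROR_[0-9a-f]{4}__\]")


def is_control_marker(text: str) -> bool:
    """Return True only for well-formed hex-suffixed control markers."""
    return MARKER_RE.fullmatch(text.strip()) is not None
```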

### Backend (minimal)

- **Faster heartbeat**: 10s → 3s interval to keep SSE alive through
proxies/LBs
- **Faster stall detection**: SSE subscriber queue timeout 30s → 10s
- **Marker prefixes**: Matching hex-suffixed prefixes for error/system
markers
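
The faster heartbeat can be sketched as a producer loop (hypothetical shape, assuming an `asyncio.Queue`-based SSE pipeline like the one in these routes — the real producer lives elsewhere in the streaming code):

```python
import asyncio

HEARTBEAT_INTERVAL_S = 3.0  # was 10.0 — keeps SSE alive through proxies/LBs


async def heartbeat(
    queue: asyncio.Queue,
    stop: asyncio.Event,
    interval: float = HEARTBEAT_INTERVAL_S,
) -> None:
    """Emit an SSE comment line on a fixed cadence until stopped."""
    while not stop.is_set():
        await queue.put(": heartbeat\n\n")  # SSE comment — ignored by clients
        try:
            # Wake early if stop is set; otherwise tick again after `interval`.
            await asyncio.wait_for(stop.wait(), timeout=interval)
        except asyncio.TimeoutError:
            pass
```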

## Test plan

- [ ] Verify SSE streams connect directly to backend (no Next.js proxy
in network tab)
- [ ] Verify reconnect works on transient disconnects (up to 3 attempts
with backoff)
- [ ] Verify auth failure shows user-facing toast
- [ ] Verify switching sessions and switching back shows messages and
resumes active stream
- [ ] Verify deleting a chat from sidebar works (shared Zustand state)
- [ ] Verify mobile drawer delete works (shared Zustand state)
- [ ] Verify thinking indicator shows ScaleLoader + pulsing text, no
dark circle flash
- [ ] Verify stopping a stream immediately unlocks the input and shows
"You manually stopped this chat"
- [ ] Verify marker prefix parsing still works with hex-suffixed
prefixes
- [ ] `pnpm format && pnpm lint && pnpm types` pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 19:56:24 +08:00
Zamil Majdy
1c51dd18aa fix(test): backdate UserBalance.updatedAt in test_block_credit_reset (#12236)
## Root cause

The test constructs `month3` using `datetime.now().replace(month=3, day=1)` — hardcoded to **March of the real current year**. When `update(balance=400)` runs, Prisma auto-sets `updatedAt` to the **real wall-clock time**.

The refill guard in `BetaUserCredit.get_credits` is:
```python
if (snapshot_time.year, snapshot_time.month) == (cur_time.year, cur_time.month):
    return balance  # same month → skip refill
```

This means the test only fails when run **during the real month of March**, because the mocked `month3` and the real `updatedAt` both land in March:

| Test runs in | `snapshot_time` (real `updatedAt`) | `cur_time` (mocked month3) | Same month? | Result |
|---|---|---|---|---|
| January 2026 | `(2026, 1)` | `(2026, 3)` | No | refill triggers ✅ |
| February 2026 | `(2026, 2)` | `(2026, 3)` | No | refill triggers ✅ |
| **March 2026** | **`(2026, 3)`** | **`(2026, 3)`** | **Yes** | **skips refill ❌** |
| April 2026 | `(2026, 4)` | `(2026, 3)` | No | refill triggers ✅ |

It would silently pass again in April, then fail again next March 2027.

## Fix

Explicitly pass `updatedAt=month2` when updating the balance to 400, so the month2→month3 transition is correctly detected regardless of when the test actually runs. This matches the existing pattern used earlier in the same test for the month1 setup.
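
The guard's month comparison can be modeled in isolation (a minimal sketch, not the real `BetaUserCredit` code):

```python
from datetime import datetime, timezone


def refill_needed(snapshot_time: datetime, cur_time: datetime) -> bool:
    # Mirrors the guard: a refill is skipped when the balance snapshot
    # and "now" fall in the same (year, month).
    return (snapshot_time.year, snapshot_time.month) != (
        cur_time.year,
        cur_time.month,
    )


month2 = datetime(2026, 2, 1, tzinfo=timezone.utc)
month3 = datetime(2026, 3, 1, tzinfo=timezone.utc)

# With updatedAt pinned to month2, the month2 -> month3 transition is always
# detected. Before the fix, updatedAt was real wall-clock time, so a run
# during an actual March compared (2026, 3) == (2026, 3) and skipped refill.
assert refill_needed(month2, month3)
assert not refill_needed(month3, month3)
```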

## Test plan
- [ ] `pytest backend/data/credit_test.py::test_block_credit_reset` passes
- [ ] No other credit tests broken
2026-03-01 07:46:04 +00:00
Zamil Majdy
6f4f80871d feat(copilot): Langfuse SDK tracing for Claude Agent SDK path (#12228)
## Problem

The Copilot SDK path (`ClaudeSDKClient`) routes API calls through `POST
/api/v1/messages` (Anthropic-native endpoint). OpenRouter Broadcast
**silently excludes** this endpoint — it only forwards `POST
/api/v1/chat/completions` (OpenAI-compat) to Langfuse. As a result, all
SDK-path turns were invisible in Langfuse.

**Root cause confirmed** via live pod test: two HTTP calls (one per
endpoint), only the `/chat/completions` one appeared in Langfuse.

## Solution

Add **Langfuse SDK direct tracing** in `sdk/service.py`, wrapping each
`stream_chat_completion_sdk()` call with a `generation` observation.

### What gets captured per user turn
| Field | Value |
|---|---|
| `name` | `copilot-sdk-session` |
| `model` | resolved SDK model |
| `input` | user message |
| `output` | final accumulated assistant text |
| `usage_details.input` | aggregated input tokens (from
`ResultMessage.usage`) |
| `usage_details.output` | aggregated output tokens |
| `cost_details.total` | total cost USD |
| trace `session_id` | copilot session ID |
| trace `user_id` | authenticated user ID |
| trace `tags` | `["sdk"]` |

Token counts and cost are **aggregated** across all internal Anthropic
API calls in the session (tool-use turns included), sourced from
`ResultMessage.usage`.

### Implementation notes
- Span is opened via `start_as_current_observation(as_type='generation')` before the `ClaudeSDKClient` context is entered
- Span is **always closed in `finally`** — survives errors,
cancellations, and user stops
- Fails open: any Langfuse init error is caught and logged at `DEBUG`,
tracing is disabled for that turn but the session continues normally
- Only runs when `_is_langfuse_configured()` returns true (same guard as
the non-SDK path)
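
The always-close-in-`finally` behavior can be sketched with a stand-in generation object (the real code uses the Langfuse SDK's observation API; the names here are illustrative):

```python
class StubGeneration:
    """Stand-in for a Langfuse generation observation."""

    def __init__(self):
        self.closed = False
        self.output = None

    def update(self, output=None):
        self.output = output

    def end(self):
        self.closed = True


def stream_turn(gen, chunks):
    """Accumulate streamed text; close the span however the turn ends."""
    text = []
    try:
        for c in chunks:
            if isinstance(c, Exception):  # simulate a mid-stream failure
                raise c
            text.append(c)
        return "".join(text)
    finally:
        # The span is closed even on errors, cancellations, and user stops,
        # recording whatever output had accumulated so far.
        if gen is not None:
            gen.update(output="".join(text))
            gen.end()
```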

## Also included

`reproduce_openrouter_broadcast_gap.py` — standalone repro script (no
sensitive data) demonstrating that `/api/v1/messages` is not captured by
OpenRouter Broadcast while `/api/v1/chat/completions` is. To be filed
with OpenRouter support.

## Test plan

- [ ] Deploy to dev, send a Copilot message via the SDK path
- [ ] Confirm trace appears in Langfuse with `tags=["sdk"]`, correct
`session_id`/`user_id`, non-zero token counts
- [ ] Confirm session still works normally when `LANGFUSE_PUBLIC_KEY` is
not set (no-op path)
- [ ] Confirm session still works on error/cancellation (span closed in
finally)
2026-02-27 16:26:46 +00:00
Ubbe
e8cca6cd9a feat(frontend/copilot): migrate ChatInput to ai-sdk prompt-input component (#12207)
## Summary

- **Migrate ChatInput** to composable `PromptInput*` sub-components from
AI SDK Elements, replacing the custom implementation with a boxy,
Claude-style input layout (textarea + footer with tools and submit)
- **Eliminate JS-based DOM height manipulation** (60+ lines removed from
`useChatInput.ts`) in favor of CSS-driven auto-resize via
`min-h`/`max-h`, fixing input sizing jumps (SECRT-2040)
- **Change stop button color** from red to black (`bg-zinc-800`) per
SECRT-2038, while keeping mic recording button red
- **Add new UI primitives**: `InputGroup`, `Spinner`, `Textarea`, and
`prompt-input` composition layer

### New files
- `src/components/ai-elements/prompt-input.tsx` — Composable prompt
input sub-components (PromptInput, PromptInputBody, PromptInputTextarea,
PromptInputFooter, PromptInputTools, PromptInputButton,
PromptInputSubmit)
- `src/components/ui/input-group.tsx` — Layout primitive with flex-col
support, rounded-xlarge styling
- `src/components/ui/spinner.tsx` — Loading spinner using Phosphor
CircleNotch
- `src/components/ui/textarea.tsx` — Base shadcn Textarea component

### Modified files
- `ChatInput.tsx` — Rewritten to compose PromptInput* sub-components
with InputGroup
- `useChatInput.ts` — Simplified: removed maxRows, hasMultipleLines,
handleKeyDown, all DOM style manipulation
- `useVoiceRecording.ts` — Removed `baseHandleKeyDown` dependency;
PromptInputTextarea handles Enter→submit natively

## Resolves
- SECRT-2042: Migrate copilot chat input to ai-sdk prompt-input
component
- SECRT-2038: Change stop button color from red to black

## Test plan
- [ ] Type a message and send it — verify it submits and clears the
input
- [ ] Multi-line input grows smoothly without sizing jumps
- [ ] Press Enter to send, Shift+Enter for new line
- [ ] Voice recording: press space on empty input to start, space again
to stop
- [ ] Mic button stays red while recording; stop-generating button is
black
- [ ] Input has boxy rectangular shape with rounded-xlarge corners
- [ ] Streaming: stop button appears during generation, clicking it
stops the stream
- [ ] EmptySession layout renders correctly with the new input
- [ ] Input is disabled during transcription with "Transcribing..."
placeholder

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 15:24:19 +00:00
60 changed files with 3694 additions and 1761 deletions

.nvmrc Normal file

@@ -0,0 +1 @@
22


@@ -2,6 +2,7 @@
import asyncio
import logging
import re
from collections.abc import AsyncGenerator
from typing import Annotated
from uuid import uuid4
@@ -9,7 +10,8 @@ from uuid import uuid4
from autogpt_libs import auth
from fastapi import APIRouter, Depends, HTTPException, Query, Response, Security
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from prisma.models import UserWorkspaceFile
from pydantic import BaseModel, Field
from backend.copilot import service as chat_service
from backend.copilot import stream_registry
@@ -47,10 +49,14 @@ from backend.copilot.tools.models import (
UnderstandingUpdatedResponse,
)
from backend.copilot.tracking import track_user_message
from backend.data.workspace import get_or_create_workspace
from backend.util.exceptions import NotFoundError
config = ChatConfig()
_UUID_RE = re.compile(
r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", re.I
)
logger = logging.getLogger(__name__)
@@ -79,6 +85,9 @@ class StreamChatRequest(BaseModel):
message: str
is_user_message: bool = True
context: dict[str, str] | None = None # {url: str, content: str}
file_ids: list[str] | None = Field(
default=None, max_length=20
) # Workspace file IDs attached to this message
class CreateSessionResponse(BaseModel):
@@ -394,6 +403,38 @@ async def stream_chat_post(
},
)
# Enrich message with file metadata if file_ids are provided.
# Also sanitise file_ids so only validated, workspace-scoped IDs are
# forwarded downstream (e.g. to the executor via enqueue_copilot_turn).
sanitized_file_ids: list[str] | None = None
if request.file_ids and user_id:
# Filter to valid UUIDs only to prevent DB abuse
valid_ids = [fid for fid in request.file_ids if _UUID_RE.match(fid)]
if valid_ids:
workspace = await get_or_create_workspace(user_id)
# Batch query instead of N+1
files = await UserWorkspaceFile.prisma().find_many(
where={
"id": {"in": valid_ids},
"workspaceId": workspace.id,
"isDeleted": False,
}
)
# Only keep IDs that actually exist in the user's workspace
sanitized_file_ids = [wf.id for wf in files] or None
file_lines: list[str] = [
f"- {wf.name} ({wf.mimeType}, {round(wf.sizeBytes / 1024, 1)} KB), file_id={wf.id}"
for wf in files
]
if file_lines:
files_block = (
"\n\n[Attached files]\n"
+ "\n".join(file_lines)
+ "\nUse read_workspace_file with the file_id to access file contents."
)
request.message += files_block
# Atomically append user message to session BEFORE creating task to avoid
# race condition where GET_SESSION sees task as "running" but message isn't
# saved yet. append_and_save_message re-fetches inside a lock to prevent
@@ -445,6 +486,7 @@ async def stream_chat_post(
turn_id=turn_id,
is_user_message=request.is_user_message,
context=request.context,
file_ids=sanitized_file_ids,
)
setup_time = (time.perf_counter() - stream_start_time) * 1000
@@ -487,7 +529,7 @@ async def stream_chat_post(
)
while True:
try:
chunk = await asyncio.wait_for(subscriber_queue.get(), timeout=30.0)
chunk = await asyncio.wait_for(subscriber_queue.get(), timeout=10.0)
chunks_yielded += 1
if not first_chunk_yielded:
@@ -640,7 +682,7 @@ async def resume_session_stream(
try:
while True:
try:
chunk = await asyncio.wait_for(subscriber_queue.get(), timeout=30.0)
chunk = await asyncio.wait_for(subscriber_queue.get(), timeout=10.0)
if chunk_count < 3:
logger.info(
"Resume stream chunk",


@@ -0,0 +1,160 @@
"""Tests for chat route file_ids validation and enrichment."""
import fastapi
import fastapi.testclient
import pytest
import pytest_mock
from backend.api.features.chat import routes as chat_routes
app = fastapi.FastAPI()
app.include_router(chat_routes.router)
client = fastapi.testclient.TestClient(app)
TEST_USER_ID = "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
from autogpt_libs.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
# ---- file_ids Pydantic validation (B1) ----
def test_stream_chat_rejects_too_many_file_ids():
"""More than 20 file_ids should be rejected by Pydantic validation (422)."""
response = client.post(
"/sessions/sess-1/stream",
json={
"message": "hello",
"file_ids": [f"00000000-0000-0000-0000-{i:012d}" for i in range(21)],
},
)
assert response.status_code == 422
def _mock_stream_internals(mocker: pytest_mock.MockFixture):
"""Mock the async internals of stream_chat_post so tests can exercise
validation and enrichment logic without needing Redis/RabbitMQ."""
mocker.patch(
"backend.api.features.chat.routes._validate_and_get_session",
return_value=None,
)
mocker.patch(
"backend.api.features.chat.routes.append_and_save_message",
return_value=None,
)
mock_registry = mocker.MagicMock()
mock_registry.create_session = mocker.AsyncMock(return_value=None)
mocker.patch(
"backend.api.features.chat.routes.stream_registry",
mock_registry,
)
mocker.patch(
"backend.api.features.chat.routes.enqueue_copilot_turn",
return_value=None,
)
mocker.patch(
"backend.api.features.chat.routes.track_user_message",
return_value=None,
)
def test_stream_chat_accepts_20_file_ids(mocker: pytest_mock.MockFixture):
"""Exactly 20 file_ids should be accepted (not rejected by validation)."""
_mock_stream_internals(mocker)
# Patch workspace lookup as imported by the routes module
mocker.patch(
"backend.api.features.chat.routes.get_or_create_workspace",
return_value=type("W", (), {"id": "ws-1"})(),
)
mock_prisma = mocker.MagicMock()
mock_prisma.find_many = mocker.AsyncMock(return_value=[])
mocker.patch(
"prisma.models.UserWorkspaceFile.prisma",
return_value=mock_prisma,
)
response = client.post(
"/sessions/sess-1/stream",
json={
"message": "hello",
"file_ids": [f"00000000-0000-0000-0000-{i:012d}" for i in range(20)],
},
)
# Should get past validation — 200 streaming response expected
assert response.status_code == 200
# ---- UUID format filtering ----
def test_file_ids_filters_invalid_uuids(mocker: pytest_mock.MockFixture):
"""Non-UUID strings in file_ids should be silently filtered out
and NOT passed to the database query."""
_mock_stream_internals(mocker)
mocker.patch(
"backend.api.features.chat.routes.get_or_create_workspace",
return_value=type("W", (), {"id": "ws-1"})(),
)
mock_prisma = mocker.MagicMock()
mock_prisma.find_many = mocker.AsyncMock(return_value=[])
mocker.patch(
"prisma.models.UserWorkspaceFile.prisma",
return_value=mock_prisma,
)
valid_id = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
client.post(
"/sessions/sess-1/stream",
json={
"message": "hello",
"file_ids": [
valid_id,
"not-a-uuid",
"../../../etc/passwd",
"",
],
},
)
# The find_many call should only receive the one valid UUID
mock_prisma.find_many.assert_called_once()
call_kwargs = mock_prisma.find_many.call_args[1]
assert call_kwargs["where"]["id"]["in"] == [valid_id]
# ---- Cross-workspace file_ids ----
def test_file_ids_scoped_to_workspace(mocker: pytest_mock.MockFixture):
"""The batch query should scope to the user's workspace."""
_mock_stream_internals(mocker)
mocker.patch(
"backend.api.features.chat.routes.get_or_create_workspace",
return_value=type("W", (), {"id": "my-workspace-id"})(),
)
mock_prisma = mocker.MagicMock()
mock_prisma.find_many = mocker.AsyncMock(return_value=[])
mocker.patch(
"prisma.models.UserWorkspaceFile.prisma",
return_value=mock_prisma,
)
fid = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
client.post(
"/sessions/sess-1/stream",
json={"message": "hi", "file_ids": [fid]},
)
call_kwargs = mock_prisma.find_many.call_args[1]
assert call_kwargs["where"]["workspaceId"] == "my-workspace-id"
assert call_kwargs["where"]["isDeleted"] is False


@@ -3,15 +3,29 @@ Workspace API routes for managing user file storage.
"""
import logging
import os
import re
from typing import Annotated
from urllib.parse import quote
import fastapi
from autogpt_libs.auth.dependencies import get_user_id, requires_user
from fastapi import Query, UploadFile
from fastapi.responses import Response
from pydantic import BaseModel
from backend.data.workspace import WorkspaceFile, get_workspace, get_workspace_file
from backend.data.workspace import (
WorkspaceFile,
count_workspace_files,
get_or_create_workspace,
get_workspace,
get_workspace_file,
get_workspace_total_size,
soft_delete_workspace_file,
)
from backend.util.settings import Config
from backend.util.virus_scanner import scan_content_safe
from backend.util.workspace import WorkspaceManager
from backend.util.workspace_storage import get_workspace_storage
@@ -98,6 +112,21 @@ async def _create_file_download_response(file: WorkspaceFile) -> Response:
raise
class UploadFileResponse(BaseModel):
file_id: str
name: str
path: str
mime_type: str
size_bytes: int
class StorageUsageResponse(BaseModel):
used_bytes: int
limit_bytes: int
used_percent: float
file_count: int
@router.get(
"/files/{file_id}/download",
summary="Download file by ID",
@@ -120,3 +149,120 @@ async def download_file(
raise fastapi.HTTPException(status_code=404, detail="File not found")
return await _create_file_download_response(file)
@router.post(
"/files/upload",
summary="Upload file to workspace",
)
async def upload_file(
user_id: Annotated[str, fastapi.Security(get_user_id)],
file: UploadFile,
session_id: str | None = Query(default=None),
) -> UploadFileResponse:
"""
Upload a file to the user's workspace.
Files are stored in session-scoped paths when session_id is provided,
so the agent's session-scoped tools can discover them automatically.
"""
config = Config()
# Sanitize filename — strip any directory components
filename = os.path.basename(file.filename or "upload") or "upload"
# Read file content with early abort on size limit
max_file_bytes = config.max_file_size_mb * 1024 * 1024
chunks: list[bytes] = []
total_size = 0
while chunk := await file.read(64 * 1024): # 64KB chunks
total_size += len(chunk)
if total_size > max_file_bytes:
raise fastapi.HTTPException(
status_code=413,
detail=f"File exceeds maximum size of {config.max_file_size_mb} MB",
)
chunks.append(chunk)
content = b"".join(chunks)
# Get or create workspace
workspace = await get_or_create_workspace(user_id)
# Pre-write storage cap check (soft check — final enforcement is post-write)
storage_limit_bytes = config.max_workspace_storage_mb * 1024 * 1024
current_usage = await get_workspace_total_size(workspace.id)
if storage_limit_bytes and current_usage + len(content) > storage_limit_bytes:
used_percent = (current_usage / storage_limit_bytes) * 100
raise fastapi.HTTPException(
status_code=413,
detail={
"message": "Storage limit exceeded",
"used_bytes": current_usage,
"limit_bytes": storage_limit_bytes,
"used_percent": round(used_percent, 1),
},
)
# Warn at 80% usage
if (
storage_limit_bytes
and (usage_ratio := (current_usage + len(content)) / storage_limit_bytes) >= 0.8
):
logger.warning(
f"User {user_id} workspace storage at {usage_ratio * 100:.1f}% "
f"({current_usage + len(content)} / {storage_limit_bytes} bytes)"
)
# Virus scan
await scan_content_safe(content, filename=filename)
# Write file via WorkspaceManager
manager = WorkspaceManager(user_id, workspace.id, session_id)
workspace_file = await manager.write_file(content, filename)
# Post-write storage check — eliminates TOCTOU race on the quota.
# If a concurrent upload pushed us over the limit, undo this write.
new_total = await get_workspace_total_size(workspace.id)
if storage_limit_bytes and new_total > storage_limit_bytes:
await soft_delete_workspace_file(workspace_file.id, workspace.id)
raise fastapi.HTTPException(
status_code=413,
detail={
"message": "Storage limit exceeded (concurrent upload)",
"used_bytes": new_total,
"limit_bytes": storage_limit_bytes,
},
)
return UploadFileResponse(
file_id=workspace_file.id,
name=workspace_file.name,
path=workspace_file.path,
mime_type=workspace_file.mime_type,
size_bytes=workspace_file.size_bytes,
)
@router.get(
"/storage/usage",
summary="Get workspace storage usage",
)
async def get_storage_usage(
user_id: Annotated[str, fastapi.Security(get_user_id)],
) -> StorageUsageResponse:
"""
Get storage usage information for the user's workspace.
"""
config = Config()
workspace = await get_or_create_workspace(user_id)
used_bytes = await get_workspace_total_size(workspace.id)
file_count = await count_workspace_files(workspace.id)
limit_bytes = config.max_workspace_storage_mb * 1024 * 1024
return StorageUsageResponse(
used_bytes=used_bytes,
limit_bytes=limit_bytes,
used_percent=round((used_bytes / limit_bytes) * 100, 1) if limit_bytes else 0,
file_count=file_count,
)


@@ -0,0 +1,307 @@
"""Tests for workspace file upload and download routes."""
import io
from datetime import datetime, timezone
import fastapi
import fastapi.testclient
import pytest
import pytest_mock
from backend.api.features.workspace import routes as workspace_routes
from backend.data.workspace import WorkspaceFile
app = fastapi.FastAPI()
app.include_router(workspace_routes.router)
@app.exception_handler(ValueError)
async def _value_error_handler(
request: fastapi.Request, exc: ValueError
) -> fastapi.responses.JSONResponse:
"""Mirror the production ValueError → 400 mapping from rest_api.py."""
return fastapi.responses.JSONResponse(status_code=400, content={"detail": str(exc)})
client = fastapi.testclient.TestClient(app)
TEST_USER_ID = "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
MOCK_WORKSPACE = type("W", (), {"id": "ws-1"})()
_NOW = datetime(2023, 1, 1, tzinfo=timezone.utc)
MOCK_FILE = WorkspaceFile(
id="file-aaa-bbb",
workspace_id="ws-1",
created_at=_NOW,
updated_at=_NOW,
name="hello.txt",
path="/session/hello.txt",
mime_type="text/plain",
size_bytes=13,
storage_path="local://hello.txt",
)
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
from autogpt_libs.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield
app.dependency_overrides.clear()
def _upload(
filename: str = "hello.txt",
content: bytes = b"Hello, world!",
content_type: str = "text/plain",
):
"""Helper to POST a file upload."""
return client.post(
"/files/upload?session_id=sess-1",
files={"file": (filename, io.BytesIO(content), content_type)},
)
# ---- Happy path ----
def test_upload_happy_path(mocker: pytest_mock.MockFixture):
mocker.patch(
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=MOCK_WORKSPACE,
)
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
return_value=0,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
return_value=None,
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(return_value=MOCK_FILE)
mocker.patch(
"backend.api.features.workspace.routes.WorkspaceManager",
return_value=mock_manager,
)
response = _upload()
assert response.status_code == 200
data = response.json()
assert data["file_id"] == "file-aaa-bbb"
assert data["name"] == "hello.txt"
assert data["size_bytes"] == 13
# ---- Per-file size limit ----
def test_upload_exceeds_max_file_size(mocker: pytest_mock.MockFixture):
"""Files larger than max_file_size_mb should be rejected with 413."""
cfg = mocker.patch("backend.api.features.workspace.routes.Config")
cfg.return_value.max_file_size_mb = 0 # 0 MB → any content is too big
cfg.return_value.max_workspace_storage_mb = 500
response = _upload(content=b"x" * 1024)
assert response.status_code == 413
# ---- Storage quota exceeded ----
def test_upload_storage_quota_exceeded(mocker: pytest_mock.MockFixture):
mocker.patch(
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=MOCK_WORKSPACE,
)
# Current usage already at limit
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
return_value=500 * 1024 * 1024,
)
response = _upload()
assert response.status_code == 413
assert "Storage limit exceeded" in response.text
# ---- Post-write quota race (B2) ----
def test_upload_post_write_quota_race(mocker: pytest_mock.MockFixture):
"""If a concurrent upload tips the total over the limit after write,
the file should be soft-deleted and 413 returned."""
mocker.patch(
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=MOCK_WORKSPACE,
)
# Pre-write check passes (under limit), but post-write check fails
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
side_effect=[0, 600 * 1024 * 1024], # first call OK, second over limit
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
return_value=None,
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(return_value=MOCK_FILE)
mocker.patch(
"backend.api.features.workspace.routes.WorkspaceManager",
return_value=mock_manager,
)
mock_delete = mocker.patch(
"backend.api.features.workspace.routes.soft_delete_workspace_file",
return_value=None,
)
response = _upload()
assert response.status_code == 413
mock_delete.assert_called_once_with("file-aaa-bbb", "ws-1")
# ---- Any extension accepted (no allowlist) ----
def test_upload_any_extension(mocker: pytest_mock.MockFixture):
"""Any file extension should be accepted — ClamAV is the security layer."""
mocker.patch(
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=MOCK_WORKSPACE,
)
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
return_value=0,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
return_value=None,
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(return_value=MOCK_FILE)
mocker.patch(
"backend.api.features.workspace.routes.WorkspaceManager",
return_value=mock_manager,
)
response = _upload(filename="data.xyz", content=b"arbitrary")
assert response.status_code == 200
# ---- Virus scan rejection ----
def test_upload_blocked_by_virus_scan(mocker: pytest_mock.MockFixture):
"""Files flagged by ClamAV should be rejected and never written to storage."""
from backend.api.features.store.exceptions import VirusDetectedError
mocker.patch(
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=MOCK_WORKSPACE,
)
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
return_value=0,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
side_effect=VirusDetectedError("Eicar-Test-Signature"),
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(return_value=MOCK_FILE)
mocker.patch(
"backend.api.features.workspace.routes.WorkspaceManager",
return_value=mock_manager,
)
response = _upload(filename="evil.exe", content=b"X5O!P%@AP...")
assert response.status_code == 400
assert "Virus detected" in response.text
mock_manager.write_file.assert_not_called()
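The ordering this test pins down — scan strictly before any storage write — can be sketched as follows (hypothetical helper names; the real exception lives in `backend.api.features.store.exceptions`):

```python
class VirusDetectedError(Exception):
    """Stand-in for the real ClamAV detection exception."""

async def handle_upload(content, scan_content_safe, write_file):
    # The scan runs first; on detection it raises and the request is
    # rejected (HTTP 400) before write_file is ever reached.
    await scan_content_safe(content)
    return await write_file(content)
```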
# ---- No file extension ----
def test_upload_file_without_extension(mocker: pytest_mock.MockFixture):
"""Files without an extension should be accepted and stored as-is."""
mocker.patch(
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=MOCK_WORKSPACE,
)
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
return_value=0,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
return_value=None,
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(return_value=MOCK_FILE)
mocker.patch(
"backend.api.features.workspace.routes.WorkspaceManager",
return_value=mock_manager,
)
response = _upload(
filename="Makefile",
content=b"all:\n\techo hello",
content_type="application/octet-stream",
)
assert response.status_code == 200
mock_manager.write_file.assert_called_once()
assert mock_manager.write_file.call_args[0][1] == "Makefile"
# ---- Filename sanitization (SF5) ----
def test_upload_strips_path_components(mocker: pytest_mock.MockFixture):
"""Path-traversal filenames should be reduced to their basename."""
mocker.patch(
"backend.api.features.workspace.routes.get_or_create_workspace",
return_value=MOCK_WORKSPACE,
)
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_total_size",
return_value=0,
)
mocker.patch(
"backend.api.features.workspace.routes.scan_content_safe",
return_value=None,
)
mock_manager = mocker.MagicMock()
mock_manager.write_file = mocker.AsyncMock(return_value=MOCK_FILE)
mocker.patch(
"backend.api.features.workspace.routes.WorkspaceManager",
return_value=mock_manager,
)
# Filename with traversal
_upload(filename="../../etc/passwd.txt")
# write_file should have been called with just the basename
mock_manager.write_file.assert_called_once()
call_args = mock_manager.write_file.call_args
assert call_args[0][1] == "passwd.txt"
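A minimal version of the basename reduction this test expects (assumed behavior; the actual sanitizer may handle edge cases such as Windows separators differently):

```python
import os

def sanitize_filename(raw: str) -> str:
    # Normalize backslashes, then keep only the final path component, so a
    # traversal string like "../../etc/passwd.txt" collapses to its basename.
    return os.path.basename(raw.replace("\\", "/"))
```

With this sketch, `sanitize_filename("../../etc/passwd.txt")` yields `"passwd.txt"`, matching the assertion above.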
# ---- Download ----
def test_download_file_not_found(mocker: pytest_mock.MockFixture):
mocker.patch(
"backend.api.features.workspace.routes.get_workspace",
return_value=MOCK_WORKSPACE,
)
mocker.patch(
"backend.api.features.workspace.routes.get_workspace_file",
return_value=None,
)
response = client.get("/files/some-file-id/download")
assert response.status_code == 404

View File

@@ -119,12 +119,12 @@ class CoPilotProcessor:
"""
from backend.util.workspace_storage import shutdown_workspace_storage
+coro = shutdown_workspace_storage()
try:
-future = asyncio.run_coroutine_threadsafe(
-shutdown_workspace_storage(), self.execution_loop
-)
+future = asyncio.run_coroutine_threadsafe(coro, self.execution_loop)
future.result(timeout=5)
except Exception as e:
+coro.close()  # Prevent "coroutine was never awaited" warning
error_msg = str(e) or type(e).__name__
logger.warning(
f"[CoPilotExecutor] Worker {self.tid} cleanup error: {error_msg}"

View File

@@ -153,6 +153,9 @@ class CoPilotExecutionEntry(BaseModel):
context: dict[str, str] | None = None
"""Optional context for the message (e.g., {url: str, content: str})"""
file_ids: list[str] | None = None
"""Workspace file IDs attached to the user's message"""
class CancelCoPilotEvent(BaseModel):
"""Event to cancel a CoPilot operation."""
@@ -171,6 +174,7 @@ async def enqueue_copilot_turn(
turn_id: str,
is_user_message: bool = True,
context: dict[str, str] | None = None,
file_ids: list[str] | None = None,
) -> None:
"""Enqueue a CoPilot task for processing by the executor service.
@@ -181,6 +185,7 @@ async def enqueue_copilot_turn(
turn_id: Per-turn UUID for Redis stream isolation
is_user_message: Whether the message is from the user (vs system/assistant)
context: Optional context for the message (e.g., {url: str, content: str})
file_ids: Optional workspace file IDs attached to the user's message
"""
from backend.util.clients import get_async_copilot_queue
@@ -191,6 +196,7 @@ async def enqueue_copilot_turn(
message=message,
is_user_message=is_user_message,
context=context,
file_ids=file_ids,
)
queue_client = await get_async_copilot_queue()

View File

@@ -0,0 +1,172 @@
"""Tests for OTEL tracing setup in the SDK copilot path."""
import os
from unittest.mock import MagicMock, patch
class TestSetupLangfuseOtel:
"""Tests for _setup_langfuse_otel()."""
def test_noop_when_langfuse_not_configured(self):
"""No env vars should be set when Langfuse credentials are missing."""
with patch(
"backend.copilot.sdk.service._is_langfuse_configured", return_value=False
):
from backend.copilot.sdk.service import _setup_langfuse_otel
# Clear any previously set env vars
env_keys = [
"LANGSMITH_OTEL_ENABLED",
"LANGSMITH_OTEL_ONLY",
"LANGSMITH_TRACING",
"OTEL_EXPORTER_OTLP_ENDPOINT",
"OTEL_EXPORTER_OTLP_HEADERS",
]
saved = {k: os.environ.pop(k, None) for k in env_keys}
try:
_setup_langfuse_otel()
for key in env_keys:
assert key not in os.environ, f"{key} should not be set"
finally:
for k, v in saved.items():
if v is not None:
os.environ[k] = v
def test_sets_env_vars_when_langfuse_configured(self):
"""OTEL env vars should be set when Langfuse credentials exist."""
mock_settings = MagicMock()
mock_settings.secrets.langfuse_public_key = "pk-test-123"
mock_settings.secrets.langfuse_secret_key = "sk-test-456"
mock_settings.secrets.langfuse_host = "https://langfuse.example.com"
mock_settings.secrets.langfuse_tracing_environment = "test"
with (
patch(
"backend.copilot.sdk.service._is_langfuse_configured",
return_value=True,
),
patch("backend.copilot.sdk.service.Settings", return_value=mock_settings),
patch(
"backend.copilot.sdk.service.configure_claude_agent_sdk",
return_value=True,
) as mock_configure,
):
from backend.copilot.sdk.service import _setup_langfuse_otel
# Clear env vars so setdefault works
env_keys = [
"LANGSMITH_OTEL_ENABLED",
"LANGSMITH_OTEL_ONLY",
"LANGSMITH_TRACING",
"OTEL_EXPORTER_OTLP_ENDPOINT",
"OTEL_EXPORTER_OTLP_HEADERS",
"OTEL_RESOURCE_ATTRIBUTES",
]
saved = {k: os.environ.pop(k, None) for k in env_keys}
try:
_setup_langfuse_otel()
assert os.environ["LANGSMITH_OTEL_ENABLED"] == "true"
assert os.environ["LANGSMITH_OTEL_ONLY"] == "true"
assert os.environ["LANGSMITH_TRACING"] == "true"
assert (
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"]
== "https://langfuse.example.com/api/public/otel"
)
assert "Authorization=Basic" in os.environ["OTEL_EXPORTER_OTLP_HEADERS"]
assert (
os.environ["OTEL_RESOURCE_ATTRIBUTES"]
== "langfuse.environment=test"
)
mock_configure.assert_called_once_with(tags=["sdk"])
finally:
for k, v in saved.items():
if v is not None:
os.environ[k] = v
elif k in os.environ:
del os.environ[k]
def test_existing_env_vars_not_overwritten(self):
"""Explicit env-var overrides should not be clobbered."""
mock_settings = MagicMock()
mock_settings.secrets.langfuse_public_key = "pk-test"
mock_settings.secrets.langfuse_secret_key = "sk-test"
mock_settings.secrets.langfuse_host = "https://langfuse.example.com"
with (
patch(
"backend.copilot.sdk.service._is_langfuse_configured",
return_value=True,
),
patch("backend.copilot.sdk.service.Settings", return_value=mock_settings),
patch(
"backend.copilot.sdk.service.configure_claude_agent_sdk",
return_value=True,
),
):
from backend.copilot.sdk.service import _setup_langfuse_otel
saved = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT")
try:
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://custom.endpoint/v1"
_setup_langfuse_otel()
assert (
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"]
== "https://custom.endpoint/v1"
)
finally:
if saved is not None:
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = saved
elif "OTEL_EXPORTER_OTLP_ENDPOINT" in os.environ:
del os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"]
def test_graceful_failure_on_exception(self):
"""Setup should not raise even if internal code fails."""
with (
patch(
"backend.copilot.sdk.service._is_langfuse_configured",
return_value=True,
),
patch(
"backend.copilot.sdk.service.Settings",
side_effect=RuntimeError("settings unavailable"),
),
):
from backend.copilot.sdk.service import _setup_langfuse_otel
# Should not raise — just logs and returns
_setup_langfuse_otel()
class TestPropagateAttributesImport:
"""Verify langfuse.propagate_attributes is available."""
def test_propagate_attributes_is_importable(self):
from langfuse import propagate_attributes
assert callable(propagate_attributes)
def test_propagate_attributes_returns_context_manager(self):
from langfuse import propagate_attributes
ctx = propagate_attributes(user_id="u1", session_id="s1", tags=["test"])
assert hasattr(ctx, "__enter__")
assert hasattr(ctx, "__exit__")
class TestReceiveResponseCompat:
"""Verify ClaudeSDKClient.receive_response() exists (langsmith patches it)."""
def test_receive_response_exists(self):
from claude_agent_sdk import ClaudeSDKClient
assert hasattr(ClaudeSDKClient, "receive_response")
def test_receive_response_is_async_generator(self):
import inspect
from claude_agent_sdk import ClaudeSDKClient
method = getattr(ClaudeSDKClient, "receive_response")
assert inspect.isfunction(method) or inspect.ismethod(method)

View File

@@ -1,17 +1,23 @@
"""Claude Agent SDK service layer for CoPilot chat completions."""
import asyncio
import base64
import json
import logging
import os
import sys
import uuid
from collections.abc import AsyncGenerator
from dataclasses import dataclass
from typing import Any, cast
from langfuse import propagate_attributes
from langsmith.integrations.claude_agent_sdk import configure_claude_agent_sdk
from backend.data.redis_client import get_redis_async
from backend.executor.cluster_lock import AsyncClusterLock
from backend.util.exceptions import NotFoundError
from backend.util.settings import Settings
from ..config import ChatConfig
from ..model import (
@@ -31,7 +37,11 @@ from ..response_model import (
StreamToolInputAvailable,
StreamToolOutputAvailable,
)
-from ..service import _build_system_prompt, _generate_session_title
+from ..service import (
+    _build_system_prompt,
+    _generate_session_title,
+    _is_langfuse_configured,
+)
from ..tools.sandbox import WORKSPACE_PREFIX, make_session_path
from ..tracking import track_user_message
from .response_adapter import SDKResponseAdapter
@@ -56,6 +66,55 @@ logger = logging.getLogger(__name__)
config = ChatConfig()
def _setup_langfuse_otel() -> None:
"""Configure OTEL tracing for the Claude Agent SDK → Langfuse.
This uses LangSmith's built-in Claude Agent SDK integration to monkey-patch
``ClaudeSDKClient``, capturing every tool call and model turn as OTEL spans.
Spans are exported via OTLP to Langfuse (or any OTEL-compatible backend).
To route traces elsewhere, override ``OTEL_EXPORTER_OTLP_ENDPOINT`` and
``OTEL_EXPORTER_OTLP_HEADERS`` environment variables — no code changes needed.
"""
if not _is_langfuse_configured():
return
try:
settings = Settings()
pk = settings.secrets.langfuse_public_key
sk = settings.secrets.langfuse_secret_key
host = settings.secrets.langfuse_host
# OTEL exporter config — these are only set if not already present,
# so explicit env-var overrides always win.
creds = base64.b64encode(f"{pk}:{sk}".encode()).decode()
os.environ.setdefault("LANGSMITH_OTEL_ENABLED", "true")
os.environ.setdefault("LANGSMITH_OTEL_ONLY", "true")
os.environ.setdefault("LANGSMITH_TRACING", "true")
os.environ.setdefault("OTEL_EXPORTER_OTLP_ENDPOINT", f"{host}/api/public/otel")
os.environ.setdefault(
"OTEL_EXPORTER_OTLP_HEADERS", f"Authorization=Basic {creds}"
)
# Set the Langfuse environment via OTEL resource attributes so the
# Langfuse server maps it to the first-class environment field.
tracing_env = settings.secrets.langfuse_tracing_environment
os.environ.setdefault(
"OTEL_RESOURCE_ATTRIBUTES",
f"langfuse.environment={tracing_env}",
)
configure_claude_agent_sdk(tags=["sdk"])
logger.info(
"OTEL tracing configured for Claude Agent SDK → %s [%s]", host, tracing_env
)
except Exception:
logger.warning("OTEL setup skipped — failed to configure", exc_info=True)
_setup_langfuse_otel()
# Set to hold background tasks to prevent garbage collection
_background_tasks: set[asyncio.Task[Any]] = set()
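The Basic credentials string assembled in `_setup_langfuse_otel` above has a simple shape; with hypothetical keys:

```python
import base64

# Hypothetical Langfuse keypair — illustrates the OTLP Basic-auth header only.
pk, sk = "pk-lf-example", "sk-lf-example"
creds = base64.b64encode(f"{pk}:{sk}".encode()).decode()
# The OTLP exporter parses this env value as a standard HTTP Basic auth header.
header = f"Authorization=Basic {creds}"
```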
@@ -75,13 +134,15 @@ class CapturedTranscript:
_SDK_CWD_PREFIX = WORKSPACE_PREFIX
-# Special message prefixes for text-based markers (parsed by frontend)
-COPILOT_ERROR_PREFIX = "[COPILOT_ERROR]"  # Renders as ErrorCard
-COPILOT_SYSTEM_PREFIX = "[COPILOT_SYSTEM]"  # Renders as system info message
+# Special message prefixes for text-based markers (parsed by frontend).
+# The hex suffix makes accidental LLM generation of these strings virtually
+# impossible, avoiding false-positive marker detection in normal conversation.
+COPILOT_ERROR_PREFIX = "[__COPILOT_ERROR_f7a1__]"  # Renders as ErrorCard
+COPILOT_SYSTEM_PREFIX = "[__COPILOT_SYSTEM_e3b0__]"  # Renders as system info message
# Heartbeat interval — keep SSE alive through proxies/LBs during tool execution.
# IMPORTANT: Must be less than frontend timeout (12s in useCopilotPage.ts)
-_HEARTBEAT_INTERVAL = 10.0  # seconds
+_HEARTBEAT_INTERVAL = 3.0  # seconds
# Appended to the system prompt to inform the agent about available tools.
@@ -176,7 +237,10 @@ def _resolve_sdk_model() -> str | None:
return model
-def _build_sdk_env() -> dict[str, str]:
+def _build_sdk_env(
+    user_id: str | None = None,
+    session_id: str | None = None,
+) -> dict[str, str]:
"""Build env vars for the SDK CLI process.
Routes API calls through OpenRouter (or a custom base_url) using
@@ -186,6 +250,11 @@ def _build_sdk_env() -> dict[str, str]:
Only overrides ``ANTHROPIC_API_KEY`` when a valid proxy URL and auth
token are both present — otherwise returns an empty dict so the SDK
falls back to its default credentials.
When ``user_id`` or ``session_id`` are provided they are injected via
``CLAUDE_CODE_EXTRA_BODY`` so that OpenRouter (or any proxy) receives
them on every API call — mirroring the ``extra_body`` the non-SDK path
sends.
"""
env: dict[str, str] = {}
if config.api_key and config.base_url:
@@ -200,6 +269,17 @@ def _build_sdk_env() -> dict[str, str]:
env["ANTHROPIC_AUTH_TOKEN"] = config.api_key
# Must be explicitly empty so the CLI uses AUTH_TOKEN instead
env["ANTHROPIC_API_KEY"] = ""
# Inject user/session metadata via CLAUDE_CODE_EXTRA_BODY so
# OpenRouter receives it in every API request body.
extra_body: dict[str, Any] = {}
if user_id:
extra_body["user"] = user_id[:128]
if session_id:
extra_body["session_id"] = session_id[:128]
if extra_body:
env["CLAUDE_CODE_EXTRA_BODY"] = json.dumps(extra_body)
return env
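With both IDs present, the resulting `CLAUDE_CODE_EXTRA_BODY` value is a small JSON object (hypothetical IDs shown):

```python
import json

user_id, session_id = "user-abc", "sess-123"  # hypothetical IDs
extra_body = {}
if user_id:
    extra_body["user"] = user_id[:128]          # defensively truncated
if session_id:
    extra_body["session_id"] = session_id[:128]
env_value = json.dumps(extra_body)
```

The SDK CLI forwards this verbatim in each API request body, which is how OpenRouter (or any proxy) receives the metadata.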
@@ -515,6 +595,9 @@ async def stream_chat_completion_sdk(
)
return
# OTEL context manager — initialized inside the try and cleaned up in finally.
_otel_ctx: Any = None
# Make sure there is no more code between the lock acquisition and the try-block.
try:
# Build system prompt (reuses non-SDK path with Langfuse support).
@@ -545,7 +628,7 @@ async def stream_chat_completion_sdk(
from claude_agent_sdk import ClaudeAgentOptions, ClaudeSDKClient
# Fail fast when no API credentials are available at all
-sdk_env = _build_sdk_env()
+sdk_env = _build_sdk_env(user_id=user_id, session_id=session_id)
if not sdk_env and not os.environ.get("ANTHROPIC_API_KEY"):
raise RuntimeError(
"No API key configured. Set OPEN_ROUTER_API_KEY "
@@ -633,6 +716,19 @@ async def stream_chat_completion_sdk(
adapter = SDKResponseAdapter(message_id=message_id, session_id=session_id)
# Propagate user_id/session_id as OTEL context attributes so the
# langsmith tracing integration attaches them to every span. This
# is what Langfuse (or any OTEL backend) maps to its native
# user/session fields.
_otel_ctx = propagate_attributes(
user_id=user_id,
session_id=session_id,
trace_name="copilot-sdk",
tags=["sdk"],
metadata={"resume": str(use_resume)},
)
_otel_ctx.__enter__()
async with ClaudeSDKClient(options=options) as client:
current_message = message or ""
if not current_message and session.messages:
@@ -676,7 +772,7 @@ async def stream_chat_completion_sdk(
# Instead, wrap __anext__() in a Task and use asyncio.wait()
# with a timeout. On timeout we emit a heartbeat but keep the
# Task alive so it can deliver the next message.
-msg_iter = client.receive_messages().__aiter__()
+msg_iter = client.receive_response().__aiter__()
pending_task: asyncio.Task[Any] | None = None
try:
while not stream_completed:
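The timeout-without-cancellation pattern described in the comment above can be sketched generically. Assumptions: any async iterator as the source and an async heartbeat callback; this is an illustration, not the service's actual loop:

```python
import asyncio

async def iter_with_heartbeat(source, emit_heartbeat, interval=3.0):
    """Yield items from `source`; emit a heartbeat if the next item
    takes longer than `interval` seconds.

    asyncio.wait_for would cancel __anext__() on timeout and drop the
    in-flight item, so the Task is kept alive across timeouts instead.
    """
    it = source.__aiter__()
    pending = None
    while True:
        if pending is None:
            pending = asyncio.ensure_future(it.__anext__())
        done, _ = await asyncio.wait({pending}, timeout=interval)
        if not done:
            await emit_heartbeat()
            continue  # same Task stays pending; no item is lost
        task, pending = pending, None
        try:
            item = task.result()
        except StopAsyncIteration:
            return
        yield item
```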
@@ -710,7 +806,7 @@ async def stream_chat_completion_sdk(
break
except Exception as stream_err:
# SDK sends {"type": "error"} which raises
-# Exception in receive_messages() — capture it
+# Exception in receive_response() — capture it
# so the session can still be saved and the
# frontend gets a clean finish.
logger.error(
@@ -1060,6 +1156,13 @@ async def stream_chat_completion_sdk(
raise
finally:
# --- Close OTEL context ---
if _otel_ctx is not None:
try:
_otel_ctx.__exit__(*sys.exc_info())
except Exception:
logger.warning("OTEL context teardown failed", exc_info=True)
# --- Persist session messages ---
# This MUST run in finally to persist messages even when the generator
# is stopped early (e.g., user clicks stop, processor breaks stream loop).

View File

@@ -10,7 +10,6 @@ from .add_understanding import AddUnderstandingTool
from .agent_output import AgentOutputTool
from .base import BaseTool
from .bash_exec import BashExecTool
-from .browse_web import BrowseWebTool
from .create_agent import CreateAgentTool
from .customize_agent import CustomizeAgentTool
from .edit_agent import EditAgentTool
@@ -51,8 +50,6 @@ TOOL_REGISTRY: dict[str, BaseTool] = {
"get_doc_page": GetDocPageTool(),
# Web fetch for safe URL retrieval
"web_fetch": WebFetchTool(),
-# Browser-based browsing for JS-rendered pages (Stagehand + Browserbase)
-"browse_web": BrowseWebTool(),
# Sandboxed code execution (bubblewrap)
"bash_exec": BashExecTool(),
# Persistent workspace tools (cloud storage, survives across sessions)

View File

@@ -1,3 +1,4 @@
import logging
import uuid
from datetime import UTC, datetime
from os import getenv
@@ -12,12 +13,34 @@ from backend.blocks.firecrawl.scrape import FirecrawlScrapeBlock
from backend.blocks.io import AgentInputBlock, AgentOutputBlock
from backend.blocks.llm import AITextGeneratorBlock
from backend.copilot.model import ChatSession
from backend.data import db as db_module
from backend.data.db import prisma
from backend.data.graph import Graph, Link, Node, create_graph
from backend.data.model import APIKeyCredentials
from backend.data.user import get_or_create_user
from backend.integrations.credentials_store import IntegrationCredentialsStore
_logger = logging.getLogger(__name__)
async def _ensure_db_connected() -> None:
"""Ensure the Prisma connection is alive on the current event loop.
On Python 3.11, the httpx transport inside Prisma can reference a stale
(closed) event loop when session-scoped async fixtures are evaluated long
after the initial ``server`` fixture connected Prisma. A cheap health-check
followed by a reconnect fixes this without affecting other fixtures.
"""
try:
await prisma.query_raw("SELECT 1")
except Exception:
_logger.info("Prisma connection stale reconnecting")
try:
await db_module.disconnect()
except Exception:
pass
await db_module.connect()
def make_session(user_id: str):
return ChatSession(
@@ -43,6 +66,8 @@ async def setup_test_data(server):
Depends on ``server`` to ensure Prisma is connected.
"""
await _ensure_db_connected()
# 1. Create a test user
user_data = {
"sub": f"test-user-{uuid.uuid4()}",
@@ -164,6 +189,8 @@ async def setup_llm_test_data(server):
Depends on ``server`` to ensure Prisma is connected.
"""
await _ensure_db_connected()
key = getenv("OPENAI_API_KEY")
if not key:
return pytest.skip("OPENAI_API_KEY is not set")
@@ -330,6 +357,8 @@ async def setup_firecrawl_test_data(server):
Depends on ``server`` to ensure Prisma is connected.
"""
await _ensure_db_connected()
# 1. Create a test user
user_data = {
"sub": f"test-user-{uuid.uuid4()}",

View File

@@ -1,227 +0,0 @@
"""Web browsing tool — navigate real browser sessions to extract page content.
Uses Stagehand + Browserbase for cloud-based browser execution. Handles
JS-rendered pages, SPAs, and dynamic content that web_fetch cannot reach.
Requires environment variables:
STAGEHAND_API_KEY — Browserbase API key
STAGEHAND_PROJECT_ID — Browserbase project ID
ANTHROPIC_API_KEY — LLM key used by Stagehand for extraction
"""
import logging
import os
import threading
from typing import Any
from backend.copilot.model import ChatSession
from .base import BaseTool
from .models import BrowseWebResponse, ErrorResponse, ToolResponseBase
logger = logging.getLogger(__name__)
# Stagehand uses the LLM internally for natural-language extraction/actions.
_STAGEHAND_MODEL = "anthropic/claude-sonnet-4-5-20250929"
# Hard cap on extracted content returned to the LLM context.
_MAX_CONTENT_CHARS = 50_000
# Explicit timeouts for Stagehand browser operations (milliseconds).
_GOTO_TIMEOUT_MS = 30_000 # page navigation
_EXTRACT_TIMEOUT_MS = 60_000 # LLM extraction
# ---------------------------------------------------------------------------
# Thread-safety patch for Stagehand signal handlers (applied lazily, once).
#
# Stagehand calls signal.signal() during __init__, which raises ValueError
# when called from a non-main thread (e.g. the CoPilot executor thread pool).
# We patch _register_signal_handlers to be a no-op outside the main thread.
# The patch is applied exactly once per process via double-checked locking.
# ---------------------------------------------------------------------------
_stagehand_patched = False
_patch_lock = threading.Lock()
def _patch_stagehand_once() -> None:
"""Monkey-patch Stagehand signal handler registration to be thread-safe.
Must be called after ``import stagehand.main`` has succeeded.
Safe to call from multiple threads — applies the patch at most once.
"""
global _stagehand_patched
if _stagehand_patched:
return
with _patch_lock:
if _stagehand_patched:
return
import stagehand.main # noqa: PLC0415
_original = stagehand.main.Stagehand._register_signal_handlers
def _safe_register(self: Any) -> None:
if threading.current_thread() is threading.main_thread():
_original(self)
stagehand.main.Stagehand._register_signal_handlers = _safe_register
_stagehand_patched = True
class BrowseWebTool(BaseTool):
"""Navigate a URL with a real browser and extract its content.
Use this instead of ``web_fetch`` when the page requires JavaScript
to render (SPAs, dashboards, paywalled content with JS checks, etc.).
"""
@property
def name(self) -> str:
return "browse_web"
@property
def description(self) -> str:
return (
"Navigate to a URL using a real browser and extract content. "
"Handles JavaScript-rendered pages and dynamic content that "
"web_fetch cannot reach. "
"Specify exactly what to extract via the `instruction` parameter."
)
@property
def parameters(self) -> dict[str, Any]:
return {
"type": "object",
"properties": {
"url": {
"type": "string",
"description": "The HTTP/HTTPS URL to navigate to.",
},
"instruction": {
"type": "string",
"description": (
"What to extract from the page. Be specific — e.g. "
"'Extract all pricing plans with features and prices', "
"'Get the main article text and author', "
"'List all navigation links'. "
"Defaults to extracting the main page content."
),
"default": "Extract the main content of this page.",
},
},
"required": ["url"],
}
@property
def requires_auth(self) -> bool:
return True
async def _execute(
self,
user_id: str | None, # noqa: ARG002
session: ChatSession,
**kwargs: Any,
) -> ToolResponseBase:
"""Navigate to a URL with a real browser and return extracted content."""
url: str = (kwargs.get("url") or "").strip()
instruction: str = (
kwargs.get("instruction") or "Extract the main content of this page."
)
session_id = session.session_id if session else None
if not url:
return ErrorResponse(
message="Please provide a URL to browse.",
error="missing_url",
session_id=session_id,
)
if not url.startswith(("http://", "https://")):
return ErrorResponse(
message="Only HTTP/HTTPS URLs are supported.",
error="invalid_url",
session_id=session_id,
)
api_key = os.environ.get("STAGEHAND_API_KEY")
project_id = os.environ.get("STAGEHAND_PROJECT_ID")
model_api_key = os.environ.get("ANTHROPIC_API_KEY")
if not api_key or not project_id:
return ErrorResponse(
message=(
"Web browsing is not configured on this platform. "
"STAGEHAND_API_KEY and STAGEHAND_PROJECT_ID are required."
),
error="not_configured",
session_id=session_id,
)
if not model_api_key:
return ErrorResponse(
message=(
"Web browsing is not configured: ANTHROPIC_API_KEY is required "
"for Stagehand's extraction model."
),
error="not_configured",
session_id=session_id,
)
# Lazy import — Stagehand is an optional heavy dependency.
# Importing here scopes any ImportError to this tool only, so other
# tools continue to register and work normally if Stagehand is absent.
try:
from stagehand import Stagehand # noqa: PLC0415
except ImportError:
return ErrorResponse(
message="Web browsing is not available: Stagehand is not installed.",
error="not_configured",
session_id=session_id,
)
# Apply the signal handler patch now that we know stagehand is present.
_patch_stagehand_once()
client: Any | None = None
try:
client = Stagehand(
api_key=api_key,
project_id=project_id,
model_name=_STAGEHAND_MODEL,
model_api_key=model_api_key,
)
await client.init()
page = client.page
assert page is not None, "Stagehand page is not initialized"
await page.goto(url, timeoutMs=_GOTO_TIMEOUT_MS)
result = await page.extract(instruction, timeoutMs=_EXTRACT_TIMEOUT_MS)
# Extract the text content from the Pydantic result model.
raw = result.model_dump().get("extraction", "")
content = str(raw) if raw else ""
truncated = len(content) > _MAX_CONTENT_CHARS
if truncated:
suffix = "\n\n[Content truncated]"
keep = max(0, _MAX_CONTENT_CHARS - len(suffix))
content = content[:keep] + suffix
return BrowseWebResponse(
message=f"Browsed {url}",
url=url,
content=content,
truncated=truncated,
session_id=session_id,
)
except Exception:
logger.exception("[browse_web] Failed for %s", url)
return ErrorResponse(
message="Failed to browse URL.",
error="browse_failed",
session_id=session_id,
)
finally:
if client is not None:
try:
await client.close()
except Exception:
pass

View File

@@ -1,486 +0,0 @@
"""Unit tests for BrowseWebTool.
All tests run without a running server / database. External dependencies
(Stagehand, Browserbase) are mocked via sys.modules injection so the suite
stays fast and deterministic.
"""
import sys
import threading
import uuid
from datetime import UTC, datetime
from unittest.mock import AsyncMock, MagicMock
import pytest
import backend.copilot.tools.browse_web as _browse_web_mod
from backend.copilot.model import ChatSession
from backend.copilot.tools.browse_web import (
_MAX_CONTENT_CHARS,
BrowseWebTool,
_patch_stagehand_once,
)
from backend.copilot.tools.models import BrowseWebResponse, ErrorResponse, ResponseType
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def make_session(user_id: str = "test-user") -> ChatSession:
return ChatSession(
session_id=str(uuid.uuid4()),
user_id=user_id,
messages=[],
usage=[],
started_at=datetime.now(UTC),
updated_at=datetime.now(UTC),
successful_agent_runs={},
successful_agent_schedules={},
)
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
@pytest.fixture(autouse=True)
def reset_stagehand_patch():
"""Reset the process-level _stagehand_patched flag before every test."""
_browse_web_mod._stagehand_patched = False
yield
_browse_web_mod._stagehand_patched = False
@pytest.fixture()
def env_vars(monkeypatch):
"""Inject the three env vars required by BrowseWebTool."""
monkeypatch.setenv("STAGEHAND_API_KEY", "test-api-key")
monkeypatch.setenv("STAGEHAND_PROJECT_ID", "test-project-id")
monkeypatch.setenv("ANTHROPIC_API_KEY", "test-anthropic-key")
@pytest.fixture()
def stagehand_mocks(monkeypatch):
"""Inject mock stagehand + stagehand.main into sys.modules.
Returns a dict with the mock objects so individual tests can
assert on calls or inject side-effects.
"""
# --- mock page ---
mock_result = MagicMock()
mock_result.model_dump.return_value = {"extraction": "Page content here"}
mock_page = AsyncMock()
mock_page.goto = AsyncMock(return_value=None)
mock_page.extract = AsyncMock(return_value=mock_result)
# --- mock client ---
mock_client = AsyncMock()
mock_client.page = mock_page
mock_client.init = AsyncMock(return_value=None)
mock_client.close = AsyncMock(return_value=None)
MockStagehand = MagicMock(return_value=mock_client)
# --- stagehand top-level module ---
mock_stagehand = MagicMock()
mock_stagehand.Stagehand = MockStagehand
# --- stagehand.main (needed by _patch_stagehand_once) ---
mock_main = MagicMock()
mock_main.Stagehand = MagicMock()
mock_main.Stagehand._register_signal_handlers = MagicMock()
monkeypatch.setitem(sys.modules, "stagehand", mock_stagehand)
monkeypatch.setitem(sys.modules, "stagehand.main", mock_main)
return {
"client": mock_client,
"page": mock_page,
"result": mock_result,
"MockStagehand": MockStagehand,
"mock_main": mock_main,
}
# ---------------------------------------------------------------------------
# 1. Tool metadata
# ---------------------------------------------------------------------------
class TestBrowseWebToolMetadata:
def test_name(self):
assert BrowseWebTool().name == "browse_web"
def test_requires_auth(self):
assert BrowseWebTool().requires_auth is True
def test_url_is_required_parameter(self):
params = BrowseWebTool().parameters
assert "url" in params["properties"]
assert "url" in params["required"]
def test_instruction_is_optional(self):
params = BrowseWebTool().parameters
assert "instruction" in params["properties"]
assert "instruction" not in params.get("required", [])
def test_registered_in_tool_registry(self):
from backend.copilot.tools import TOOL_REGISTRY
assert "browse_web" in TOOL_REGISTRY
assert isinstance(TOOL_REGISTRY["browse_web"], BrowseWebTool)
def test_response_type_enum_value(self):
assert ResponseType.BROWSE_WEB == "browse_web"
# ---------------------------------------------------------------------------
# 2. Input validation (no external deps)
# ---------------------------------------------------------------------------
class TestInputValidation:
    async def test_missing_url_returns_error(self):
        result = await BrowseWebTool()._execute(user_id="u1", session=make_session())
        assert isinstance(result, ErrorResponse)
        assert "url" in result.message.lower()

    async def test_empty_url_returns_error(self):
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url=""
        )
        assert isinstance(result, ErrorResponse)

    async def test_ftp_url_rejected(self):
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="ftp://example.com/file"
        )
        assert isinstance(result, ErrorResponse)
        assert "http" in result.message.lower()

    async def test_file_url_rejected(self):
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="file:///etc/passwd"
        )
        assert isinstance(result, ErrorResponse)

    async def test_javascript_url_rejected(self):
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="javascript:alert(1)"
        )
        assert isinstance(result, ErrorResponse)
# ---------------------------------------------------------------------------
# 3. Environment variable checks
# ---------------------------------------------------------------------------
class TestEnvVarChecks:
    async def test_missing_api_key(self, monkeypatch):
        monkeypatch.delenv("STAGEHAND_API_KEY", raising=False)
        monkeypatch.setenv("STAGEHAND_PROJECT_ID", "proj")
        monkeypatch.setenv("ANTHROPIC_API_KEY", "key")
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, ErrorResponse)
        assert result.error == "not_configured"

    async def test_missing_project_id(self, monkeypatch):
        monkeypatch.setenv("STAGEHAND_API_KEY", "key")
        monkeypatch.delenv("STAGEHAND_PROJECT_ID", raising=False)
        monkeypatch.setenv("ANTHROPIC_API_KEY", "key")
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, ErrorResponse)
        assert result.error == "not_configured"

    async def test_missing_anthropic_key(self, monkeypatch):
        monkeypatch.setenv("STAGEHAND_API_KEY", "key")
        monkeypatch.setenv("STAGEHAND_PROJECT_ID", "proj")
        monkeypatch.delenv("ANTHROPIC_API_KEY", raising=False)
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, ErrorResponse)
        assert result.error == "not_configured"
# ---------------------------------------------------------------------------
# 4. Stagehand absent (ImportError path)
# ---------------------------------------------------------------------------
class TestStagehandAbsent:
    async def test_returns_not_configured_error(self, env_vars, monkeypatch):
        """Blocking the stagehand import must return a graceful ErrorResponse."""
        # sys.modules entry set to None → Python raises ImportError on import
        monkeypatch.setitem(sys.modules, "stagehand", None)
        monkeypatch.setitem(sys.modules, "stagehand.main", None)
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, ErrorResponse)
        assert result.error == "not_configured"
        assert "not available" in result.message or "not installed" in result.message

    async def test_other_tools_unaffected_when_stagehand_absent(
        self, env_vars, monkeypatch
    ):
        """Registry import must not raise even when stagehand is blocked."""
        monkeypatch.setitem(sys.modules, "stagehand", None)
        # This import already happened at module load; just verify the registry exists
        from backend.copilot.tools import TOOL_REGISTRY

        assert "browse_web" in TOOL_REGISTRY
        assert "web_fetch" in TOOL_REGISTRY  # unrelated tool still present
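The `sys.modules`-set-to-`None` trick used above is standard CPython behavior: a `None` entry makes any subsequent import of that name raise `ImportError` ("import of X halted"). A quick self-contained demonstration with a made-up module name:

```python
import sys

# Block a (made-up) module name: a None entry halts the import machinery.
sys.modules["definitely_not_installed_mod"] = None

try:
    import definitely_not_installed_mod  # noqa: F401
    blocked = False
except ImportError:
    blocked = True

print(blocked)  # True
```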
# ---------------------------------------------------------------------------
# 5. Successful browse
# ---------------------------------------------------------------------------
class TestSuccessfulBrowse:
    async def test_returns_browse_web_response(self, env_vars, stagehand_mocks):
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, BrowseWebResponse)
        assert result.url == "https://example.com"
        assert result.content == "Page content here"
        assert result.truncated is False

    async def test_http_url_accepted(self, env_vars, stagehand_mocks):
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="http://example.com"
        )
        assert isinstance(result, BrowseWebResponse)

    async def test_session_id_propagated(self, env_vars, stagehand_mocks):
        session = make_session()
        result = await BrowseWebTool()._execute(
            user_id="u1", session=session, url="https://example.com"
        )
        assert isinstance(result, BrowseWebResponse)
        assert result.session_id == session.session_id

    async def test_custom_instruction_forwarded_to_extract(
        self, env_vars, stagehand_mocks
    ):
        await BrowseWebTool()._execute(
            user_id="u1",
            session=make_session(),
            url="https://example.com",
            instruction="Extract all pricing plans",
        )
        stagehand_mocks["page"].extract.assert_awaited_once()
        first_arg = stagehand_mocks["page"].extract.call_args[0][0]
        assert first_arg == "Extract all pricing plans"

    async def test_default_instruction_used_when_omitted(
        self, env_vars, stagehand_mocks
    ):
        await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        first_arg = stagehand_mocks["page"].extract.call_args[0][0]
        assert "main content" in first_arg.lower()

    async def test_explicit_timeouts_passed_to_stagehand(
        self, env_vars, stagehand_mocks
    ):
        from backend.copilot.tools.browse_web import (
            _EXTRACT_TIMEOUT_MS,
            _GOTO_TIMEOUT_MS,
        )

        await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        goto_kwargs = stagehand_mocks["page"].goto.call_args[1]
        extract_kwargs = stagehand_mocks["page"].extract.call_args[1]
        assert goto_kwargs.get("timeoutMs") == _GOTO_TIMEOUT_MS
        assert extract_kwargs.get("timeoutMs") == _EXTRACT_TIMEOUT_MS

    async def test_client_closed_after_success(self, env_vars, stagehand_mocks):
        await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        stagehand_mocks["client"].close.assert_awaited_once()
# ---------------------------------------------------------------------------
# 6. Truncation
# ---------------------------------------------------------------------------
class TestTruncation:
    async def test_short_content_not_truncated(self, env_vars, stagehand_mocks):
        stagehand_mocks["result"].model_dump.return_value = {"extraction": "short"}
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, BrowseWebResponse)
        assert result.truncated is False
        assert result.content == "short"

    async def test_oversized_content_is_truncated(self, env_vars, stagehand_mocks):
        big = "a" * (_MAX_CONTENT_CHARS + 1000)
        stagehand_mocks["result"].model_dump.return_value = {"extraction": big}
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, BrowseWebResponse)
        assert result.truncated is True
        assert result.content.endswith("[Content truncated]")

    async def test_truncated_content_never_exceeds_cap(self, env_vars, stagehand_mocks):
        """The final string must be ≤ _MAX_CONTENT_CHARS regardless of input size."""
        big = "b" * (_MAX_CONTENT_CHARS * 3)
        stagehand_mocks["result"].model_dump.return_value = {"extraction": big}
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, BrowseWebResponse)
        assert len(result.content) == _MAX_CONTENT_CHARS

    async def test_content_exactly_at_limit_not_truncated(
        self, env_vars, stagehand_mocks
    ):
        exact = "c" * _MAX_CONTENT_CHARS
        stagehand_mocks["result"].model_dump.return_value = {"extraction": exact}
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, BrowseWebResponse)
        assert result.truncated is False
        assert len(result.content) == _MAX_CONTENT_CHARS

    async def test_empty_extraction_returns_empty_content(
        self, env_vars, stagehand_mocks
    ):
        stagehand_mocks["result"].model_dump.return_value = {"extraction": ""}
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, BrowseWebResponse)
        assert result.content == ""
        assert result.truncated is False

    async def test_none_extraction_returns_empty_content(
        self, env_vars, stagehand_mocks
    ):
        stagehand_mocks["result"].model_dump.return_value = {"extraction": None}
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, BrowseWebResponse)
        assert result.content == ""
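The behaviour these tests pin down (a hard ceiling of `_MAX_CONTENT_CHARS`, a literal `[Content truncated]` suffix that counts toward the cap, and exactly-at-limit content left untouched) can be sketched as below; the constant value here is illustrative, not the module's real one:

```python
_MAX_CONTENT_CHARS = 120  # illustrative; the real module defines its own cap
_SUFFIX = "[Content truncated]"


def truncate_content(text: str) -> tuple[str, bool]:
    """Clamp text to _MAX_CONTENT_CHARS, reserving room for the suffix."""
    if len(text) <= _MAX_CONTENT_CHARS:
        return text, False  # exactly-at-limit content is NOT truncated
    return text[: _MAX_CONTENT_CHARS - len(_SUFFIX)] + _SUFFIX, True
```

Reserving `len(_SUFFIX)` characters before appending is what makes the final string land exactly at the cap instead of overshooting it.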
# ---------------------------------------------------------------------------
# 7. Error handling
# ---------------------------------------------------------------------------
class TestErrorHandling:
    async def test_stagehand_init_exception_returns_generic_error(
        self, env_vars, stagehand_mocks
    ):
        stagehand_mocks["client"].init.side_effect = RuntimeError("Connection refused")
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, ErrorResponse)
        assert result.error == "browse_failed"

    async def test_raw_exception_text_not_leaked_to_user(
        self, env_vars, stagehand_mocks
    ):
        """Internal error details must not appear in the user-facing message."""
        stagehand_mocks["client"].init.side_effect = RuntimeError("SECRET_TOKEN_abc123")
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, ErrorResponse)
        assert "SECRET_TOKEN_abc123" not in result.message
        assert result.message == "Failed to browse URL."

    async def test_goto_timeout_returns_error(self, env_vars, stagehand_mocks):
        stagehand_mocks["page"].goto.side_effect = TimeoutError("Navigation timed out")
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, ErrorResponse)
        assert result.error == "browse_failed"

    async def test_client_closed_after_exception(self, env_vars, stagehand_mocks):
        stagehand_mocks["page"].goto.side_effect = RuntimeError("boom")
        await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        stagehand_mocks["client"].close.assert_awaited_once()

    async def test_close_failure_does_not_propagate(self, env_vars, stagehand_mocks):
        """If close() itself raises, the tool must still return ErrorResponse."""
        stagehand_mocks["client"].init.side_effect = RuntimeError("init failed")
        stagehand_mocks["client"].close.side_effect = RuntimeError("close also failed")
        result = await BrowseWebTool()._execute(
            user_id="u1", session=make_session(), url="https://example.com"
        )
        assert isinstance(result, ErrorResponse)
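These tests encode a common sanitization pattern: keep the raw exception in server-side logs, and hand the user a fixed, detail-free message. A hedged sketch of that pattern (the function name is hypothetical):

```python
import logging

logger = logging.getLogger(__name__)


def sanitize_failure(exc: Exception) -> str:
    """Keep exception detail in server logs; never in the user message."""
    logger.error("browse_web failed: %s", exc)  # full detail stays in logs
    return "Failed to browse URL."  # fixed message, no internal detail
```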
# ---------------------------------------------------------------------------
# 8. Thread-safety of _patch_stagehand_once
# ---------------------------------------------------------------------------
class TestPatchStagehandOnce:
    def test_idempotent_double_call(self, stagehand_mocks):
        """_stagehand_patched transitions False→True exactly once."""
        assert _browse_web_mod._stagehand_patched is False
        _patch_stagehand_once()
        assert _browse_web_mod._stagehand_patched is True
        _patch_stagehand_once()  # second call — still True, not re-patched
        assert _browse_web_mod._stagehand_patched is True

    def test_safe_register_is_noop_in_worker_thread(self, stagehand_mocks):
        """The patched handler must silently do nothing when called from a worker."""
        _patch_stagehand_once()
        mock_main = sys.modules["stagehand.main"]
        safe_register = mock_main.Stagehand._register_signal_handlers
        errors: list[Exception] = []

        def run():
            try:
                safe_register(MagicMock())
            except Exception as exc:
                errors.append(exc)

        t = threading.Thread(target=run)
        t.start()
        t.join()
        assert errors == [], f"Worker thread raised: {errors}"

    def test_patched_flag_set_after_execution(self, env_vars, stagehand_mocks):
        """After a successful browse, _stagehand_patched must be True."""

        async def _run():
            return await BrowseWebTool()._execute(
                user_id="u1", session=make_session(), url="https://example.com"
            )

        import asyncio

        # asyncio.run() creates and closes its own loop; get_event_loop() is
        # deprecated for this use and fails on newer Pythons without a loop.
        asyncio.run(_run())
        assert _browse_web_mod._stagehand_patched is True
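Section 8 exercises an idempotent monkey-patch: CPython only allows `signal.signal()` from the main thread, so a library that registers signal handlers on construction breaks when instantiated inside a worker. A generic sketch of the pattern under test (names simplified; the real module patches `stagehand.main.Stagehand` and tracks `_stagehand_patched`):

```python
import threading

_patched = False


def patch_signal_registration_once(cls) -> None:
    """Wrap cls._register_signal_handlers so worker-thread calls are no-ops."""
    global _patched
    if _patched:
        return  # already patched: second call changes nothing
    original = cls._register_signal_handlers

    def _safe_register(self):
        # signal.signal() raises ValueError off the main thread; skip instead.
        if threading.current_thread() is not threading.main_thread():
            return
        original(self)

    cls._register_signal_handlers = _safe_register
    _patched = True
```

The module-level flag gives the False→True-exactly-once transition the first test checks, and the main-thread guard gives the silent worker-thread no-op the second test checks.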


@@ -41,8 +41,6 @@ class ResponseType(str, Enum):
     INPUT_VALIDATION_ERROR = "input_validation_error"
     # Web fetch
     WEB_FETCH = "web_fetch"
-    # Browser-based web browsing (JS-rendered pages)
-    BROWSE_WEB = "browse_web"
     # Code execution
     BASH_EXEC = "bash_exec"
     # Feature request types
@@ -440,15 +438,6 @@ class WebFetchResponse(ToolResponseBase):
     truncated: bool = False
-class BrowseWebResponse(ToolResponseBase):
-    """Response for browse_web tool."""
-    type: ResponseType = ResponseType.BROWSE_WEB
-    url: str
-    content: str
-    truncated: bool = False
 class BashExecResponse(ToolResponseBase):
     """Response for bash_exec tool."""


@@ -178,9 +178,13 @@ async def test_block_credit_reset(server: SpinTestServer):
     assert month2_balance == 1100  # Balance persists, no reset
     # Now test the refill behavior when balance is low
-    # Set balance below refill threshold
+    # Set balance below refill threshold and backdate updatedAt to month2 so
+    # the month3 refill check sees a different (month2 → month3) transition.
+    # Without the explicit updatedAt, Prisma sets it to real-world NOW which
+    # may share the same calendar month as the mocked month3, suppressing refill.
     await UserBalance.prisma().update(
-        where={"userId": DEFAULT_USER_ID}, data={"balance": 400}
+        where={"userId": DEFAULT_USER_ID},
+        data={"balance": 400, "updatedAt": month2},
     )
     # Create a month 2 transaction to update the last transaction time
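The comment above describes a refill gate keyed to calendar-month transitions: refill only fires when the stored `updatedAt` falls in a different month than "now", which is why the test must backdate it. A minimal sketch of such a check (hypothetical helper; the real logic lives in the credit model):

```python
from datetime import datetime


def month_changed(last_update: datetime, now: datetime) -> bool:
    """True only when the calendar month differs, e.g. month2 -> month3."""
    return (last_update.year, last_update.month) != (now.year, now.month)
```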


@@ -327,11 +327,16 @@ async def get_workspace_total_size(workspace_id: str) -> int:
     """
     Get the total size of all files in a workspace.
+    Queries Prisma directly (skipping Pydantic model conversion) and only
+    fetches the ``sizeBytes`` column to minimise data transfer.
     Args:
         workspace_id: The workspace ID
     Returns:
         Total size in bytes
     """
-    files = await list_workspace_files(workspace_id)
-    return sum(file.size_bytes for file in files)
+    files = await UserWorkspaceFile.prisma().find_many(
+        where={"workspaceId": workspace_id, "isDeleted": False},
+    )
+    return sum(f.sizeBytes for f in files)


@@ -413,6 +413,13 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
         description="Maximum file size in MB for workspace files (1-1024 MB)",
     )
+    max_workspace_storage_mb: int = Field(
+        default=500,
+        ge=1,
+        le=10240,
+        description="Maximum total workspace storage per user in MB.",
+    )
     # AutoMod configuration
     automod_enabled: bool = Field(
         default=False,
@@ -723,6 +730,9 @@ class Secrets(UpdateTrackingModel["Secrets"], BaseSettings):
     langfuse_host: str = Field(
         default="https://cloud.langfuse.com", description="Langfuse host URL"
     )
+    langfuse_tracing_environment: str = Field(
+        default="local", description="Tracing environment tag (local/dev/production)"
+    )
     # PostHog analytics
     posthog_api_key: str = Field(default="", description="PostHog API key")
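The new `max_workspace_storage_mb` setting caps total per-user workspace storage, so an upload should be rejected when existing usage plus the new file would exceed the cap. A sketch of that check (an assumed enforcement point; the function name and signature are illustrative):

```python
def within_storage_cap(
    current_usage_bytes: int, new_file_bytes: int, max_storage_mb: int = 500
) -> bool:
    """True when the upload still fits under the per-user storage cap."""
    cap_bytes = max_storage_mb * 1024 * 1024  # MB -> bytes
    return current_usage_bytes + new_file_bytes <= cap_bytes
```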


@@ -3230,6 +3230,39 @@ pydantic = ">=1.10.7,<3.0"
requests = ">=2,<3"
wrapt = ">=1.14,<2.0"
[[package]]
name = "langsmith"
version = "0.7.7"
description = "Client library to connect to the LangSmith Observability and Evaluation Platform."
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "langsmith-0.7.7-py3-none-any.whl", hash = "sha256:ef3d0aff77917bf3776368e90f387df5ffd7cb7cff11ece0ec4fd227e433b5de"},
{file = "langsmith-0.7.7.tar.gz", hash = "sha256:2294d3c4a5a8205ef38880c1c412d85322e6055858ae999ef6641c815995d437"},
]
[package.dependencies]
httpx = ">=0.23.0,<1"
orjson = {version = ">=3.9.14", markers = "platform_python_implementation != \"PyPy\""}
packaging = ">=23.2"
pydantic = ">=2,<3"
requests = ">=2.0.0"
requests-toolbelt = ">=1.0.0"
uuid-utils = ">=0.12.0,<1.0"
xxhash = ">=3.0.0"
zstandard = ">=0.23.0"
[package.extras]
claude-agent-sdk = ["claude-agent-sdk (>=0.1.0) ; python_version >= \"3.10\""]
google-adk = ["google-adk (>=1.0.0)", "wrapt (>=1.16.0)"]
langsmith-pyo3 = ["langsmith-pyo3 (>=0.1.0rc2)"]
openai-agents = ["openai-agents (>=0.0.3)"]
otel = ["opentelemetry-api (>=1.30.0)", "opentelemetry-exporter-otlp-proto-http (>=1.30.0)", "opentelemetry-sdk (>=1.30.0)"]
pytest = ["pytest (>=7.0.0)", "rich (>=13.9.4)", "vcrpy (>=7.0.0)"]
sandbox = ["websockets (>=15.0)"]
vcr = ["vcrpy (>=7.0.0)"]
[[package]]
name = "launchdarkly-eventsource"
version = "1.5.1"
@@ -7747,6 +7780,38 @@ h2 = ["h2 (>=4,<5)"]
socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
zstd = ["backports-zstd (>=1.0.0) ; python_version < \"3.14\""]
[[package]]
name = "uuid-utils"
version = "0.14.1"
description = "Fast, drop-in replacement for Python's uuid module, powered by Rust."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "uuid_utils-0.14.1-cp39-abi3-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:93a3b5dc798a54a1feb693f2d1cb4cf08258c32ff05ae4929b5f0a2ca624a4f0"},
{file = "uuid_utils-0.14.1-cp39-abi3-macosx_10_12_x86_64.whl", hash = "sha256:ccd65a4b8e83af23eae5e56d88034b2fe7264f465d3e830845f10d1591b81741"},
{file = "uuid_utils-0.14.1-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b56b0cacd81583834820588378e432b0696186683b813058b707aedc1e16c4b1"},
{file = "uuid_utils-0.14.1-cp39-abi3-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:bb3cf14de789097320a3c56bfdfdd51b1225d11d67298afbedee7e84e3837c96"},
{file = "uuid_utils-0.14.1-cp39-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:60e0854a90d67f4b0cc6e54773deb8be618f4c9bad98d3326f081423b5d14fae"},
{file = "uuid_utils-0.14.1-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ce6743ba194de3910b5feb1a62590cd2587e33a73ab6af8a01b642ceb5055862"},
{file = "uuid_utils-0.14.1-cp39-abi3-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:043fb58fde6cf1620a6c066382f04f87a8e74feb0f95a585e4ed46f5d44af57b"},
{file = "uuid_utils-0.14.1-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:c915d53f22945e55fe0d3d3b0b87fd965a57f5fd15666fd92d6593a73b1dd297"},
{file = "uuid_utils-0.14.1-cp39-abi3-musllinux_1_2_armv7l.whl", hash = "sha256:0972488e3f9b449e83f006ead5a0e0a33ad4a13e4462e865b7c286ab7d7566a3"},
{file = "uuid_utils-0.14.1-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:1c238812ae0c8ffe77d8d447a32c6dfd058ea4631246b08b5a71df586ff08531"},
{file = "uuid_utils-0.14.1-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:bec8f8ef627af86abf8298e7ec50926627e29b34fa907fcfbedb45aaa72bca43"},
{file = "uuid_utils-0.14.1-cp39-abi3-win32.whl", hash = "sha256:b54d6aa6252d96bac1fdbc80d26ba71bad9f220b2724d692ad2f2310c22ef523"},
{file = "uuid_utils-0.14.1-cp39-abi3-win_amd64.whl", hash = "sha256:fc27638c2ce267a0ce3e06828aff786f91367f093c80625ee21dad0208e0f5ba"},
{file = "uuid_utils-0.14.1-cp39-abi3-win_arm64.whl", hash = "sha256:b04cb49b42afbc4ff8dbc60cf054930afc479d6f4dd7f1ec3bbe5dbfdde06b7a"},
{file = "uuid_utils-0.14.1-pp311-pypy311_pp73-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:b197cd5424cf89fb019ca7f53641d05bfe34b1879614bed111c9c313b5574cd8"},
{file = "uuid_utils-0.14.1-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:12c65020ba6cb6abe1d57fcbfc2d0ea0506c67049ee031714057f5caf0f9bc9c"},
{file = "uuid_utils-0.14.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b5d2ad28063d422ccc2c28d46471d47b61a58de885d35113a8f18cb547e25bf"},
{file = "uuid_utils-0.14.1-pp311-pypy311_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:da2234387b45fde40b0fedfee64a0ba591caeea9c48c7698ab6e2d85c7991533"},
{file = "uuid_utils-0.14.1-pp311-pypy311_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:50fffc2827348c1e48972eed3d1c698959e63f9d030aa5dd82ba451113158a62"},
{file = "uuid_utils-0.14.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c1dbe718765f70f5b7f9b7f66b6a937802941b1cc56bcf642ce0274169741e01"},
{file = "uuid_utils-0.14.1-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:258186964039a8e36db10810c1ece879d229b01331e09e9030bc5dcabe231bd2"},
{file = "uuid_utils-0.14.1.tar.gz", hash = "sha256:9bfc95f64af80ccf129c604fb6b8ca66c6f256451e32bc4570f760e4309c9b69"},
]
[[package]]
name = "uvicorn"
version = "0.40.0"
@@ -8292,6 +8357,156 @@ cffi = ">=1.16.0"
[package.extras]
test = ["pytest"]
[[package]]
name = "xxhash"
version = "3.6.0"
description = "Python binding for xxHash"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "xxhash-3.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:87ff03d7e35c61435976554477a7f4cd1704c3596a89a8300d5ce7fc83874a71"},
{file = "xxhash-3.6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f572dfd3d0e2eb1a57511831cf6341242f5a9f8298a45862d085f5b93394a27d"},
{file = "xxhash-3.6.0-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:89952ea539566b9fed2bbd94e589672794b4286f342254fad28b149f9615fef8"},
{file = "xxhash-3.6.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:48e6f2ffb07a50b52465a1032c3cf1f4a5683f944acaca8a134a2f23674c2058"},
{file = "xxhash-3.6.0-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b5b848ad6c16d308c3ac7ad4ba6bede80ed5df2ba8ed382f8932df63158dd4b2"},
{file = "xxhash-3.6.0-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a034590a727b44dd8ac5914236a7b8504144447a9682586c3327e935f33ec8cc"},
{file = "xxhash-3.6.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8a8f1972e75ebdd161d7896743122834fe87378160c20e97f8b09166213bf8cc"},
{file = "xxhash-3.6.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:ee34327b187f002a596d7b167ebc59a1b729e963ce645964bbc050d2f1b73d07"},
{file = "xxhash-3.6.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:339f518c3c7a850dd033ab416ea25a692759dc7478a71131fe8869010d2b75e4"},
{file = "xxhash-3.6.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:bf48889c9630542d4709192578aebbd836177c9f7a4a2778a7d6340107c65f06"},
{file = "xxhash-3.6.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:5576b002a56207f640636056b4160a378fe36a58db73ae5c27a7ec8db35f71d4"},
{file = "xxhash-3.6.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:af1f3278bd02814d6dedc5dec397993b549d6f16c19379721e5a1d31e132c49b"},
{file = "xxhash-3.6.0-cp310-cp310-win32.whl", hash = "sha256:aed058764db109dc9052720da65fafe84873b05eb8b07e5e653597951af57c3b"},
{file = "xxhash-3.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:e82da5670f2d0d98950317f82a0e4a0197150ff19a6df2ba40399c2a3b9ae5fb"},
{file = "xxhash-3.6.0-cp310-cp310-win_arm64.whl", hash = "sha256:4a082ffff8c6ac07707fb6b671caf7c6e020c75226c561830b73d862060f281d"},
{file = "xxhash-3.6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b47bbd8cf2d72797f3c2772eaaac0ded3d3af26481a26d7d7d41dc2d3c46b04a"},
{file = "xxhash-3.6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2b6821e94346f96db75abaa6e255706fb06ebd530899ed76d32cd99f20dc52fa"},
{file = "xxhash-3.6.0-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:d0a9751f71a1a65ce3584e9cae4467651c7e70c9d31017fa57574583a4540248"},
{file = "xxhash-3.6.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8b29ee68625ab37b04c0b40c3fafdf24d2f75ccd778333cfb698f65f6c463f62"},
{file = "xxhash-3.6.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:6812c25fe0d6c36a46ccb002f40f27ac903bf18af9f6dd8f9669cb4d176ab18f"},
{file = "xxhash-3.6.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:4ccbff013972390b51a18ef1255ef5ac125c92dc9143b2d1909f59abc765540e"},
{file = "xxhash-3.6.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:297b7fbf86c82c550e12e8fb71968b3f033d27b874276ba3624ea868c11165a8"},
{file = "xxhash-3.6.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:dea26ae1eb293db089798d3973a5fc928a18fdd97cc8801226fae705b02b14b0"},
{file = "xxhash-3.6.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:7a0b169aafb98f4284f73635a8e93f0735f9cbde17bd5ec332480484241aaa77"},
{file = "xxhash-3.6.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:08d45aef063a4531b785cd72de4887766d01dc8f362a515693df349fdb825e0c"},
{file = "xxhash-3.6.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:929142361a48ee07f09121fe9e96a84950e8d4df3bb298ca5d88061969f34d7b"},
{file = "xxhash-3.6.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:51312c768403d8540487dbbfb557454cfc55589bbde6424456951f7fcd4facb3"},
{file = "xxhash-3.6.0-cp311-cp311-win32.whl", hash = "sha256:d1927a69feddc24c987b337ce81ac15c4720955b667fe9b588e02254b80446fd"},
{file = "xxhash-3.6.0-cp311-cp311-win_amd64.whl", hash = "sha256:26734cdc2d4ffe449b41d186bbeac416f704a482ed835d375a5c0cb02bc63fef"},
{file = "xxhash-3.6.0-cp311-cp311-win_arm64.whl", hash = "sha256:d72f67ef8bf36e05f5b6c65e8524f265bd61071471cd4cf1d36743ebeeeb06b7"},
{file = "xxhash-3.6.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:01362c4331775398e7bb34e3ab403bc9ee9f7c497bc7dee6272114055277dd3c"},
{file = "xxhash-3.6.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:b7b2df81a23f8cb99656378e72501b2cb41b1827c0f5a86f87d6b06b69f9f204"},
{file = "xxhash-3.6.0-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:dc94790144e66b14f67b10ac8ed75b39ca47536bf8800eb7c24b50271ea0c490"},
{file = "xxhash-3.6.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:93f107c673bccf0d592cdba077dedaf52fe7f42dcd7676eba1f6d6f0c3efffd2"},
{file = "xxhash-3.6.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2aa5ee3444c25b69813663c9f8067dcfaa2e126dc55e8dddf40f4d1c25d7effa"},
{file = "xxhash-3.6.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:f7f99123f0e1194fa59cc69ad46dbae2e07becec5df50a0509a808f90a0f03f0"},
{file = "xxhash-3.6.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:49e03e6fe2cac4a1bc64952dd250cf0dbc5ef4ebb7b8d96bce82e2de163c82a2"},
{file = "xxhash-3.6.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:bd17fede52a17a4f9a7bc4472a5867cb0b160deeb431795c0e4abe158bc784e9"},
{file = "xxhash-3.6.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:6fb5f5476bef678f69db04f2bd1efbed3030d2aba305b0fc1773645f187d6a4e"},
{file = "xxhash-3.6.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:843b52f6d88071f87eba1631b684fcb4b2068cd2180a0224122fe4ef011a9374"},
{file = "xxhash-3.6.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:7d14a6cfaf03b1b6f5f9790f76880601ccc7896aff7ab9cd8978a939c1eb7e0d"},
{file = "xxhash-3.6.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:418daf3db71e1413cfe211c2f9a528456936645c17f46b5204705581a45390ae"},
{file = "xxhash-3.6.0-cp312-cp312-win32.whl", hash = "sha256:50fc255f39428a27299c20e280d6193d8b63b8ef8028995323bf834a026b4fbb"},
{file = "xxhash-3.6.0-cp312-cp312-win_amd64.whl", hash = "sha256:c0f2ab8c715630565ab8991b536ecded9416d615538be8ecddce43ccf26cbc7c"},
{file = "xxhash-3.6.0-cp312-cp312-win_arm64.whl", hash = "sha256:eae5c13f3bc455a3bbb68bdc513912dc7356de7e2280363ea235f71f54064829"},
{file = "xxhash-3.6.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:599e64ba7f67472481ceb6ee80fa3bd828fd61ba59fb11475572cc5ee52b89ec"},
{file = "xxhash-3.6.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7d8b8aaa30fca4f16f0c84a5c8d7ddee0e25250ec2796c973775373257dde8f1"},
{file = "xxhash-3.6.0-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:d597acf8506d6e7101a4a44a5e428977a51c0fadbbfd3c39650cca9253f6e5a6"},
{file = "xxhash-3.6.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:858dc935963a33bc33490128edc1c12b0c14d9c7ebaa4e387a7869ecc4f3e263"},
{file = "xxhash-3.6.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:ba284920194615cb8edf73bf52236ce2e1664ccd4a38fdb543506413529cc546"},
{file = "xxhash-3.6.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:4b54219177f6c6674d5378bd862c6aedf64725f70dd29c472eaae154df1a2e89"},
{file = "xxhash-3.6.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:42c36dd7dbad2f5238950c377fcbf6811b1cdb1c444fab447960030cea60504d"},
{file = "xxhash-3.6.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f22927652cba98c44639ffdc7aaf35828dccf679b10b31c4ad72a5b530a18eb7"},
{file = "xxhash-3.6.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:b45fad44d9c5c119e9c6fbf2e1c656a46dc68e280275007bbfd3d572b21426db"},
{file = "xxhash-3.6.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:6f2580ffab1a8b68ef2b901cde7e55fa8da5e4be0977c68f78fc80f3c143de42"},
{file = "xxhash-3.6.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:40c391dd3cd041ebc3ffe6f2c862f402e306eb571422e0aa918d8070ba31da11"},
{file = "xxhash-3.6.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:f205badabde7aafd1a31e8ca2a3e5a763107a71c397c4481d6a804eb5063d8bd"},
{file = "xxhash-3.6.0-cp313-cp313-win32.whl", hash = "sha256:2577b276e060b73b73a53042ea5bd5203d3e6347ce0d09f98500f418a9fcf799"},
{file = "xxhash-3.6.0-cp313-cp313-win_amd64.whl", hash = "sha256:757320d45d2fbcce8f30c42a6b2f47862967aea7bf458b9625b4bbe7ee390392"},
{file = "xxhash-3.6.0-cp313-cp313-win_arm64.whl", hash = "sha256:457b8f85dec5825eed7b69c11ae86834a018b8e3df5e77783c999663da2f96d6"},
{file = "xxhash-3.6.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:a42e633d75cdad6d625434e3468126c73f13f7584545a9cf34e883aa1710e702"},
{file = "xxhash-3.6.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:568a6d743219e717b07b4e03b0a828ce593833e498c3b64752e0f5df6bfe84db"},
{file = "xxhash-3.6.0-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:bec91b562d8012dae276af8025a55811b875baace6af510412a5e58e3121bc54"},
{file = "xxhash-3.6.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:78e7f2f4c521c30ad5e786fdd6bae89d47a32672a80195467b5de0480aa97b1f"},
{file = "xxhash-3.6.0-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:3ed0df1b11a79856df5ffcab572cbd6b9627034c1c748c5566fa79df9048a7c5"},
{file = "xxhash-3.6.0-cp313-cp313t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0e4edbfc7d420925b0dd5e792478ed393d6e75ff8fc219a6546fb446b6a417b1"},
{file = "xxhash-3.6.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fba27a198363a7ef87f8c0f6b171ec36b674fe9053742c58dd7e3201c1ab30ee"},
{file = "xxhash-3.6.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:794fe9145fe60191c6532fa95063765529770edcdd67b3d537793e8004cabbfd"},
{file = "xxhash-3.6.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:6105ef7e62b5ac73a837778efc331a591d8442f8ef5c7e102376506cb4ae2729"},
{file = "xxhash-3.6.0-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:f01375c0e55395b814a679b3eea205db7919ac2af213f4a6682e01220e5fe292"},
{file = "xxhash-3.6.0-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:d706dca2d24d834a4661619dcacf51a75c16d65985718d6a7d73c1eeeb903ddf"},
{file = "xxhash-3.6.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:5f059d9faeacd49c0215d66f4056e1326c80503f51a1532ca336a385edadd033"},
{file = "xxhash-3.6.0-cp313-cp313t-win32.whl", hash = "sha256:1244460adc3a9be84731d72b8e80625788e5815b68da3da8b83f78115a40a7ec"},
{file = "xxhash-3.6.0-cp313-cp313t-win_amd64.whl", hash = "sha256:b1e420ef35c503869c4064f4a2f2b08ad6431ab7b229a05cce39d74268bca6b8"},
{file = "xxhash-3.6.0-cp313-cp313t-win_arm64.whl", hash = "sha256:ec44b73a4220623235f67a996c862049f375df3b1052d9899f40a6382c32d746"},
{file = "xxhash-3.6.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:a40a3d35b204b7cc7643cbcf8c9976d818cb47befcfac8bbefec8038ac363f3e"},
{file = "xxhash-3.6.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:a54844be970d3fc22630b32d515e79a90d0a3ddb2644d8d7402e3c4c8da61405"},
{file = "xxhash-3.6.0-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:016e9190af8f0a4e3741343777710e3d5717427f175adfdc3e72508f59e2a7f3"},
{file = "xxhash-3.6.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4f6f72232f849eb9d0141e2ebe2677ece15adfd0fa599bc058aad83c714bb2c6"},
{file = "xxhash-3.6.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:63275a8aba7865e44b1813d2177e0f5ea7eadad3dd063a21f7cf9afdc7054063"},
{file = "xxhash-3.6.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:3cd01fa2aa00d8b017c97eb46b9a794fbdca53fc14f845f5a328c71254b0abb7"},
{file = "xxhash-3.6.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0226aa89035b62b6a86d3c68df4d7c1f47a342b8683da2b60cedcddb46c4d95b"},
{file = "xxhash-3.6.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:c6e193e9f56e4ca4923c61238cdaced324f0feac782544eb4c6d55ad5cc99ddd"},
{file = "xxhash-3.6.0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:9176dcaddf4ca963d4deb93866d739a343c01c969231dbe21680e13a5d1a5bf0"},
{file = "xxhash-3.6.0-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:c1ce4009c97a752e682b897aa99aef84191077a9433eb237774689f14f8ec152"},
{file = "xxhash-3.6.0-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:8cb2f4f679b01513b7adbb9b1b2f0f9cdc31b70007eaf9d59d0878809f385b11"},
{file = "xxhash-3.6.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:653a91d7c2ab54a92c19ccf43508b6a555440b9be1bc8be553376778be7f20b5"},
{file = "xxhash-3.6.0-cp314-cp314-win32.whl", hash = "sha256:a756fe893389483ee8c394d06b5ab765d96e68fbbfe6fde7aa17e11f5720559f"},
{file = "xxhash-3.6.0-cp314-cp314-win_amd64.whl", hash = "sha256:39be8e4e142550ef69629c9cd71b88c90e9a5db703fecbcf265546d9536ca4ad"},
{file = "xxhash-3.6.0-cp314-cp314-win_arm64.whl", hash = "sha256:25915e6000338999236f1eb68a02a32c3275ac338628a7eaa5a269c401995679"},
{file = "xxhash-3.6.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:c5294f596a9017ca5a3e3f8884c00b91ab2ad2933cf288f4923c3fd4346cf3d4"},
{file = "xxhash-3.6.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:1cf9dcc4ab9cff01dfbba78544297a3a01dafd60f3bde4e2bfd016cf7e4ddc67"},
{file = "xxhash-3.6.0-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:01262da8798422d0685f7cef03b2bd3f4f46511b02830861df548d7def4402ad"},
{file = "xxhash-3.6.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:51a73fb7cb3a3ead9f7a8b583ffd9b8038e277cdb8cb87cf890e88b3456afa0b"},
{file = "xxhash-3.6.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b9c6df83594f7df8f7f708ce5ebeacfc69f72c9fbaaababf6cf4758eaada0c9b"},
{file = "xxhash-3.6.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:627f0af069b0ea56f312fd5189001c24578868643203bca1abbc2c52d3a6f3ca"},
{file = "xxhash-3.6.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:aa912c62f842dfd013c5f21a642c9c10cd9f4c4e943e0af83618b4a404d9091a"},
{file = "xxhash-3.6.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:b465afd7909db30168ab62afe40b2fcf79eedc0b89a6c0ab3123515dc0df8b99"},
{file = "xxhash-3.6.0-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:a881851cf38b0a70e7c4d3ce81fc7afd86fbc2a024f4cfb2a97cf49ce04b75d3"},
{file = "xxhash-3.6.0-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:9b3222c686a919a0f3253cfc12bb118b8b103506612253b5baeaac10d8027cf6"},
{file = "xxhash-3.6.0-cp314-cp314t-musllinux_1_2_s390x.whl", hash = "sha256:c5aa639bc113e9286137cec8fadc20e9cd732b2cc385c0b7fa673b84fc1f2a93"},
{file = "xxhash-3.6.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:5c1343d49ac102799905e115aee590183c3921d475356cb24b4de29a4bc56518"},
{file = "xxhash-3.6.0-cp314-cp314t-win32.whl", hash = "sha256:5851f033c3030dd95c086b4a36a2683c2ff4a799b23af60977188b057e467119"},
{file = "xxhash-3.6.0-cp314-cp314t-win_amd64.whl", hash = "sha256:0444e7967dac37569052d2409b00a8860c2135cff05502df4da80267d384849f"},
{file = "xxhash-3.6.0-cp314-cp314t-win_arm64.whl", hash = "sha256:bb79b1e63f6fd84ec778a4b1916dfe0a7c3fdb986c06addd5db3a0d413819d95"},
{file = "xxhash-3.6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7dac94fad14a3d1c92affb661021e1d5cbcf3876be5f5b4d90730775ccb7ac41"},
{file = "xxhash-3.6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6965e0e90f1f0e6cb78da568c13d4a348eeb7f40acfd6d43690a666a459458b8"},
{file = "xxhash-3.6.0-cp38-cp38-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:2ab89a6b80f22214b43d98693c30da66af910c04f9858dd39c8e570749593d7e"},
{file = "xxhash-3.6.0-cp38-cp38-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4903530e866b7a9c1eadfd3fa2fbe1b97d3aed4739a80abf506eb9318561c850"},
{file = "xxhash-3.6.0-cp38-cp38-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:4da8168ae52c01ac64c511d6f4a709479da8b7a4a1d7621ed51652f93747dffa"},
{file = "xxhash-3.6.0-cp38-cp38-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:97460eec202017f719e839a0d3551fbc0b2fcc9c6c6ffaa5af85bbd5de432788"},
{file = "xxhash-3.6.0-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:45aae0c9df92e7fa46fbb738737324a563c727990755ec1965a6a339ea10a1df"},
{file = "xxhash-3.6.0-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:0d50101e57aad86f4344ca9b32d091a2135a9d0a4396f19133426c88025b09f1"},
{file = "xxhash-3.6.0-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:9085e798c163ce310d91f8aa6b325dda3c2944c93c6ce1edb314030d4167cc65"},
{file = "xxhash-3.6.0-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:a87f271a33fad0e5bf3be282be55d78df3a45ae457950deb5241998790326f87"},
{file = "xxhash-3.6.0-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:9e040d3e762f84500961791fa3709ffa4784d4dcd7690afc655c095e02fff05f"},
{file = "xxhash-3.6.0-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:b0359391c3dad6de872fefb0cf5b69d55b0655c55ee78b1bb7a568979b2ce96b"},
{file = "xxhash-3.6.0-cp38-cp38-win32.whl", hash = "sha256:e4ff728a2894e7f436b9e94c667b0f426b9c74b71f900cf37d5468c6b5da0536"},
{file = "xxhash-3.6.0-cp38-cp38-win_amd64.whl", hash = "sha256:01be0c5b500c5362871fc9cfdf58c69b3e5c4f531a82229ddb9eb1eb14138004"},
{file = "xxhash-3.6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:cc604dc06027dbeb8281aeac5899c35fcfe7c77b25212833709f0bff4ce74d2a"},
{file = "xxhash-3.6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:277175a73900ad43a8caeb8b99b9604f21fe8d7c842f2f9061a364a7e220ddb7"},
{file = "xxhash-3.6.0-cp39-cp39-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:cfbc5b91397c8c2972fdac13fb3e4ed2f7f8ccac85cd2c644887557780a9b6e2"},
{file = "xxhash-3.6.0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2762bfff264c4e73c0e507274b40634ff465e025f0eaf050897e88ec8367575d"},
{file = "xxhash-3.6.0-cp39-cp39-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2f171a900d59d51511209f7476933c34a0c2c711078d3c80e74e0fe4f38680ec"},
{file = "xxhash-3.6.0-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:780b90c313348f030b811efc37b0fa1431163cb8db8064cf88a7936b6ce5f222"},
{file = "xxhash-3.6.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:18b242455eccdfcd1fa4134c431a30737d2b4f045770f8fe84356b3469d4b919"},
{file = "xxhash-3.6.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:a75ffc1bd5def584129774c158e108e5d768e10b75813f2b32650bb041066ed6"},
{file = "xxhash-3.6.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:1fc1ed882d1e8df932a66e2999429ba6cc4d5172914c904ab193381fba825360"},
{file = "xxhash-3.6.0-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:44e342e8cc11b4e79dae5c57f2fb6360c3c20cc57d32049af8f567f5b4bcb5f4"},
{file = "xxhash-3.6.0-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:c2f9ccd5c4be370939a2e17602fbc49995299203da72a3429db013d44d590e86"},
{file = "xxhash-3.6.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:02ea4cb627c76f48cd9fb37cf7ab22bd51e57e1b519807234b473faebe526796"},
{file = "xxhash-3.6.0-cp39-cp39-win32.whl", hash = "sha256:6551880383f0e6971dc23e512c9ccc986147ce7bfa1cd2e4b520b876c53e9f3d"},
{file = "xxhash-3.6.0-cp39-cp39-win_amd64.whl", hash = "sha256:7c35c4cdc65f2a29f34425c446f2f5cdcd0e3c34158931e1cc927ece925ab802"},
{file = "xxhash-3.6.0-cp39-cp39-win_arm64.whl", hash = "sha256:ffc578717a347baf25be8397cb10d2528802d24f94cfc005c0e44fef44b5cdd6"},
{file = "xxhash-3.6.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0f7b7e2ec26c1666ad5fc9dbfa426a6a3367ceaf79db5dd76264659d509d73b0"},
{file = "xxhash-3.6.0-pp311-pypy311_pp73-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:5dc1e14d14fa0f5789ec29a7062004b5933964bb9b02aae6622b8f530dc40296"},
{file = "xxhash-3.6.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:881b47fc47e051b37d94d13e7455131054b56749b91b508b0907eb07900d1c13"},
{file = "xxhash-3.6.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c6dc31591899f5e5666f04cc2e529e69b4072827085c1ef15294d91a004bc1bd"},
{file = "xxhash-3.6.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:15e0dac10eb9309508bfc41f7f9deaa7755c69e35af835db9cb10751adebc35d"},
{file = "xxhash-3.6.0.tar.gz", hash = "sha256:f0162a78b13a0d7617b2845b90c763339d1f1d82bb04a4b07f4ab535cc5e05d6"},
]
[[package]]
name = "yarl"
version = "1.22.0"
@@ -8625,4 +8840,4 @@ cffi = ["cffi (>=1.17,<2.0) ; platform_python_implementation != \"PyPy\" and pyt
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<3.14"
content-hash = "3869bc3fb8ea50e7101daffce13edbe563c8af568cb751adfa31fb9bb5c8318a"
content-hash = "e7863413fda5e0a8b236e39a4c37390b52ae8c2f572c77df732abbd4280312b6"


@@ -90,6 +90,7 @@ stagehand = "^0.5.1"
gravitas-md2gdocs = "^0.1.0"
posthog = "^7.6.0"
fpdf2 = "^2.8.6"
langsmith = "^0.7.7"
[tool.poetry.group.dev.dependencies]
aiohappyeyeballs = "^2.6.1"


@@ -7,7 +7,9 @@ import {
DropdownMenuTrigger,
} from "@/components/molecules/DropdownMenu/DropdownMenu";
import { SidebarProvider } from "@/components/ui/sidebar";
import { DotsThree } from "@phosphor-icons/react";
import { cn } from "@/lib/utils";
import { DotsThree, UploadSimple } from "@phosphor-icons/react";
import { useCallback, useRef, useState } from "react";
import { ChatContainer } from "./components/ChatContainer/ChatContainer";
import { ChatSidebar } from "./components/ChatSidebar/ChatSidebar";
import { DeleteChatDialog } from "./components/DeleteChatDialog/DeleteChatDialog";
@@ -17,6 +19,49 @@ import { ScaleLoader } from "./components/ScaleLoader/ScaleLoader";
import { useCopilotPage } from "./useCopilotPage";
export function CopilotPage() {
const [isDragging, setIsDragging] = useState(false);
const [droppedFiles, setDroppedFiles] = useState<File[]>([]);
const dragCounter = useRef(0);
const handleDroppedFilesConsumed = useCallback(() => {
setDroppedFiles([]);
}, []);
function handleDragEnter(e: React.DragEvent) {
e.preventDefault();
e.stopPropagation();
dragCounter.current += 1;
if (e.dataTransfer.types.includes("Files")) {
setIsDragging(true);
}
}
function handleDragOver(e: React.DragEvent) {
e.preventDefault();
e.stopPropagation();
}
function handleDragLeave(e: React.DragEvent) {
e.preventDefault();
e.stopPropagation();
dragCounter.current -= 1;
if (dragCounter.current === 0) {
setIsDragging(false);
}
}
function handleDrop(e: React.DragEvent) {
e.preventDefault();
e.stopPropagation();
dragCounter.current = 0;
setIsDragging(false);
const files = Array.from(e.dataTransfer.files);
if (files.length > 0) {
setDroppedFiles(files);
}
}
const {
sessionId,
messages,
@@ -29,6 +74,7 @@ export function CopilotPage() {
isLoadingSession,
isSessionError,
isCreatingSession,
isUploadingFiles,
isUserLoading,
isLoggedIn,
// Mobile drawer
@@ -63,8 +109,26 @@ export function CopilotPage() {
className="h-[calc(100vh-72px)] min-h-0"
>
{!isMobile && <ChatSidebar />}
<div className="relative flex h-full w-full flex-col overflow-hidden bg-[#f8f8f9] px-0">
<div
className="relative flex h-full w-full flex-col overflow-hidden bg-[#f8f8f9] px-0"
onDragEnter={handleDragEnter}
onDragOver={handleDragOver}
onDragLeave={handleDragLeave}
onDrop={handleDrop}
>
{isMobile && <MobileHeader onOpenDrawer={handleOpenDrawer} />}
{/* Drop overlay */}
<div
className={cn(
"pointer-events-none absolute inset-0 z-50 flex flex-col items-center justify-center gap-3 rounded-lg border-2 border-dashed border-violet-400 bg-violet-500/10 transition-opacity duration-150",
isDragging ? "opacity-100" : "opacity-0",
)}
>
<UploadSimple className="h-10 w-10 text-violet-500" weight="bold" />
<span className="text-lg font-medium text-violet-600">
Drop files here
</span>
</div>
<div className="flex-1 overflow-hidden">
<ChatContainer
messages={messages}
@@ -78,6 +142,9 @@ export function CopilotPage() {
onCreateSession={createSession}
onSend={onSend}
onStop={stop}
isUploadingFiles={isUploadingFiles}
droppedFiles={droppedFiles}
onDroppedFilesConsumed={handleDroppedFilesConsumed}
headerSlot={
isMobile && sessionId ? (
<div className="flex justify-end">

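The drag handlers in this diff keep a `dragCounter` ref because `dragenter`/`dragleave` fire once per child element the pointer crosses, so a plain boolean overlay flag would flicker; the overlay is only hidden when the counter returns to zero. A minimal standalone sketch of that depth-counter pattern (the class and method names here are hypothetical, not part of the diff; the `hasFiles` flag stands in for the `e.dataTransfer.types.includes("Files")` check):

```typescript
// Depth-counter pattern for drag overlays: dragenter/dragleave fire for
// every child element crossed, so we count enters minus leaves and only
// treat the drag as ended when the depth reaches zero.
class DragDepthTracker {
  private depth = 0;
  isDragging = false;

  // Mirrors handleDragEnter: bump the depth, show the overlay only for file drags.
  enter(hasFiles: boolean): void {
    this.depth += 1;
    if (hasFiles) this.isDragging = true;
  }

  // Mirrors handleDragLeave: hide the overlay only once all enters are balanced.
  leave(): void {
    this.depth -= 1;
    if (this.depth === 0) this.isDragging = false;
  }

  // Mirrors handleDrop: reset unconditionally, since no further leave events fire.
  drop(): void {
    this.depth = 0;
    this.isDragging = false;
  }
}
```

Moving over nested children produces paired enter/leave events, which is why the counter, not the latest event, decides visibility.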

@@ -18,9 +18,14 @@ export interface ChatContainerProps {
/** True when backend has an active stream but we haven't reconnected yet. */
isReconnecting?: boolean;
onCreateSession: () => void | Promise<string>;
onSend: (message: string) => void | Promise<void>;
onSend: (message: string, files?: File[]) => void | Promise<void>;
onStop: () => void;
isUploadingFiles?: boolean;
headerSlot?: ReactNode;
/** Files dropped onto the chat window. */
droppedFiles?: File[];
/** Called after droppedFiles have been consumed by ChatInput. */
onDroppedFilesConsumed?: () => void;
}
export const ChatContainer = ({
messages,
@@ -34,7 +39,10 @@ export const ChatContainer = ({
onCreateSession,
onSend,
onStop,
isUploadingFiles,
headerSlot,
droppedFiles,
onDroppedFilesConsumed,
}: ChatContainerProps) => {
const isBusy =
status === "streaming" ||
@@ -69,8 +77,11 @@ export const ChatContainer = ({
onSend={onSend}
disabled={isBusy}
isStreaming={isBusy}
isUploadingFiles={isUploadingFiles}
onStop={onStop}
placeholder="What else can I help with?"
droppedFiles={droppedFiles}
onDroppedFilesConsumed={onDroppedFilesConsumed}
/>
</motion.div>
</div>
@@ -80,6 +91,9 @@ export const ChatContainer = ({
isCreatingSession={isCreatingSession}
onCreateSession={onCreateSession}
onSend={onSend}
isUploadingFiles={isUploadingFiles}
droppedFiles={droppedFiles}
onDroppedFilesConsumed={onDroppedFilesConsumed}
/>
)}
</div>


@@ -1,46 +1,74 @@
import { Button } from "@/components/atoms/Button/Button";
import { cn } from "@/lib/utils";
import {
ArrowUpIcon,
CircleNotchIcon,
MicrophoneIcon,
StopIcon,
} from "@phosphor-icons/react";
import { ChangeEvent, useCallback } from "react";
PromptInputBody,
PromptInputButton,
PromptInputFooter,
PromptInputSubmit,
PromptInputTextarea,
PromptInputTools,
} from "@/components/ai-elements/prompt-input";
import { InputGroup } from "@/components/ui/input-group";
import { cn } from "@/lib/utils";
import { CircleNotchIcon, MicrophoneIcon } from "@phosphor-icons/react";
import { ChangeEvent, useEffect, useState } from "react";
import { AttachmentMenu } from "./components/AttachmentMenu";
import { FileChips } from "./components/FileChips";
import { RecordingIndicator } from "./components/RecordingIndicator";
import { useChatInput } from "./useChatInput";
import { useVoiceRecording } from "./useVoiceRecording";
export interface Props {
onSend: (message: string) => void | Promise<void>;
onSend: (message: string, files?: File[]) => void | Promise<void>;
disabled?: boolean;
isStreaming?: boolean;
isUploadingFiles?: boolean;
onStop?: () => void;
placeholder?: string;
className?: string;
inputId?: string;
/** Files dropped onto the chat window by the parent. */
droppedFiles?: File[];
/** Called after droppedFiles have been merged into internal state. */
onDroppedFilesConsumed?: () => void;
}
export function ChatInput({
onSend,
disabled = false,
isStreaming = false,
isUploadingFiles = false,
onStop,
placeholder = "Type your message...",
className,
inputId = "chat-input",
droppedFiles,
onDroppedFilesConsumed,
}: Props) {
const [files, setFiles] = useState<File[]>([]);
// Merge files dropped onto the chat window into internal state.
useEffect(() => {
if (droppedFiles && droppedFiles.length > 0) {
setFiles((prev) => [...prev, ...droppedFiles]);
onDroppedFilesConsumed?.();
}
}, [droppedFiles, onDroppedFilesConsumed]);
const hasFiles = files.length > 0;
const isBusy = disabled || isStreaming || isUploadingFiles;
const {
value,
setValue,
handleKeyDown: baseHandleKeyDown,
handleSubmit,
handleChange: baseHandleChange,
hasMultipleLines,
} = useChatInput({
onSend,
disabled: disabled || isStreaming,
maxRows: 4,
onSend: async (message: string) => {
await onSend(message, hasFiles ? files : undefined);
// Only clear files after successful send (onSend throws on failure)
setFiles([]);
},
disabled: isBusy,
canSendEmpty: hasFiles,
inputId,
});
@@ -55,63 +83,54 @@ export function ChatInput({
audioStream,
} = useVoiceRecording({
setValue,
disabled: disabled || isStreaming,
disabled: isBusy,
isStreaming,
value,
baseHandleKeyDown,
inputId,
});
// Block text changes when recording
const handleChange = useCallback(
(e: ChangeEvent<HTMLTextAreaElement>) => {
if (isRecording) return;
baseHandleChange(e);
},
[isRecording, baseHandleChange],
);
function handleChange(e: ChangeEvent<HTMLTextAreaElement>) {
if (isRecording) return;
baseHandleChange(e);
}
const canSend =
!disabled &&
(!!value.trim() || hasFiles) &&
!isRecording &&
!isTranscribing;
function handleFilesSelected(newFiles: File[]) {
setFiles((prev) => [...prev, ...newFiles]);
}
function handleRemoveFile(index: number) {
setFiles((prev) => prev.filter((_, i) => i !== index));
}
return (
<form onSubmit={handleSubmit} className={cn("relative flex-1", className)}>
<div className="relative">
<div
id={`${inputId}-wrapper`}
className={cn(
"relative overflow-hidden border bg-white shadow-sm",
"focus-within:ring-1",
isRecording
? "border-red-400 focus-within:border-red-400 focus-within:ring-red-400"
: "border-neutral-200 focus-within:border-zinc-400 focus-within:ring-zinc-400",
hasMultipleLines ? "rounded-xlarge" : "rounded-full",
)}
>
{!value && !isRecording && (
<div
className="pointer-events-none absolute inset-0 top-0.5 flex items-center justify-start pl-14 text-[1rem] text-zinc-400"
aria-hidden="true"
>
{isTranscribing ? "Transcribing..." : placeholder}
</div>
)}
<textarea
<InputGroup
className={cn(
"overflow-hidden has-[[data-slot=input-group-control]:focus-visible]:border-neutral-200 has-[[data-slot=input-group-control]:focus-visible]:ring-0",
isRecording &&
"border-red-400 ring-1 ring-red-400 has-[[data-slot=input-group-control]:focus-visible]:border-red-400 has-[[data-slot=input-group-control]:focus-visible]:ring-red-400",
)}
>
<FileChips
files={files}
onRemove={handleRemoveFile}
isUploading={isUploadingFiles}
/>
<PromptInputBody className="relative block w-full">
<PromptInputTextarea
id={inputId}
aria-label="Chat message input"
value={value}
onChange={handleChange}
onKeyDown={handleKeyDown}
disabled={isInputDisabled}
rows={1}
className={cn(
"w-full resize-none overflow-y-auto border-0 bg-transparent text-[1rem] leading-6 text-black",
"placeholder:text-zinc-400",
"focus:outline-none focus:ring-0",
"disabled:text-zinc-500",
hasMultipleLines
? "pb-6 pl-4 pr-4 pt-2"
: showMicButton
? "pb-4 pl-14 pr-14 pt-4"
: "pb-4 pl-4 pr-14 pt-4",
)}
placeholder={isTranscribing ? "Transcribing..." : placeholder}
/>
{isRecording && !value && (
<div className="pointer-events-none absolute inset-0 flex items-center justify-center">
@@ -121,67 +140,47 @@ export function ChatInput({
/>
</div>
)}
</div>
<span id="chat-input-hint" className="sr-only">
</PromptInputBody>
<span id={`${inputId}-hint`} className="sr-only">
Press Enter to send, Shift+Enter for new line, Space to record voice
</span>
{showMicButton && (
<div className="absolute bottom-[7px] left-2 flex items-center gap-1">
<Button
type="button"
variant="icon"
size="icon"
aria-label={isRecording ? "Stop recording" : "Start recording"}
onClick={toggleRecording}
disabled={disabled || isTranscribing || isStreaming}
className={cn(
isRecording
? "animate-pulse border-red-500 bg-red-500 text-white hover:border-red-600 hover:bg-red-600"
: isTranscribing
? "border-zinc-300 bg-zinc-100 text-zinc-400"
: "border-zinc-300 bg-white text-zinc-500 hover:border-zinc-400 hover:bg-zinc-50 hover:text-zinc-700",
isStreaming && "opacity-40",
)}
>
{isTranscribing ? (
<CircleNotchIcon className="h-4 w-4 animate-spin" />
) : (
<MicrophoneIcon className="h-4 w-4" weight="bold" />
)}
</Button>
</div>
)}
<PromptInputFooter>
<PromptInputTools>
<AttachmentMenu
onFilesSelected={handleFilesSelected}
disabled={isBusy}
/>
{showMicButton && (
<PromptInputButton
aria-label={isRecording ? "Stop recording" : "Start recording"}
onClick={toggleRecording}
disabled={disabled || isTranscribing || isStreaming}
className={cn(
"size-[2.625rem] rounded-[96px] border border-zinc-300 bg-transparent text-black hover:border-zinc-600 hover:bg-zinc-100",
isRecording &&
"animate-pulse border-red-500 bg-red-500 text-white hover:border-red-600 hover:bg-red-600",
isTranscribing && "bg-zinc-100 text-zinc-400",
isStreaming && "opacity-40",
)}
>
{isTranscribing ? (
<CircleNotchIcon className="h-4 w-4 animate-spin" />
) : (
<MicrophoneIcon className="h-4 w-4" weight="bold" />
)}
</PromptInputButton>
)}
</PromptInputTools>
<div className="absolute bottom-[7px] right-2 flex items-center gap-1">
{isStreaming ? (
<Button
type="button"
variant="icon"
size="icon"
aria-label="Stop generating"
onClick={onStop}
className="border-red-600 bg-red-600 text-white hover:border-red-800 hover:bg-red-800"
>
<StopIcon className="h-4 w-4" weight="bold" />
</Button>
<PromptInputSubmit status="streaming" onStop={onStop} />
) : (
<Button
type="submit"
variant="icon"
size="icon"
aria-label="Send message"
className={cn(
"border-zinc-800 bg-zinc-800 text-white hover:border-zinc-900 hover:bg-zinc-900",
(disabled || !value.trim() || isRecording) && "opacity-20",
)}
disabled={disabled || !value.trim() || isRecording}
>
<ArrowUpIcon className="h-4 w-4" weight="bold" />
</Button>
<PromptInputSubmit disabled={!canSend} />
)}
</div>
</div>
</PromptInputFooter>
</InputGroup>
</form>
);
}


@@ -0,0 +1,55 @@
"use client";
import { Button } from "@/components/atoms/Button/Button";
import { cn } from "@/lib/utils";
import { Plus as PlusIcon } from "@phosphor-icons/react";
import { useRef } from "react";
interface Props {
onFilesSelected: (files: File[]) => void;
disabled?: boolean;
}
export function AttachmentMenu({ onFilesSelected, disabled }: Props) {
const fileInputRef = useRef<HTMLInputElement>(null);
function handleClick() {
fileInputRef.current?.click();
}
function handleFileChange(e: React.ChangeEvent<HTMLInputElement>) {
const files = Array.from(e.target.files ?? []);
if (files.length > 0) {
onFilesSelected(files);
}
// Reset so the same file can be re-selected
e.target.value = "";
}
return (
<>
<input
ref={fileInputRef}
type="file"
multiple
className="hidden"
onChange={handleFileChange}
tabIndex={-1}
/>
<Button
type="button"
variant="icon"
size="icon"
aria-label="Attach file"
disabled={disabled}
onClick={handleClick}
className={cn(
"border-zinc-300 bg-white text-zinc-500 hover:border-zinc-400 hover:bg-zinc-50 hover:text-zinc-700",
disabled && "opacity-40",
)}
>
<PlusIcon className="h-4 w-4" weight="bold" />
</Button>
</>
);
}


@@ -0,0 +1,45 @@
"use client";
import { cn } from "@/lib/utils";
import {
CircleNotch as CircleNotchIcon,
X as XIcon,
} from "@phosphor-icons/react";
interface Props {
files: File[];
onRemove: (index: number) => void;
isUploading?: boolean;
}
export function FileChips({ files, onRemove, isUploading }: Props) {
if (files.length === 0) return null;
return (
<div className="flex w-full flex-wrap gap-2 px-3 pb-2 pt-1">
{files.map((file, index) => (
<span
key={`${file.name}-${file.size}-${index}`}
className={cn(
"inline-flex items-center gap-1 rounded-full bg-zinc-100 px-3 py-1 text-sm text-zinc-700",
isUploading && "opacity-70",
)}
>
<span className="max-w-[160px] truncate">{file.name}</span>
{isUploading ? (
<CircleNotchIcon className="ml-0.5 h-3 w-3 animate-spin text-zinc-400" />
) : (
<button
type="button"
aria-label={`Remove ${file.name}`}
onClick={() => onRemove(index)}
className="ml-0.5 rounded-full p-0.5 text-zinc-400 transition-colors hover:bg-zinc-200 hover:text-zinc-600"
>
<XIcon className="h-3 w-3" weight="bold" />
</button>
)}
</span>
))}
</div>
);
}


@@ -1,26 +1,20 @@
import {
ChangeEvent,
FormEvent,
KeyboardEvent,
useEffect,
useState,
} from "react";
import { ChangeEvent, FormEvent, useEffect, useState } from "react";
interface Args {
onSend: (message: string) => void;
disabled?: boolean;
maxRows?: number;
/** Allow sending when text is empty (e.g. when files are attached). */
canSendEmpty?: boolean;
inputId?: string;
}
export function useChatInput({
onSend,
disabled = false,
maxRows = 5,
canSendEmpty = false,
inputId = "chat-input",
}: Args) {
const [value, setValue] = useState("");
const [hasMultipleLines, setHasMultipleLines] = useState(false);
const [isSending, setIsSending] = useState(false);
useEffect(
@@ -40,98 +34,18 @@ export function useChatInput({
[disabled, inputId],
);
useEffect(() => {
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
const wrapper = document.getElementById(
`${inputId}-wrapper`,
) as HTMLDivElement;
if (!textarea || !wrapper) return;
const isEmpty = !value.trim();
const lines = value.split("\n").length;
const hasExplicitNewlines = lines > 1;
const computedStyle = window.getComputedStyle(textarea);
const lineHeight = parseInt(computedStyle.lineHeight, 10);
const paddingTop = parseInt(computedStyle.paddingTop, 10);
const paddingBottom = parseInt(computedStyle.paddingBottom, 10);
const singleLinePadding = paddingTop + paddingBottom;
textarea.style.height = "auto";
const scrollHeight = textarea.scrollHeight;
const singleLineHeight = lineHeight + singleLinePadding;
const isMultiLine =
hasExplicitNewlines || scrollHeight > singleLineHeight + 2;
setHasMultipleLines(isMultiLine);
if (isEmpty) {
wrapper.style.height = `${singleLineHeight}px`;
wrapper.style.maxHeight = "";
textarea.style.height = `${singleLineHeight}px`;
textarea.style.maxHeight = "";
textarea.style.overflowY = "hidden";
return;
}
if (isMultiLine) {
const wrapperMaxHeight = 196;
const currentMultilinePadding = paddingTop + paddingBottom;
const contentMaxHeight = wrapperMaxHeight - currentMultilinePadding;
const minMultiLineHeight = lineHeight * 2 + currentMultilinePadding;
const contentHeight = scrollHeight;
const targetWrapperHeight = Math.min(
Math.max(contentHeight + currentMultilinePadding, minMultiLineHeight),
wrapperMaxHeight,
);
wrapper.style.height = `${targetWrapperHeight}px`;
wrapper.style.maxHeight = `${wrapperMaxHeight}px`;
textarea.style.height = `${contentHeight}px`;
textarea.style.maxHeight = `${contentMaxHeight}px`;
textarea.style.overflowY =
contentHeight > contentMaxHeight ? "auto" : "hidden";
} else {
wrapper.style.height = `${singleLineHeight}px`;
wrapper.style.maxHeight = "";
textarea.style.height = `${singleLineHeight}px`;
textarea.style.maxHeight = "";
textarea.style.overflowY = "hidden";
}
}, [value, maxRows, inputId]);
async function handleSend() {
if (disabled || isSending || !value.trim()) return;
if (disabled || isSending || (!value.trim() && !canSendEmpty)) return;
setIsSending(true);
try {
await onSend(value.trim());
setValue("");
setHasMultipleLines(false);
const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
const wrapper = document.getElementById(
`${inputId}-wrapper`,
) as HTMLDivElement;
if (textarea) {
textarea.style.height = "auto";
}
if (wrapper) {
wrapper.style.height = "";
wrapper.style.maxHeight = "";
}
} finally {
setIsSending(false);
}
}
function handleKeyDown(event: KeyboardEvent<HTMLTextAreaElement>) {
if (event.key === "Enter" && !event.shiftKey) {
event.preventDefault();
void handleSend();
}
}
function handleSubmit(e: FormEvent<HTMLFormElement>) {
e.preventDefault();
void handleSend();
@@ -144,11 +58,9 @@ export function useChatInput({
return {
value,
setValue,
handleKeyDown,
handleSend,
handleSubmit,
handleChange,
hasMultipleLines,
isSending,
};
}
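The updated guard in `handleSend` allows a file-only send through the new `canSendEmpty` option: `if (disabled || isSending || (!value.trim() && !canSendEmpty)) return;`. A standalone sketch of that predicate, using a hypothetical `canSubmit` helper that is not part of the diff:

```typescript
// Hypothetical pure form of the handleSend guard: a send is allowed only
// when input is enabled, no send is in flight, and there is either
// non-whitespace text or canSendEmpty (set by ChatInput when files are attached).
interface SendGateArgs {
  disabled: boolean;
  isSending: boolean;
  value: string;
  canSendEmpty: boolean;
}

function canSubmit({ disabled, isSending, value, canSendEmpty }: SendGateArgs): boolean {
  if (disabled || isSending) return false;
  return value.trim().length > 0 || canSendEmpty;
}
```

Keeping the condition in this shape makes the file-only case explicit: attachments flip `canSendEmpty` rather than faking non-empty text.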


@@ -14,7 +14,6 @@ interface Args {
disabled?: boolean;
isStreaming?: boolean;
value: string;
baseHandleKeyDown: (event: KeyboardEvent<HTMLTextAreaElement>) => void;
inputId?: string;
}
@@ -23,7 +22,6 @@ export function useVoiceRecording({
disabled = false,
isStreaming = false,
value,
baseHandleKeyDown,
inputId,
}: Args) {
const [isRecording, setIsRecording] = useState(false);
@@ -237,9 +235,9 @@ export function useVoiceRecording({
event.preventDefault();
return;
}
baseHandleKeyDown(event);
// Let PromptInputTextarea handle remaining keys (Enter → submit, etc.)
},
[value, isTranscribing, stopRecording, startRecording, baseHandleKeyDown],
[value, isTranscribing, stopRecording, startRecording],
);
const showMicButton = isSupported;


@@ -1,188 +1,16 @@
import { getGetWorkspaceDownloadFileByIdUrl } from "@/app/api/__generated__/endpoints/workspace/workspace";
import {
Conversation,
ConversationContent,
ConversationScrollButton,
} from "@/components/ai-elements/conversation";
import {
Message,
MessageContent,
MessageResponse,
} from "@/components/ai-elements/message";
import { Message, MessageContent } from "@/components/ai-elements/message";
import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
import { ErrorCard } from "@/components/molecules/ErrorCard/ErrorCard";
import { ToolUIPart, UIDataTypes, UIMessage, UITools } from "ai";
import { useEffect, useState } from "react";
import { CreateAgentTool } from "../../tools/CreateAgent/CreateAgent";
import { EditAgentTool } from "../../tools/EditAgent/EditAgent";
import {
CreateFeatureRequestTool,
SearchFeatureRequestsTool,
} from "../../tools/FeatureRequests/FeatureRequests";
import { FindAgentsTool } from "../../tools/FindAgents/FindAgents";
import { FindBlocksTool } from "../../tools/FindBlocks/FindBlocks";
import { RunAgentTool } from "../../tools/RunAgent/RunAgent";
import { RunBlockTool } from "../../tools/RunBlock/RunBlock";
import { SearchDocsTool } from "../../tools/SearchDocs/SearchDocs";
import { GenericTool } from "../../tools/GenericTool/GenericTool";
import { ViewAgentOutputTool } from "../../tools/ViewAgentOutput/ViewAgentOutput";
import { FileUIPart, UIDataTypes, UIMessage, UITools } from "ai";
import { MessageAttachments } from "./components/MessageAttachments";
import { MessagePartRenderer } from "./components/MessagePartRenderer";
import { ThinkingIndicator } from "./components/ThinkingIndicator";
// ---------------------------------------------------------------------------
// Special text parsing (error markers, workspace URLs, etc.)
// ---------------------------------------------------------------------------
// Special message prefixes for text-based markers (set by backend)
const COPILOT_ERROR_PREFIX = "[COPILOT_ERROR]";
const COPILOT_SYSTEM_PREFIX = "[COPILOT_SYSTEM]";
type MarkerType = "error" | "system" | null;
/**
* Parse special markers from message content (error, system).
*
* Detects markers added by the backend for special rendering:
* - `[COPILOT_ERROR] message` → ErrorCard
* - `[COPILOT_SYSTEM] message` → System info message
*
* Returns marker type, marker text, and cleaned text.
*/
function parseSpecialMarkers(text: string): {
markerType: MarkerType;
markerText: string;
cleanText: string;
} {
// Check for error marker
const errorMatch = text.match(
new RegExp(`\\${COPILOT_ERROR_PREFIX}\\s*(.+?)$`, "s"),
);
if (errorMatch) {
return {
markerType: "error",
markerText: errorMatch[1].trim(),
cleanText: text.replace(errorMatch[0], "").trim(),
};
}
// Check for system marker
const systemMatch = text.match(
new RegExp(`\\${COPILOT_SYSTEM_PREFIX}\\s*(.+?)$`, "s"),
);
if (systemMatch) {
return {
markerType: "system",
markerText: systemMatch[1].trim(),
cleanText: text.replace(systemMatch[0], "").trim(),
};
}
return { markerType: null, markerText: "", cleanText: text };
}
/**
* Resolve workspace:// URLs in markdown text to proxy download URLs.
*
* Handles both image syntax `![alt](workspace://id#mime)` and regular link
* syntax `[text](workspace://id)`. For images the MIME type hash fragment is
* inspected so that videos can be rendered with a `<video>` element via the
* custom img component.
*/
function resolveWorkspaceUrls(text: string): string {
// Handle image links: ![alt](workspace://id#mime)
let resolved = text.replace(
/!\[([^\]]*)\]\(workspace:\/\/([^)#\s]+)(?:#([^)#\s]*))?\)/g,
(_match, alt: string, fileId: string, mimeHint?: string) => {
const apiPath = getGetWorkspaceDownloadFileByIdUrl(fileId);
const url = `/api/proxy${apiPath}`;
if (mimeHint?.startsWith("video/")) {
return `![video:${alt || "Video"}](${url})`;
}
return `![${alt || "Image"}](${url})`;
},
);
// Handle regular links: [text](workspace://id) — without the leading "!"
// These are blocked by Streamdown's rehype-harden sanitizer because
// "workspace://" is not in the allowed URL-scheme whitelist, which causes
// "[blocked]" to appear next to the link text.
// Use an absolute URL so Streamdown's "Copy link" button copies the full
// URL (including host) rather than just the path.
resolved = resolved.replace(
/(?<!!)\[([^\]]*)\]\(workspace:\/\/([^)#\s]+)(?:#[^)#\s]*)?\)/g,
(_match, linkText: string, fileId: string) => {
const apiPath = getGetWorkspaceDownloadFileByIdUrl(fileId);
const origin =
typeof window !== "undefined" ? window.location.origin : "";
const url = `${origin}/api/proxy${apiPath}`;
return `[${linkText || "Download file"}](${url})`;
},
);
return resolved;
}
/**
* Custom img component for Streamdown that renders <video> elements
* for workspace video files (detected via "video:" alt-text prefix).
* Falls back to <video> when an <img> fails to load for workspace files.
*/
function WorkspaceMediaImage(props: React.JSX.IntrinsicElements["img"]) {
const { src, alt, ...rest } = props;
const [imgFailed, setImgFailed] = useState(false);
const isWorkspace = src?.includes("/workspace/files/") ?? false;
if (!src) return null;
if (alt?.startsWith("video:") || (imgFailed && isWorkspace)) {
return (
<span className="my-2 inline-block">
<video
controls
className="h-auto max-w-full rounded-md border border-zinc-200"
preload="metadata"
>
<source src={src} />
Your browser does not support the video tag.
</video>
</span>
);
}
return (
// eslint-disable-next-line @next/next/no-img-element
<img
src={src}
alt={alt || "Image"}
className="h-auto max-w-full rounded-md border border-zinc-200"
loading="lazy"
onError={() => {
if (isWorkspace) setImgFailed(true);
}}
{...rest}
/>
);
}
/** Stable components override for Streamdown (avoids re-creating on every render). */
const STREAMDOWN_COMPONENTS = { img: WorkspaceMediaImage };
const THINKING_PHRASES = [
"Thinking...",
"Considering this...",
"Working through this...",
"Analyzing your request...",
"Reasoning...",
"Looking into it...",
"Processing your request...",
"Mulling this over...",
"Piecing it together...",
"On it...",
];
function getRandomPhrase() {
return THINKING_PHRASES[Math.floor(Math.random() * THINKING_PHRASES.length)];
}
interface ChatMessagesContainerProps {
interface Props {
messages: UIMessage<unknown, UIDataTypes, UITools>[];
status: string;
error: Error | undefined;
@@ -190,15 +18,13 @@ interface ChatMessagesContainerProps {
headerSlot?: React.ReactNode;
}
export const ChatMessagesContainer = ({
export function ChatMessagesContainer({
messages,
status,
error,
isLoading,
headerSlot,
}: ChatMessagesContainerProps) => {
const [thinkingPhrase, setThinkingPhrase] = useState(getRandomPhrase);
}: Props) {
const lastMessage = messages[messages.length - 1];
// Determine if something is visibly "in-flight" in the last assistant message:
@@ -230,12 +56,6 @@ export const ChatMessagesContainer = ({
const showThinking =
status === "submitted" || (status === "streaming" && !hasInflight);
useEffect(() => {
if (showThinking) {
setThinkingPhrase(getRandomPhrase());
}
}, [showThinking]);
return (
<Conversation className="min-h-0 flex-1">
<ConversationContent className="flex flex-1 flex-col gap-6 px-3 py-6">
@@ -253,6 +73,10 @@ export const ChatMessagesContainer = ({
messageIndex === messages.length - 1 &&
message.role === "assistant";
const fileParts = message.parts.filter(
(p): p is FileUIPart => p.type === "file",
);
return (
<Message from={message.role} key={message.id}>
<MessageContent
@@ -262,145 +86,31 @@ export const ChatMessagesContainer = ({
"group-[.is-assistant]:bg-transparent group-[.is-assistant]:text-slate-900"
}
>
{message.parts.map((part, i) => {
switch (part.type) {
case "text": {
// Check for special markers (error, system)
const { markerType, markerText, cleanText } =
parseSpecialMarkers(part.text);
if (markerType === "error") {
return (
<ErrorCard
key={`${message.id}-${i}`}
responseError={{ message: markerText }}
context="execution"
/>
);
}
if (markerType === "system") {
return (
<div
key={`${message.id}-${i}`}
className="my-2 rounded-lg bg-neutral-100 px-3 py-2 text-sm italic text-neutral-600"
>
{markerText}
</div>
);
}
return (
<MessageResponse
key={`${message.id}-${i}`}
components={STREAMDOWN_COMPONENTS}
>
{resolveWorkspaceUrls(cleanText)}
</MessageResponse>
);
}
case "tool-find_block":
return (
<FindBlocksTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-find_agent":
case "tool-find_library_agent":
return (
<FindAgentsTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-search_docs":
case "tool-get_doc_page":
return (
<SearchDocsTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-run_block":
return (
<RunBlockTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-run_agent":
case "tool-schedule_agent":
return (
<RunAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-create_agent":
return (
<CreateAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-edit_agent":
return (
<EditAgentTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-view_agent_output":
return (
<ViewAgentOutputTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-search_feature_requests":
return (
<SearchFeatureRequestsTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
case "tool-create_feature_request":
return (
<CreateFeatureRequestTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
default:
// Render a generic tool indicator for SDK built-in
// tools (Read, Glob, Grep, etc.) or any unrecognized tool
if (part.type.startsWith("tool-")) {
return (
<GenericTool
key={`${message.id}-${i}`}
part={part as ToolUIPart}
/>
);
}
return null;
}
})}
{message.parts.map((part, i) => (
<MessagePartRenderer
key={`${message.id}-${i}`}
part={part}
messageID={message.id}
partIndex={i}
/>
))}
{isLastAssistant && showThinking && (
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
{thinkingPhrase}
</span>
<ThinkingIndicator active={showThinking} />
)}
</MessageContent>
{fileParts.length > 0 && (
<MessageAttachments
files={fileParts}
isUser={message.role === "user"}
/>
)}
</Message>
);
})}
{showThinking && lastMessage?.role !== "assistant" && (
<Message from="assistant">
<MessageContent className="text-[1rem] leading-relaxed">
<span className="inline-block animate-shimmer bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-[length:200%_100%] bg-clip-text text-transparent">
{thinkingPhrase}
</span>
<ThinkingIndicator active={showThinking} />
</MessageContent>
</Message>
)}
@@ -419,4 +129,4 @@ export const ChatMessagesContainer = ({
<ConversationScrollButton />
</Conversation>
);
};
}


@@ -0,0 +1,84 @@
import {
FileText as FileTextIcon,
DownloadSimple as DownloadIcon,
} from "@phosphor-icons/react";
import type { FileUIPart } from "ai";
import {
ContentCard,
ContentCardHeader,
ContentCardTitle,
ContentCardSubtitle,
} from "../../ToolAccordion/AccordionContent";
interface Props {
files: FileUIPart[];
isUser?: boolean;
}
export function MessageAttachments({ files, isUser }: Props) {
if (files.length === 0) return null;
return (
<div className="mt-2 flex flex-col gap-2">
{files.map((file, i) =>
isUser ? (
<div
key={`${file.filename}-${i}`}
className="min-w-0 rounded-lg border border-purple-300 bg-purple-100 p-3"
>
<div className="flex items-start justify-between gap-2">
<div className="flex min-w-0 items-center gap-2">
<FileTextIcon className="h-5 w-5 shrink-0 text-neutral-400" />
<div className="min-w-0">
<p className="truncate text-sm font-medium text-zinc-800">
{file.filename || "file"}
</p>
<p className="mt-0.5 truncate font-mono text-xs text-zinc-800">
{file.mediaType || "file"}
</p>
</div>
</div>
{file.url && (
<a
href={file.url}
download
aria-label="Download file"
className="shrink-0 text-purple-400 hover:text-purple-600"
>
<DownloadIcon className="h-5 w-5" />
</a>
)}
</div>
</div>
) : (
<ContentCard key={`${file.filename}-${i}`}>
<ContentCardHeader
action={
file.url ? (
<a
href={file.url}
download
aria-label="Download file"
className="shrink-0 text-neutral-400 hover:text-neutral-600"
>
<DownloadIcon className="h-5 w-5" />
</a>
) : undefined
}
>
<div className="flex items-center gap-2">
<FileTextIcon className="h-5 w-5 shrink-0 text-neutral-400" />
<div className="min-w-0">
<ContentCardTitle>{file.filename || "file"}</ContentCardTitle>
<ContentCardSubtitle>
{file.mediaType || "file"}
</ContentCardSubtitle>
</div>
</div>
</ContentCardHeader>
</ContentCard>
),
)}
</div>
);
}


@@ -0,0 +1,153 @@
import { MessageResponse } from "@/components/ai-elements/message";
import { ErrorCard } from "@/components/molecules/ErrorCard/ErrorCard";
import { ExclamationMarkIcon } from "@phosphor-icons/react";
import { ToolUIPart, UIDataTypes, UIMessage, UITools } from "ai";
import { useState } from "react";
import { CreateAgentTool } from "../../../tools/CreateAgent/CreateAgent";
import { EditAgentTool } from "../../../tools/EditAgent/EditAgent";
import {
CreateFeatureRequestTool,
SearchFeatureRequestsTool,
} from "../../../tools/FeatureRequests/FeatureRequests";
import { FindAgentsTool } from "../../../tools/FindAgents/FindAgents";
import { FindBlocksTool } from "../../../tools/FindBlocks/FindBlocks";
import { GenericTool } from "../../../tools/GenericTool/GenericTool";
import { RunAgentTool } from "../../../tools/RunAgent/RunAgent";
import { RunBlockTool } from "../../../tools/RunBlock/RunBlock";
import { SearchDocsTool } from "../../../tools/SearchDocs/SearchDocs";
import { ViewAgentOutputTool } from "../../../tools/ViewAgentOutput/ViewAgentOutput";
import { parseSpecialMarkers, resolveWorkspaceUrls } from "../helpers";
/**
* Custom img component for Streamdown that renders <video> elements
* for workspace video files (detected via "video:" alt-text prefix).
* Falls back to <video> when an <img> fails to load for workspace files.
*/
function WorkspaceMediaImage(props: React.JSX.IntrinsicElements["img"]) {
const { src, alt, ...rest } = props;
const [imgFailed, setImgFailed] = useState(false);
const isWorkspace = src?.includes("/workspace/files/") ?? false;
if (!src) return null;
if (alt?.startsWith("video:") || (imgFailed && isWorkspace)) {
return (
<span className="my-2 inline-block">
<video
controls
className="h-auto max-w-full rounded-md border border-zinc-200"
preload="metadata"
>
<source src={src} />
Your browser does not support the video tag.
</video>
</span>
);
}
return (
// eslint-disable-next-line @next/next/no-img-element
<img
src={src}
alt={alt || "Image"}
className="h-auto max-w-full rounded-md border border-zinc-200"
loading="lazy"
onError={() => {
if (isWorkspace) setImgFailed(true);
}}
{...rest}
/>
);
}
/** Stable components override for Streamdown (avoids re-creating on every render). */
const STREAMDOWN_COMPONENTS = { img: WorkspaceMediaImage };
interface Props {
part: UIMessage<unknown, UIDataTypes, UITools>["parts"][number];
messageID: string;
partIndex: number;
}
export function MessagePartRenderer({ part, messageID, partIndex }: Props) {
const key = `${messageID}-${partIndex}`;
switch (part.type) {
case "text": {
const { markerType, markerText, cleanText } = parseSpecialMarkers(
part.text,
);
if (markerType === "error") {
const lowerMarker = markerText.toLowerCase();
const isCancellation =
lowerMarker === "operation cancelled" ||
lowerMarker === "execution stopped by user";
if (isCancellation) {
return (
<div
key={key}
className="my-2 flex items-center gap-1 rounded-lg bg-neutral-200/50 px-3 py-2 text-sm text-neutral-600"
>
<ExclamationMarkIcon size={16} /> You manually stopped this chat
</div>
);
}
return (
<ErrorCard
key={key}
responseError={{ message: markerText }}
context="execution"
/>
);
}
if (markerType === "system") {
return (
<div
key={key}
className="my-2 rounded-lg bg-neutral-100 px-3 py-2 text-sm italic text-neutral-600"
>
{markerText}
</div>
);
}
return (
<MessageResponse key={key} components={STREAMDOWN_COMPONENTS}>
{resolveWorkspaceUrls(cleanText)}
</MessageResponse>
);
}
case "tool-find_block":
return <FindBlocksTool key={key} part={part as ToolUIPart} />;
case "tool-find_agent":
case "tool-find_library_agent":
return <FindAgentsTool key={key} part={part as ToolUIPart} />;
case "tool-search_docs":
case "tool-get_doc_page":
return <SearchDocsTool key={key} part={part as ToolUIPart} />;
case "tool-run_block":
return <RunBlockTool key={key} part={part as ToolUIPart} />;
case "tool-run_agent":
case "tool-schedule_agent":
return <RunAgentTool key={key} part={part as ToolUIPart} />;
case "tool-create_agent":
return <CreateAgentTool key={key} part={part as ToolUIPart} />;
case "tool-edit_agent":
return <EditAgentTool key={key} part={part as ToolUIPart} />;
case "tool-view_agent_output":
return <ViewAgentOutputTool key={key} part={part as ToolUIPart} />;
case "tool-search_feature_requests":
return <SearchFeatureRequestsTool key={key} part={part as ToolUIPart} />;
case "tool-create_feature_request":
return <CreateFeatureRequestTool key={key} part={part as ToolUIPart} />;
default:
// Render a generic tool indicator for SDK built-in
// tools (Read, Glob, Grep, etc.) or any unrecognized tool
if (part.type.startsWith("tool-")) {
return <GenericTool key={key} part={part as ToolUIPart} />;
}
return null;
}
}


@@ -0,0 +1,93 @@
import { useEffect, useRef, useState } from "react";
import { ScaleLoader } from "../../ScaleLoader/ScaleLoader";
const THINKING_PHRASES = [
"Thinking...",
"Considering this...",
"Working through this...",
"Analyzing your request...",
"Reasoning...",
"Looking into it...",
"Processing your request...",
"Mulling this over...",
"Piecing it together...",
"On it...",
"Connecting the dots...",
"Exploring possibilities...",
"Weighing options...",
"Diving deeper...",
"Gathering thoughts...",
"Almost there...",
"Figuring this out...",
"Putting it together...",
"Running through ideas...",
"Wrapping my head around this...",
];
const PHRASE_CYCLE_MS = 6_000;
const FADE_DURATION_MS = 300;
/**
* Cycles through thinking phrases sequentially with a fade-out/in transition.
* Returns the current phrase and whether it's visible (for opacity).
*/
function useCyclingPhrase(active: boolean) {
const indexRef = useRef(0);
const [phrase, setPhrase] = useState(THINKING_PHRASES[0]);
const [visible, setVisible] = useState(true);
const fadeTimeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);
// Reset to the first phrase when thinking restarts
const prevActive = useRef(active);
useEffect(() => {
if (active && !prevActive.current) {
indexRef.current = 0;
setPhrase(THINKING_PHRASES[0]);
setVisible(true);
}
prevActive.current = active;
}, [active]);
useEffect(() => {
if (!active) return;
const id = setInterval(() => {
setVisible(false);
fadeTimeoutRef.current = setTimeout(() => {
indexRef.current = (indexRef.current + 1) % THINKING_PHRASES.length;
setPhrase(THINKING_PHRASES[indexRef.current]);
setVisible(true);
}, FADE_DURATION_MS);
}, PHRASE_CYCLE_MS);
return () => {
clearInterval(id);
if (fadeTimeoutRef.current) {
clearTimeout(fadeTimeoutRef.current);
fadeTimeoutRef.current = null;
}
};
}, [active]);
return { phrase, visible };
}
interface Props {
active: boolean;
}
export function ThinkingIndicator({ active }: Props) {
const { phrase, visible } = useCyclingPhrase(active);
return (
<span className="inline-flex items-center gap-1.5 text-neutral-500">
<ScaleLoader size={16} />
<span
className="transition-opacity duration-300"
style={{ opacity: visible ? 1 : 0 }}
>
<span className="animate-pulse [animation-duration:1.5s]">
{phrase}
</span>
</span>
</span>
);
}
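The cycle timing in `useCyclingPhrase` above boils down to an index computed from elapsed time modulo the phrase count. A minimal sketch of that arithmetic (fade transitions ignored; `phraseIndexAt` is a hypothetical helper, not part of the diff):

```typescript
// Which phrase index is visible after a given number of elapsed milliseconds.
// Mirrors PHRASE_CYCLE_MS = 6_000 from the component above.
const CYCLE_MS = 6_000;

function phraseIndexAt(elapsedMs: number, phraseCount: number): number {
  return Math.floor(elapsedMs / CYCLE_MS) % phraseCount;
}

// With 20 phrases the list wraps back to index 0 after two minutes:
const atStart = phraseIndexAt(0, 20); // → 0
const afterOneCycle = phraseIndexAt(6_000, 20); // → 1
const afterTwoMinutes = phraseIndexAt(120_000, 20); // → 0
```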

View File

@@ -0,0 +1,92 @@
import { getGetWorkspaceDownloadFileByIdUrl } from "@/app/api/__generated__/endpoints/workspace/workspace";
// Special message prefixes for text-based markers (set by backend).
// The hex suffix makes it virtually impossible for an LLM to accidentally
// produce these strings in normal conversation.
const COPILOT_ERROR_PREFIX = "[__COPILOT_ERROR_f7a1__]";
const COPILOT_SYSTEM_PREFIX = "[__COPILOT_SYSTEM_e3b0__]";
export type MarkerType = "error" | "system" | null;
/** Escape all regex special characters in a string. */
function escapeRegExp(s: string): string {
return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}
// Pre-compiled marker regexes (avoids re-creating on every call / render)
const ERROR_MARKER_RE = new RegExp(
`${escapeRegExp(COPILOT_ERROR_PREFIX)}\\s*(.+?)$`,
"s",
);
const SYSTEM_MARKER_RE = new RegExp(
`${escapeRegExp(COPILOT_SYSTEM_PREFIX)}\\s*(.+?)$`,
"s",
);
export function parseSpecialMarkers(text: string): {
markerType: MarkerType;
markerText: string;
cleanText: string;
} {
const errorMatch = text.match(ERROR_MARKER_RE);
if (errorMatch) {
return {
markerType: "error",
markerText: errorMatch[1].trim(),
cleanText: text.replace(errorMatch[0], "").trim(),
};
}
const systemMatch = text.match(SYSTEM_MARKER_RE);
if (systemMatch) {
return {
markerType: "system",
markerText: systemMatch[1].trim(),
cleanText: text.replace(systemMatch[0], "").trim(),
};
}
return { markerType: null, markerText: "", cleanText: text };
}
/**
* Resolve workspace:// URLs in markdown text to proxy download URLs.
*
* Handles both image syntax `![alt](workspace://id#mime)` and regular link
* syntax `[text](workspace://id)`. For images the MIME type hash fragment is
* inspected so that videos can be rendered with a `<video>` element via the
* custom img component.
*/
export function resolveWorkspaceUrls(text: string): string {
// Handle image links: ![alt](workspace://id#mime)
let resolved = text.replace(
/!\[([^\]]*)\]\(workspace:\/\/([^)#\s]+)(?:#([^)#\s]*))?\)/g,
(_match, alt: string, fileId: string, mimeHint?: string) => {
const apiPath = getGetWorkspaceDownloadFileByIdUrl(fileId);
const url = `/api/proxy${apiPath}`;
if (mimeHint?.startsWith("video/")) {
return `![video:${alt || "Video"}](${url})`;
}
return `![${alt || "Image"}](${url})`;
},
);
// Handle regular links: [text](workspace://id) — without the leading "!"
// These are blocked by Streamdown's rehype-harden sanitizer because
// "workspace://" is not in the allowed URL-scheme whitelist, which causes
// "[blocked]" to appear next to the link text.
// Use an absolute URL so Streamdown's "Copy link" button copies the full
// URL (including host) rather than just the path.
resolved = resolved.replace(
/(?<!!)\[([^\]]*)\]\(workspace:\/\/([^)#\s]+)(?:#[^)#\s]*)?\)/g,
(_match, linkText: string, fileId: string) => {
const apiPath = getGetWorkspaceDownloadFileByIdUrl(fileId);
const origin =
typeof window !== "undefined" ? window.location.origin : "";
const url = `${origin}/api/proxy${apiPath}`;
return `[${linkText || "Download file"}](${url})`;
},
);
return resolved;
}
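To see the marker parsing from `helpers.ts` in action, here is a standalone sketch with the error prefix and regex inlined so it runs without the module imports (only the error branch is reproduced; the system branch is symmetric):

```typescript
// Inlined from helpers.ts above: prefix, regex-escape helper, and the
// error-marker regex. The "s" flag lets the captured message span newlines.
const ERROR_PREFIX = "[__COPILOT_ERROR_f7a1__]";

function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

const ERROR_RE = new RegExp(`${escapeRegExp(ERROR_PREFIX)}\\s*(.+?)$`, "s");

function parseError(
  text: string,
): { markerText: string; cleanText: string } | null {
  const m = text.match(ERROR_RE);
  if (!m) return null;
  return { markerText: m[1].trim(), cleanText: text.replace(m[0], "").trim() };
}

const parsed = parseError(
  "Partial answer.\n[__COPILOT_ERROR_f7a1__] Upstream timeout",
);
// parsed → { markerText: "Upstream timeout", cleanText: "Partial answer." }
```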


@@ -27,17 +27,14 @@ import { DotsThree, PlusCircleIcon, PlusIcon } from "@phosphor-icons/react";
import { useQueryClient } from "@tanstack/react-query";
import { motion } from "framer-motion";
import { parseAsString, useQueryState } from "nuqs";
import { useState } from "react";
import { useCopilotUIStore } from "../../store";
import { DeleteChatDialog } from "../DeleteChatDialog/DeleteChatDialog";
export function ChatSidebar() {
const { state } = useSidebar();
const isCollapsed = state === "collapsed";
const [sessionId, setSessionId] = useQueryState("sessionId", parseAsString);
const [sessionToDelete, setSessionToDelete] = useState<{
id: string;
title: string | null | undefined;
} | null>(null);
const { sessionToDelete, setSessionToDelete } = useCopilotUIStore();
const queryClient = useQueryClient();
@@ -48,11 +45,9 @@ export function ChatSidebar() {
useDeleteV2DeleteSession({
mutation: {
onSuccess: () => {
// Invalidate sessions list to refetch
queryClient.invalidateQueries({
queryKey: getGetV2ListSessionsQueryKey(),
});
// If we deleted the current session, clear selection
if (sessionToDelete?.id === sessionId) {
setSessionId(null);
}
@@ -86,8 +81,8 @@ export function ChatSidebar() {
id: string,
title: string | null | undefined,
) {
e.stopPropagation(); // Prevent session selection
if (isDeleting) return; // Prevent double-click during deletion
e.stopPropagation();
if (isDeleting) return;
setSessionToDelete({ id, title });
}


@@ -17,13 +17,19 @@ interface Props {
inputLayoutId: string;
isCreatingSession: boolean;
onCreateSession: () => void | Promise<string>;
onSend: (message: string) => void | Promise<void>;
onSend: (message: string, files?: File[]) => void | Promise<void>;
isUploadingFiles?: boolean;
droppedFiles?: File[];
onDroppedFilesConsumed?: () => void;
}
export function EmptySession({
inputLayoutId,
isCreatingSession,
onSend,
isUploadingFiles,
droppedFiles,
onDroppedFilesConsumed,
}: Props) {
const { user } = useSupabase();
const greetingName = getGreetingName(user);
@@ -51,12 +57,12 @@ export function EmptySession({
return (
<div className="flex h-full flex-1 items-center justify-center overflow-y-auto bg-[#f8f8f9] px-0 py-5 md:px-6 md:py-10">
<motion.div
className="w-full max-w-3xl text-center"
className="w-full max-w-[52rem] text-center"
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
transition={{ duration: 0.3 }}
>
<div className="mx-auto max-w-3xl">
<div className="mx-auto max-w-[52rem]">
<Text variant="h3" className="mb-1 !text-[1.375rem] text-zinc-700">
Hey, <span className="text-violet-600">{greetingName}</span>
</Text>
@@ -74,8 +80,11 @@ export function EmptySession({
inputId="chat-input-empty"
onSend={onSend}
disabled={isCreatingSession}
isUploadingFiles={isUploadingFiles}
placeholder={inputPlaceholder}
className="w-full"
droppedFiles={droppedFiles}
onDroppedFilesConsumed={onDroppedFilesConsumed}
/>
</motion.div>
</div>


@@ -17,6 +17,7 @@
left: 0;
top: 0;
animation: animloader 2s linear infinite;
animation-fill-mode: backwards;
}
.loader::after {


@@ -0,0 +1,59 @@
import type { UIMessage } from "ai";
/** Mark any in-progress tool parts as completed/errored so spinners stop. */
export function resolveInProgressTools(
messages: UIMessage[],
outcome: "completed" | "cancelled",
): UIMessage[] {
return messages.map((msg) => ({
...msg,
parts: msg.parts.map((part) =>
"state" in part &&
(part.state === "input-streaming" || part.state === "input-available")
? outcome === "cancelled"
? { ...part, state: "output-error" as const, errorText: "Cancelled" }
: { ...part, state: "output-available" as const, output: "" }
: part,
),
}));
}
/**
* Deduplicate messages by ID and by consecutive content fingerprint.
*
* ID dedup catches exact duplicates within the same source.
* Content dedup only compares each assistant message to its **immediate
* predecessor** — this catches hydration/stream boundary duplicates (where
* the same content appears under different IDs) without accidentally
* removing legitimately repeated assistant responses that are far apart.
*/
export function deduplicateMessages(messages: UIMessage[]): UIMessage[] {
const seenIds = new Set<string>();
let lastAssistantFingerprint = "";
return messages.filter((msg) => {
if (seenIds.has(msg.id)) return false;
seenIds.add(msg.id);
if (msg.role === "assistant") {
const fingerprint = msg.parts
.map(
(p) =>
("text" in p && p.text) ||
("toolCallId" in p && p.toolCallId) ||
"",
)
.join("|");
// Only dedup if this assistant message is identical to the previous one
if (fingerprint && fingerprint === lastAssistantFingerprint) return false;
if (fingerprint) lastAssistantFingerprint = fingerprint;
} else {
// Reset on non-assistant messages so that identical assistant responses
// separated by a user message (e.g. "Done!" → user → "Done!") are kept.
lastAssistantFingerprint = "";
}
return true;
});
}
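The immediate-predecessor rule in `deduplicateMessages` is easiest to verify with a concrete run. A runnable sketch with simplified message shapes (plain objects standing in for the SDK's `UIMessage`; the fingerprint is reduced to a single text field):

```typescript
// Simplified dedup: drop repeated IDs, and drop an assistant message only
// when it is identical to the immediately preceding assistant message.
type Msg = { id: string; role: string; text: string };

function dedup(messages: Msg[]): Msg[] {
  const seenIds = new Set<string>();
  let lastAssistant = "";
  return messages.filter((m) => {
    if (seenIds.has(m.id)) return false;
    seenIds.add(m.id);
    if (m.role === "assistant") {
      if (m.text && m.text === lastAssistant) return false;
      if (m.text) lastAssistant = m.text;
    } else {
      // Reset so identical assistant replies separated by a user turn survive.
      lastAssistant = "";
    }
    return true;
  });
}

const out = dedup([
  { id: "a1", role: "assistant", text: "Done!" },
  { id: "a2", role: "assistant", text: "Done!" }, // stream-boundary duplicate: dropped
  { id: "u1", role: "user", text: "again" },
  { id: "a3", role: "assistant", text: "Done!" }, // separated by a user turn: kept
]);
// out.map((m) => m.id) → ["a1", "u1", "a3"]
```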


@@ -1,4 +1,5 @@
import type { UIMessage, UIDataTypes, UITools } from "ai";
import { getGetWorkspaceDownloadFileByIdUrl } from "@/app/api/__generated__/endpoints/workspace/workspace";
import type { FileUIPart, UIMessage, UIDataTypes, UITools } from "ai";
interface SessionChatMessage {
role: string;
@@ -38,6 +39,48 @@ function coerceSessionChatMessages(
.filter((m): m is SessionChatMessage => m !== null);
}
/**
* Parse the `[Attached files]` block appended by the backend and return
* the cleaned text plus reconstructed FileUIPart objects.
*
* Backend format:
* ```
* \n\n[Attached files]
* - name.jpg (image/jpeg, 191.0 KB), file_id=<uuid>
* Use read_workspace_file with the file_id to access file contents.
* ```
*/
const ATTACHED_FILES_RE =
/\n?\n?\[Attached files\]\n([\s\S]*?)Use read_workspace_file with the file_id to access file contents\./;
const FILE_LINE_RE = /^- (.+) \(([^,]+),\s*[\d.]+ KB\), file_id=([0-9a-f-]+)$/;
function extractFileParts(content: string): {
cleanText: string;
fileParts: FileUIPart[];
} {
const match = content.match(ATTACHED_FILES_RE);
if (!match) return { cleanText: content, fileParts: [] };
const cleanText = content.replace(match[0], "").trim();
const lines = match[1].trim().split("\n");
const fileParts: FileUIPart[] = [];
for (const line of lines) {
const m = line.trim().match(FILE_LINE_RE);
if (!m) continue;
const [, filename, mimeType, fileId] = m;
const apiPath = getGetWorkspaceDownloadFileByIdUrl(fileId);
fileParts.push({
type: "file",
filename,
mediaType: mimeType,
url: `/api/proxy${apiPath}`,
});
}
return { cleanText, fileParts };
}
function safeJsonParse(value: string): unknown {
try {
return JSON.parse(value) as unknown;
@@ -79,7 +122,17 @@ export function convertChatSessionMessagesToUiMessages(
const parts: UIMessage<unknown, UIDataTypes, UITools>["parts"] = [];
if (typeof msg.content === "string" && msg.content.trim()) {
parts.push({ type: "text", text: msg.content, state: "done" });
if (msg.role === "user") {
const { cleanText, fileParts } = extractFileParts(msg.content);
if (cleanText) {
parts.push({ type: "text", text: cleanText, state: "done" });
}
for (const fp of fileParts) {
parts.push(fp);
}
} else {
parts.push({ type: "text", text: msg.content, state: "done" });
}
}
if (msg.role === "assistant" && Array.isArray(msg.tool_calls)) {
@@ -116,10 +169,12 @@ export function convertChatSessionMessagesToUiMessages(
output: "",
});
} else {
// Active stream exists: Skip incomplete tool calls during hydration.
// The resume stream will deliver them fresh with proper SDK state.
// This prevents "No tool invocation found" errors on page refresh.
continue;
parts.push({
type: `tool-${toolName}`,
toolCallId,
state: "input-available",
input,
});
}
}
}
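The `[Attached files]` line format that `extractFileParts` reconstructs can be checked in isolation; the regex below is copied from `FILE_LINE_RE` above, with a made-up filename and file_id for illustration:

```typescript
// One backend-formatted attachment line: "- <name> (<mime>, <size> KB), file_id=<uuid>"
const FILE_LINE = /^- (.+) \(([^,]+),\s*[\d.]+ KB\), file_id=([0-9a-f-]+)$/;

const line =
  "- report.pdf (application/pdf, 42.5 KB), file_id=0a1b2c3d-0000-0000-0000-000000000000";
const m = line.match(FILE_LINE);
// m?.[1] → "report.pdf" (filename)
// m?.[2] → "application/pdf" (mediaType)
// m?.[3] → the file_id used to build the proxy download URL
```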


@@ -0,0 +1,22 @@
import { create } from "zustand";
export interface DeleteTarget {
id: string;
title: string | null | undefined;
}
interface CopilotUIState {
sessionToDelete: DeleteTarget | null;
setSessionToDelete: (target: DeleteTarget | null) => void;
isDrawerOpen: boolean;
setDrawerOpen: (open: boolean) => void;
}
export const useCopilotUIStore = create<CopilotUIState>((set) => ({
sessionToDelete: null,
setSessionToDelete: (target) => set({ sessionToDelete: target }),
isDrawerOpen: false,
setDrawerOpen: (open) => set({ isDrawerOpen: open }),
}));


@@ -16,6 +16,10 @@ import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { ClarificationQuestionsCard } from "./components/ClarificationQuestionsCard";
import { MiniGame } from "../../components/MiniGame/MiniGame";
import { SuggestedGoalCard } from "./components/SuggestedGoalCard";
import {
buildClarificationAnswersMessage,
normalizeClarifyingQuestions,
} from "../clarifying-questions";
import {
AccordionIcon,
formatMaybeJson,
@@ -28,7 +32,6 @@ import {
isSuggestedGoalOutput,
ToolIcon,
truncateText,
normalizeClarifyingQuestions,
type CreateAgentToolOutput,
} from "./helpers";
@@ -110,16 +113,7 @@ export function CreateAgentTool({ part }: Props) {
? (output.questions ?? [])
: [];
const contextMessage = questions
.map((q) => {
const answer = answers[q.keyword] || "";
return `> ${q.question}\n\n${answer}`;
})
.join("\n\n");
onSend(
`**Here are my answers:**\n\n${contextMessage}\n\nPlease proceed with creating the agent.`,
);
onSend(buildClarificationAnswersMessage(answers, questions, "create"));
}
return (

View File

@@ -7,7 +7,7 @@ import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils";
import { ChatTeardropDotsIcon, CheckCircleIcon } from "@phosphor-icons/react";
import { useEffect, useRef, useState } from "react";
import type { ClarifyingQuestion } from "../helpers";
import type { ClarifyingQuestion } from "../../clarifying-questions";
interface Props {
questions: ClarifyingQuestion[];
@@ -149,20 +149,20 @@ export function ClarificationQuestionsCard({
<div className="space-y-6">
{questions.map((q, index) => {
const isAnswered = !!answers[q.keyword]?.trim();
const hasAnswer = !!answers[q.keyword]?.trim();
return (
<div
key={`${q.keyword}-${index}`}
className={cn(
"relative rounded-lg border border-dotted p-3",
isAnswered
hasAnswer
? "border-green-500 bg-green-50/50"
: "border-slate-100 bg-slate-50/50",
)}
>
<div className="mb-2 flex items-start gap-2">
{isAnswered ? (
{hasAnswer ? (
<CheckCircleIcon
size={20}
className="mt-0.5 text-green-500"

View File

@@ -157,41 +157,3 @@ export function truncateText(text: string, maxChars: number): string {
if (trimmed.length <= maxChars) return trimmed;
return `${trimmed.slice(0, maxChars).trimEnd()}`;
}
export interface ClarifyingQuestion {
question: string;
keyword: string;
example?: string;
}
export function normalizeClarifyingQuestions(
questions: Array<{ question: string; keyword: string; example?: unknown }>,
): ClarifyingQuestion[] {
const seen = new Set<string>();
return questions.map((q, index) => {
let keyword = q.keyword?.trim().toLowerCase() || "";
if (!keyword) {
keyword = `question-${index}`;
}
let unique = keyword;
let suffix = 1;
while (seen.has(unique)) {
unique = `${keyword}-${suffix}`;
suffix++;
}
seen.add(unique);
const item: ClarifyingQuestion = {
question: q.question,
keyword: unique,
};
const example =
typeof q.example === "string" && q.example.trim()
? q.example.trim()
: null;
if (example) item.example = example;
return item;
});
}
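The keyword-dedup behavior of `normalizeClarifyingQuestions` (removed here, now living in `clarifying-questions.ts`) is worth a concrete run. A sketch reproducing just the keyword normalization inline so it runs standalone:

```typescript
// Normalize keywords: lowercase/trim, fall back to "question-<index>" when
// empty, and suffix duplicates with "-1", "-2", … to keep them unique.
function uniqueKeywords(keywords: string[]): string[] {
  const seen = new Set<string>();
  return keywords.map((raw, index) => {
    const keyword = raw.trim().toLowerCase() || `question-${index}`;
    let unique = keyword;
    let suffix = 1;
    while (seen.has(unique)) {
      unique = `${keyword}-${suffix}`;
      suffix++;
    }
    seen.add(unique);
    return unique;
  });
}

const keys = uniqueKeywords(["Topic", "topic", ""]);
// keys → ["topic", "topic-1", "question-2"]
```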


@@ -14,8 +14,11 @@ import {
ContentMessage,
} from "../../components/ToolAccordion/AccordionContent";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import {
buildClarificationAnswersMessage,
normalizeClarifyingQuestions,
} from "../clarifying-questions";
import { ClarificationQuestionsCard } from "../CreateAgent/components/ClarificationQuestionsCard";
import { normalizeClarifyingQuestions } from "../CreateAgent/helpers";
import {
AccordionIcon,
formatMaybeJson,
@@ -99,16 +102,7 @@ export function EditAgentTool({ part }: Props) {
? (output.questions ?? [])
: [];
const contextMessage = questions
.map((q) => {
const answer = answers[q.keyword] || "";
return `> ${q.question}\n\n${answer}`;
})
.join("\n\n");
onSend(
`**Here are my answers:**\n\n${contextMessage}\n\nPlease proceed with editing the agent.`,
);
onSend(buildClarificationAnswersMessage(answers, questions, "edit"));
}
return (

View File

@@ -3,7 +3,7 @@
import type { AgentDetailsResponse } from "@/app/api/__generated__/models/agentDetailsResponse";
import { Button } from "@/components/atoms/Button/Button";
import { FormRenderer } from "@/components/renderers/InputRenderer/FormRenderer";
import { useState } from "react";
import { useEffect, useState } from "react";
import { useCopilotChatActions } from "../../../../components/CopilotChatActionsProvider/useCopilotChatActions";
import { ContentMessage } from "../../../../components/ToolAccordion/AccordionContent";
import { buildInputSchema, extractDefaults, isFormValid } from "./helpers";
@@ -24,6 +24,14 @@ export function AgentDetailsCard({ output }: Props) {
schema ? isFormValid(schema, defaults) : false,
);
// Reset form state when the agent changes (e.g. during mid-conversation switches)
useEffect(() => {
const newSchema = buildInputSchema(output.agent.inputs);
const newDefaults = newSchema ? extractDefaults(newSchema) : {};
setInputValues(newDefaults);
setValid(newSchema ? isFormValid(newSchema, newDefaults) : false);
}, [output.agent.id]);
function handleChange(v: { formData?: Record<string, unknown> }) {
const data = v.formData ?? {};
setInputValues(data);

View File

@@ -0,0 +1,106 @@
import type { RJSFSchema } from "@rjsf/utils";
import { describe, expect, it } from "vitest";
import { buildInputSchema, extractDefaults, isFormValid } from "./helpers";
describe("buildInputSchema", () => {
it("returns null for falsy input", () => {
expect(buildInputSchema(null)).toBeNull();
expect(buildInputSchema(undefined)).toBeNull();
expect(buildInputSchema("")).toBeNull();
});
it("returns null for empty properties object", () => {
expect(buildInputSchema({})).toBeNull();
});
it("returns the schema when properties exist", () => {
const schema = { name: { type: "string" as const } };
expect(buildInputSchema(schema)).toBe(schema);
});
});
describe("extractDefaults", () => {
it("returns an empty object when no properties exist", () => {
expect(extractDefaults({})).toEqual({});
expect(extractDefaults({ properties: null as never })).toEqual({});
});
it("extracts default values from property definitions", () => {
const schema: RJSFSchema = {
properties: {
name: { type: "string", default: "Alice" },
age: { type: "number", default: 30 },
},
};
expect(extractDefaults(schema)).toEqual({ name: "Alice", age: 30 });
});
it("falls back to the first example when no default is defined", () => {
const schema: RJSFSchema = {
properties: {
query: { type: "string", examples: ["hello", "world"] },
},
};
expect(extractDefaults(schema)).toEqual({ query: "hello" });
});
it("prefers default over examples", () => {
const schema: RJSFSchema = {
properties: {
value: { type: "string", default: "def", examples: ["ex"] },
},
};
expect(extractDefaults(schema)).toEqual({ value: "def" });
});
it("skips properties without default or examples", () => {
const schema: RJSFSchema = {
properties: {
name: { type: "string" },
title: { type: "string", default: "Mr." },
},
};
expect(extractDefaults(schema)).toEqual({ title: "Mr." });
});
it("skips properties that are not objects", () => {
const schema: RJSFSchema = {
properties: {
bad: true,
alsobad: false,
},
};
expect(extractDefaults(schema)).toEqual({});
});
});
describe("isFormValid", () => {
it("returns true for a valid form", () => {
const schema: RJSFSchema = {
type: "object",
properties: {
name: { type: "string" },
},
};
expect(isFormValid(schema, { name: "Alice" })).toBe(true);
});
it("returns false when required fields are missing", () => {
const schema: RJSFSchema = {
type: "object",
required: ["name"],
properties: {
name: { type: "string" },
},
};
expect(isFormValid(schema, {})).toBe(false);
});
it("returns true for empty schema with empty data", () => {
const schema: RJSFSchema = {
type: "object",
properties: {},
};
expect(isFormValid(schema, {})).toBe(true);
});
});

View File

@@ -0,0 +1,93 @@
import { describe, expect, it } from "vitest";
import {
buildClarificationAnswersMessage,
normalizeClarifyingQuestions,
} from "./clarifying-questions";
describe("normalizeClarifyingQuestions", () => {
it("returns normalized questions with trimmed lowercase keywords", () => {
const result = normalizeClarifyingQuestions([
{ question: "What is your goal?", keyword: " Goal ", example: "test" },
]);
expect(result).toEqual([
{ question: "What is your goal?", keyword: "goal", example: "test" },
]);
});
it("deduplicates keywords by appending a numeric suffix", () => {
const result = normalizeClarifyingQuestions([
{ question: "Q1", keyword: "topic" },
{ question: "Q2", keyword: "topic" },
{ question: "Q3", keyword: "topic" },
]);
expect(result.map((q) => q.keyword)).toEqual([
"topic",
"topic-1",
"topic-2",
]);
});
it("falls back to question-{index} when keyword is empty", () => {
const result = normalizeClarifyingQuestions([
{ question: "First?", keyword: "" },
{ question: "Second?", keyword: " " },
]);
expect(result[0].keyword).toBe("question-0");
expect(result[1].keyword).toBe("question-1");
});
it("coerces non-string examples to undefined", () => {
const result = normalizeClarifyingQuestions([
{ question: "Q1", keyword: "k1", example: 42 },
{ question: "Q2", keyword: "k2", example: null },
{ question: "Q3", keyword: "k3", example: { nested: true } },
]);
expect(result[0].example).toBeUndefined();
expect(result[1].example).toBeUndefined();
expect(result[2].example).toBeUndefined();
});
it("trims string examples and omits empty ones", () => {
const result = normalizeClarifyingQuestions([
{ question: "Q1", keyword: "k1", example: " valid " },
{ question: "Q2", keyword: "k2", example: " " },
]);
expect(result[0].example).toBe("valid");
expect(result[1].example).toBeUndefined();
});
it("returns an empty array for empty input", () => {
expect(normalizeClarifyingQuestions([])).toEqual([]);
});
});
describe("buildClarificationAnswersMessage", () => {
it("formats answers with create mode", () => {
const result = buildClarificationAnswersMessage(
{ goal: "automate tasks" },
[{ question: "What is your goal?", keyword: "goal" }],
"create",
);
expect(result).toContain("> What is your goal?");
expect(result).toContain("automate tasks");
expect(result).toContain("Please proceed with creating the agent.");
});
it("formats answers with edit mode", () => {
const result = buildClarificationAnswersMessage(
{ goal: "fix bugs" },
[{ question: "What should change?", keyword: "goal" }],
"edit",
);
expect(result).toContain("Please proceed with editing the agent.");
});
it("uses empty string for missing answers", () => {
const result = buildClarificationAnswersMessage(
{},
[{ question: "Q?", keyword: "missing" }],
"create",
);
expect(result).toContain("> Q?\n\n");
});
});

View File

@@ -0,0 +1,56 @@
export interface ClarifyingQuestion {
question: string;
keyword: string;
example?: string;
}
export function normalizeClarifyingQuestions(
questions: Array<{ question: string; keyword: string; example?: unknown }>,
): ClarifyingQuestion[] {
const seen = new Set<string>();
return questions.map((q, index) => {
let keyword = q.keyword?.trim().toLowerCase() || "";
if (!keyword) {
keyword = `question-${index}`;
}
let unique = keyword;
let suffix = 1;
while (seen.has(unique)) {
unique = `${keyword}-${suffix}`;
suffix++;
}
seen.add(unique);
const item: ClarifyingQuestion = {
question: q.question,
keyword: unique,
};
const example =
typeof q.example === "string" && q.example.trim()
? q.example.trim()
: null;
if (example) item.example = example;
return item;
});
}
/**
 * Formats clarification answers into a context message string for the caller to send.
 */
export function buildClarificationAnswersMessage(
answers: Record<string, string>,
rawQuestions: Array<{ question: string; keyword: string }>,
mode: "create" | "edit",
): string {
const contextMessage = rawQuestions
.map((q) => {
const answer = answers[q.keyword] || "";
return `> ${q.question}\n\n${answer}`;
})
.join("\n\n");
const action = mode === "create" ? "creating" : "editing";
return `**Here are my answers:**\n\n${contextMessage}\n\nPlease proceed with ${action} the agent.`;
}
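As a standalone illustration of the helper above (function body inlined here so the snippet runs on its own; the sample question and answer values are invented for the example):

```typescript
// Inlined copy of buildClarificationAnswersMessage from clarifying-questions.ts,
// used to show the exact message shape the clarification card sends back.
function buildClarificationAnswersMessage(
  answers: Record<string, string>,
  rawQuestions: Array<{ question: string; keyword: string }>,
  mode: "create" | "edit",
): string {
  const contextMessage = rawQuestions
    .map((q) => `> ${q.question}\n\n${answers[q.keyword] || ""}`)
    .join("\n\n");
  const action = mode === "create" ? "creating" : "editing";
  return `**Here are my answers:**\n\n${contextMessage}\n\nPlease proceed with ${action} the agent.`;
}

const msg = buildClarificationAnswersMessage(
  { goal: "summarize RSS feeds" }, // illustrative answer, not from the diff
  [{ question: "What is your goal?", keyword: "goal" }],
  "create",
);
console.log(msg);
```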

View File

@@ -25,10 +25,12 @@ export function useChatSession() {
},
});
// When the user navigates away from a session, invalidate its query cache.
// Invalidate query cache on session switch.
// useChat destroys its Chat instance on id change, so messages are lost.
// Invalidating ensures the next visit fetches fresh data from the API
// instead of hydrating from stale cache that's missing recent messages.
// We invalidate BOTH the old session (stale after leaving) and the new
// session (may have been cached before the user sent their last message,
// so active_stream and messages could be outdated). This guarantees a
// fresh fetch that gives the resume effect accurate hasActiveStream state.
const prevSessionIdRef = useRef(sessionId);
useEffect(() => {
@@ -39,15 +41,21 @@ export function useChatSession() {
queryKey: getGetV2GetSessionQueryKey(prev),
});
}
if (sessionId) {
queryClient.invalidateQueries({
queryKey: getGetV2GetSessionQueryKey(sessionId),
});
}
}, [sessionId, queryClient]);
// Expose active_stream info so the caller can trigger manual resume
// after hydration completes (rather than relying on AI SDK's built-in
// resume which fires before hydration).
const hasActiveStream = useMemo(() => {
if (sessionQuery.isFetching) return false;
if (sessionQuery.data?.status !== 200) return false;
return !!sessionQuery.data.data.active_stream;
}, [sessionQuery.data, sessionId]);
}, [sessionQuery.data, sessionQuery.isFetching, sessionId]);
// Memoize so the effect in useCopilotPage doesn't infinite-loop on a new
// array reference every render. Re-derives only when query data changes.

View File

@@ -1,62 +1,35 @@
import {
getGetV2GetSessionQueryKey,
getGetV2ListSessionsQueryKey,
postV2CancelSessionTask,
useDeleteV2DeleteSession,
useGetV2ListSessions,
} from "@/app/api/__generated__/endpoints/chat/chat";
import { toast } from "@/components/molecules/Toast/use-toast";
import { useBreakpoint } from "@/lib/hooks/useBreakpoint";
import { getWebSocketToken } from "@/lib/supabase/actions";
import { useSupabase } from "@/lib/supabase/hooks/useSupabase";
import { useChat } from "@ai-sdk/react";
import { environment } from "@/services/environment";
import { useQueryClient } from "@tanstack/react-query";
import { DefaultChatTransport } from "ai";
import type { UIMessage } from "ai";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import type { FileUIPart } from "ai";
import { useEffect, useRef, useState } from "react";
import { useCopilotUIStore } from "./store";
import { useChatSession } from "./useChatSession";
import { useCopilotStream } from "./useCopilotStream";
const RECONNECT_BASE_DELAY_MS = 1_000;
const RECONNECT_MAX_DELAY_MS = 30_000;
const RECONNECT_MAX_ATTEMPTS = 5;
/** Mark any in-progress tool parts as completed/errored so spinners stop. */
function resolveInProgressTools(
messages: UIMessage[],
outcome: "completed" | "cancelled",
): UIMessage[] {
return messages.map((msg) => ({
...msg,
parts: msg.parts.map((part) =>
"state" in part &&
(part.state === "input-streaming" || part.state === "input-available")
? outcome === "cancelled"
? { ...part, state: "output-error" as const, errorText: "Cancelled" }
: { ...part, state: "output-available" as const, output: "" }
: part,
),
}));
}
/** Simple ID-based deduplication - trust backend for correctness */
function deduplicateMessages(messages: UIMessage[]): UIMessage[] {
const seenIds = new Set<string>();
return messages.filter((msg) => {
if (seenIds.has(msg.id)) return false;
seenIds.add(msg.id);
return true;
});
interface UploadedFile {
file_id: string;
name: string;
mime_type: string;
}
export function useCopilotPage() {
const { isUserLoading, isLoggedIn } = useSupabase();
const [isDrawerOpen, setIsDrawerOpen] = useState(false);
const [isUploadingFiles, setIsUploadingFiles] = useState(false);
const [pendingMessage, setPendingMessage] = useState<string | null>(null);
const [sessionToDelete, setSessionToDelete] = useState<{
id: string;
title: string | null | undefined;
} | null>(null);
const queryClient = useQueryClient();
const { sessionToDelete, setSessionToDelete, isDrawerOpen, setDrawerOpen } =
useCopilotUIStore();
const {
sessionId,
setSessionId,
@@ -69,6 +42,22 @@ export function useCopilotPage() {
refetchSession,
} = useChatSession();
const {
messages,
sendMessage,
stop,
status,
error,
isReconnecting,
isUserStoppingRef,
} = useCopilotStream({
sessionId,
hydratedMessages,
hasActiveStream,
refetchSession,
});
// --- Delete session ---
const { mutate: deleteSessionMutation, isPending: isDeleting } =
useDeleteV2DeleteSession({
mutation: {
@@ -93,244 +82,173 @@ export function useCopilotPage() {
},
});
// --- Responsive ---
const breakpoint = useBreakpoint();
const isMobile =
breakpoint === "base" || breakpoint === "sm" || breakpoint === "md";
const transport = useMemo(
() =>
sessionId
? new DefaultChatTransport({
api: `/api/chat/sessions/${sessionId}/stream`,
prepareSendMessagesRequest: ({ messages }) => {
const last = messages[messages.length - 1];
return {
body: {
message: (
last.parts?.map((p) => (p.type === "text" ? p.text : "")) ??
[]
).join(""),
is_user_message: last.role === "user",
context: null,
},
};
},
// Resume: GET goes to the same URL as POST (backend uses
// method to distinguish). Override the default formula which
// would append /{chatId}/stream to the existing path.
prepareReconnectToStreamRequest: () => ({
api: `/api/chat/sessions/${sessionId}/stream`,
}),
})
: null,
[sessionId],
);
const pendingFilesRef = useRef<File[]>([]);
// Reconnect state
const [reconnectAttempts, setReconnectAttempts] = useState(0);
const [isReconnectScheduled, setIsReconnectScheduled] = useState(false);
const reconnectTimerRef = useRef<ReturnType<typeof setTimeout>>();
const hasShownDisconnectToast = useRef(false);
// --- Send pending message after session creation ---
useEffect(() => {
if (!sessionId || pendingMessage === null) return;
const msg = pendingMessage;
const files = pendingFilesRef.current;
setPendingMessage(null);
pendingFilesRef.current = [];
// Consolidated reconnect logic
function handleReconnect(sid: string) {
if (isReconnectScheduled || !sid) return;
if (files.length > 0) {
setIsUploadingFiles(true);
void uploadFiles(files, sessionId)
.then((uploaded) => {
if (uploaded.length === 0) {
toast({
title: "File upload failed",
description: "Could not upload any files. Please try again.",
variant: "destructive",
});
return;
}
const fileParts = buildFileParts(uploaded);
sendMessage({
text: msg,
files: fileParts.length > 0 ? fileParts : undefined,
});
})
.finally(() => setIsUploadingFiles(false));
} else {
sendMessage({ text: msg });
}
}, [sessionId, pendingMessage, sendMessage]);
const nextAttempt = reconnectAttempts + 1;
if (nextAttempt > RECONNECT_MAX_ATTEMPTS) {
async function uploadFiles(
files: File[],
sid: string,
): Promise<UploadedFile[]> {
// Upload directly to the Python backend, bypassing the Next.js serverless
// proxy. Vercel's 4.5 MB function payload limit would reject larger files
// when routed through /api/workspace/files/upload.
const { token, error: tokenError } = await getWebSocketToken();
if (tokenError || !token) {
toast({
title: "Connection lost",
description: "Unable to reconnect. Please refresh the page.",
title: "Authentication error",
description: "Please sign in again.",
variant: "destructive",
});
return;
return [];
}
setIsReconnectScheduled(true);
setReconnectAttempts(nextAttempt);
const backendBase = environment.getAGPTServerBaseUrl();
if (!hasShownDisconnectToast.current) {
hasShownDisconnectToast.current = true;
toast({
title: "Connection lost",
description: "Reconnecting...",
});
}
const delay = Math.min(
RECONNECT_BASE_DELAY_MS * 2 ** reconnectAttempts,
RECONNECT_MAX_DELAY_MS,
const results = await Promise.allSettled(
files.map(async (file) => {
const formData = new FormData();
formData.append("file", file);
const url = new URL("/api/workspace/files/upload", backendBase);
url.searchParams.set("session_id", sid);
const res = await fetch(url.toString(), {
method: "POST",
headers: { Authorization: `Bearer ${token}` },
body: formData,
});
if (!res.ok) {
const err = await res.text();
console.error("File upload failed:", err);
toast({
title: "File upload failed",
description: file.name,
variant: "destructive",
});
throw new Error(err);
}
const data = await res.json();
if (!data.file_id) throw new Error("No file_id returned");
return {
file_id: data.file_id,
name: data.name || file.name,
mime_type: data.mime_type || "application/octet-stream",
} as UploadedFile;
}),
);
reconnectTimerRef.current = setTimeout(() => {
setIsReconnectScheduled(false);
resumeStream();
}, delay);
return results
.filter(
(r): r is PromiseFulfilledResult<UploadedFile> =>
r.status === "fulfilled",
)
.map((r) => r.value);
}
const {
messages: rawMessages,
sendMessage,
stop: sdkStop,
status,
error,
setMessages,
resumeStream,
} = useChat({
id: sessionId ?? undefined,
transport: transport ?? undefined,
onFinish: async ({ isDisconnect, isAbort }) => {
if (isAbort || !sessionId) return;
function buildFileParts(uploaded: UploadedFile[]): FileUIPart[] {
return uploaded.map((f) => ({
type: "file" as const,
mediaType: f.mime_type,
filename: f.name,
url: `/api/proxy/api/workspace/files/${f.file_id}/download`,
}));
}
if (isDisconnect) {
handleReconnect(sessionId);
async function onSend(message: string, files?: File[]) {
const trimmed = message.trim();
if (!trimmed && (!files || files.length === 0)) return;
// Client-side file limits
if (files && files.length > 0) {
const MAX_FILES = 10;
const MAX_FILE_SIZE_BYTES = 100 * 1024 * 1024; // 100 MB
if (files.length > MAX_FILES) {
toast({
title: "Too many files",
description: `You can attach up to ${MAX_FILES} files at once.`,
variant: "destructive",
});
return;
}
// Check if backend executor is still running after clean close
const result = await refetchSession();
const backendActive =
result.data?.status === 200 && !!result.data.data.active_stream;
if (backendActive) {
handleReconnect(sessionId);
}
},
onError: (error) => {
if (!sessionId) return;
// Only reconnect on network errors (not HTTP errors)
const isNetworkError =
error.name === "TypeError" || error.name === "AbortError";
if (isNetworkError) {
handleReconnect(sessionId);
}
},
});
// Deduplicate messages continuously to prevent duplicates when resuming streams
const messages = useMemo(
() => deduplicateMessages(rawMessages),
[rawMessages],
);
// Wrap AI SDK's stop() to also cancel the backend executor task.
// sdkStop() aborts the SSE fetch instantly (UI feedback), then we fire
// the cancel API to actually stop the executor and wait for confirmation.
async function stop() {
sdkStop();
setMessages((prev) => resolveInProgressTools(prev, "cancelled"));
if (!sessionId) return;
try {
const res = await postV2CancelSessionTask(sessionId);
if (
res.status === 200 &&
"reason" in res.data &&
res.data.reason === "cancel_published_not_confirmed"
) {
const oversized = files.filter((f) => f.size > MAX_FILE_SIZE_BYTES);
if (oversized.length > 0) {
toast({
title: "Stop may take a moment",
description:
"The cancel was sent but not yet confirmed. The task should stop shortly.",
title: "File too large",
description: `${oversized[0].name} exceeds the 100 MB limit.`,
variant: "destructive",
});
}
} catch {
toast({
title: "Could not stop the task",
description: "The task may still be running in the background.",
variant: "destructive",
});
}
}
// Hydrate messages from REST API when not actively streaming
useEffect(() => {
if (!hydratedMessages || hydratedMessages.length === 0) return;
if (status === "streaming" || status === "submitted") return;
if (isReconnectScheduled) return;
setMessages((prev) => {
if (prev.length >= hydratedMessages.length) return prev;
return deduplicateMessages(hydratedMessages);
});
}, [hydratedMessages, setMessages, status, isReconnectScheduled]);
// Track resume state per session
const hasResumedRef = useRef<Map<string, boolean>>(new Map());
// Clean up reconnect state on session switch
useEffect(() => {
clearTimeout(reconnectTimerRef.current);
reconnectTimerRef.current = undefined;
setReconnectAttempts(0);
setIsReconnectScheduled(false);
hasShownDisconnectToast.current = false;
prevStatusRef.current = status; // Reset to avoid cross-session state bleeding
}, [sessionId, status]);
// Invalidate session cache when stream completes
const prevStatusRef = useRef(status);
useEffect(() => {
const prev = prevStatusRef.current;
prevStatusRef.current = status;
const wasActive = prev === "streaming" || prev === "submitted";
const isIdle = status === "ready" || status === "error";
if (wasActive && isIdle && sessionId && !isReconnectScheduled) {
queryClient.invalidateQueries({
queryKey: getGetV2GetSessionQueryKey(sessionId),
});
if (status === "ready") {
setReconnectAttempts(0);
hasShownDisconnectToast.current = false;
return;
}
}
}, [status, sessionId, queryClient, isReconnectScheduled]);
// Resume an active stream AFTER hydration completes.
// IMPORTANT: Only runs when page loads with existing active stream (reconnection).
// Does NOT run when new streams start during active conversation.
useEffect(() => {
if (!sessionId) return;
if (!hasActiveStream) return;
if (!hydratedMessages || hydratedMessages.length === 0) return;
// Never resume if currently streaming
if (status === "streaming" || status === "submitted") return;
// Only resume once per session
if (hasResumedRef.current.get(sessionId)) return;
// Mark as resumed immediately to prevent race conditions
hasResumedRef.current.set(sessionId, true);
resumeStream();
}, [sessionId, hasActiveStream, hydratedMessages, status, resumeStream]);
// Clear messages when session is null
useEffect(() => {
if (!sessionId) setMessages([]);
}, [sessionId, setMessages]);
useEffect(() => {
if (!sessionId || !pendingMessage) return;
const msg = pendingMessage;
setPendingMessage(null);
sendMessage({ text: msg });
}, [sessionId, pendingMessage, sendMessage]);
async function onSend(message: string) {
const trimmed = message.trim();
if (!trimmed) return;
isUserStoppingRef.current = false;
if (sessionId) {
sendMessage({ text: trimmed });
if (files && files.length > 0) {
setIsUploadingFiles(true);
try {
const uploaded = await uploadFiles(files, sessionId);
if (uploaded.length === 0) {
// All uploads failed — abort send so chips revert to editable
throw new Error("All file uploads failed");
}
const fileParts = buildFileParts(uploaded);
sendMessage({
text: trimmed || "",
files: fileParts.length > 0 ? fileParts : undefined,
});
} finally {
setIsUploadingFiles(false);
}
} else {
sendMessage({ text: trimmed });
}
return;
}
setPendingMessage(trimmed);
setPendingMessage(trimmed || "");
if (files && files.length > 0) {
pendingFilesRef.current = files;
}
await createSession();
}
// --- Session list (for mobile drawer & sidebar) ---
const { data: sessionsResponse, isLoading: isLoadingSessions } =
useGetV2ListSessions(
{ limit: 50 },
@@ -340,63 +258,58 @@ export function useCopilotPage() {
const sessions =
sessionsResponse?.status === 200 ? sessionsResponse.data.sessions : [];
// --- Mobile drawer handlers ---
function handleOpenDrawer() {
setIsDrawerOpen(true);
setDrawerOpen(true);
}
function handleCloseDrawer() {
setIsDrawerOpen(false);
setDrawerOpen(false);
}
function handleDrawerOpenChange(open: boolean) {
setIsDrawerOpen(open);
setDrawerOpen(open);
}
function handleSelectSession(id: string) {
setSessionId(id);
if (isMobile) setIsDrawerOpen(false);
if (isMobile) setDrawerOpen(false);
}
function handleNewChat() {
setSessionId(null);
if (isMobile) setIsDrawerOpen(false);
if (isMobile) setDrawerOpen(false);
}
const handleDeleteClick = useCallback(
(id: string, title: string | null | undefined) => {
if (isDeleting) return;
setSessionToDelete({ id, title });
},
[isDeleting],
);
// --- Delete handlers ---
function handleDeleteClick(id: string, title: string | null | undefined) {
if (isDeleting) return;
setSessionToDelete({ id, title });
}
const handleConfirmDelete = useCallback(() => {
function handleConfirmDelete() {
if (sessionToDelete) {
deleteSessionMutation({ sessionId: sessionToDelete.id });
}
}, [sessionToDelete, deleteSessionMutation]);
}
const handleCancelDelete = useCallback(() => {
function handleCancelDelete() {
if (!isDeleting) {
setSessionToDelete(null);
}
}, [isDeleting]);
// True while reconnecting or backend has active stream but we haven't connected yet
const isReconnecting =
isReconnectScheduled ||
(hasActiveStream && status !== "streaming" && status !== "submitted");
}
return {
sessionId,
messages,
status,
error: isReconnecting ? undefined : error,
error,
stop,
isReconnecting,
isLoadingSession,
isSessionError,
isCreatingSession,
isUploadingFiles,
isUserLoading,
isLoggedIn,
createSession,

View File

@@ -0,0 +1,377 @@
import {
getGetV2GetSessionQueryKey,
postV2CancelSessionTask,
} from "@/app/api/__generated__/endpoints/chat/chat";
import { toast } from "@/components/molecules/Toast/use-toast";
import { getWebSocketToken } from "@/lib/supabase/actions";
import { environment } from "@/services/environment";
import { useChat } from "@ai-sdk/react";
import { useQueryClient } from "@tanstack/react-query";
import { DefaultChatTransport } from "ai";
import type { FileUIPart, UIMessage } from "ai";
import { useEffect, useMemo, useRef, useState } from "react";
import { deduplicateMessages, resolveInProgressTools } from "./helpers";
const RECONNECT_BASE_DELAY_MS = 1_000;
const RECONNECT_MAX_ATTEMPTS = 3;
/** Fetch a fresh JWT for direct backend requests (same pattern as WebSocket). */
async function getAuthHeaders(): Promise<Record<string, string>> {
const { token, error } = await getWebSocketToken();
if (error || !token) {
console.warn("[Copilot] Failed to get auth token:", error);
throw new Error("Authentication failed — please sign in again.");
}
return { Authorization: `Bearer ${token}` };
}
interface UseCopilotStreamArgs {
sessionId: string | null;
hydratedMessages: UIMessage[] | undefined;
hasActiveStream: boolean;
refetchSession: () => Promise<{ data?: unknown }>;
}
export function useCopilotStream({
sessionId,
hydratedMessages,
hasActiveStream,
refetchSession,
}: UseCopilotStreamArgs) {
const queryClient = useQueryClient();
// Connect directly to the Python backend for SSE, bypassing the Next.js
// serverless proxy. This eliminates the Vercel 800s function timeout that
// was the primary cause of stream disconnections on long-running tasks.
// Auth uses the same server-action token pattern as the WebSocket connection.
const transport = useMemo(
() =>
sessionId
? new DefaultChatTransport({
api: `${environment.getAGPTServerBaseUrl()}/api/chat/sessions/${sessionId}/stream`,
prepareSendMessagesRequest: async ({ messages }) => {
const last = messages[messages.length - 1];
// Extract file_ids from FileUIPart entries on the message
const fileIds = last.parts
?.filter((p): p is FileUIPart => p.type === "file")
.map((p) => {
// URL is like /api/proxy/api/workspace/files/{id}/download
const match = p.url.match(/\/workspace\/files\/([^/]+)\//);
return match?.[1];
})
.filter(Boolean) as string[] | undefined;
return {
body: {
message: (
last.parts?.map((p) => (p.type === "text" ? p.text : "")) ??
[]
).join(""),
is_user_message: last.role === "user",
context: null,
file_ids: fileIds && fileIds.length > 0 ? fileIds : null,
},
headers: await getAuthHeaders(),
};
},
prepareReconnectToStreamRequest: async () => ({
api: `${environment.getAGPTServerBaseUrl()}/api/chat/sessions/${sessionId}/stream`,
headers: await getAuthHeaders(),
}),
})
: null,
[sessionId],
);
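The `file_ids` extraction above relies on the download URL shape produced by `buildFileParts`; a minimal standalone check of that regex (sample id is invented):

```typescript
// Same pattern as in prepareSendMessagesRequest: pull the workspace file id
// out of a FileUIPart download URL like
// /api/proxy/api/workspace/files/{id}/download
const url = "/api/proxy/api/workspace/files/abc123/download";
const match = url.match(/\/workspace\/files\/([^/]+)\//);
const fileId = match?.[1];
console.log(fileId);
```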
// Reconnect state — use refs for values read inside callbacks to avoid
// stale closures when multiple reconnect cycles fire in quick succession.
const reconnectAttemptsRef = useRef(0);
const isReconnectScheduledRef = useRef(false);
const [isReconnectScheduled, setIsReconnectScheduled] = useState(false);
const reconnectTimerRef = useRef<ReturnType<typeof setTimeout>>();
const hasShownDisconnectToast = useRef(false);
// Set when the user explicitly clicks stop — prevents onError from
// triggering a reconnect cycle for the resulting AbortError.
const isUserStoppingRef = useRef(false);
function handleReconnect(sid: string) {
if (isReconnectScheduledRef.current || !sid) return;
const nextAttempt = reconnectAttemptsRef.current + 1;
if (nextAttempt > RECONNECT_MAX_ATTEMPTS) {
toast({
title: "Connection lost",
description: "Unable to reconnect. Please refresh the page.",
variant: "destructive",
});
return;
}
isReconnectScheduledRef.current = true;
setIsReconnectScheduled(true);
reconnectAttemptsRef.current = nextAttempt;
if (!hasShownDisconnectToast.current) {
hasShownDisconnectToast.current = true;
toast({
title: "Connection lost",
description: "Reconnecting...",
});
}
const delay = RECONNECT_BASE_DELAY_MS * 2 ** (nextAttempt - 1);
reconnectTimerRef.current = setTimeout(() => {
isReconnectScheduledRef.current = false;
setIsReconnectScheduled(false);
resumeStream();
}, delay);
}
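With the constants at the top of this file, `handleReconnect` schedules delays of 1 s, 2 s, then 4 s before giving up; a standalone sketch of that schedule (constants copied from above):

```typescript
// Reconnect delay schedule: base delay doubled per attempt, no cap needed
// since attempts stop at RECONNECT_MAX_ATTEMPTS.
const RECONNECT_BASE_DELAY_MS = 1_000;
const RECONNECT_MAX_ATTEMPTS = 3;

const delays: number[] = [];
for (let attempt = 1; attempt <= RECONNECT_MAX_ATTEMPTS; attempt++) {
  delays.push(RECONNECT_BASE_DELAY_MS * 2 ** (attempt - 1));
}
console.log(delays); // [1000, 2000, 4000]
```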
const {
messages: rawMessages,
sendMessage,
stop: sdkStop,
status,
error,
setMessages,
resumeStream,
} = useChat({
id: sessionId ?? undefined,
transport: transport ?? undefined,
onFinish: async ({ isDisconnect, isAbort }) => {
if (isAbort || !sessionId) return;
if (isDisconnect) {
handleReconnect(sessionId);
return;
}
// Check if backend executor is still running after clean close
const result = await refetchSession();
const d = result.data;
const backendActive =
d != null &&
typeof d === "object" &&
"status" in d &&
d.status === 200 &&
"data" in d &&
d.data != null &&
typeof d.data === "object" &&
"active_stream" in d.data &&
!!d.data.active_stream;
if (backendActive) {
handleReconnect(sessionId);
}
},
onError: (error) => {
if (!sessionId) return;
// Detect authentication failures (from getAuthHeaders or 401 responses)
const isAuthError =
error.message.includes("Authentication failed") ||
error.message.includes("Unauthorized") ||
error.message.includes("Not authenticated") ||
error.message.toLowerCase().includes("401");
if (isAuthError) {
toast({
title: "Authentication error",
description: "Your session may have expired. Please sign in again.",
variant: "destructive",
});
return;
}
// Only reconnect on network errors (not HTTP errors), and never
// reconnect when the user explicitly stopped the stream.
if (isUserStoppingRef.current) return;
const isNetworkError =
error.name === "TypeError" || error.name === "AbortError";
if (isNetworkError) {
handleReconnect(sessionId);
}
},
});
// Deduplicate messages continuously to prevent duplicates when resuming streams
const messages = useMemo(
() => deduplicateMessages(rawMessages),
[rawMessages],
);
// Wrap AI SDK's stop() to also cancel the backend executor task.
// sdkStop() aborts the SSE fetch instantly (UI feedback), then we fire
// the cancel API to actually stop the executor and wait for confirmation.
async function stop() {
isUserStoppingRef.current = true;
sdkStop();
// Resolve pending tool calls and inject a cancellation marker so the UI
// shows "You manually stopped this chat" immediately (the backend writes
// the same marker to the DB, but the SSE connection is already aborted).
// Marker must match COPILOT_ERROR_PREFIX in ChatMessagesContainer/helpers.ts.
setMessages((prev) => {
const resolved = resolveInProgressTools(prev, "cancelled");
const last = resolved[resolved.length - 1];
if (last?.role === "assistant") {
return [
...resolved.slice(0, -1),
{
...last,
parts: [
...last.parts,
{
type: "text" as const,
text: "[__COPILOT_ERROR_f7a1__] Operation cancelled",
},
],
},
];
}
return resolved;
});
if (!sessionId) return;
try {
const res = await postV2CancelSessionTask(sessionId);
if (
res.status === 200 &&
"reason" in res.data &&
res.data.reason === "cancel_published_not_confirmed"
) {
toast({
title: "Stop may take a moment",
description:
"The cancel was sent but not yet confirmed. The task should stop shortly.",
});
}
} catch {
toast({
title: "Could not stop the task",
description: "The task may still be running in the background.",
variant: "destructive",
});
}
}
// Hydrate messages from REST API when not actively streaming
useEffect(() => {
if (!hydratedMessages || hydratedMessages.length === 0) return;
if (status === "streaming" || status === "submitted") return;
if (isReconnectScheduled) return;
setMessages((prev) => {
if (prev.length >= hydratedMessages.length) return prev;
return deduplicateMessages(hydratedMessages);
});
}, [hydratedMessages, setMessages, status, isReconnectScheduled]);
// Track resume state per session
const hasResumedRef = useRef<Map<string, boolean>>(new Map());
// Clean up reconnect state on session switch
useEffect(() => {
clearTimeout(reconnectTimerRef.current);
reconnectTimerRef.current = undefined;
reconnectAttemptsRef.current = 0;
isReconnectScheduledRef.current = false;
setIsReconnectScheduled(false);
hasShownDisconnectToast.current = false;
isUserStoppingRef.current = false;
hasResumedRef.current.clear();
return () => {
clearTimeout(reconnectTimerRef.current);
reconnectTimerRef.current = undefined;
};
}, [sessionId]);
// Invalidate session cache when stream completes
const prevStatusRef = useRef(status);
useEffect(() => {
const prev = prevStatusRef.current;
prevStatusRef.current = status;
const wasActive = prev === "streaming" || prev === "submitted";
const isIdle = status === "ready" || status === "error";
if (wasActive && isIdle && sessionId && !isReconnectScheduled) {
queryClient.invalidateQueries({
queryKey: getGetV2GetSessionQueryKey(sessionId),
});
if (status === "ready") {
reconnectAttemptsRef.current = 0;
hasShownDisconnectToast.current = false;
}
}
}, [status, sessionId, queryClient, isReconnectScheduled]);
// Resume an active stream AFTER hydration completes.
// IMPORTANT: Only runs when page loads with existing active stream (reconnection).
// Does NOT run when new streams start during active conversation.
useEffect(() => {
if (!sessionId) return;
if (!hasActiveStream) return;
if (!hydratedMessages || hydratedMessages.length === 0) return;
// Never resume if currently streaming
if (status === "streaming" || status === "submitted") return;
// Only resume once per session
if (hasResumedRef.current.get(sessionId)) return;
// Don't resume a stream the user just cancelled
if (isUserStoppingRef.current) return;
// Mark as resumed immediately to prevent race conditions
hasResumedRef.current.set(sessionId, true);
// Remove the in-progress assistant message before resuming.
// The backend replays the stream from "0-0", so keeping the hydrated
// version would cause the old parts to overlap with replayed parts.
// Previous turns are preserved; the stream recreates the current turn.
setMessages((prev) => {
if (prev.length > 0 && prev[prev.length - 1].role === "assistant") {
return prev.slice(0, -1);
}
return prev;
});
resumeStream();
}, [
sessionId,
hasActiveStream,
hydratedMessages,
status,
resumeStream,
setMessages,
]);
// Clear messages when session is null
useEffect(() => {
if (!sessionId) setMessages([]);
}, [sessionId, setMessages]);
// Reset the user-stop flag once the backend confirms the stream is no
// longer active — this prevents the flag from staying stale forever.
useEffect(() => {
if (!hasActiveStream && isUserStoppingRef.current) {
isUserStoppingRef.current = false;
}
}, [hasActiveStream]);
// True while reconnecting or backend has active stream but we haven't connected yet.
// Suppressed when the user explicitly stopped — the backend may take a moment
// to clear active_stream but the UI should be responsive immediately.
const isReconnecting =
!isUserStoppingRef.current &&
(isReconnectScheduled ||
(hasActiveStream && status !== "streaming" && status !== "submitted"));
return {
messages,
sendMessage,
stop,
status,
error: isReconnecting || isUserStoppingRef.current ? undefined : error,
isReconnecting,
isUserStoppingRef,
};
}
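The nested `backendActive` narrowing near the top of this hook could be factored into a reusable type guard. A sketch, not part of this commit — the helper name is invented:

```typescript
// Hypothetical helper mirroring the inline `backendActive` narrowing:
// true only when the refetched session response is a 200 whose payload
// reports an active backend stream.
function hasActiveBackendStream(result: unknown): boolean {
  if (result == null || typeof result !== "object") return false;
  const res = result as { status?: unknown; data?: unknown };
  if (res.status !== 200) return false;
  const data = res.data;
  if (data == null || typeof data !== "object") return false;
  return Boolean((data as { active_stream?: unknown }).active_stream);
}
```

With this guard, the `onFinish` handler would reduce to `if (hasActiveBackendStream(result)) handleReconnect(sessionId);`.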

View File

@@ -3,6 +3,8 @@ import { getServerAuthToken } from "@/lib/autogpt-server-api/helpers";
import { NextRequest } from "next/server";
import { normalizeSSEStream, SSE_HEADERS } from "../../../sse-helpers";
// Legacy SSE proxy fallback. Primary transport is direct backend SSE.
// See useCopilotStream.ts for active transport logic.
export const maxDuration = 800;
const DEBUG_SSE_TIMEOUT_MS = process.env.NEXT_PUBLIC_SSE_TIMEOUT_MS
@@ -25,9 +27,9 @@ export async function POST(
try {
const body = await request.json();
-const { message, is_user_message, context } = body;
+const { message, is_user_message, context, file_ids } = body;
-if (!message) {
+if (message === undefined) {
return new Response(
JSON.stringify({ error: "Missing message parameter" }),
{ status: 400, headers: { "Content-Type": "application/json" } },
@@ -60,6 +62,7 @@ export async function POST(
message,
is_user_message: is_user_message ?? true,
context: context || null,
file_ids: file_ids || null,
}),
signal: debugSignal(),
});
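The payload this proxy now forwards can be summarized as a small builder — an illustrative sketch with invented names, assuming `context` stays opaque. Note the route rejects only a missing `message`, so an empty string plus `file_ids` supports file-only sends:

```typescript
// Shape of the body forwarded to the backend; file_ids is the field added here.
interface StreamRequestBody {
  message: string;
  is_user_message: boolean;
  context: unknown;
  file_ids: string[] | null;
}

function buildStreamBody(
  message: string,
  opts: { isUserMessage?: boolean; context?: unknown; fileIds?: string[] } = {},
): StreamRequestBody {
  return {
    message,
    is_user_message: opts.isUserMessage ?? true,
    context: opts.context ?? null,
    file_ids: opts.fileIds ?? null,
  };
}
```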

View File

@@ -2039,7 +2039,9 @@
"description": "Successful Response",
"content": {
"application/json": {
"schema": { "$ref": "#/components/schemas/UploadFileResponse" }
"schema": {
"$ref": "#/components/schemas/backend__api__model__UploadFileResponse"
}
}
}
},
@@ -6497,6 +6499,59 @@
}
}
},
"/api/workspace/files/upload": {
"post": {
"tags": ["workspace"],
"summary": "Upload file to workspace",
"description": "Upload a file to the user's workspace.\n\nFiles are stored in session-scoped paths when session_id is provided,\nso the agent's session-scoped tools can discover them automatically.",
"operationId": "postWorkspaceUpload file to workspace",
"security": [{ "HTTPBearerJWT": [] }],
"parameters": [
{
"name": "session_id",
"in": "query",
"required": false,
"schema": {
"anyOf": [{ "type": "string" }, { "type": "null" }],
"title": "Session Id"
}
}
],
"requestBody": {
"required": true,
"content": {
"multipart/form-data": {
"schema": {
"$ref": "#/components/schemas/Body_postWorkspaceUpload_file_to_workspace"
}
}
}
},
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/backend__api__features__workspace__routes__UploadFileResponse"
}
}
}
},
"401": {
"$ref": "#/components/responses/HTTP401NotAuthenticatedError"
},
"422": {
"description": "Validation Error",
"content": {
"application/json": {
"schema": { "$ref": "#/components/schemas/HTTPValidationError" }
}
}
}
}
}
},
"/api/workspace/files/{file_id}/download": {
"get": {
"tags": ["workspace"],
@@ -6531,6 +6586,30 @@
}
}
},
"/api/workspace/storage/usage": {
"get": {
"tags": ["workspace"],
"summary": "Get workspace storage usage",
"description": "Get storage usage information for the user's workspace.",
"operationId": "getWorkspaceGet workspace storage usage",
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/StorageUsageResponse"
}
}
}
},
"401": {
"$ref": "#/components/responses/HTTP401NotAuthenticatedError"
}
},
"security": [{ "HTTPBearerJWT": [] }]
}
},
"/health": {
"get": {
"tags": ["health"],
@@ -7768,6 +7847,14 @@
"required": ["file"],
"title": "Body_postV2Upload submission media"
},
"Body_postWorkspaceUpload_file_to_workspace": {
"properties": {
"file": { "type": "string", "format": "binary", "title": "File" }
},
"type": "object",
"required": ["file"],
"title": "Body_postWorkspaceUpload file to workspace"
},
"BulkMoveAgentsRequest": {
"properties": {
"agent_ids": {
@@ -11167,7 +11254,6 @@
"operation_in_progress",
"input_validation_error",
"web_fetch",
"browse_web",
"bash_exec",
"feature_request_search",
"feature_request_created",
@@ -11593,6 +11679,17 @@
"type": "object",
"title": "Stats"
},
"StorageUsageResponse": {
"properties": {
"used_bytes": { "type": "integer", "title": "Used Bytes" },
"limit_bytes": { "type": "integer", "title": "Limit Bytes" },
"used_percent": { "type": "number", "title": "Used Percent" },
"file_count": { "type": "integer", "title": "File Count" }
},
"type": "object",
"required": ["used_bytes", "limit_bytes", "used_percent", "file_count"],
"title": "StorageUsageResponse"
},
"StoreAgent": {
"properties": {
"slug": { "type": "string", "title": "Slug" },
@@ -12040,6 +12137,17 @@
{ "type": "null" }
],
"title": "Context"
},
"file_ids": {
"anyOf": [
{
"items": { "type": "string" },
"type": "array",
"maxItems": 20
},
{ "type": "null" }
],
"title": "File Ids"
}
},
"type": "object",
@@ -13621,24 +13729,6 @@
"required": ["timezone"],
"title": "UpdateTimezoneRequest"
},
"UploadFileResponse": {
"properties": {
"file_uri": { "type": "string", "title": "File Uri" },
"file_name": { "type": "string", "title": "File Name" },
"size": { "type": "integer", "title": "Size" },
"content_type": { "type": "string", "title": "Content Type" },
"expires_in_hours": { "type": "integer", "title": "Expires In Hours" }
},
"type": "object",
"required": [
"file_uri",
"file_name",
"size",
"content_type",
"expires_in_hours"
],
"title": "UploadFileResponse"
},
"UserHistoryResponse": {
"properties": {
"history": {
@@ -13967,6 +14057,36 @@
"url"
],
"title": "Webhook"
},
"backend__api__features__workspace__routes__UploadFileResponse": {
"properties": {
"file_id": { "type": "string", "title": "File Id" },
"name": { "type": "string", "title": "Name" },
"path": { "type": "string", "title": "Path" },
"mime_type": { "type": "string", "title": "Mime Type" },
"size_bytes": { "type": "integer", "title": "Size Bytes" }
},
"type": "object",
"required": ["file_id", "name", "path", "mime_type", "size_bytes"],
"title": "UploadFileResponse"
},
"backend__api__model__UploadFileResponse": {
"properties": {
"file_uri": { "type": "string", "title": "File Uri" },
"file_name": { "type": "string", "title": "File Name" },
"size": { "type": "integer", "title": "Size" },
"content_type": { "type": "string", "title": "Content Type" },
"expires_in_hours": { "type": "integer", "title": "Expires In Hours" }
},
"type": "object",
"required": [
"file_uri",
"file_name",
"size",
"content_type",
"expires_in_hours"
],
"title": "UploadFileResponse"
}
},
"securitySchemes": {

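For clients of the new `GET /api/workspace/storage/usage` endpoint, the `StorageUsageResponse` schema above maps to a small TypeScript type. The `usageLabel` helper below is an invented illustration of rendering it, not part of the generated client:

```typescript
// Mirrors the StorageUsageResponse schema from the OpenAPI spec above.
interface StorageUsageResponse {
  used_bytes: number;
  limit_bytes: number;
  used_percent: number;
  file_count: number;
}

// Invented helper: format usage as "X.X / Y.Y MB (Z%)".
function usageLabel(u: StorageUsageResponse): string {
  const mb = (bytes: number) => (bytes / (1024 * 1024)).toFixed(1);
  return `${mb(u.used_bytes)} / ${mb(u.limit_bytes)} MB (${u.used_percent.toFixed(0)}%)`;
}
```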
View File

@@ -0,0 +1,48 @@
import { environment } from "@/services/environment";
import { getServerAuthToken } from "@/lib/autogpt-server-api/helpers";
import { NextRequest, NextResponse } from "next/server";
export async function POST(request: NextRequest) {
try {
const formData = await request.formData();
const sessionId = request.nextUrl.searchParams.get("session_id");
const token = await getServerAuthToken();
const backendUrl = environment.getAGPTServerBaseUrl();
const uploadUrl = new URL("/api/workspace/files/upload", backendUrl);
if (sessionId) {
uploadUrl.searchParams.set("session_id", sessionId);
}
const headers: Record<string, string> = {};
if (token && token !== "no-token-found") {
headers["Authorization"] = `Bearer ${token}`;
}
const response = await fetch(uploadUrl.toString(), {
method: "POST",
headers,
body: formData,
});
if (!response.ok) {
const errorText = await response.text();
return new NextResponse(errorText, {
status: response.status,
});
}
const data = await response.json();
return NextResponse.json(data);
} catch (error) {
console.error("File upload proxy error:", error);
return NextResponse.json(
{
error: "Failed to upload file",
detail: error instanceof Error ? error.message : String(error),
},
{ status: 500 },
);
}
}
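A browser caller of this proxy would build the request URL the same way the route builds the backend URL, threading `session_id` as a query parameter. A sketch — the proxy path here is illustrative, since the route's filename is not shown:

```typescript
// Invented helper: construct the upload proxy URL with an optional session_id,
// mirroring the URL construction the route performs for the backend.
function buildUploadProxyUrl(origin: string, sessionId?: string | null): string {
  const url = new URL("/api/chat/upload", origin); // path is an assumption
  if (sessionId) url.searchParams.set("session_id", sessionId);
  return url.toString();
}
```

The caller would then `fetch` this URL with `method: "POST"` and a `FormData` body containing the file, letting the browser set the multipart boundary header.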

View File

@@ -61,15 +61,11 @@ export const ConversationEmptyState = ({
>
{children ?? (
<>
-{icon && (
-<div className="text-neutral-500 dark:text-neutral-400">{icon}</div>
-)}
+{icon && <div className="text-neutral-500">{icon}</div>}
<div className="space-y-1">
<h3 className="text-sm font-medium">{title}</h3>
{description && (
<p className="text-sm text-neutral-500 dark:text-neutral-400">
{description}
</p>
<p className="text-sm text-neutral-500">{description}</p>
)}
</div>
</>
@@ -93,7 +89,7 @@ export const ConversationScrollButton = ({
!isAtBottom && (
<Button
className={cn(
"absolute bottom-4 left-[50%] translate-x-[-50%] rounded-full dark:bg-white dark:dark:bg-neutral-950 dark:dark:hover:bg-neutral-800 dark:hover:bg-neutral-100",
"absolute bottom-4 left-[50%] translate-x-[-50%] rounded-full",
className,
)}
onClick={handleScrollToBottom}

View File

@@ -45,8 +45,8 @@ export const MessageContent = ({
className={cn(
"is-user:dark flex w-full min-w-0 max-w-full flex-col gap-2 overflow-hidden text-sm",
"group-[.is-user]:w-fit",
"group-[.is-user]:ml-auto group-[.is-user]:rounded-lg group-[.is-user]:bg-neutral-100 group-[.is-user]:px-4 group-[.is-user]:py-3 group-[.is-user]:text-neutral-950 dark:group-[.is-user]:bg-neutral-800 dark:group-[.is-user]:text-neutral-50",
"group-[.is-assistant]:text-neutral-950 dark:group-[.is-assistant]:text-neutral-50",
"group-[.is-user]:ml-auto group-[.is-user]:rounded-lg group-[.is-user]:bg-neutral-100 group-[.is-user]:px-4 group-[.is-user]:py-3 group-[.is-user]:text-neutral-950",
"group-[.is-assistant]:text-neutral-950",
className,
)}
{...props}
@@ -291,7 +291,7 @@ export const MessageBranchPage = ({
return (
<ButtonGroupText
className={cn(
"border-none bg-transparent text-neutral-500 shadow-none dark:text-neutral-400",
"border-none bg-transparent text-neutral-500 shadow-none",
className,
)}
{...props}

View File

@@ -0,0 +1,345 @@
"use client";
/**
* Adapted from AI SDK Elements `prompt-input` component.
* @see https://elements.ai-sdk.dev/components/prompt-input
*
* Stripped down to only the sub-components used by the copilot ChatInput:
* PromptInput, PromptInputBody, PromptInputTextarea, PromptInputFooter,
* PromptInputTools, PromptInputButton, PromptInputSubmit.
*/
import type { ChatStatus } from "ai";
import type {
ComponentProps,
FormEvent,
FormEventHandler,
HTMLAttributes,
KeyboardEventHandler,
ReactNode,
} from "react";
import {
InputGroup,
InputGroupAddon,
InputGroupButton,
InputGroupTextarea,
} from "@/components/ui/input-group";
import { Spinner } from "@/components/ui/spinner";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/ui/tooltip";
import { cn } from "@/lib/utils";
import {
ArrowUp as ArrowUpIcon,
Stop as StopIcon,
} from "@phosphor-icons/react";
import { Children, useCallback, useEffect, useRef, useState } from "react";
// ============================================================================
// PromptInput — form wrapper
// ============================================================================
export type PromptInputProps = Omit<
HTMLAttributes<HTMLFormElement>,
"onSubmit"
> & {
onSubmit: (
text: string,
event: FormEvent<HTMLFormElement>,
) => void | Promise<void>;
};
export function PromptInput({
className,
onSubmit,
children,
...props
}: PromptInputProps) {
const formRef = useRef<HTMLFormElement | null>(null);
const handleSubmit: FormEventHandler<HTMLFormElement> = useCallback(
async (event) => {
event.preventDefault();
const form = event.currentTarget;
const formData = new FormData(form);
const text = (formData.get("message") as string) || "";
const result = onSubmit(text, event);
if (result instanceof Promise) {
await result;
}
},
[onSubmit],
);
return (
<form
className={cn("w-full", className)}
onSubmit={handleSubmit}
ref={formRef}
{...props}
>
<InputGroup className="overflow-hidden">{children}</InputGroup>
</form>
);
}
// ============================================================================
// PromptInputBody — content wrapper
// ============================================================================
export type PromptInputBodyProps = HTMLAttributes<HTMLDivElement>;
export function PromptInputBody({ className, ...props }: PromptInputBodyProps) {
return <div className={cn("contents", className)} {...props} />;
}
// ============================================================================
// PromptInputTextarea — auto-resize textarea with Enter-to-submit
// ============================================================================
export type PromptInputTextareaProps = ComponentProps<
typeof InputGroupTextarea
>;
export function PromptInputTextarea({
onKeyDown,
onChange,
className,
placeholder = "Type your message...",
value,
...props
}: PromptInputTextareaProps) {
const [isComposing, setIsComposing] = useState(false);
const textareaRef = useRef<HTMLTextAreaElement | null>(null);
function autoResize(el: HTMLTextAreaElement) {
el.style.height = "auto";
el.style.height = `${el.scrollHeight}px`;
}
// Resize when value changes externally (e.g. cleared after send)
useEffect(() => {
if (textareaRef.current) autoResize(textareaRef.current);
}, [value]);
const handleChange = useCallback(
(e: React.ChangeEvent<HTMLTextAreaElement>) => {
autoResize(e.currentTarget);
onChange?.(e);
},
[onChange],
);
const handleKeyDown: KeyboardEventHandler<HTMLTextAreaElement> = useCallback(
(e) => {
// Call external handler first
onKeyDown?.(e);
if (e.defaultPrevented) return;
if (e.key === "Enter") {
if (isComposing || e.nativeEvent.isComposing) return;
if (e.shiftKey) return;
e.preventDefault();
const { form } = e.currentTarget;
const submitButton = form?.querySelector(
'button[type="submit"]',
) as HTMLButtonElement | null;
if (submitButton?.disabled) return;
form?.requestSubmit();
}
},
[onKeyDown, isComposing],
);
const handleCompositionEnd = useCallback(() => setIsComposing(false), []);
const handleCompositionStart = useCallback(() => setIsComposing(true), []);
return (
<InputGroupTextarea
ref={textareaRef}
rows={1}
className={cn(
"max-h-48 min-h-0 text-base leading-6 md:text-base",
className,
)}
name="message"
value={value}
onChange={handleChange}
onCompositionEnd={handleCompositionEnd}
onCompositionStart={handleCompositionStart}
onKeyDown={handleKeyDown}
placeholder={placeholder}
{...props}
/>
);
}
// ============================================================================
// PromptInputFooter — bottom bar
// ============================================================================
export type PromptInputFooterProps = Omit<
ComponentProps<typeof InputGroupAddon>,
"align"
>;
export function PromptInputFooter({
className,
...props
}: PromptInputFooterProps) {
return (
<InputGroupAddon
align="block-end"
className={cn("justify-between gap-1", className)}
{...props}
/>
);
}
// ============================================================================
// PromptInputTools — left-side button group
// ============================================================================
export type PromptInputToolsProps = HTMLAttributes<HTMLDivElement>;
export function PromptInputTools({
className,
...props
}: PromptInputToolsProps) {
return (
<div
className={cn("flex min-w-0 items-center gap-1", className)}
{...props}
/>
);
}
// ============================================================================
// PromptInputButton — tool button with optional tooltip
// ============================================================================
export type PromptInputButtonTooltip =
| string
| {
content: ReactNode;
shortcut?: string;
side?: ComponentProps<typeof TooltipContent>["side"];
};
export type PromptInputButtonProps = ComponentProps<typeof InputGroupButton> & {
tooltip?: PromptInputButtonTooltip;
};
export function PromptInputButton({
variant = "ghost",
className,
size,
tooltip,
...props
}: PromptInputButtonProps) {
const newSize =
size ?? (Children.count(props.children) > 1 ? "sm" : "icon-sm");
const button = (
<InputGroupButton
className={cn(className)}
size={newSize}
type="button"
variant={variant}
{...props}
/>
);
if (!tooltip) return button;
const tooltipContent =
typeof tooltip === "string" ? tooltip : tooltip.content;
const shortcut = typeof tooltip === "string" ? undefined : tooltip.shortcut;
const side = typeof tooltip === "string" ? "top" : (tooltip.side ?? "top");
return (
<Tooltip>
<TooltipTrigger asChild>{button}</TooltipTrigger>
<TooltipContent side={side}>
{tooltipContent}
{shortcut && (
<span className="ml-2 text-muted-foreground">{shortcut}</span>
)}
</TooltipContent>
</Tooltip>
);
}
// ============================================================================
// PromptInputSubmit — send / stop button
// ============================================================================
export type PromptInputSubmitProps = ComponentProps<typeof InputGroupButton> & {
status?: ChatStatus;
onStop?: () => void;
};
export function PromptInputSubmit({
className,
variant = "default",
size = "icon-sm",
status,
onStop,
onClick,
disabled,
children,
...props
}: PromptInputSubmitProps) {
const isGenerating = status === "submitted" || status === "streaming";
const canStop = isGenerating && Boolean(onStop);
const isDisabled = Boolean(disabled) || (isGenerating && !canStop);
let Icon = <ArrowUpIcon className="size-4" weight="bold" />;
if (status === "submitted") {
Icon = <Spinner />;
} else if (status === "streaming") {
Icon = <StopIcon className="size-4" weight="bold" />;
}
const handleClick = useCallback(
(e: React.MouseEvent<HTMLButtonElement>) => {
if (canStop && onStop) {
e.preventDefault();
onStop();
return;
}
if (isGenerating) {
e.preventDefault();
return;
}
onClick?.(e);
},
[canStop, isGenerating, onStop, onClick],
);
return (
<InputGroupButton
aria-label={canStop ? "Stop" : "Submit"}
className={cn(
"size-[2.625rem] rounded-full border-zinc-800 bg-zinc-800 text-white hover:border-zinc-900 hover:bg-zinc-900 disabled:border-zinc-200 disabled:bg-zinc-200 disabled:text-white disabled:opacity-100",
className,
)}
disabled={isDisabled}
onClick={handleClick}
size={size}
type={canStop ? "button" : "submit"}
variant={variant}
{...props}
>
{children ?? Icon}
</InputGroupButton>
);
}
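The Enter-key handling in `PromptInputTextarea` above reduces to a pure predicate — a sketch for clarity; the function name is invented:

```typescript
// Plain Enter submits; Shift+Enter inserts a newline; Enter during IME
// composition is ignored (matching the isComposing guard above).
function shouldSubmitOnEnter(e: {
  key: string;
  shiftKey: boolean;
  isComposing: boolean;
}): boolean {
  return e.key === "Enter" && !e.shiftKey && !e.isComposing;
}
```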

View File

@@ -0,0 +1,129 @@
"use client";
import * as React from "react";
import { cva, type VariantProps } from "class-variance-authority";
import { cn } from "@/lib/utils";
import { Button } from "@/components/ui/button";
import { Textarea } from "@/components/ui/textarea";
function InputGroup({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="input-group"
role="group"
className={cn(
"group/input-group relative flex w-full items-center rounded-xlarge border border-neutral-200 bg-white shadow-sm outline-none transition-[color,box-shadow]",
"min-w-0 has-[>textarea]:h-auto",
// Variants based on alignment.
"has-[>[data-align=block-start]]:h-auto has-[>[data-align=block-start]]:flex-col",
"has-[>[data-align=block-end]]:h-auto has-[>[data-align=block-end]]:flex-col",
// Focus state.
"has-[[data-slot=input-group-control]:focus-visible]:border-zinc-400 has-[[data-slot=input-group-control]:focus-visible]:ring-1 has-[[data-slot=input-group-control]:focus-visible]:ring-zinc-400",
className,
)}
{...props}
/>
);
}
const inputGroupAddonVariants = cva(
"text-muted-foreground flex h-auto cursor-text items-center justify-center gap-2 py-1.5 text-sm font-medium select-none group-data-[disabled=true]/input-group:opacity-50",
{
variants: {
align: {
"inline-start": "order-first pl-3",
"inline-end": "order-last pr-3",
"block-start": "order-first w-full justify-start px-3 pt-3",
"block-end": "order-last w-full justify-start px-3 pb-3",
},
},
defaultVariants: {
align: "inline-start",
},
},
);
function InputGroupAddon({
className,
align = "inline-start",
onClick,
...props
}: React.ComponentProps<"div"> & VariantProps<typeof inputGroupAddonVariants>) {
return (
<div
role="group"
data-slot="input-group-addon"
data-align={align}
className={cn(inputGroupAddonVariants({ align }), className)}
onClick={(e) => {
onClick?.(e);
if (e.defaultPrevented) return;
if ((e.target as HTMLElement).closest("button")) {
return;
}
e.currentTarget.parentElement?.querySelector("textarea")?.focus();
}}
{...props}
/>
);
}
const inputGroupButtonVariants = cva(
"text-sm shadow-none flex min-w-0 gap-2 items-center",
{
variants: {
size: {
xs: "h-6 gap-1 px-2 rounded-md has-[>svg]:px-2",
sm: "h-8 px-2.5 gap-1.5 rounded-md has-[>svg]:px-2.5",
"icon-xs": "size-6 rounded-md p-0 has-[>svg]:p-0",
"icon-sm": "size-8 rounded-md p-0 has-[>svg]:p-0",
},
},
defaultVariants: {
size: "xs",
},
},
);
function InputGroupButton({
className,
type = "button",
variant = "ghost",
size = "xs",
...props
}: Omit<React.ComponentProps<typeof Button>, "size"> &
VariantProps<typeof inputGroupButtonVariants>) {
return (
<Button
type={type}
data-size={size}
variant={variant}
className={cn(inputGroupButtonVariants({ size }), className)}
{...props}
/>
);
}
const InputGroupTextarea = React.forwardRef<
HTMLTextAreaElement,
React.ComponentProps<"textarea">
>(({ className, ...props }, ref) => {
return (
<Textarea
ref={ref}
data-slot="input-group-control"
className={cn(
"flex-1 resize-none rounded-none border-0 bg-transparent py-3 shadow-none focus-visible:ring-0",
className,
)}
{...props}
/>
);
});
InputGroupTextarea.displayName = "InputGroupTextarea";
export { InputGroup, InputGroupAddon, InputGroupButton, InputGroupTextarea };

View File

@@ -0,0 +1,16 @@
import { CircleNotch as CircleNotchIcon } from "@phosphor-icons/react";
import { cn } from "@/lib/utils";
function Spinner({ className, ...props }: React.ComponentProps<"svg">) {
return (
<CircleNotchIcon
role="status"
aria-label="Loading"
className={cn("size-4 animate-spin", className)}
{...(props as Record<string, unknown>)}
/>
);
}
export { Spinner };

View File

@@ -0,0 +1,22 @@
import * as React from "react";
import { cn } from "@/lib/utils";
const Textarea = React.forwardRef<
HTMLTextAreaElement,
React.ComponentProps<"textarea">
>(({ className, ...props }, ref) => {
return (
<textarea
className={cn(
"flex min-h-[60px] w-full rounded-md border border-neutral-200 bg-transparent px-3 py-2 text-base shadow-sm placeholder:text-neutral-500 focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-neutral-950 disabled:cursor-not-allowed disabled:opacity-50 dark:border-neutral-800 dark:placeholder:text-neutral-400 dark:focus-visible:ring-neutral-300 md:text-sm",
className,
)}
ref={ref}
{...props}
/>
);
});
Textarea.displayName = "Textarea";
export { Textarea };

View File

@@ -1,94 +0,0 @@
/**
* Unit tests for helpers.ts
*
* These tests validate the error handling in handleFetchError, specifically
* the fix for the issue where calling response.json() on non-JSON responses
* would throw: "Failed to execute 'json' on 'Response': Unexpected token 'A',
* "A server e"... is not valid JSON"
*
* To run these tests, you'll need to set up a unit test framework like Jest or Vitest.
*
* Test cases to cover:
*
* 1. JSON error responses should be parsed correctly
* - Given: Response with content-type: application/json
* - When: handleFetchError is called
* - Then: Should parse JSON and return ApiError with parsed response
*
* 2. Non-JSON error responses (e.g., HTML) should be handled gracefully
* - Given: Response with content-type: text/html
* - When: handleFetchError is called
* - Then: Should read as text and return ApiError with text response
*
* 3. Response without content-type header should be handled
* - Given: Response without content-type header
* - When: handleFetchError is called
* - Then: Should default to reading as text
*
* 4. JSON parsing errors should not throw
* - Given: Response with content-type: application/json but HTML body
* - When: handleFetchError is called and json() throws
* - Then: Should catch error, log warning, and return ApiError with null response
*
* 5. Specific validation for the fixed bug
* - Given: 502 Bad Gateway with content-type: application/json but HTML body
* - When: response.json() throws "Unexpected token 'A'" error
* - Then: Should NOT propagate the error, should return ApiError with null response
*/
import { handleFetchError } from "./helpers";
// Manual test function - can be run in browser console or Node
export async function testHandleFetchError() {
console.log("Testing handleFetchError...");
// Test 1: JSON response
const jsonResponse = new Response(
JSON.stringify({ error: "Internal server error" }),
{
status: 500,
headers: { "content-type": "application/json" },
},
);
const error1 = await handleFetchError(jsonResponse);
console.assert(
error1.status === 500 && error1.response?.error === "Internal server error",
"Test 1 failed: JSON response",
);
// Test 2: HTML response
const htmlResponse = new Response("<html><body>Server Error</body></html>", {
status: 502,
headers: { "content-type": "text/html" },
});
const error2 = await handleFetchError(htmlResponse);
console.assert(
error2.status === 502 &&
typeof error2.response === "string" &&
error2.response.includes("Server Error"),
"Test 2 failed: HTML response",
);
// Test 3: Mismatched content-type (claims JSON but is HTML)
// This simulates the bug that was fixed
const mismatchedResponse = new Response(
"<html><body>A server error occurred</body></html>",
{
status: 502,
headers: { "content-type": "application/json" }, // Claims JSON but isn't
},
);
try {
const error3 = await handleFetchError(mismatchedResponse);
console.assert(
error3.status === 502 && error3.response === null,
"Test 3 failed: Mismatched content-type should return null response",
);
console.log("✓ All tests passed!");
} catch (e) {
console.error("✗ Test 3 failed: Should not throw error", e);
}
}
// Uncomment to run manual tests
// testHandleFetchError();
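The behavior those test cases describe can be sketched as follows — an illustrative reimplementation, not the actual `helpers.ts` code; the return shape is assumed from the assertions above:

```typescript
// Sketch of content-type-aware error handling: parse JSON when the header
// says JSON, fall back to text otherwise, and never let a mismatched
// content-type (claims JSON, body is HTML) propagate a parse error.
async function handleFetchErrorSketch(
  response: Response,
): Promise<{ status: number; response: unknown }> {
  const contentType = response.headers.get("content-type") ?? "";
  let body: unknown = null;
  if (contentType.includes("application/json")) {
    try {
      body = await response.json();
    } catch {
      body = null; // mismatched content-type: swallow, return null response
    }
  } else {
    body = await response.text();
  }
  return { status: response.status, response: body };
}
```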

View File

@@ -180,7 +180,7 @@ const config = {
"accordion-down": "accordion-down 0.2s ease-out",
"accordion-up": "accordion-up 0.2s ease-out",
"fade-in": "fade-in 0.2s ease-out",
shimmer: "shimmer 2s ease-in-out infinite",
shimmer: "shimmer 4s ease-in-out infinite",
loader: "loader 1s infinite",
},
transitionDuration: {

View File

@@ -6,7 +6,7 @@ export default defineConfig({
plugins: [tsconfigPaths(), react()],
test: {
environment: "happy-dom",
include: ["src/**/*.test.tsx"],
include: ["src/**/*.test.tsx", "src/**/*.test.ts"],
setupFiles: ["./src/tests/integrations/vitest.setup.tsx"],
},
});