Compare commits

10 Commits

Author SHA1 Message Date
Nicholas Tindle
db014cf7a2 fix(frontend/copilot): close artifact panel on copilot page unmount
The artifact panel lived in the Zustand store, so its `isOpen` state survived
copilot page unmounts. Navigating to /profile, /home, or a new chat and
coming back would re-render the panel open with the prior session's
artifact. Tickets SECRT-2254, SECRT-2223, SECRT-2220 all trace to this.

Reset the panel in a `useAutoOpenArtifacts` unmount cleanup so leaving the
copilot page (or session-less limbo) always returns users to a clean
default-closed panel. Session-change reset was already handled; this covers
the nav-away → nav-back case.

Two new failing-first tests drive it:
- SECRT-2254 repro: open panel → unmount hook → panel must be closed.
- SECRT-2220 repro: seed store with a stale `isOpen: true`, mount+unmount,
  remount → panel must be closed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 16:00:23 -05:00
Joe Munene
3a01874911 fix(frontend/builder): preserve agent name in AgentExecutor node title after reload (#12805)
## Summary

Fixes #11041

When an `AgentExecutorBlock` is placed in the builder, it initially
displays the agent's name (e.g., "Researcher v2"). After saving and
reloading the page, the title reverts to the generic "Agent Executor."

## Root Cause

The backend correctly persists `agent_name` and `graph_version` in
`hardcodedValues` (via `input_default` in `AgentExecutorBlock`).
However, `NodeHeader.tsx` always resolves the display title from
`data.title` (the generic block name), ignoring the persisted agent
name.

## Fix

Modified the title resolution chain in `NodeHeader.tsx` to check
`data.hardcodedValues.agent_name` between the user's custom name and the
generic block title:

1. `data.metadata.customized_name` (user's manual rename) — highest
priority
2. `agent_name` + ` v{graph_version}` from `hardcodedValues` — **new**
3. `data.title` (generic block name) — fallback

This is a frontend-only change. No backend modifications needed.
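The resolution order can be sketched as follows. This is an illustrative Python rendering of the priority chain described above, not the actual code, which is TypeScript in `NodeHeader.tsx`:

```python
# Illustrative only: mirrors the three-step title resolution chain above.
def resolve_node_title(data: dict) -> str:
    # 1. User's manual rename always wins.
    custom = data.get("metadata", {}).get("customized_name")
    if custom:
        return custom
    # 2. Persisted agent name (+ version) from hardcodedValues (the new step).
    values = data.get("hardcodedValues", {})
    agent_name = values.get("agent_name")
    if agent_name:
        version = values.get("graph_version")
        return f"{agent_name} v{version}" if version else agent_name
    # 3. Fall back to the generic block title.
    return data["title"]

node = {
    "metadata": {},
    "hardcodedValues": {"agent_name": "Researcher", "graph_version": 2},
    "title": "Agent Executor",
}
assert resolve_node_title(node) == "Researcher v2"
```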

## Files Changed

- `autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/NodeHeader.tsx` (+11, -1)

## Test Plan

- [x] Place an AgentExecutorBlock, select an agent — title shows agent
name
- [x] Save graph, reload page — title still shows agent name (was "Agent
Executor" before)
- [x] Double-click to rename — custom name takes priority over agent
name
- [x] Clear custom name — falls back to agent name
- [x] Non-AgentExecutor blocks — unaffected, show generic title as
before

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2026-04-17 15:20:32 +00:00
Zamil Majdy
6d770d9917 fix(platform/copilot): revert forward pagination, add visibility guarantee for blank chat (#12831)
## Why / What / How

**Why:** PR #12796 changed completed copilot sessions to load messages
from sequence 0 forward (ascending), which broke the standard chat UX —
users now land at the beginning of the conversation instead of the most
recent messages. Reported in Discord.

**What:** Reverts the forward pagination approach and replaces it with a
visibility guarantee that ensures every page contains at least one
user/assistant message.

**How:**
- **Backend**: Removed after_sequence, from_start, forward_paginated,
newest_sequence — always use backward (newest-first) pagination. Added
_expand_for_visibility() helper: after fetching, if the entire page is
tool messages (invisible in UI), expand backward up to 200 messages
until a visible user/assistant message is found.
- **Frontend**: Removed all forwardPaginated/newestSequence plumbing
from hooks and components. Removed bottom LoadMoreSentinel. Simplified
message merge to always prepend paged messages.

### Changes
- routes.py: Reverted to simple backward pagination, removed TOCTOU
re-fetch logic
- db.py: Removed forward mode, extracted _expand_tool_boundary() and
added _expand_for_visibility()
- SessionDetailResponse: Removed newest_sequence and forward_paginated
fields
- openapi.json: Removed after_sequence param and forward pagination
response fields
- Frontend hooks/components: Removed forward pagination props and logic
(-1000 lines)
- Updated all tests (backend: 63 pass, frontend: 1517 pass)

### Checklist
- [x] I have clearly listed my changes in the PR description
- [x] Backend unit tests: 63 pass
- [x] Frontend unit tests: 1517 pass
- [x] Frontend lint + types: clean
- [x] Backend format + pyright: clean
2026-04-17 19:23:28 +07:00
slepybear
334ec18c31 docs: convert in-code comments to MkDocs admonitions in block-sdk-gui… (#12819)
### Why / What / How

This PR converts inline Python comments in code examples within
`block-sdk-guide.md` into MkDocs `!!! note` admonitions. This makes code
examples cleaner and more copy-paste friendly while preserving all
explanatory content.

Converts inline comments in code blocks to admonitions following the
pattern established in PR #12396 (new_blocks.md) and PR #12313.

- Wrapped code examples with `!!! note` admonitions
- Removed inline comments from code blocks for clean copy-paste
- Added explanatory admonitions after each code block
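
For illustration (not copied from the guide itself), an explanation that previously lived as an inline `#` comment inside a code block now follows it as an admonition:

```markdown
!!! note
    Explanatory detail that used to be an inline comment in the code example,
    kept out of the block so the code stays copy-paste clean.
```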

### Changes 🏗️

- Provider configuration examples (API key and OAuth)
- Block class Input/Output schema annotations
- Block initialization parameters
- Test configuration
- OAuth and webhook handler implementations
- Authentication types and file handling patterns

### Checklist 📋

#### For documentation changes:
- [x] Follows the admonition pattern from PR #12396
- [x] No code changes, documentation only
- [x] Admonition syntax verified correct

#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes

---

**Related Issues**: Closes #8946

Co-authored-by: slepybear <slepybear@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2026-04-17 07:47:52 +00:00
slepybear
ea5cfdfa2e fix(frontend): remove debug console.log statements (#12823)
## Why
Debug console.log statements were left in production code, which can
leak
sensitive information and pollute browser developer consoles.

## What
Removed console.log from 4 non-legacy frontend components:
- useNavbar.ts: isLoggedIn debug log
- WalletRefill.tsx: autoRefillForm debug log  
- EditAgentForm.tsx: category field debug log
- TimezoneForm.tsx: currentTimezone debug log

## How
Simply deleted the console.log lines as they served no purpose 
other than debugging during development.

## Checklist
- [x] Code follows project conventions
- [x] Only frontend changes (4 files, 6 lines removed)
- [x] No functionality changes

Co-authored-by: slepybear <slepybear@users.noreply.github.com>
2026-04-17 07:31:51 +00:00
Ubbe
d13a85bef7 feat(frontend): surface scheduled agents in library & copilot briefings (#12818)
## Why

Scheduled agents weren't well-surfaced in the Library and Copilot
briefings:

- The Library fleet summary didn't count agents that are scheduled
purely via the scheduler (only those with a `recommended_schedule_cron`
set at the agent level).
- Sitrep items didn't distinguish scheduled or listening (trigger-based)
agents, so they often fell back to a generic "idle" state.
- Scheduled chips showed a generic message with no indication of when
the next run would happen.
- The Copilot Agent Briefing surfaced every scheduled agent regardless
of how far out the next run was — an agent scheduled a month away would
take a slot from something actually happening soon.
- Long sitrep messages overflowed the row.

## What

- Add `is_scheduled` to `LibraryAgent` (sourced from the scheduler) so
the frontend can reliably detect schedule-only agents.
- Count scheduled agents in `useLibraryFleetSummary`.
- Include scheduled and listening agents in sitrep items, with a
priority ordering (error → running → stale → success → listening →
scheduled → idle).
- Show a relative next-run time on scheduled sitrep chips (e.g.
"Scheduled to run in 2h" / "in 3d").
- Filter the Copilot Agent Briefing to scheduled agents whose next run
is within the next 3 days.
- Truncate long sitrep messages to 1 line with `OverflowText` and show
the full text in a tooltip on hover.

## How

- Scheduler → `LibraryAgent` mapping populates `is_scheduled` /
`next_scheduled_run`.
- `useSitrepItems` gains an optional `scheduledWithinMs` parameter.
Copilot's `usePulseChips` passes `3 * 24 * 60 * 60 * 1000`; the Library
briefing omits it to keep its existing (unbounded) behavior.
- Scheduled config-based sitrep items are skipped when
`next_scheduled_run` is missing or outside the window.
- `SitrepItem` wraps the message in `OverflowText` so a single-line
ellipsis + hover tooltip replaces raw overflow.
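
As a rough illustration of the window filter and relative-time copy, sketched in Python (the real logic is TypeScript in `usePulseChips`/`useSitrepItems`; names here are hypothetical):

```python
# Hypothetical sketch of the 3-day briefing window and the
# "Scheduled to run in Nm/Nh/Nd" chip copy described above.
from datetime import datetime, timedelta, timezone

SCHEDULED_WITHIN = timedelta(days=3)  # Copilot briefing window (3 * 24h)

def format_relative(next_run: datetime, now: datetime) -> str:
    """Render the scheduled-chip copy from the next run time."""
    delta = next_run - now
    if delta < timedelta(hours=1):
        return f"Scheduled to run in {int(delta.total_seconds() // 60)}m"
    if delta < timedelta(days=1):
        return f"Scheduled to run in {int(delta.total_seconds() // 3600)}h"
    return f"Scheduled to run in {delta.days}d"

def within_window(next_run: datetime | None, now: datetime) -> bool:
    """Skip scheduled items when next_run is missing or outside the window."""
    return next_run is not None and now <= next_run <= now + SCHEDULED_WITHIN

now = datetime.now(timezone.utc)
print(format_relative(now + timedelta(hours=2), now))  # Scheduled to run in 2h
```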

## Test plan

- [ ] `/library` — scheduled and listening agents appear in the sitrep
with accurate copy; fleet summary counts scheduled agents correctly;
long messages truncate with a tooltip on hover.
- [ ] `/copilot` — on an empty session with the `AGENT_BRIEFING` flag
on, the briefing only shows scheduled agents whose next run is within 3
days; agents scheduled further out no longer appear as "scheduled"
chips.
- [ ] Scheduled chip text reads "Scheduled to run in {Nm|Nh|Nd}"
matching `next_scheduled_run`.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 14:36:15 +07:00
Zamil Majdy
60b85640e7 fix(backend/copilot): replace dedup lock with idempotent append_and_save_message (#12814)
## Why

The Redis dedup lock (`chat:msg_dedup:{session}:{content_hash}`, 30s
TTL) was solving the wrong problem:

- Its purpose: block infra/nginx retries from calling
`append_and_save_message` twice after a client disconnect, writing a
duplicate user message to the DB.
- The approach: deliberately hold the lock for 30s on `GeneratorExit`.
- Why unnecessary: the executor's cluster lock already prevents
duplicate *execution*. The only real gap was duplicate *DB writes* in
the ~1s before the executor picks up the turn.

## What

- **Deleted** `message_dedup.py` and `message_dedup_test.py` (~150 lines
removed).
- **Removed** all dedup lock code from `routes.py` (~40 lines removed).
- **`append_and_save_message`** is now idempotent and self-contained:
- Uses redis-py's built-in `Lock(timeout=10, blocking_timeout=2)` —
Lua-script atomic acquire/release, no manual poll/sleep loop.
- Lock context manager yields `bool` (`True` = acquired, `False` =
degraded). When degraded (Redis down or 2s timeout), reads from DB
directly instead of cache to avoid stale-state duplicates.
- Idempotency check: if `session.messages[-1]` already matches the
incoming role+content, returns `None` instead of the session.
- Lock released explicitly as soon as the write completes; `try/except`
in `finally` so a cleanup error after a successful write never surfaces
a false 500.
- On cache-write failure, the stale cache entry is invalidated so future
reads fall back to the authoritative DB.
- **`routes.py`** uses the `None` signal: `is_duplicate_message = (await
append_and_save_message(...)) is None`
- Skips `create_session` and `enqueue_copilot_turn` for duplicates —
client re-attaches to the existing turn's Redis stream.
- `track_user_message` and `turn_id` generation only happen when
`is_duplicate_message` is false.
- **`subscribe_to_session`** retry window increased from 1×50ms to
3×100ms — covers the window where a duplicate request subscribes before
the original's `create_session` hset completes.
- **Cleaned up** `routes_test.py`: removed 5 dedup-specific tests and
the `mock_redis` setup from `_mock_stream_internals`; added
duplicate-skips-enqueue test.

## How

The idempotency guard distinguishes legit same-text messages from
retries via the **assistant turn between them**: if the user said "yes",
got a response, and says "yes" again, `session.messages[-1]` is the
assistant reply, so the role check fails and the second message goes
through. A retry (no response yet) sees the user message as the last
entry and is blocked.

```python
if (
    session.messages
    and session.messages[-1].role == message.role
    and session.messages[-1].content == message.content
):
    return None  # duplicate — caller skips enqueue
```

The Redis lock ensures this check always sees authoritative state even
in multi-replica deployments. When the lock is unavailable (Redis down
or contention), reading from DB directly (bypassing potentially stale
cache) provides the same safety guarantee at the cost of a DB
round-trip.
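
A minimal sketch of that lock-or-degrade pattern with redis-py's asyncio `Lock`; the helper name and structure are assumptions, not the actual `append_and_save_message` internals:

```python
# Sketch only: illustrates the Lock(timeout=10, blocking_timeout=2) pattern
# described above. Key name and helper are illustrative.
from contextlib import asynccontextmanager

from redis.asyncio import Redis


@asynccontextmanager
async def session_write_lock(redis: Redis, session_id: str):
    """Yield True when the per-session lock is held, False when degraded
    (Redis down or 2 s acquisition timeout); degraded callers read the
    authoritative DB instead of a possibly stale cache."""
    lock = redis.lock(
        f"chat:session:{session_id}:write", timeout=10, blocking_timeout=2
    )
    try:
        acquired = await lock.acquire()
    except Exception:
        acquired = False  # Redis unavailable: fall into degraded mode
    try:
        yield bool(acquired)
    finally:
        if acquired:
            try:
                await lock.release()
            except Exception:
                pass  # a cleanup failure must never surface as a false 500
```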

## Checklist

- [x] PR targets `dev`
- [x] Conventional commit title with scope
- [x] Tests added/updated (duplicate detection, lock degradation, DB
error, cache invalidation paths)
- [x] `poetry run format` and `poetry run pyright` pass clean
- [x] No new linter suppressors
2026-04-16 22:12:30 +07:00
Zamil Majdy
87e4d42750 fix(backend/copilot): fix initial load missing messages + forward pagination for completed sessions (#12796)
### Why / What / How

**Why:** Completed copilot sessions with many messages showed a
completely empty chat view. A user reported a 158-message session that
appeared blank on reload.

**What:** Two bugs fixed:
1. **Backend** — initial page load always returned the newest 50
messages in DESC order. For sessions heavy in tool calls, the user's
original messages (seq 0–5) were never included; all 50 slots consumed
by mid-session tool outputs.
2. **Frontend** — convertChatSessionToUiMessages silently dropped user
messages with null/empty content.

**How:** For completed sessions (no active stream), the backend now
loads from sequence 0 in ASC order. Active/streaming sessions keep
newest-first for streaming context. A new after_sequence forward cursor
enables infinite-scroll for subsequent pages (sentinel moves to bottom).
The frontend wires forward_paginated + newest_sequence end-to-end.
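
The forward cursor amounts to flipping the query direction. Roughly, as a sketch in the same Prisma client style as `db.py` (not the exact code, which #12831 later reverted):

```python
# Sketch of the from_start / after_sequence modes described above.
from prisma.models import ChatMessage as PrismaChatMessage


async def fetch_forward_page(
    session_id: str, after_sequence: int | None, limit: int
):
    where: dict = {"sessionId": session_id}
    if after_sequence is not None:
        # Forward cursor: everything strictly after the last seen sequence.
        where["sequence"] = {"gt": after_sequence}
    rows = await PrismaChatMessage.prisma().find_many(
        where=where,
        order={"sequence": "asc"},  # oldest-first for completed sessions
        take=limit + 1,             # fetch one extra row to detect has_more
    )
    return rows[:limit], len(rows) > limit
```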

### Changes 🏗️

- db.py: added from_start (ASC) and after_sequence (forward cursor)
modes; added newest_sequence to PaginatedMessages
- routes.py: detect completed vs active on initial load; pass
from_start=True for completed; expose newest_sequence +
forward_paginated; accept after_sequence param
- convertChatSessionToUiMessages.ts: never drop user messages with empty
content
- useLoadMoreMessages.ts: forward pagination via after_sequence; append
pages to end
- ChatMessagesContainer.tsx: LoadMoreSentinel at bottom for
forward-paginated sessions
- Wire newestSequence + forwardPaginated end-to-end through
useChatSession/useCopilotPage/ChatContainer
- openapi.json: add after_sequence + newest_sequence/forward_paginated;
regenerate types
- db_test.py: 9 new unit tests for from_start and after_sequence modes

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Open a completed session with many messages — first user message
visible on initial load
- [x] Scroll to bottom of completed session — load more appends next
page
- [x] Open active/streaming session — newest messages shown first,
streaming unaffected
  - [x] Backend unit tests: all 28 pass
  - [x] Frontend lint/format: clean, no new type errors

---------

Co-authored-by: chernistry <73943355+chernistry@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-04-16 14:16:54 +00:00
Ubbe
0339d95d12 fix(frontend): small UI fixes, sort menu bg, name update auth, stats grid overflow, pulse chips (#12815)
## Summary
- **LibrarySortMenu / AgentFilterMenu**: Force `!bg-transparent` and
neutralise legacy `SelectTrigger` styles (`m-0.5`, `ring-offset-white`,
`shadow-sm`) that caused a white background around the trigger
- **EditNameDialog**: Replace client-side `supabase.auth.updateUser()`
with server-side `PUT /api/auth/user` route — fixes "Auth session
missing!" error caused by `httpOnly` cookies being inaccessible to
browser JS
- **StatsGrid**: Swap label `Text` for `OverflowText` so tile labels
truncate with `…` and show a tooltip instead of wrapping when the grid
is squeezed
- **PulseChips**: Set fixed `15rem` chip width with `shrink-0`,
horizontal scroll, and styled thin scrollbar
- **Tests**: Updated `EditNameDialog` tests to use MSW instead of
mocking Supabase client; added 7 new `PulseChips` integration tests

## Test plan
- [x] `pnpm test:unit` — all 1495 tests pass (91 files)
- [x] `pnpm format && pnpm lint` — clean
- [x] `pnpm types` — no new errors (pre-existing only)
- [ ] QA `/library?sort=updatedAt` — sort menu trigger has no white bg
- [ ] QA `/library` — StatsGrid labels truncate with tooltip on narrow
viewports
- [ ] QA `/copilot` — PulseChips scroll horizontally at fixed width
- [ ] QA `/copilot` — Edit name dialog saves successfully (no "Auth
session missing!")

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 20:11:21 +07:00
Toran Bruce Richards
f410929560 feat(platform): Add xAI Grok 4.20 models from OpenRouter (#12620)
Requested by @Torantulino

Adds the 2 xAI Grok 4.20 models available on OpenRouter that are missing
from the platform.

## Why

`x-ai/grok-4.20` and `x-ai/grok-4.20-multi-agent` are xAI's current
flagship models (released March 2026) and are available via OpenRouter,
but weren't accessible from the platform's LLM blocks.

## Changes

**`autogpt_platform/backend/backend/blocks/llm.py`**
- Added `GROK_4_20` and `GROK_4_20_MULTI_AGENT` enum members
- Added corresponding `MODEL_METADATA` entries (open_router provider, 2M
context window, price tier 3)

**`autogpt_platform/backend/backend/data/block_cost_config.py`**
- Added `MODEL_COST` entries at 5 credits each (flagship tier, $2/M in)

**`docs/integrations/block-integrations/llm.md`**
- Added new model IDs to all LLM block tables

| Model | Pricing | Context |
|-------|---------|---------|
| `x-ai/grok-4.20` | $2/M in, $6/M out | 2M |
| `x-ai/grok-4.20-multi-agent` | $2/M in, $6/M out | 2M |

Both models use the standard OpenRouter chat completions API — no
special handling needed.

Resolves: SECRT-2196

---------

Co-authored-by: Torantulino <22963551+Torantulino@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
Co-authored-by: Otto (AGPT) <otto@agpt.co>
2026-04-16 12:14:56 +00:00
59 changed files with 2444 additions and 853 deletions

View File

@@ -18,7 +18,6 @@ from backend.copilot import stream_registry
from backend.copilot.config import ChatConfig, CopilotLlmModel, CopilotMode
from backend.copilot.db import get_chat_messages_paginated
from backend.copilot.executor.utils import enqueue_cancel_task, enqueue_copilot_turn
from backend.copilot.message_dedup import acquire_dedup_lock
from backend.copilot.model import (
ChatMessage,
ChatSession,
@@ -463,22 +462,13 @@ async def get_session(
Supports cursor-based pagination via ``limit`` and ``before_sequence``.
When no pagination params are provided, returns the most recent messages.
Args:
session_id: The unique identifier for the desired chat session.
user_id: The authenticated user's ID.
limit: Maximum number of messages to return (1-200, default 50).
before_sequence: Return messages with sequence < this value (cursor).
Returns:
SessionDetailResponse: Details for the requested session, including
active_stream info and pagination metadata.
"""
page = await get_chat_messages_paginated(
session_id, limit, before_sequence, user_id=user_id
)
if page is None:
raise NotFoundError(f"Session {session_id} not found.")
messages = [
_strip_injected_context(message.model_dump()) for message in page.messages
]
@@ -489,10 +479,6 @@ async def get_session(
active_session, last_message_id = await stream_registry.get_active_session(
session_id, user_id
)
logger.info(
f"[GET_SESSION] session={session_id}, active_session={active_session is not None}, "
f"msg_count={len(messages)}, last_role={messages[-1].get('role') if messages else 'none'}"
)
if active_session:
active_stream_info = ActiveStreamInfo(
turn_id=active_session.turn_id,
@@ -846,9 +832,6 @@ async def stream_chat_post(
# Also sanitise file_ids so only validated, workspace-scoped IDs are
# forwarded downstream (e.g. to the executor via enqueue_copilot_turn).
sanitized_file_ids: list[str] | None = None
# Capture the original message text BEFORE any mutation (attachment enrichment)
# so the idempotency hash is stable across retries.
original_message = request.message
if request.file_ids and user_id:
# Filter to valid UUIDs only to prevent DB abuse
valid_ids = [fid for fid in request.file_ids if _UUID_RE.match(fid)]
@@ -877,58 +860,36 @@ async def stream_chat_post(
)
request.message += files_block
# ── Idempotency guard ────────────────────────────────────────────────────
# Blocks duplicate executor tasks from concurrent/retried POSTs.
# See backend/copilot/message_dedup.py for the full lifecycle description.
dedup_lock = None
if request.is_user_message:
dedup_lock = await acquire_dedup_lock(
session_id, original_message, sanitized_file_ids
)
if dedup_lock is None and (original_message or sanitized_file_ids):
async def _empty_sse() -> AsyncGenerator[str, None]:
yield StreamFinish().to_sse()
yield "data: [DONE]\n\n"
return StreamingResponse(
_empty_sse(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"X-Accel-Buffering": "no",
"Connection": "keep-alive",
"x-vercel-ai-ui-message-stream": "v1",
},
)
# Atomically append user message to session BEFORE creating task to avoid
# race condition where GET_SESSION sees task as "running" but message isn't
# saved yet. append_and_save_message re-fetches inside a lock to prevent
# message loss from concurrent requests.
#
# If any of these operations raises, release the dedup lock before propagating
# so subsequent retries are not blocked for 30 s.
try:
if request.message:
message = ChatMessage(
role="user" if request.is_user_message else "assistant",
content=request.message,
)
if request.is_user_message:
track_user_message(
user_id=user_id,
session_id=session_id,
message_length=len(request.message),
)
logger.info(f"[STREAM] Saving user message to session {session_id}")
# saved yet. append_and_save_message returns None when a duplicate is
# detected — in that case skip enqueue to avoid processing the message twice.
is_duplicate_message = False
if request.message:
message = ChatMessage(
role="user" if request.is_user_message else "assistant",
content=request.message,
)
logger.info(f"[STREAM] Saving user message to session {session_id}")
is_duplicate_message = (
await append_and_save_message(session_id, message)
logger.info(f"[STREAM] User message saved for session {session_id}")
) is None
logger.info(f"[STREAM] User message saved for session {session_id}")
if not is_duplicate_message and request.is_user_message:
track_user_message(
user_id=user_id,
session_id=session_id,
message_length=len(request.message),
)
# Create a task in the stream registry for reconnection support
# Create a task in the stream registry for reconnection support.
# For duplicate messages, skip create_session entirely so the infra-retry
# client subscribes to the *existing* turn's Redis stream and receives the
# in-progress executor output rather than an empty stream.
turn_id = ""
if not is_duplicate_message:
turn_id = str(uuid4())
log_meta["turn_id"] = turn_id
session_create_start = time.perf_counter()
await stream_registry.create_session(
session_id=session_id,
@@ -946,7 +907,6 @@ async def stream_chat_post(
}
},
)
await enqueue_copilot_turn(
session_id=session_id,
user_id=user_id,
@@ -958,10 +918,10 @@ async def stream_chat_post(
mode=request.mode,
model=request.model,
)
except Exception:
if dedup_lock:
await dedup_lock.release()
raise
else:
logger.info(
f"[STREAM] Duplicate message detected for session {session_id}, skipping enqueue"
)
setup_time = (time.perf_counter() - stream_start_time) * 1000
logger.info(
@@ -985,12 +945,6 @@ async def stream_chat_post(
subscriber_queue = None
first_chunk_yielded = False
chunks_yielded = 0
# True for every exit path except GeneratorExit (client disconnect).
# On disconnect the backend turn is still running — releasing the lock
# there would reopen the infra-retry duplicate window. The 30 s TTL
# is the fallback. All other exits (normal finish, early return, error)
# should release so the user can re-send the same message.
release_dedup_lock_on_exit = True
try:
# Subscribe from the position we captured before enqueuing
# This avoids replaying old messages while catching all new ones
@@ -1002,7 +956,7 @@ async def stream_chat_post(
if subscriber_queue is None:
yield StreamFinish().to_sse()
return # finally releases dedup_lock
return
# Read from the subscriber queue and yield to SSE
logger.info(
@@ -1044,7 +998,7 @@ async def stream_chat_post(
}
},
)
break # finally releases dedup_lock
break
except asyncio.TimeoutError:
yield StreamHeartbeat().to_sse()
@@ -1060,7 +1014,6 @@ async def stream_chat_post(
}
},
)
release_dedup_lock_on_exit = False
except Exception as e:
elapsed = (time_module.perf_counter() - event_gen_start) * 1000
logger.error(
@@ -1075,10 +1028,7 @@ async def stream_chat_post(
code="stream_error",
).to_sse()
yield StreamFinish().to_sse()
# finally releases dedup_lock
finally:
if dedup_lock and release_dedup_lock_on_exit:
await dedup_lock.release()
# Unsubscribe when client disconnects or stream ends
if subscriber_queue is not None:
try:

View File

@@ -133,21 +133,12 @@ def test_stream_chat_rejects_too_many_file_ids():
assert response.status_code == 422
def _mock_stream_internals(
mocker: pytest_mock.MockerFixture,
*,
redis_set_returns: object = True,
):
def _mock_stream_internals(mocker: pytest_mock.MockerFixture):
"""Mock the async internals of stream_chat_post so tests can exercise
validation and enrichment logic without needing Redis/RabbitMQ.
Args:
redis_set_returns: Value returned by the mocked Redis ``set`` call.
``True`` (default) simulates a fresh key (new message);
``None`` simulates a collision (duplicate blocked).
validation and enrichment logic without needing RabbitMQ.
Returns:
A namespace with ``redis``, ``save``, and ``enqueue`` mock objects so
A namespace with ``save`` and ``enqueue`` mock objects so
callers can make additional assertions about side-effects.
"""
import types
@@ -158,7 +149,7 @@ def _mock_stream_internals(
)
mock_save = mocker.patch(
"backend.api.features.chat.routes.append_and_save_message",
return_value=None,
return_value=MagicMock(), # non-None = message was saved (not a duplicate)
)
mock_registry = mocker.MagicMock()
mock_registry.create_session = mocker.AsyncMock(return_value=None)
@@ -174,15 +165,9 @@ def _mock_stream_internals(
"backend.api.features.chat.routes.track_user_message",
return_value=None,
)
mock_redis = AsyncMock()
mock_redis.set = AsyncMock(return_value=redis_set_returns)
mocker.patch(
"backend.copilot.message_dedup.get_redis_async",
new_callable=AsyncMock,
return_value=mock_redis,
return types.SimpleNamespace(
save=mock_save, enqueue=mock_enqueue, registry=mock_registry
)
ns = types.SimpleNamespace(redis=mock_redis, save=mock_save, enqueue=mock_enqueue)
return ns
def test_stream_chat_accepts_20_file_ids(mocker: pytest_mock.MockerFixture):
@@ -211,6 +196,29 @@ def test_stream_chat_accepts_20_file_ids(mocker: pytest_mock.MockerFixture):
assert response.status_code == 200
# ─── Duplicate message dedup ──────────────────────────────────────────
def test_stream_chat_skips_enqueue_for_duplicate_message(
mocker: pytest_mock.MockerFixture,
):
"""When append_and_save_message returns None (duplicate detected),
enqueue_copilot_turn and stream_registry.create_session must NOT be called
to avoid double-processing and to prevent overwriting the active stream's
turn_id in Redis (which would cause reconnecting clients to miss the response)."""
mocks = _mock_stream_internals(mocker)
# Override save to return None — signalling a duplicate
mocks.save.return_value = None
response = client.post(
"/sessions/sess-1/stream",
json={"message": "hello"},
)
assert response.status_code == 200
mocks.enqueue.assert_not_called()
mocks.registry.create_session.assert_not_called()
# ─── UUID format filtering ─────────────────────────────────────────────
@@ -706,237 +714,6 @@ class TestStripInjectedContext:
assert result["content"] == "hello"
# ─── Idempotency / duplicate-POST guard ──────────────────────────────
def test_stream_chat_blocks_duplicate_post_returns_empty_sse(
mocker: pytest_mock.MockerFixture,
) -> None:
"""A second POST with the same message within the 30-s window must return
an empty SSE stream (StreamFinish + [DONE]) so the frontend marks the
turn complete without creating a ghost response."""
# redis_set_returns=None simulates a collision: the NX key already exists.
ns = _mock_stream_internals(mocker, redis_set_returns=None)
response = client.post(
"/sessions/sess-dup/stream",
json={"message": "duplicate message", "is_user_message": True},
)
assert response.status_code == 200
body = response.text
# The response must contain StreamFinish (type=finish) and the SSE [DONE] terminator.
assert '"finish"' in body
assert "[DONE]" in body
# The empty SSE response must include the AI SDK protocol header so the
# frontend treats it as a valid stream and marks the turn complete.
assert response.headers.get("x-vercel-ai-ui-message-stream") == "v1"
# The duplicate guard must prevent save/enqueue side effects.
ns.save.assert_not_called()
ns.enqueue.assert_not_called()
def test_stream_chat_first_post_proceeds_normally(
mocker: pytest_mock.MockerFixture,
) -> None:
"""The first POST (Redis NX key set successfully) must proceed through the
normal streaming path — no early return."""
ns = _mock_stream_internals(mocker, redis_set_returns=True)
response = client.post(
"/sessions/sess-new/stream",
json={"message": "first message", "is_user_message": True},
)
assert response.status_code == 200
# Redis set must have been called once with the NX flag.
ns.redis.set.assert_called_once()
call_kwargs = ns.redis.set.call_args
assert call_kwargs.kwargs.get("nx") is True
def test_stream_chat_dedup_skipped_for_non_user_messages(
mocker: pytest_mock.MockerFixture,
) -> None:
"""System/assistant messages (is_user_message=False) bypass the dedup
guard — they are injected programmatically and must always be processed."""
ns = _mock_stream_internals(mocker, redis_set_returns=None)
response = client.post(
"/sessions/sess-sys/stream",
json={"message": "system context", "is_user_message": False},
)
# Even though redis_set_returns=None (would block a user message),
# the endpoint must proceed because is_user_message=False.
assert response.status_code == 200
ns.redis.set.assert_not_called()
def test_stream_chat_dedup_hash_uses_original_message_not_mutated(
mocker: pytest_mock.MockerFixture,
) -> None:
"""The dedup hash must be computed from the original request message,
not the mutated version that has the [Attached files] block appended.
A file_id is sent so the route actually appends the [Attached files] block,
exercising the mutation path — the hash must still match the original text."""
import hashlib
ns = _mock_stream_internals(mocker, redis_set_returns=True)
file_id = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
# Mock workspace + prisma so the attachment block is actually appended.
mocker.patch(
"backend.api.features.chat.routes.get_or_create_workspace",
return_value=type("W", (), {"id": "ws-1"})(),
)
fake_file = type(
"F",
(),
{
"id": file_id,
"name": "doc.pdf",
"mimeType": "application/pdf",
"sizeBytes": 1024,
},
)()
mock_prisma = mocker.MagicMock()
mock_prisma.find_many = mocker.AsyncMock(return_value=[fake_file])
mocker.patch(
"prisma.models.UserWorkspaceFile.prisma",
return_value=mock_prisma,
)
response = client.post(
"/sessions/sess-hash/stream",
json={
"message": "plain message",
"is_user_message": True,
"file_ids": [file_id],
},
)
assert response.status_code == 200
ns.redis.set.assert_called_once()
call_args = ns.redis.set.call_args
dedup_key = call_args.args[0]
# Hash must use the original message + sorted file IDs, not the mutated text.
expected_hash = hashlib.sha256(
f"sess-hash:plain message:{file_id}".encode()
).hexdigest()[:16]
expected_key = f"chat:msg_dedup:sess-hash:{expected_hash}"
assert dedup_key == expected_key, (
f"Dedup key {dedup_key!r} does not match expected {expected_key!r}"
"hash may be using mutated message or wrong inputs"
)
def test_stream_chat_dedup_key_released_after_stream_finish(
mocker: pytest_mock.MockerFixture,
) -> None:
"""The dedup Redis key must be deleted after the turn completes (when
subscriber_queue is None the route yields StreamFinish immediately and
should release the key so the user can re-send the same message)."""
from unittest.mock import AsyncMock as _AsyncMock
# Set up all internals manually so we can control subscribe_to_session.
mocker.patch(
"backend.api.features.chat.routes._validate_and_get_session",
return_value=None,
)
mocker.patch(
"backend.api.features.chat.routes.append_and_save_message",
return_value=None,
)
mocker.patch(
"backend.api.features.chat.routes.enqueue_copilot_turn",
return_value=None,
)
mocker.patch(
"backend.api.features.chat.routes.track_user_message",
return_value=None,
)
mock_registry = mocker.MagicMock()
mock_registry.create_session = _AsyncMock(return_value=None)
# None → early-finish path: StreamFinish yielded immediately, dedup key released.
mock_registry.subscribe_to_session = _AsyncMock(return_value=None)
mocker.patch(
"backend.api.features.chat.routes.stream_registry",
mock_registry,
)
mock_redis = mocker.AsyncMock()
mock_redis.set = _AsyncMock(return_value=True)
mocker.patch(
"backend.copilot.message_dedup.get_redis_async",
new_callable=_AsyncMock,
return_value=mock_redis,
)
response = client.post(
"/sessions/sess-finish/stream",
json={"message": "hello", "is_user_message": True},
)
assert response.status_code == 200
body = response.text
assert '"finish"' in body
# The dedup key must be released so intentional re-sends are allowed.
mock_redis.delete.assert_called_once()
def test_stream_chat_dedup_key_released_even_when_redis_delete_raises(
mocker: pytest_mock.MockerFixture,
) -> None:
"""The route must not crash when the dedup Redis delete fails on the
subscriber_queue-is-None early-finish path (except Exception: pass)."""
from unittest.mock import AsyncMock as _AsyncMock
mocker.patch(
"backend.api.features.chat.routes._validate_and_get_session",
return_value=None,
)
mocker.patch(
"backend.api.features.chat.routes.append_and_save_message",
return_value=None,
)
mocker.patch(
"backend.api.features.chat.routes.enqueue_copilot_turn",
return_value=None,
)
mocker.patch(
"backend.api.features.chat.routes.track_user_message",
return_value=None,
)
mock_registry = mocker.MagicMock()
mock_registry.create_session = _AsyncMock(return_value=None)
mock_registry.subscribe_to_session = _AsyncMock(return_value=None)
mocker.patch(
"backend.api.features.chat.routes.stream_registry",
mock_registry,
)
mock_redis = mocker.AsyncMock()
mock_redis.set = _AsyncMock(return_value=True)
# Make the delete raise so the except-pass branch is exercised.
mock_redis.delete = _AsyncMock(side_effect=RuntimeError("redis gone"))
mocker.patch(
"backend.copilot.message_dedup.get_redis_async",
new_callable=_AsyncMock,
return_value=mock_redis,
)
# Should not raise even though delete fails.
response = client.post(
"/sessions/sess-finish-err/stream",
json={"message": "hello", "is_user_message": True},
)
assert response.status_code == 200
assert '"finish"' in response.text
# delete must have been attempted — the except-pass branch silenced the error.
mock_redis.delete.assert_called_once()
# ─── DELETE /sessions/{id}/stream — disconnect listeners ──────────────
@@ -980,3 +757,59 @@ def test_disconnect_stream_returns_404_when_session_missing(
assert response.status_code == 404
mock_disconnect.assert_not_awaited()
# ─── GET /sessions/{session_id} — backward pagination ─────────────────────────
def _make_paginated_messages(
mocker: pytest_mock.MockerFixture, *, has_more: bool = False
):
"""Return a mock PaginatedMessages and configure the DB patch."""
from datetime import UTC, datetime
from backend.copilot.db import PaginatedMessages
from backend.copilot.model import ChatMessage, ChatSessionInfo, ChatSessionMetadata
now = datetime.now(UTC)
session_info = ChatSessionInfo(
session_id="sess-1",
user_id=TEST_USER_ID,
usage=[],
started_at=now,
updated_at=now,
metadata=ChatSessionMetadata(),
)
page = PaginatedMessages(
messages=[ChatMessage(role="user", content="hello", sequence=0)],
has_more=has_more,
oldest_sequence=0,
session=session_info,
)
mock_paginate = mocker.patch(
"backend.api.features.chat.routes.get_chat_messages_paginated",
new_callable=AsyncMock,
return_value=page,
)
return page, mock_paginate
def test_get_session_returns_backward_paginated(
mocker: pytest_mock.MockerFixture,
test_user_id: str,
) -> None:
"""All sessions use backward (newest-first) pagination."""
_make_paginated_messages(mocker)
mocker.patch(
"backend.api.features.chat.routes.stream_registry.get_active_session",
new_callable=AsyncMock,
return_value=(None, None),
)
response = client.get("/sessions/sess-1")
assert response.status_code == 200
data = response.json()
assert data["oldest_sequence"] == 0
assert "forward_paginated" not in data
assert "newest_sequence" not in data

View File

@@ -12,6 +12,7 @@ import prisma.models
import backend.api.features.library.model as library_model
import backend.data.graph as graph_db
from backend.api.features.library.db import _fetch_schedule_info
from backend.data.graph import GraphModel, GraphSettings
from backend.data.includes import library_agent_include
from backend.util.exceptions import NotFoundError
@@ -117,4 +118,5 @@ async def add_graph_to_library(
f"for store listing version #{store_listing_version_id} "
f"to library for user #{user_id}"
)
return library_model.LibraryAgent.from_db(added_agent)
schedule_info = await _fetch_schedule_info(user_id, graph_id=graph_model.id)
return library_model.LibraryAgent.from_db(added_agent, schedule_info=schedule_info)

View File

@@ -21,13 +21,17 @@ async def test_add_graph_to_library_create_new_agent() -> None:
"backend.api.features.library._add_to_library.library_model.LibraryAgent.from_db",
return_value=converted_agent,
) as mock_from_db,
patch(
"backend.api.features.library._add_to_library._fetch_schedule_info",
new=AsyncMock(return_value={}),
),
):
mock_prisma.return_value.create = AsyncMock(return_value=created_agent)
result = await add_graph_to_library("slv-id", graph_model, "user-id")
assert result is converted_agent
mock_from_db.assert_called_once_with(created_agent)
mock_from_db.assert_called_once_with(created_agent, schedule_info={})
# Verify create was called with correct data
create_call = mock_prisma.return_value.create.call_args
create_data = create_call.kwargs["data"]
@@ -54,6 +58,10 @@ async def test_add_graph_to_library_unique_violation_updates_existing() -> None:
"backend.api.features.library._add_to_library.library_model.LibraryAgent.from_db",
return_value=converted_agent,
) as mock_from_db,
patch(
"backend.api.features.library._add_to_library._fetch_schedule_info",
new=AsyncMock(return_value={}),
),
):
mock_prisma.return_value.create = AsyncMock(
side_effect=prisma.errors.UniqueViolationError(
@@ -65,7 +73,7 @@ async def test_add_graph_to_library_unique_violation_updates_existing() -> None:
result = await add_graph_to_library("slv-id", graph_model, "user-id")
assert result is converted_agent
mock_from_db.assert_called_once_with(updated_agent)
mock_from_db.assert_called_once_with(updated_agent, schedule_info={})
# Verify update was called with correct where and data
update_call = mock_prisma.return_value.update.call_args
assert update_call.kwargs["where"] == {

View File

@@ -1,6 +1,7 @@
import asyncio
import itertools
import logging
from datetime import datetime, timezone
from typing import Literal, Optional
import fastapi
@@ -62,6 +63,46 @@ async def _fetch_execution_counts(user_id: str, graph_ids: list[str]) -> dict[st
}
async def _fetch_schedule_info(
user_id: str, graph_id: Optional[str] = None
) -> dict[str, str]:
"""Fetch a map of graph_id → earliest next_run_time ISO string.
When `graph_id` is provided, the scheduler query is narrowed to that graph,
which is cheaper for single-agent lookups (detail page, post-update, etc.).
"""
try:
scheduler_client = get_scheduler_client()
schedules = await scheduler_client.get_execution_schedules(
graph_id=graph_id,
user_id=user_id,
)
earliest: dict[str, tuple[datetime, str]] = {}
for s in schedules:
parsed = _parse_iso_datetime(s.next_run_time)
if parsed is None:
continue
current = earliest.get(s.graph_id)
if current is None or parsed < current[0]:
earliest[s.graph_id] = (parsed, s.next_run_time)
return {graph_id: iso for graph_id, (_, iso) in earliest.items()}
except Exception:
logger.warning("Failed to fetch schedules for library agents", exc_info=True)
return {}
def _parse_iso_datetime(value: str) -> Optional[datetime]:
"""Parse an ISO 8601 datetime, tolerating `Z` and naive forms (assumed UTC)."""
try:
parsed = datetime.fromisoformat(value.replace("Z", "+00:00"))
except ValueError:
logger.warning("Failed to parse schedule next_run_time: %s", value)
return None
if parsed.tzinfo is None:
parsed = parsed.replace(tzinfo=timezone.utc)
return parsed
async def list_library_agents(
user_id: str,
search_term: Optional[str] = None,
@@ -157,7 +198,10 @@ async def list_library_agents(
logger.debug(f"Retrieved {len(library_agents)} library agents for user #{user_id}")
graph_ids = [a.agentGraphId for a in library_agents if a.agentGraphId]
execution_counts = await _fetch_execution_counts(user_id, graph_ids)
execution_counts, schedule_info = await asyncio.gather(
_fetch_execution_counts(user_id, graph_ids),
_fetch_schedule_info(user_id),
)
# Only pass valid agents to the response
valid_library_agents: list[library_model.LibraryAgent] = []
@@ -167,6 +211,7 @@ async def list_library_agents(
library_agent = library_model.LibraryAgent.from_db(
agent,
execution_count_override=execution_counts.get(agent.agentGraphId),
schedule_info=schedule_info,
)
valid_library_agents.append(library_agent)
except Exception as e:
@@ -240,7 +285,10 @@ async def list_favorite_library_agents(
)
graph_ids = [a.agentGraphId for a in library_agents if a.agentGraphId]
execution_counts = await _fetch_execution_counts(user_id, graph_ids)
execution_counts, schedule_info = await asyncio.gather(
_fetch_execution_counts(user_id, graph_ids),
_fetch_schedule_info(user_id),
)
# Only pass valid agents to the response
valid_library_agents: list[library_model.LibraryAgent] = []
@@ -250,6 +298,7 @@ async def list_favorite_library_agents(
library_agent = library_model.LibraryAgent.from_db(
agent,
execution_count_override=execution_counts.get(agent.agentGraphId),
schedule_info=schedule_info,
)
valid_library_agents.append(library_agent)
except Exception as e:
@@ -316,6 +365,12 @@ async def get_library_agent(id: str, user_id: str) -> library_model.LibraryAgent
where={"userId": store_listing.owningUserId}
)
schedule_info = (
await _fetch_schedule_info(user_id, graph_id=library_agent.AgentGraph.id)
if library_agent.AgentGraph
else {}
)
return library_model.LibraryAgent.from_db(
library_agent,
sub_graphs=(
@@ -325,6 +380,7 @@ async def get_library_agent(id: str, user_id: str) -> library_model.LibraryAgent
),
store_listing=store_listing,
profile=profile,
schedule_info=schedule_info,
)
@@ -360,7 +416,10 @@ async def get_library_agent_by_store_version_id(
},
include=library_agent_include(user_id),
)
return library_model.LibraryAgent.from_db(agent) if agent else None
if not agent:
return None
schedule_info = await _fetch_schedule_info(user_id, graph_id=agent.agentGraphId)
return library_model.LibraryAgent.from_db(agent, schedule_info=schedule_info)
async def get_library_agent_by_graph_id(
@@ -389,7 +448,10 @@ async def get_library_agent_by_graph_id(
assert agent.AgentGraph # make type checker happy
# Include sub-graphs so we can make a full credentials input schema
sub_graphs = await graph_db.get_sub_graphs(agent.AgentGraph)
return library_model.LibraryAgent.from_db(agent, sub_graphs=sub_graphs)
schedule_info = await _fetch_schedule_info(user_id, graph_id=agent.agentGraphId)
return library_model.LibraryAgent.from_db(
agent, sub_graphs=sub_graphs, schedule_info=schedule_info
)
async def add_generated_agent_image(
@@ -531,7 +593,11 @@ async def create_library_agent(
for agent, graph in zip(library_agents, graph_entries):
asyncio.create_task(add_generated_agent_image(graph, user_id, agent.id))
return [library_model.LibraryAgent.from_db(agent) for agent in library_agents]
schedule_info = await _fetch_schedule_info(user_id)
return [
library_model.LibraryAgent.from_db(agent, schedule_info=schedule_info)
for agent in library_agents
]
async def update_agent_version_in_library(
@@ -593,7 +659,8 @@ async def update_agent_version_in_library(
f"Failed to update library agent for {agent_graph_id} v{agent_graph_version}"
)
return library_model.LibraryAgent.from_db(lib)
schedule_info = await _fetch_schedule_info(user_id, graph_id=agent_graph_id)
return library_model.LibraryAgent.from_db(lib, schedule_info=schedule_info)
async def create_graph_in_library(
@@ -1498,7 +1565,11 @@ async def bulk_move_agents_to_folder(
),
)
return [library_model.LibraryAgent.from_db(agent) for agent in agents]
schedule_info = await _fetch_schedule_info(user_id)
return [
library_model.LibraryAgent.from_db(agent, schedule_info=schedule_info)
for agent in agents
]
def collect_tree_ids(

View File

@@ -214,6 +214,14 @@ class LibraryAgent(pydantic.BaseModel):
folder_name: str | None = None # Denormalized for display
recommended_schedule_cron: str | None = None
is_scheduled: bool = pydantic.Field(
default=False,
description="Whether this agent has active execution schedules",
)
next_scheduled_run: str | None = pydantic.Field(
default=None,
description="ISO 8601 timestamp of the next scheduled run, if any",
)
settings: GraphSettings = pydantic.Field(default_factory=GraphSettings)
marketplace_listing: Optional["MarketplaceListing"] = None
@@ -224,6 +232,7 @@ class LibraryAgent(pydantic.BaseModel):
store_listing: Optional[prisma.models.StoreListing] = None,
profile: Optional[prisma.models.Profile] = None,
execution_count_override: Optional[int] = None,
schedule_info: Optional[dict[str, str]] = None,
) -> "LibraryAgent":
"""
Factory method that constructs a LibraryAgent from a Prisma LibraryAgent
@@ -359,6 +368,10 @@ class LibraryAgent(pydantic.BaseModel):
folder_id=agent.folderId,
folder_name=agent.Folder.name if agent.Folder else None,
recommended_schedule_cron=agent.AgentGraph.recommendedScheduleCron,
is_scheduled=bool(schedule_info and agent.agentGraphId in schedule_info),
next_scheduled_run=(
schedule_info.get(agent.agentGraphId) if schedule_info else None
),
settings=_parse_settings(agent.settings),
marketplace_listing=marketplace_listing_data,
)

View File

@@ -106,7 +106,6 @@ class LlmModelMeta(EnumMeta):
class LlmModel(str, Enum, metaclass=LlmModelMeta):
@classmethod
def _missing_(cls, value: object) -> "LlmModel | None":
"""Handle provider-prefixed model names like 'anthropic/claude-sonnet-4-6'."""
@@ -203,6 +202,8 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
GROK_4 = "x-ai/grok-4"
GROK_4_FAST = "x-ai/grok-4-fast"
GROK_4_1_FAST = "x-ai/grok-4.1-fast"
GROK_4_20 = "x-ai/grok-4.20"
GROK_4_20_MULTI_AGENT = "x-ai/grok-4.20-multi-agent"
GROK_CODE_FAST_1 = "x-ai/grok-code-fast-1"
KIMI_K2 = "moonshotai/kimi-k2"
QWEN3_235B_A22B_THINKING = "qwen/qwen3-235b-a22b-thinking-2507"
@@ -627,6 +628,18 @@ MODEL_METADATA = {
LlmModel.GROK_4_1_FAST: ModelMetadata(
"open_router", 2000000, 30000, "Grok 4.1 Fast", "OpenRouter", "xAI", 1
),
LlmModel.GROK_4_20: ModelMetadata(
"open_router", 2000000, 100000, "Grok 4.20", "OpenRouter", "xAI", 3
),
LlmModel.GROK_4_20_MULTI_AGENT: ModelMetadata(
"open_router",
2000000,
100000,
"Grok 4.20 Multi-Agent",
"OpenRouter",
"xAI",
3,
),
LlmModel.GROK_CODE_FAST_1: ModelMetadata(
"open_router", 256000, 10000, "Grok Code Fast 1", "OpenRouter", "xAI", 1
),
@@ -987,7 +1000,6 @@ async def llm_call(
reasoning=reasoning,
)
elif provider == "anthropic":
an_tools = convert_openai_tool_fmt_to_anthropic(tools)
# Cache tool definitions alongside the system prompt.
# Placing cache_control on the last tool caches all tool schemas as a

View File

@@ -10,9 +10,11 @@ from prisma.models import ChatMessage as PrismaChatMessage
from prisma.models import ChatSession as PrismaChatSession
from prisma.types import (
ChatMessageCreateInput,
ChatMessageWhereInput,
ChatSessionCreateInput,
ChatSessionUpdateInput,
ChatSessionWhereInput,
FindManyChatMessageArgsFromChatSession,
)
from pydantic import BaseModel
@@ -30,6 +32,8 @@ from .model import get_chat_session as get_chat_session_cached
logger = logging.getLogger(__name__)
_BOUNDARY_SCAN_LIMIT = 10
class PaginatedMessages(BaseModel):
"""Result of a paginated message query."""
@@ -69,12 +73,10 @@ async def get_chat_messages_paginated(
in parallel with the message query. Returns ``None`` when the session
is not found or does not belong to the user.
Args:
session_id: The chat session ID.
limit: Max messages to return.
before_sequence: Cursor — return messages with sequence < this value.
user_id: If provided, filters via ``Session.userId`` so only the
session owner's messages are returned (acts as an ownership guard).
After fetching, a visibility guarantee ensures the page contains at least
one user or assistant message. If the entire page is tool messages (which
are hidden in the UI), it expands backward until a visible message is found
so the chat never appears blank.
"""
# Build session-existence / ownership check
session_where: ChatSessionWhereInput = {"id": session_id}
@@ -82,7 +84,7 @@ async def get_chat_messages_paginated(
session_where["userId"] = user_id
# Build message include — fetch paginated messages in the same query
msg_include: dict[str, Any] = {
msg_include: FindManyChatMessageArgsFromChatSession = {
"order_by": {"sequence": "desc"},
"take": limit + 1,
}
@@ -111,42 +113,18 @@ async def get_chat_messages_paginated(
# expand backward to include the preceding assistant message that
# owns the tool_calls, so convertChatSessionMessagesToUiMessages
# can pair them correctly.
_BOUNDARY_SCAN_LIMIT = 10
if results and results[0].role == "tool":
boundary_where: dict[str, Any] = {
"sessionId": session_id,
"sequence": {"lt": results[0].sequence},
}
if user_id is not None:
boundary_where["Session"] = {"is": {"userId": user_id}}
extra = await PrismaChatMessage.prisma().find_many(
where=boundary_where,
order={"sequence": "desc"},
take=_BOUNDARY_SCAN_LIMIT,
results, has_more = await _expand_tool_boundary(
session_id, results, has_more, user_id
)
# Visibility guarantee: if the entire page has no user/assistant messages
# (all tool messages), the chat would appear blank. Expand backward
# until we find at least one visible message.
if results and not any(m.role in ("user", "assistant") for m in results):
results, has_more = await _expand_for_visibility(
session_id, results, has_more, user_id
)
# Find the first non-tool message (should be the assistant)
boundary_msgs = []
found_owner = False
for msg in extra:
boundary_msgs.append(msg)
if msg.role != "tool":
found_owner = True
break
boundary_msgs.reverse()
if not found_owner:
logger.warning(
"Boundary expansion did not find owning assistant message "
"for session=%s before sequence=%s (%d msgs scanned)",
session_id,
results[0].sequence,
len(extra),
)
if boundary_msgs:
results = boundary_msgs + results
# Only mark has_more if the expanded boundary isn't the
# very start of the conversation (sequence 0).
if boundary_msgs[0].sequence > 0:
has_more = True
messages = [ChatMessage.from_db(m) for m in results]
oldest_sequence = messages[0].sequence if messages else None
@@ -159,6 +137,98 @@ async def get_chat_messages_paginated(
)
async def _expand_tool_boundary(
session_id: str,
results: list[Any],
has_more: bool,
user_id: str | None,
) -> tuple[list[Any], bool]:
"""Expand backward from the oldest message to include the owning assistant
message when the page starts mid-tool-group."""
boundary_where: ChatMessageWhereInput = {
"sessionId": session_id,
"sequence": {"lt": results[0].sequence},
}
if user_id is not None:
boundary_where["Session"] = {"is": {"userId": user_id}}
extra = await PrismaChatMessage.prisma().find_many(
where=boundary_where,
order={"sequence": "desc"},
take=_BOUNDARY_SCAN_LIMIT,
)
# Find the first non-tool message (should be the assistant)
boundary_msgs = []
found_owner = False
for msg in extra:
boundary_msgs.append(msg)
if msg.role != "tool":
found_owner = True
break
boundary_msgs.reverse()
if not found_owner:
logger.warning(
"Boundary expansion did not find owning assistant message "
"for session=%s before sequence=%s (%d msgs scanned)",
session_id,
results[0].sequence,
len(extra),
)
if boundary_msgs:
results = boundary_msgs + results
has_more = boundary_msgs[0].sequence > 0
return results, has_more
_VISIBILITY_EXPAND_LIMIT = 200
async def _expand_for_visibility(
session_id: str,
results: list[Any],
has_more: bool,
user_id: str | None,
) -> tuple[list[Any], bool]:
"""Expand backward until the page contains at least one user or assistant
message, so the chat is never blank."""
expand_where: ChatMessageWhereInput = {
"sessionId": session_id,
"sequence": {"lt": results[0].sequence},
}
if user_id is not None:
expand_where["Session"] = {"is": {"userId": user_id}}
extra = await PrismaChatMessage.prisma().find_many(
where=expand_where,
order={"sequence": "desc"},
take=_VISIBILITY_EXPAND_LIMIT,
)
if not extra:
return results, has_more
# Collect messages until we find a visible one (user/assistant)
prepend = []
found_visible = False
for msg in extra:
prepend.append(msg)
if msg.role in ("user", "assistant"):
found_visible = True
break
if not found_visible:
logger.warning(
"Visibility expansion did not find any user/assistant message "
"for session=%s before sequence=%s (%d msgs scanned)",
session_id,
results[0].sequence,
len(extra),
)
prepend.reverse()
if prepend:
results = prepend + results
has_more = prepend[0].sequence > 0
return results, has_more
async def create_chat_session(
session_id: str,
user_id: str,

View File

@@ -175,6 +175,138 @@ async def test_no_where_on_messages_without_before_sequence(
assert "where" not in include["Messages"]
# ---------- Visibility guarantee ----------
@pytest.mark.asyncio
async def test_visibility_expands_when_all_tool_messages(
mock_db: tuple[AsyncMock, AsyncMock],
):
"""When the entire page is tool messages, expand backward to find
at least one visible (user/assistant) message so the chat isn't blank."""
find_first, find_many = mock_db
# Newest 3 messages are all tool messages (DESC → reversed to ASC)
find_first.return_value = _make_session(
messages=[
_make_msg(12, role="tool"),
_make_msg(11, role="tool"),
_make_msg(10, role="tool"),
],
)
# Boundary expansion finds the owning assistant first (boundary fix),
# then visibility expansion finds a user message further back
find_many.side_effect = [
# First call: boundary fix (oldest msg is tool → find owner)
[_make_msg(9, role="tool"), _make_msg(8, role="tool")],
# Second call: visibility expansion (still all tool → find visible)
[_make_msg(7, role="tool"), _make_msg(6, role="assistant")],
]
page = await get_chat_messages_paginated(SESSION_ID, limit=3)
assert page is not None
# Should include the expanded messages + original tool messages
roles = [m.role for m in page.messages]
assert "assistant" in roles or "user" in roles
assert page.has_more is True
@pytest.mark.asyncio
async def test_no_visibility_expansion_when_visible_messages_present(
mock_db: tuple[AsyncMock, AsyncMock],
):
"""No visibility expansion needed when page already has visible messages."""
find_first, find_many = mock_db
# Page has an assistant message among tool messages
find_first.return_value = _make_session(
messages=[
_make_msg(5, role="tool"),
_make_msg(4, role="assistant"),
_make_msg(3, role="user"),
],
)
page = await get_chat_messages_paginated(SESSION_ID, limit=3)
assert page is not None
# Boundary expansion might fire (oldest is tool), but NOT visibility
assert [m.sequence for m in page.messages][0] <= 3
@pytest.mark.asyncio
async def test_visibility_no_expansion_when_no_earlier_messages(
mock_db: tuple[AsyncMock, AsyncMock],
):
"""When the page is all tool messages but there are no earlier messages
in the DB, visibility expansion returns early without changes."""
find_first, find_many = mock_db
find_first.return_value = _make_session(
messages=[_make_msg(1, role="tool"), _make_msg(0, role="tool")],
)
# Boundary expansion: no earlier messages
# Visibility expansion: no earlier messages
find_many.side_effect = [[], []]
page = await get_chat_messages_paginated(SESSION_ID, limit=2)
assert page is not None
assert all(m.role == "tool" for m in page.messages)
@pytest.mark.asyncio
async def test_visibility_expansion_reaches_seq_zero(
mock_db: tuple[AsyncMock, AsyncMock],
):
"""When visibility expansion finds a visible message at sequence 0,
has_more should be False."""
find_first, find_many = mock_db
find_first.return_value = _make_session(
messages=[_make_msg(5, role="tool"), _make_msg(4, role="tool")],
)
find_many.side_effect = [
# Boundary expansion
[_make_msg(3, role="tool")],
# Visibility expansion — finds user at seq 0
[
_make_msg(2, role="tool"),
_make_msg(1, role="tool"),
_make_msg(0, role="user"),
],
]
page = await get_chat_messages_paginated(SESSION_ID, limit=2)
assert page is not None
assert page.messages[0].role == "user"
assert page.messages[0].sequence == 0
assert page.has_more is False
@pytest.mark.asyncio
async def test_visibility_expansion_with_user_id(
mock_db: tuple[AsyncMock, AsyncMock],
):
"""Visibility expansion passes user_id filter to the boundary query."""
find_first, find_many = mock_db
find_first.return_value = _make_session(
messages=[_make_msg(10, role="tool")],
)
find_many.side_effect = [
# Boundary expansion
[_make_msg(9, role="tool")],
# Visibility expansion
[_make_msg(8, role="assistant")],
]
await get_chat_messages_paginated(SESSION_ID, limit=1, user_id="user-abc")
# Both find_many calls should include the user_id session filter
for call in find_many.call_args_list:
where = call.kwargs.get("where") or call[1].get("where")
assert "Session" in where
assert where["Session"] == {"is": {"userId": "user-abc"}}
@pytest.mark.asyncio
async def test_user_id_filter_applied_to_session_where(
mock_db: tuple[AsyncMock, AsyncMock],
@@ -329,7 +461,8 @@ async def test_boundary_expansion_warns_when_no_owner_found(
with patch("backend.copilot.db.logger") as mock_logger:
page = await get_chat_messages_paginated(SESSION_ID, limit=5)
mock_logger.warning.assert_called_once()
# Two warnings: boundary expansion + visibility expansion (all tool msgs)
assert mock_logger.warning.call_count == 2
assert page is not None
assert page.messages[0].role == "tool"

View File

@@ -1,71 +0,0 @@
"""Per-request idempotency lock for the /stream endpoint.
Prevents duplicate executor tasks from concurrent or retried POSTs (e.g. k8s
rolling-deploy retries, nginx upstream retries, rapid double-clicks).
Lifecycle
---------
1. ``acquire()`` — computes a stable hash of (session_id, message, file_ids)
and atomically sets a Redis NX key. Returns a ``_DedupLock`` on success or
``None`` when the key already exists (duplicate request).
2. ``release()`` — deletes the key. Must be called on turn completion or turn
error so the next legitimate send is never blocked.
3. On client disconnect (``GeneratorExit``) the lock must NOT be released —
the backend turn is still running, and releasing would reopen the duplicate
window for infra-level retries. The 30 s TTL is the safety net.
"""
import hashlib
import logging
from backend.data.redis_client import get_redis_async
logger = logging.getLogger(__name__)
_KEY_PREFIX = "chat:msg_dedup"
_TTL_SECONDS = 30
class _DedupLock:
def __init__(self, key: str, redis) -> None:
self._key = key
self._redis = redis
async def release(self) -> None:
"""Best-effort key deletion. The TTL handles failures silently."""
try:
await self._redis.delete(self._key)
except Exception:
pass
async def acquire_dedup_lock(
session_id: str,
message: str | None,
file_ids: list[str] | None,
) -> _DedupLock | None:
"""Acquire the idempotency lock for this (session, message, files) tuple.
Returns a ``_DedupLock`` when the lock is freshly acquired (first request).
Returns ``None`` when a duplicate is detected (lock already held).
Returns ``None`` when there is nothing to deduplicate (no message, no files).
"""
if not message and not file_ids:
return None
sorted_ids = ":".join(sorted(file_ids or []))
content_hash = hashlib.sha256(
f"{session_id}:{message or ''}:{sorted_ids}".encode()
).hexdigest()[:16]
key = f"{_KEY_PREFIX}:{session_id}:{content_hash}"
redis = await get_redis_async()
acquired = await redis.set(key, "1", ex=_TTL_SECONDS, nx=True)
if not acquired:
logger.warning(
f"[STREAM] Duplicate user message blocked for session {session_id}, "
f"hash={content_hash} — returning empty SSE",
)
return None
return _DedupLock(key, redis)
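This module is deleted in this PR (its job moves to the is_message_duplicate check in model.py below), but the lifecycle notes above imply a call pattern roughly like the following sketch — run_turn is a stand-in, not the real executor:

async def run_turn(session_id: str, message: str):
    yield f"data: echo {message}\n\n"  # stand-in for the real turn executor

async def stream_endpoint(session_id: str, message: str, file_ids: list[str]):
    lock = await acquire_dedup_lock(session_id, message, file_ids)
    if lock is None and (message or file_ids):
        return  # duplicate request → the SSE stream ends empty
    try:
        async for event in run_turn(session_id, message):
            yield event
    except GeneratorExit:
        raise  # client disconnect: keep the lock; the 30 s TTL is the net
    except Exception:
        if lock is not None:
            await lock.release()  # turn error: unblock the next send
        raise
    else:
        if lock is not None:
            await lock.release()  # turn completion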

View File

@@ -1,94 +0,0 @@
"""Unit tests for backend.copilot.message_dedup."""
from unittest.mock import AsyncMock
import pytest
import pytest_mock
from backend.copilot.message_dedup import _KEY_PREFIX, acquire_dedup_lock
def _patch_redis(mocker: pytest_mock.MockerFixture, *, set_returns):
mock_redis = AsyncMock()
mock_redis.set = AsyncMock(return_value=set_returns)
mocker.patch(
"backend.copilot.message_dedup.get_redis_async",
new_callable=AsyncMock,
return_value=mock_redis,
)
return mock_redis
@pytest.mark.asyncio
async def test_acquire_returns_none_when_no_message_no_files(
mocker: pytest_mock.MockerFixture,
) -> None:
"""Nothing to deduplicate — no Redis call made, None returned."""
mock_redis = _patch_redis(mocker, set_returns=True)
result = await acquire_dedup_lock("sess-1", None, None)
assert result is None
mock_redis.set.assert_not_called()
@pytest.mark.asyncio
async def test_acquire_returns_lock_on_first_request(
mocker: pytest_mock.MockerFixture,
) -> None:
"""First request acquires the lock and returns a _DedupLock."""
mock_redis = _patch_redis(mocker, set_returns=True)
lock = await acquire_dedup_lock("sess-1", "hello", None)
assert lock is not None
mock_redis.set.assert_called_once()
key_arg = mock_redis.set.call_args.args[0]
assert key_arg.startswith(f"{_KEY_PREFIX}:sess-1:")
@pytest.mark.asyncio
async def test_acquire_returns_none_on_duplicate(
mocker: pytest_mock.MockerFixture,
) -> None:
"""Duplicate request (NX fails) returns None to signal the caller."""
_patch_redis(mocker, set_returns=None)
result = await acquire_dedup_lock("sess-1", "hello", None)
assert result is None
@pytest.mark.asyncio
async def test_acquire_key_stable_across_file_order(
mocker: pytest_mock.MockerFixture,
) -> None:
"""File IDs are sorted before hashing so order doesn't affect the key."""
mock_redis_1 = _patch_redis(mocker, set_returns=True)
await acquire_dedup_lock("sess-1", "msg", ["b", "a"])
key_ab = mock_redis_1.set.call_args.args[0]
mock_redis_2 = _patch_redis(mocker, set_returns=True)
await acquire_dedup_lock("sess-1", "msg", ["a", "b"])
key_ba = mock_redis_2.set.call_args.args[0]
assert key_ab == key_ba
@pytest.mark.asyncio
async def test_release_deletes_key(
mocker: pytest_mock.MockerFixture,
) -> None:
"""release() calls Redis delete exactly once."""
mock_redis = _patch_redis(mocker, set_returns=True)
lock = await acquire_dedup_lock("sess-1", "hello", None)
assert lock is not None
await lock.release()
mock_redis.delete.assert_called_once()
@pytest.mark.asyncio
async def test_release_swallows_redis_error(
mocker: pytest_mock.MockerFixture,
) -> None:
"""release() must not raise even when Redis delete fails."""
mock_redis = _patch_redis(mocker, set_returns=True)
mock_redis.delete = AsyncMock(side_effect=RuntimeError("redis down"))
lock = await acquire_dedup_lock("sess-1", "hello", None)
assert lock is not None
await lock.release() # must not raise
mock_redis.delete.assert_called_once()

View File

@@ -1,9 +1,8 @@
import asyncio
import logging
import uuid
from contextlib import asynccontextmanager
from datetime import UTC, datetime
from typing import Any, Self, cast
from weakref import WeakValueDictionary
from typing import Any, AsyncIterator, Self, cast
from openai.types.chat import (
ChatCompletionAssistantMessageParam,
@@ -522,10 +521,7 @@ async def upsert_chat_session(
callers are aware of the persistence failure.
RedisError: If the cache write fails (after successful DB write).
"""
# Acquire session-specific lock to prevent concurrent upserts
lock = await _get_session_lock(session.session_id)
async with lock:
async with _get_session_lock(session.session_id) as _:
# Always query DB for existing message count to ensure consistency
existing_message_count = await chat_db().get_next_sequence(session.session_id)
@@ -651,20 +647,50 @@ async def _save_session_to_db(
msg.sequence = existing_message_count + i
async def append_and_save_message(session_id: str, message: ChatMessage) -> ChatSession:
async def append_and_save_message(
session_id: str, message: ChatMessage
) -> ChatSession | None:
"""Atomically append a message to a session and persist it.
Acquires the session lock, re-fetches the latest session state,
appends the message, and saves — preventing message loss when
concurrent requests modify the same session.
"""
lock = await _get_session_lock(session_id)
Returns the updated session, or None if the message was detected as a
duplicate (idempotency guard). Callers must check for None and skip any
downstream work (e.g. enqueuing a new LLM turn) when a duplicate is detected.
async with lock:
session = await get_chat_session(session_id)
Uses _get_session_lock (Redis NX) to serialise concurrent writers across replicas.
The idempotency check below provides a last-resort guard when the lock degrades.
"""
async with _get_session_lock(session_id) as lock_acquired:
        # When lock acquisition degraded (Redis down or 2s timeout), bypass the
        # cache for the idempotency check. A stale cache could let two concurrent
        # writers both see the old state, pass the check, and write the same message.
if lock_acquired:
session = await get_chat_session(session_id)
else:
session = await _get_session_from_db(session_id)
if session is None:
raise ValueError(f"Session {session_id} not found")
# Idempotency: skip if the trailing block of same-role messages already
# contains this content. Uses is_message_duplicate which checks all
# consecutive trailing messages of the same role, not just [-1].
#
# This collapses infra/nginx retries whether they land on the same pod
# (serialised by the Redis lock) or a different pod.
#
# Legit same-text messages are distinguished by the assistant turn
# between them: if the user said "yes", got a response, and says
# "yes" again, session.messages[-1] is the assistant reply, so the
# role check fails and the second message goes through normally.
#
# Edge case: if a turn dies without writing any assistant message,
# the user's next send of the same text is blocked here permanently.
# The fix is to ensure failed turns always write an error/timeout
# assistant message so the session always ends on an assistant turn.
if message.content is not None and is_message_duplicate(
session.messages, message.role, message.content
):
return None # duplicate — caller should skip enqueue
session.messages.append(message)
existing_message_count = await chat_db().get_next_sequence(session_id)
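The comments above lean on is_message_duplicate, which this hunk doesn't show. A plausible shape, inferred from the described behaviour (scan the whole trailing run of same-role messages, not just [-1]); the actual implementation may differ:

def is_message_duplicate(messages, role: str, content: str) -> bool:
    """True when `content` already appears in the trailing block of
    consecutive messages whose role equals `role`."""
    for msg in reversed(messages):
        if msg.role != role:
            return False  # trailing same-role block ended without a match
        if msg.content == content:
            return True
    return False

With this shape, a legitimate repeated "yes" passes: the assistant reply in between ends the trailing user-role block on the first iteration.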
@@ -679,6 +705,9 @@ async def append_and_save_message(session_id: str, message: ChatMessage) -> Chat
await cache_chat_session(session)
except Exception as e:
logger.warning(f"Cache write failed for session {session_id}: {e}")
# Invalidate the stale entry so future reads fall back to DB,
# preventing a retry from bypassing the idempotency check above.
await invalidate_session_cache(session_id)
return session
@@ -764,10 +793,6 @@ async def delete_chat_session(session_id: str, user_id: str | None = None) -> bo
except Exception as e:
logger.warning(f"Failed to delete session {session_id} from cache: {e}")
# Clean up session lock (belt-and-suspenders with WeakValueDictionary)
async with _session_locks_mutex:
_session_locks.pop(session_id, None)
# Shut down any local browser daemon for this session (best-effort).
# Inline import required: all tool modules import ChatSession from this
# module, so any top-level import from tools.* would create a cycle.
@@ -832,25 +857,38 @@ async def update_session_title(
# ==================== Chat session locks ==================== #
_session_locks: WeakValueDictionary[str, asyncio.Lock] = WeakValueDictionary()
_session_locks_mutex = asyncio.Lock()
@asynccontextmanager
async def _get_session_lock(session_id: str) -> AsyncIterator[bool]:
"""Distributed Redis lock for a session, usable as an async context manager.
async def _get_session_lock(session_id: str) -> asyncio.Lock:
"""Get or create a lock for a specific session to prevent concurrent upserts.
Yields True if the lock was acquired, False if it timed out or Redis was
unavailable. Callers should treat False as a degraded mode and prefer fresh
DB reads over cache to avoid acting on stale state.
This was originally added to solve the specific problem of race conditions between
the session title thread and the conversation thread, which always occurs on the
same instance as we prevent rapid request sends on the frontend.
Uses WeakValueDictionary for automatic cleanup: locks are garbage collected
when no coroutine holds a reference to them, preventing memory leaks from
unbounded growth of session locks. Explicit cleanup also occurs
in `delete_chat_session()`.
Uses redis-py's built-in Lock (Lua-script acquire/release) so lock acquisition
is atomic and release is owner-verified. Blocks up to 2s for a concurrent
writer to finish; the 10s TTL ensures a dead pod never holds the lock forever.
"""
async with _session_locks_mutex:
lock = _session_locks.get(session_id)
if lock is None:
lock = asyncio.Lock()
_session_locks[session_id] = lock
return lock
_lock_key = f"copilot:session_lock:{session_id}"
lock = None
acquired = False
try:
_redis = await get_redis_async()
lock = _redis.lock(_lock_key, timeout=10, blocking_timeout=2)
acquired = await lock.acquire(blocking=True)
if not acquired:
logger.warning(
"Could not acquire session lock for %s within 2s", session_id
)
except Exception as e:
logger.warning("Redis unavailable for session lock on %s: %s", session_id, e)
try:
yield acquired
finally:
if acquired and lock is not None:
try:
await lock.release()
except Exception:
pass # TTL will expire the key
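The reason for redis-py's Lock rather than a bare SET NX/DEL pair: DEL is not owner-verified, so a writer whose 10s TTL already expired could delete a lock another pod has since re-acquired. The essence of the owner check, sketched (not this PR's code):

from redis.asyncio import Redis

async def unsafe_release(redis: Redis, key: str) -> None:
    await redis.delete(key)  # deletes whoever holds the lock now

async def safe_release(redis: Redis, key: str, token: str) -> None:
    # Compare-and-delete: only the holder of the matching token may remove
    # the key — the same shape redis-py's Lock runs as a Lua script on release.
    script = """
    if redis.call('get', KEYS[1]) == ARGV[1] then
        return redis.call('del', KEYS[1])
    end
    return 0
    """
    await redis.eval(script, 1, key, token)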

View File

@@ -11,11 +11,13 @@ from openai.types.chat.chat_completion_message_tool_call_param import (
ChatCompletionMessageToolCallParam,
Function,
)
from pytest_mock import MockerFixture
from .model import (
ChatMessage,
ChatSession,
Usage,
append_and_save_message,
get_chat_session,
is_message_duplicate,
maybe_append_user_message,
@@ -574,3 +576,345 @@ def test_maybe_append_assistant_skips_duplicate():
result = maybe_append_user_message(session, "dup", is_user_message=False)
assert result is False
assert len(session.messages) == 2
# --------------------------------------------------------------------------- #
# append_and_save_message #
# --------------------------------------------------------------------------- #
def _make_session_with_messages(*msgs: ChatMessage) -> ChatSession:
s = ChatSession.new(user_id="u1", dry_run=False)
s.messages = list(msgs)
return s
@pytest.mark.asyncio(loop_scope="session")
async def test_append_and_save_message_returns_none_for_duplicate(
mocker: MockerFixture,
) -> None:
"""append_and_save_message returns None when the trailing message is a duplicate."""
session = _make_session_with_messages(
ChatMessage(role="user", content="hello"),
)
mock_redis_lock = mocker.AsyncMock()
mock_redis_lock.acquire = mocker.AsyncMock(return_value=True)
mock_redis_lock.release = mocker.AsyncMock()
mock_redis_client = mocker.MagicMock()
mock_redis_client.lock = mocker.MagicMock(return_value=mock_redis_lock)
mocker.patch(
"backend.copilot.model.get_redis_async",
new_callable=mocker.AsyncMock,
return_value=mock_redis_client,
)
mocker.patch(
"backend.copilot.model.get_chat_session",
new_callable=mocker.AsyncMock,
return_value=session,
)
result = await append_and_save_message(
session.session_id, ChatMessage(role="user", content="hello")
)
assert result is None
@pytest.mark.asyncio(loop_scope="session")
async def test_append_and_save_message_appends_new_message(
mocker: MockerFixture,
) -> None:
"""append_and_save_message appends a non-duplicate message and returns the session."""
session = _make_session_with_messages(
ChatMessage(role="user", content="hello"),
ChatMessage(role="assistant", content="hi"),
)
mock_redis_lock = mocker.AsyncMock()
mock_redis_lock.acquire = mocker.AsyncMock(return_value=True)
mock_redis_lock.release = mocker.AsyncMock()
mock_redis_client = mocker.MagicMock()
mock_redis_client.lock = mocker.MagicMock(return_value=mock_redis_lock)
mocker.patch(
"backend.copilot.model.get_redis_async",
new_callable=mocker.AsyncMock,
return_value=mock_redis_client,
)
mocker.patch(
"backend.copilot.model.get_chat_session",
new_callable=mocker.AsyncMock,
return_value=session,
)
mocker.patch(
"backend.copilot.model._save_session_to_db",
new_callable=mocker.AsyncMock,
)
mocker.patch(
"backend.copilot.model.chat_db",
return_value=mocker.MagicMock(
get_next_sequence=mocker.AsyncMock(return_value=2)
),
)
mocker.patch(
"backend.copilot.model.cache_chat_session",
new_callable=mocker.AsyncMock,
)
new_msg = ChatMessage(role="user", content="second message")
result = await append_and_save_message(session.session_id, new_msg)
assert result is not None
assert result.messages[-1].content == "second message"
@pytest.mark.asyncio(loop_scope="session")
async def test_append_and_save_message_raises_when_session_not_found(
mocker: MockerFixture,
) -> None:
"""append_and_save_message raises ValueError when the session does not exist."""
mock_redis_lock = mocker.AsyncMock()
mock_redis_lock.acquire = mocker.AsyncMock(return_value=True)
mock_redis_lock.release = mocker.AsyncMock()
mock_redis_client = mocker.MagicMock()
mock_redis_client.lock = mocker.MagicMock(return_value=mock_redis_lock)
mocker.patch(
"backend.copilot.model.get_redis_async",
new_callable=mocker.AsyncMock,
return_value=mock_redis_client,
)
mocker.patch(
"backend.copilot.model.get_chat_session",
new_callable=mocker.AsyncMock,
return_value=None,
)
with pytest.raises(ValueError, match="not found"):
await append_and_save_message(
"missing-session-id", ChatMessage(role="user", content="hi")
)
@pytest.mark.asyncio(loop_scope="session")
async def test_append_and_save_message_uses_db_when_lock_degraded(
mocker: MockerFixture,
) -> None:
"""When the Redis lock times out (acquired=False), the fallback reads from DB."""
session = _make_session_with_messages(
ChatMessage(role="assistant", content="hi"),
)
mock_redis_lock = mocker.AsyncMock()
mock_redis_lock.acquire = mocker.AsyncMock(return_value=False)
mock_redis_client = mocker.MagicMock()
mock_redis_client.lock = mocker.MagicMock(return_value=mock_redis_lock)
mocker.patch(
"backend.copilot.model.get_redis_async",
new_callable=mocker.AsyncMock,
return_value=mock_redis_client,
)
mock_get_from_db = mocker.patch(
"backend.copilot.model._get_session_from_db",
new_callable=mocker.AsyncMock,
return_value=session,
)
mocker.patch(
"backend.copilot.model._save_session_to_db",
new_callable=mocker.AsyncMock,
)
mocker.patch(
"backend.copilot.model.chat_db",
return_value=mocker.MagicMock(
get_next_sequence=mocker.AsyncMock(return_value=1)
),
)
mocker.patch(
"backend.copilot.model.cache_chat_session",
new_callable=mocker.AsyncMock,
)
new_msg = ChatMessage(role="user", content="new msg")
result = await append_and_save_message(session.session_id, new_msg)
# DB path was used (not cache-first)
mock_get_from_db.assert_called_once_with(session.session_id)
assert result is not None
@pytest.mark.asyncio(loop_scope="session")
async def test_append_and_save_message_raises_database_error_on_save_failure(
mocker: MockerFixture,
) -> None:
"""When _save_session_to_db fails, append_and_save_message raises DatabaseError."""
from backend.util.exceptions import DatabaseError
session = _make_session_with_messages(
ChatMessage(role="assistant", content="hi"),
)
mock_redis_lock = mocker.AsyncMock()
mock_redis_lock.acquire = mocker.AsyncMock(return_value=True)
mock_redis_lock.release = mocker.AsyncMock()
mock_redis_client = mocker.MagicMock()
mock_redis_client.lock = mocker.MagicMock(return_value=mock_redis_lock)
mocker.patch(
"backend.copilot.model.get_redis_async",
new_callable=mocker.AsyncMock,
return_value=mock_redis_client,
)
mocker.patch(
"backend.copilot.model.get_chat_session",
new_callable=mocker.AsyncMock,
return_value=session,
)
mocker.patch(
"backend.copilot.model._save_session_to_db",
new_callable=mocker.AsyncMock,
side_effect=RuntimeError("db down"),
)
mocker.patch(
"backend.copilot.model.chat_db",
return_value=mocker.MagicMock(
get_next_sequence=mocker.AsyncMock(return_value=1)
),
)
with pytest.raises(DatabaseError):
await append_and_save_message(
session.session_id, ChatMessage(role="user", content="new msg")
)
@pytest.mark.asyncio(loop_scope="session")
async def test_append_and_save_message_invalidates_cache_on_cache_failure(
mocker: MockerFixture,
) -> None:
"""When cache_chat_session fails, invalidate_session_cache is called to avoid stale reads."""
session = _make_session_with_messages(
ChatMessage(role="assistant", content="hi"),
)
mock_redis_lock = mocker.AsyncMock()
mock_redis_lock.acquire = mocker.AsyncMock(return_value=True)
mock_redis_lock.release = mocker.AsyncMock()
mock_redis_client = mocker.MagicMock()
mock_redis_client.lock = mocker.MagicMock(return_value=mock_redis_lock)
mocker.patch(
"backend.copilot.model.get_redis_async",
new_callable=mocker.AsyncMock,
return_value=mock_redis_client,
)
mocker.patch(
"backend.copilot.model.get_chat_session",
new_callable=mocker.AsyncMock,
return_value=session,
)
mocker.patch(
"backend.copilot.model._save_session_to_db",
new_callable=mocker.AsyncMock,
)
mocker.patch(
"backend.copilot.model.chat_db",
return_value=mocker.MagicMock(
get_next_sequence=mocker.AsyncMock(return_value=1)
),
)
mocker.patch(
"backend.copilot.model.cache_chat_session",
new_callable=mocker.AsyncMock,
side_effect=RuntimeError("redis write failed"),
)
mock_invalidate = mocker.patch(
"backend.copilot.model.invalidate_session_cache",
new_callable=mocker.AsyncMock,
)
result = await append_and_save_message(
session.session_id, ChatMessage(role="user", content="new msg")
)
# DB write succeeded, cache invalidation was called
mock_invalidate.assert_called_once_with(session.session_id)
assert result is not None
@pytest.mark.asyncio(loop_scope="session")
async def test_append_and_save_message_uses_db_when_redis_unavailable(
mocker: MockerFixture,
) -> None:
"""When get_redis_async raises, _get_session_lock yields False (degraded) and DB is read."""
session = _make_session_with_messages(
ChatMessage(role="assistant", content="hi"),
)
mocker.patch(
"backend.copilot.model.get_redis_async",
new_callable=mocker.AsyncMock,
side_effect=ConnectionError("redis down"),
)
mock_get_from_db = mocker.patch(
"backend.copilot.model._get_session_from_db",
new_callable=mocker.AsyncMock,
return_value=session,
)
mocker.patch(
"backend.copilot.model._save_session_to_db",
new_callable=mocker.AsyncMock,
)
mocker.patch(
"backend.copilot.model.chat_db",
return_value=mocker.MagicMock(
get_next_sequence=mocker.AsyncMock(return_value=1)
),
)
mocker.patch(
"backend.copilot.model.cache_chat_session",
new_callable=mocker.AsyncMock,
)
new_msg = ChatMessage(role="user", content="new msg")
result = await append_and_save_message(session.session_id, new_msg)
mock_get_from_db.assert_called_once_with(session.session_id)
assert result is not None
@pytest.mark.asyncio(loop_scope="session")
async def test_append_and_save_message_lock_release_failure_is_ignored(
mocker: MockerFixture,
) -> None:
"""If lock.release() raises, the exception is swallowed (TTL will clean up)."""
session = _make_session_with_messages(
ChatMessage(role="assistant", content="hi"),
)
mock_redis_lock = mocker.AsyncMock()
mock_redis_lock.acquire = mocker.AsyncMock(return_value=True)
mock_redis_lock.release = mocker.AsyncMock(
side_effect=RuntimeError("release failed")
)
mock_redis_client = mocker.MagicMock()
mock_redis_client.lock = mocker.MagicMock(return_value=mock_redis_lock)
mocker.patch(
"backend.copilot.model.get_redis_async",
new_callable=mocker.AsyncMock,
return_value=mock_redis_client,
)
mocker.patch(
"backend.copilot.model.get_chat_session",
new_callable=mocker.AsyncMock,
return_value=session,
)
mocker.patch(
"backend.copilot.model._save_session_to_db",
new_callable=mocker.AsyncMock,
)
mocker.patch(
"backend.copilot.model.chat_db",
return_value=mocker.MagicMock(
get_next_sequence=mocker.AsyncMock(return_value=1)
),
)
mocker.patch(
"backend.copilot.model.cache_chat_session",
new_callable=mocker.AsyncMock,
)
new_msg = ChatMessage(role="user", content="new msg")
result = await append_and_save_message(session.session_id, new_msg)
assert result is not None

View File

@@ -423,20 +423,33 @@ async def subscribe_to_session(
extra={"json_fields": {**log_meta, "duration_ms": hgetall_time}},
)
# RACE CONDITION FIX: If session not found, retry once after small delay
# This handles the case where subscribe_to_session is called immediately
# after create_session but before Redis propagates the write
# RACE CONDITION FIX: If session not found, retry with backoff.
# Duplicate requests skip create_session and subscribe immediately; the
# original request's create_session (a Redis hset) may not have completed
# yet. 3 × 100ms gives a 300ms window which covers DB-write latency on the
# original request before the hset even starts.
if not meta:
logger.warning(
"[TIMING] Session not found on first attempt, retrying after 50ms delay",
extra={"json_fields": {**log_meta}},
)
await asyncio.sleep(0.05) # 50ms
meta = await redis.hgetall(meta_key) # type: ignore[misc]
if not meta:
_max_retries = 3
_retry_delay = 0.1 # 100ms per attempt
for attempt in range(_max_retries):
logger.warning(
f"[TIMING] Session not found (attempt {attempt + 1}/{_max_retries}), "
f"retrying after {int(_retry_delay * 1000)}ms",
extra={"json_fields": {**log_meta, "attempt": attempt + 1}},
)
await asyncio.sleep(_retry_delay)
meta = await redis.hgetall(meta_key) # type: ignore[misc]
if meta:
logger.info(
f"[TIMING] Session found after {attempt + 1} retries",
extra={"json_fields": {**log_meta, "attempts": attempt + 1}},
)
break
else:
elapsed = (time.perf_counter() - start_time) * 1000
logger.info(
f"[TIMING] Session still not found in Redis after retry ({elapsed:.1f}ms total)",
f"[TIMING] Session still not found in Redis after {_max_retries} retries "
f"({elapsed:.1f}ms total)",
extra={
"json_fields": {
**log_meta,
@@ -446,10 +459,6 @@ async def subscribe_to_session(
},
)
return None
logger.info(
"[TIMING] Session found after retry",
extra={"json_fields": {**log_meta}},
)
# Note: Redis client uses decode_responses=True, so keys are strings
session_status = meta.get("status", "")
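The inline loop could equally live in a small helper; a hypothetical refactor (not part of this PR) keeping the same 3 × 100 ms budget:

import asyncio

async def hgetall_with_retry(redis, key: str, *, retries: int = 3, delay: float = 0.1) -> dict:
    """Poll a Redis hash until it appears, covering the create→subscribe race."""
    meta = await redis.hgetall(key)
    for _ in range(retries):
        if meta:
            break
        await asyncio.sleep(delay)
        meta = await redis.hgetall(key)
    return meta  # empty dict if the session never materialised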

View File

@@ -143,6 +143,8 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.GROK_4: 9,
LlmModel.GROK_4_FAST: 1,
LlmModel.GROK_4_1_FAST: 1,
LlmModel.GROK_4_20: 5,
LlmModel.GROK_4_20_MULTI_AGENT: 5,
LlmModel.GROK_CODE_FAST_1: 1,
LlmModel.KIMI_K2: 1,
LlmModel.QWEN3_235B_A22B_THINKING: 1,

View File

@@ -40,6 +40,8 @@
"folder_id": null,
"folder_name": null,
"recommended_schedule_cron": null,
"is_scheduled": false,
"next_scheduled_run": null,
"settings": {
"human_in_the_loop_safe_mode": true,
"sensitive_action_safe_mode": false
@@ -86,6 +88,8 @@
"folder_id": null,
"folder_name": null,
"recommended_schedule_cron": null,
"is_scheduled": false,
"next_scheduled_run": null,
"settings": {
"human_in_the_loop_safe_mode": true,
"sensitive_action_safe_mode": false

View File

@@ -1,5 +1,5 @@
import { describe, expect, it } from "vitest";
import { serializeGraphForChat } from "../helpers";
import { getNodeDisplayName, serializeGraphForChat } from "../helpers";
import type { CustomNode } from "../../FlowEditor/nodes/CustomNode/CustomNode";
describe("serializeGraphForChat XML injection prevention", () => {
@@ -53,3 +53,53 @@ describe("serializeGraphForChat XML injection prevention", () => {
expect(result).toContain("&lt;injection&gt;");
});
});
function makeNode(overrides: Partial<CustomNode["data"]> = {}): CustomNode {
return {
id: "node-1",
data: {
title: "AgentExecutorBlock",
description: "",
hardcodedValues: {},
inputSchema: {},
outputSchema: {},
uiType: "agent",
block_id: "b1",
costs: [],
categories: [],
...overrides,
},
type: "custom" as const,
position: { x: 0, y: 0 },
} as unknown as CustomNode;
}
describe("getNodeDisplayName", () => {
it("returns fallback when node is undefined", () => {
expect(getNodeDisplayName(undefined, "fallback-id")).toBe("fallback-id");
});
it("returns customized_name when set", () => {
const node = makeNode({
metadata: { customized_name: "My Agent" } as any,
});
expect(getNodeDisplayName(node, "fallback")).toBe("My Agent");
});
it("returns agent_name with version via getNodeDisplayTitle delegation", () => {
const node = makeNode({
hardcodedValues: { agent_name: "Researcher", graph_version: 3 },
});
expect(getNodeDisplayName(node, "fallback")).toBe("Researcher v3");
});
it("returns block title when no custom or agent name", () => {
const node = makeNode({ title: "SomeBlock" });
expect(getNodeDisplayName(node, "fallback")).toBe("SomeBlock");
});
it("returns fallback when title is empty", () => {
const node = makeNode({ title: "" });
expect(getNodeDisplayName(node, "fallback")).toBe("fallback");
});
});

View File

@@ -1,5 +1,6 @@
import type { CustomNode } from "../FlowEditor/nodes/CustomNode/CustomNode";
import type { CustomEdge } from "../FlowEditor/edges/CustomEdge";
import { getNodeDisplayTitle } from "../FlowEditor/nodes/CustomNode/helpers";
/** Maximum nodes serialized into the AI context to prevent token overruns. */
const MAX_NODES = 100;
@@ -144,18 +145,16 @@ export function getActionKey(action: GraphAction): string {
/**
* Resolves the display name for a node: prefers the user-customized name,
* falls back to the block title, then to the raw ID.
* then agent name from hardcodedValues, then block title, then fallback ID.
* Delegates to `getNodeDisplayTitle` for the 3-tier resolution logic.
* Shared between `serializeGraphForChat` and `ActionItem` to avoid duplication.
*/
export function getNodeDisplayName(
node: CustomNode | undefined,
fallback: string,
): string {
return (
(node?.data.metadata?.customized_name as string | undefined) ||
node?.data.title ||
fallback
);
if (!node) return fallback;
return getNodeDisplayTitle(node.data) || fallback;
}
/**

View File

@@ -0,0 +1,92 @@
import { describe, it, expect } from "vitest";
import { getNodeDisplayTitle, formatNodeDisplayTitle } from "../helpers";
import { CustomNodeData } from "../CustomNode";
function makeNodeData(overrides: Partial<CustomNodeData> = {}): CustomNodeData {
return {
title: "AgentExecutorBlock",
description: "",
hardcodedValues: {},
inputSchema: {},
outputSchema: {},
uiType: "agent",
block_id: "block-1",
costs: [],
categories: [],
...overrides,
} as CustomNodeData;
}
describe("getNodeDisplayTitle", () => {
it("returns customized_name when set (tier 1)", () => {
const data = makeNodeData({
metadata: { customized_name: "My Custom Agent" } as any,
hardcodedValues: { agent_name: "Researcher", graph_version: 2 },
});
expect(getNodeDisplayTitle(data)).toBe("My Custom Agent");
});
it("returns agent_name with version when no customized_name (tier 2)", () => {
const data = makeNodeData({
hardcodedValues: { agent_name: "Researcher", graph_version: 2 },
});
expect(getNodeDisplayTitle(data)).toBe("Researcher v2");
});
it("returns agent_name without version when graph_version is undefined (tier 2)", () => {
const data = makeNodeData({
hardcodedValues: { agent_name: "Researcher" },
});
expect(getNodeDisplayTitle(data)).toBe("Researcher");
});
it("returns agent_name with version 0 (tier 2)", () => {
const data = makeNodeData({
hardcodedValues: { agent_name: "Researcher", graph_version: 0 },
});
expect(getNodeDisplayTitle(data)).toBe("Researcher v0");
});
it("returns generic block title when no custom or agent name (tier 3)", () => {
const data = makeNodeData({ title: "AgentExecutorBlock" });
expect(getNodeDisplayTitle(data)).toBe("AgentExecutorBlock");
});
it("prioritizes customized_name over agent_name", () => {
const data = makeNodeData({
metadata: { customized_name: "Renamed" } as any,
hardcodedValues: { agent_name: "Original Agent", graph_version: 1 },
});
expect(getNodeDisplayTitle(data)).toBe("Renamed");
});
});
describe("formatNodeDisplayTitle", () => {
it("returns custom name as-is without beautifying", () => {
const data = makeNodeData({
metadata: { customized_name: "my_custom_name" } as any,
});
expect(formatNodeDisplayTitle(data)).toBe("my_custom_name");
});
it("returns agent name as-is without beautifying", () => {
const data = makeNodeData({
hardcodedValues: { agent_name: "Blockchain Agent", graph_version: 1 },
});
expect(formatNodeDisplayTitle(data)).toBe("Blockchain Agent v1");
});
it("beautifies generic block title and strips Block suffix", () => {
const data = makeNodeData({ title: "AgentExecutorBlock" });
const result = formatNodeDisplayTitle(data);
expect(result).not.toContain("Block");
expect(result).toBe("Agent Executor");
});
it("does not corrupt agent names containing 'Block'", () => {
const data = makeNodeData({
hardcodedValues: { agent_name: "Blockchain Agent", graph_version: 2 },
});
expect(formatNodeDisplayTitle(data)).toBe("Blockchain Agent v2");
});
});

View File

@@ -6,9 +6,10 @@ import {
TooltipProvider,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { beautifyString, cn } from "@/lib/utils";
import { useState } from "react";
import { cn } from "@/lib/utils";
import { useEffect, useState } from "react";
import { CustomNodeData } from "../CustomNode";
import { formatNodeDisplayTitle, getNodeDisplayTitle } from "../helpers";
import { NodeBadges } from "./NodeBadges";
import { NodeContextMenu } from "./NodeContextMenu";
import { NodeCost } from "./NodeCost";
@@ -21,15 +22,24 @@ type Props = {
export const NodeHeader = ({ data, nodeId }: Props) => {
const updateNodeData = useNodeStore((state) => state.updateNodeData);
const title = (data.metadata?.customized_name as string) || data.title;
const title = getNodeDisplayTitle(data);
const displayTitle = formatNodeDisplayTitle(data);
const [isEditingTitle, setIsEditingTitle] = useState(false);
const [editedTitle, setEditedTitle] = useState(title);
useEffect(() => {
if (!isEditingTitle) {
setEditedTitle(title);
}
}, [title, isEditingTitle]);
const handleTitleEdit = () => {
updateNodeData(nodeId, {
metadata: { ...data.metadata, customized_name: editedTitle },
});
if (editedTitle !== title) {
updateNodeData(nodeId, {
metadata: { ...data.metadata, customized_name: editedTitle },
});
}
setIsEditingTitle(false);
};
@@ -72,12 +82,12 @@ export const NodeHeader = ({ data, nodeId }: Props) => {
variant="large-semibold"
className="line-clamp-1 hover:cursor-text"
>
{beautifyString(title).replace("Block", "").trim()}
{displayTitle}
</Text>
</div>
</TooltipTrigger>
<TooltipContent>
<p>{beautifyString(title).replace("Block", "").trim()}</p>
<p>{displayTitle}</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>

View File

@@ -0,0 +1,121 @@
import { describe, it, expect, vi, beforeEach } from "vitest";
import { render, screen, fireEvent } from "@/tests/integrations/test-utils";
import { NodeHeader } from "../NodeHeader";
import { CustomNodeData } from "../../CustomNode";
import { useNodeStore } from "@/app/(platform)/build/stores/nodeStore";
vi.mock("../NodeCost", () => ({
NodeCost: () => <div data-testid="node-cost" />,
}));
vi.mock("../NodeContextMenu", () => ({
NodeContextMenu: () => <div data-testid="node-context-menu" />,
}));
vi.mock("../NodeBadges", () => ({
NodeBadges: () => <div data-testid="node-badges" />,
}));
function makeData(overrides: Partial<CustomNodeData> = {}): CustomNodeData {
return {
title: "AgentExecutorBlock",
description: "",
hardcodedValues: {},
inputSchema: {},
outputSchema: {},
uiType: "agent",
block_id: "block-1",
costs: [],
categories: [],
...overrides,
} as CustomNodeData;
}
describe("NodeHeader", () => {
const mockUpdateNodeData = vi.fn();
beforeEach(() => {
vi.clearAllMocks();
useNodeStore.setState({ updateNodeData: mockUpdateNodeData } as any);
});
it("renders beautified generic block title", () => {
render(<NodeHeader data={makeData()} nodeId="abc-123" />);
expect(screen.getByText("Agent Executor")).toBeTruthy();
});
it("renders agent name with version from hardcodedValues", () => {
const data = makeData({
hardcodedValues: { agent_name: "Researcher", graph_version: 2 },
});
render(<NodeHeader data={data} nodeId="abc-123" />);
expect(screen.getByText("Researcher v2")).toBeTruthy();
});
it("renders customized_name over agent name", () => {
const data = makeData({
metadata: { customized_name: "My Custom Node" } as any,
hardcodedValues: { agent_name: "Researcher", graph_version: 1 },
});
render(<NodeHeader data={data} nodeId="abc-123" />);
expect(screen.getByText("My Custom Node")).toBeTruthy();
});
it("shows node ID prefix", () => {
render(<NodeHeader data={makeData()} nodeId="abc-123" />);
expect(screen.getByText("#abc")).toBeTruthy();
});
it("enters edit mode on double-click and saves on blur", () => {
render(<NodeHeader data={makeData()} nodeId="node-1" />);
const titleEl = screen.getByText("Agent Executor");
fireEvent.doubleClick(titleEl);
const input = screen.getByDisplayValue("AgentExecutorBlock");
fireEvent.change(input, { target: { value: "New Name" } });
fireEvent.blur(input);
expect(mockUpdateNodeData).toHaveBeenCalledWith("node-1", {
metadata: { customized_name: "New Name" },
});
});
it("does not save when title is unchanged on blur", () => {
const data = makeData({
hardcodedValues: { agent_name: "Researcher", graph_version: 2 },
});
render(<NodeHeader data={data} nodeId="node-1" />);
const titleEl = screen.getByText("Researcher v2");
fireEvent.doubleClick(titleEl);
const input = screen.getByDisplayValue("Researcher v2");
fireEvent.blur(input);
expect(mockUpdateNodeData).not.toHaveBeenCalled();
});
it("saves on Enter key", () => {
render(<NodeHeader data={makeData()} nodeId="node-1" />);
fireEvent.doubleClick(screen.getByText("Agent Executor"));
const input = screen.getByDisplayValue("AgentExecutorBlock");
fireEvent.change(input, { target: { value: "Renamed" } });
fireEvent.keyDown(input, { key: "Enter" });
expect(mockUpdateNodeData).toHaveBeenCalledWith("node-1", {
metadata: { customized_name: "Renamed" },
});
});
it("cancels edit on Escape key", () => {
render(<NodeHeader data={makeData()} nodeId="node-1" />);
fireEvent.doubleClick(screen.getByText("Agent Executor"));
const input = screen.getByDisplayValue("AgentExecutorBlock");
fireEvent.change(input, { target: { value: "Changed" } });
fireEvent.keyDown(input, { key: "Escape" });
expect(mockUpdateNodeData).not.toHaveBeenCalled();
expect(screen.getByText("Agent Executor")).toBeTruthy();
});
});

View File

@@ -1,6 +1,55 @@
import { AgentExecutionStatus } from "@/app/api/__generated__/models/agentExecutionStatus";
import { NodeResolutionData } from "@/app/(platform)/build/stores/types";
import { beautifyString } from "@/lib/utils";
import { RJSFSchema } from "@rjsf/utils";
import { CustomNodeData } from "./CustomNode";
/**
* Resolves the display title for a node using a 3-tier fallback:
*
* 1. `customized_name` — the user's manual rename (highest priority)
* 2. `agent_name` (+ version) from `hardcodedValues` — the selected agent's
* display name, persisted by blocks like AgentExecutorBlock
* 3. `data.title` — the generic block name (e.g. "Agent Executor")
*
* `customized_name` is the user's explicit rename via double-click; it lives in
* node metadata. `agent_name` is the programmatic name of the agent graph
* selected in the block's input form; it lives in `hardcodedValues` alongside
* `graph_version`. These are distinct sources of truth — customized_name always
* wins because it reflects deliberate user intent.
*/
export function getNodeDisplayTitle(data: CustomNodeData): string {
if (data.metadata?.customized_name) {
return data.metadata.customized_name as string;
}
const agentName = data.hardcodedValues?.agent_name as string | undefined;
const graphVersion = data.hardcodedValues?.graph_version as
| number
| undefined;
if (agentName) {
return graphVersion != null ? `${agentName} v${graphVersion}` : agentName;
}
return data.title;
}
/**
* Returns the formatted display title for rendering.
* Agent names and custom names are shown as-is; generic block names get
* beautified and have the trailing " Block" suffix stripped.
*/
export function formatNodeDisplayTitle(data: CustomNodeData): string {
const title = getNodeDisplayTitle(data);
const isAgentOrCustom = !!(
data.metadata?.customized_name || data.hardcodedValues?.agent_name
);
return isAgentOrCustom
? title
: beautifyString(title)
.replace(/ Block$/, "")
.trim();
}
export const nodeStyleBasedOnStatus: Record<AgentExecutionStatus, string> = {
INCOMPLETE: "ring-slate-300 bg-slate-300",

View File

@@ -1,3 +1,4 @@
import { formatNodeDisplayTitle } from "@/app/(platform)/build/components/FlowEditor/nodes/CustomNode/helpers";
import { Separator } from "@/components/ui/separator";
import { ScrollArea } from "@/components/ui/scroll-area";
import { beautifyString, cn } from "@/lib/utils";
@@ -58,9 +59,7 @@ export function GraphSearchContent({
filteredNodes.map((node, index) => {
if (!node?.data) return null;
const nodeTitle =
(node.data.metadata?.customized_name as string) ||
beautifyString(node.data.title || "").replace(/ Block$/, "");
const nodeTitle = formatNodeDisplayTitle(node.data);
const nodeType = beautifyString(node.data.title || "").replace(
/ Block$/,
"",
@@ -70,7 +69,10 @@ export function GraphSearchContent({
node.data.description ||
"";
const hasCustomName = !!node.data.metadata?.customized_name;
const hasCustomName = !!(
node.data.metadata?.customized_name ||
node.data.hardcodedValues?.agent_name
);
return (
<div

View File

@@ -69,6 +69,9 @@ function calculateNodeScore(
const customizedName = String(
node.data?.metadata?.customized_name || "",
).toLowerCase();
const agentName = String(
node.data?.hardcodedValues?.agent_name || "",
).toLowerCase();
// Get input and output names with defensive checks
const inputNames = Object.keys(node.data?.inputSchema?.properties || {}).map(
@@ -81,6 +84,7 @@ function calculateNodeScore(
// 1. Check exact match in customized name, title (includes ID), node ID, or block type (highest priority)
if (
customizedName.includes(query) ||
agentName.includes(query) ||
nodeTitle.includes(query) ||
nodeID.includes(query) ||
blockType.includes(query) ||
@@ -95,6 +99,7 @@ function calculateNodeScore(
queryWords.every(
(word) =>
customizedName.includes(word) ||
agentName.includes(word) ||
nodeTitle.includes(word) ||
beautifiedBlockType.includes(word),
)

View File

@@ -0,0 +1,87 @@
import { renderHook } from "@testing-library/react";
import { beforeEach, describe, expect, it, vi } from "vitest";
import { useChatSession } from "../useChatSession";
const mockUseGetV2GetSession = vi.fn();
vi.mock("@/app/api/__generated__/endpoints/chat/chat", () => ({
useGetV2GetSession: (...args: unknown[]) => mockUseGetV2GetSession(...args),
usePostV2CreateSession: () => ({ mutateAsync: vi.fn(), isPending: false }),
getGetV2GetSessionQueryKey: (id: string) => ["session", id],
getGetV2ListSessionsQueryKey: () => ["sessions"],
}));
vi.mock("@tanstack/react-query", () => ({
useQueryClient: () => ({
invalidateQueries: vi.fn(),
setQueryData: vi.fn(),
}),
}));
vi.mock("nuqs", () => ({
parseAsString: { withDefault: (v: unknown) => v },
useQueryState: () => ["sess-1", vi.fn()],
}));
vi.mock("../helpers/convertChatSessionToUiMessages", () => ({
convertChatSessionMessagesToUiMessages: vi.fn(() => ({
messages: [],
historicalDurations: new Map(),
})),
}));
vi.mock("../helpers", () => ({
resolveSessionDryRun: vi.fn(() => false),
}));
vi.mock("@sentry/nextjs", () => ({
captureException: vi.fn(),
}));
function makeQueryResult(data: object | null) {
return {
data: data ? { status: 200, data } : undefined,
isLoading: false,
isError: false,
isFetching: false,
refetch: vi.fn(),
};
}
describe("useChatSession — pagination metadata", () => {
beforeEach(() => {
vi.clearAllMocks();
});
it("returns null for oldestSequence when no session data", () => {
mockUseGetV2GetSession.mockReturnValue(makeQueryResult(null));
const { result } = renderHook(() => useChatSession());
expect(result.current.oldestSequence).toBeNull();
});
it("returns oldestSequence from session data", () => {
mockUseGetV2GetSession.mockReturnValue(
makeQueryResult({
messages: [],
has_more_messages: true,
oldest_sequence: 50,
active_stream: null,
}),
);
const { result } = renderHook(() => useChatSession());
expect(result.current.oldestSequence).toBe(50);
});
it("returns hasMoreMessages from session data", () => {
mockUseGetV2GetSession.mockReturnValue(
makeQueryResult({
messages: [],
has_more_messages: true,
oldest_sequence: 0,
active_stream: null,
}),
);
const { result } = renderHook(() => useChatSession());
expect(result.current.hasMoreMessages).toBe(true);
});
});

View File

@@ -0,0 +1,131 @@
import { renderHook } from "@testing-library/react";
import { beforeEach, describe, expect, it, vi } from "vitest";
import { useCopilotPage } from "../useCopilotPage";
const mockUseChatSession = vi.fn();
const mockUseCopilotStream = vi.fn();
const mockUseLoadMoreMessages = vi.fn();
vi.mock("../useChatSession", () => ({
useChatSession: (...args: unknown[]) => mockUseChatSession(...args),
}));
vi.mock("../useCopilotStream", () => ({
useCopilotStream: (...args: unknown[]) => mockUseCopilotStream(...args),
}));
vi.mock("../useLoadMoreMessages", () => ({
useLoadMoreMessages: (...args: unknown[]) => mockUseLoadMoreMessages(...args),
}));
vi.mock("../useCopilotNotifications", () => ({
useCopilotNotifications: () => undefined,
}));
vi.mock("../useWorkflowImportAutoSubmit", () => ({
useWorkflowImportAutoSubmit: () => undefined,
}));
vi.mock("../store", () => ({
useCopilotUIStore: () => ({
sessionToDelete: null,
setSessionToDelete: vi.fn(),
isDrawerOpen: false,
setDrawerOpen: vi.fn(),
copilotChatMode: "chat",
copilotLlmModel: null,
isDryRun: false,
}),
}));
vi.mock("../helpers/convertChatSessionToUiMessages", () => ({
concatWithAssistantMerge: (a: unknown[], b: unknown[]) => [...a, ...b],
}));
vi.mock("@/app/api/__generated__/endpoints/chat/chat", () => ({
useDeleteV2DeleteSession: () => ({ mutate: vi.fn(), isPending: false }),
useGetV2ListSessions: () => ({ data: undefined, isLoading: false }),
getGetV2ListSessionsQueryKey: () => ["sessions"],
}));
vi.mock("@/components/molecules/Toast/use-toast", () => ({
toast: vi.fn(),
}));
vi.mock("@/lib/direct-upload", () => ({
uploadFileDirect: vi.fn(),
}));
vi.mock("@/lib/hooks/useBreakpoint", () => ({
useBreakpoint: () => "lg",
}));
vi.mock("@/lib/supabase/hooks/useSupabase", () => ({
useSupabase: () => ({ isUserLoading: false, isLoggedIn: true }),
}));
vi.mock("@tanstack/react-query", () => ({
useQueryClient: () => ({ invalidateQueries: vi.fn() }),
}));
vi.mock("@/services/feature-flags/use-get-flag", () => ({
Flag: { CHAT_MODE_OPTION: "CHAT_MODE_OPTION" },
useGetFlag: () => false,
}));
function makeBaseChatSession(overrides: Record<string, unknown> = {}) {
return {
sessionId: "sess-1",
setSessionId: vi.fn(),
hydratedMessages: [],
rawSessionMessages: [],
historicalDurations: new Map(),
hasActiveStream: false,
hasMoreMessages: false,
oldestSequence: null,
isLoadingSession: false,
isSessionError: false,
createSession: vi.fn(),
isCreatingSession: false,
refetchSession: vi.fn(),
sessionDryRun: false,
...overrides,
};
}
function makeBaseCopilotStream(overrides: Record<string, unknown> = {}) {
return {
messages: [],
sendMessage: vi.fn(),
stop: vi.fn(),
status: "ready",
error: undefined,
isReconnecting: false,
isSyncing: false,
isUserStoppingRef: { current: false },
rateLimitMessage: null,
dismissRateLimit: vi.fn(),
...overrides,
};
}
function makeBaseLoadMore(overrides: Record<string, unknown> = {}) {
return {
pagedMessages: [],
hasMore: false,
isLoadingMore: false,
loadMore: vi.fn(),
...overrides,
};
}
describe("useCopilotPage — backward pagination message ordering", () => {
beforeEach(() => {
vi.clearAllMocks();
});
it("prepends pagedMessages before currentMessages", () => {
const pagedMsg = { id: "paged", role: "user" };
const currentMsg = { id: "current", role: "assistant" };
mockUseChatSession.mockReturnValue(makeBaseChatSession());
mockUseCopilotStream.mockReturnValue(
makeBaseCopilotStream({ messages: [currentMsg] }),
);
mockUseLoadMoreMessages.mockReturnValue(
makeBaseLoadMore({ pagedMessages: [pagedMsg] }),
);
const { result } = renderHook(() => useCopilotPage());
// Backward: pagedMessages (older) come first
expect(result.current.messages[0]).toEqual(pagedMsg);
expect(result.current.messages[1]).toEqual(currentMsg);
});
});

View File

@@ -0,0 +1,212 @@
import { act, renderHook, waitFor } from "@testing-library/react";
import { beforeEach, describe, expect, it, vi } from "vitest";
import { useLoadMoreMessages } from "../useLoadMoreMessages";
const mockGetV2GetSession = vi.fn();
vi.mock("@/app/api/__generated__/endpoints/chat/chat", () => ({
getV2GetSession: (...args: unknown[]) => mockGetV2GetSession(...args),
}));
vi.mock("../helpers/convertChatSessionToUiMessages", () => ({
convertChatSessionMessagesToUiMessages: vi.fn(() => ({ messages: [] })),
extractToolOutputsFromRaw: vi.fn(() => []),
}));
const BASE_ARGS = {
sessionId: "sess-1",
initialOldestSequence: 50,
initialHasMore: true,
initialPageRawMessages: [],
};
function makeSuccessResponse(overrides: {
messages?: unknown[];
has_more_messages?: boolean;
oldest_sequence?: number;
}) {
return {
status: 200,
data: {
messages: overrides.messages ?? [],
has_more_messages: overrides.has_more_messages ?? false,
oldest_sequence: overrides.oldest_sequence ?? 0,
},
};
}
describe("useLoadMoreMessages", () => {
beforeEach(() => {
vi.clearAllMocks();
});
it("initialises with empty pagedMessages and correct cursors", () => {
const { result } = renderHook(() => useLoadMoreMessages(BASE_ARGS));
expect(result.current.pagedMessages).toHaveLength(0);
expect(result.current.hasMore).toBe(true);
expect(result.current.isLoadingMore).toBe(false);
});
it("resets all state on sessionId change", () => {
const { result, rerender } = renderHook(
(props) => useLoadMoreMessages(props),
{ initialProps: BASE_ARGS },
);
rerender({
...BASE_ARGS,
sessionId: "sess-2",
initialOldestSequence: 10,
initialHasMore: false,
});
expect(result.current.pagedMessages).toHaveLength(0);
expect(result.current.hasMore).toBe(false);
expect(result.current.isLoadingMore).toBe(false);
});
describe("loadMore — backward pagination", () => {
it("calls getV2GetSession with before_sequence", async () => {
mockGetV2GetSession.mockResolvedValueOnce(
makeSuccessResponse({
messages: [{ role: "user", content: "old", sequence: 0 }],
has_more_messages: false,
oldest_sequence: 0,
}),
);
const { result } = renderHook(() => useLoadMoreMessages(BASE_ARGS));
await act(async () => {
await result.current.loadMore();
});
expect(mockGetV2GetSession).toHaveBeenCalledWith(
"sess-1",
expect.objectContaining({ before_sequence: 50 }),
);
expect(result.current.hasMore).toBe(false);
});
it("is a no-op when hasMore is false", async () => {
const { result } = renderHook(() =>
useLoadMoreMessages({ ...BASE_ARGS, initialHasMore: false }),
);
await act(async () => {
await result.current.loadMore();
});
expect(mockGetV2GetSession).not.toHaveBeenCalled();
});
it("is a no-op when oldestSequence is null", async () => {
const { result } = renderHook(() =>
useLoadMoreMessages({ ...BASE_ARGS, initialOldestSequence: null }),
);
await act(async () => {
await result.current.loadMore();
});
expect(mockGetV2GetSession).not.toHaveBeenCalled();
});
});
describe("loadMore — error handling", () => {
it("does not set hasMore=false on first error", async () => {
mockGetV2GetSession.mockRejectedValueOnce(new Error("network error"));
const { result } = renderHook(() => useLoadMoreMessages(BASE_ARGS));
await act(async () => {
await result.current.loadMore();
});
expect(result.current.hasMore).toBe(true);
expect(result.current.isLoadingMore).toBe(false);
});
it("sets hasMore=false after MAX_CONSECUTIVE_ERRORS (3) errors", async () => {
mockGetV2GetSession.mockRejectedValue(new Error("network error"));
const { result } = renderHook(() => useLoadMoreMessages(BASE_ARGS));
for (let i = 0; i < 3; i++) {
await act(async () => {
await result.current.loadMore();
});
await waitFor(() => expect(result.current.isLoadingMore).toBe(false));
}
expect(result.current.hasMore).toBe(false);
});
it("ignores non-200 response and increments error count", async () => {
mockGetV2GetSession.mockResolvedValueOnce({ status: 500, data: {} });
const { result } = renderHook(() => useLoadMoreMessages(BASE_ARGS));
await act(async () => {
await result.current.loadMore();
});
expect(result.current.hasMore).toBe(true);
expect(result.current.isLoadingMore).toBe(false);
});
});
describe("loadMore — MAX_OLDER_MESSAGES truncation", () => {
it("truncates accumulated messages at MAX_OLDER_MESSAGES (2000)", async () => {
mockGetV2GetSession.mockResolvedValueOnce(
makeSuccessResponse({
messages: Array.from({ length: 2001 }, (_, i) => ({
role: "user",
content: `msg ${i}`,
sequence: i,
})),
has_more_messages: true,
oldest_sequence: 0,
}),
);
const { result } = renderHook(() => useLoadMoreMessages(BASE_ARGS));
await act(async () => {
await result.current.loadMore();
});
expect(result.current.hasMore).toBe(false);
});
});
describe("pagedMessages — initialPageRawMessages extraToolOutputs", () => {
it("calls extractToolOutputsFromRaw with non-empty initialPageRawMessages", async () => {
const { extractToolOutputsFromRaw } = await import(
"../helpers/convertChatSessionToUiMessages"
);
const rawMsg = { role: "user", content: "old", sequence: 0 };
mockGetV2GetSession.mockResolvedValueOnce(
makeSuccessResponse({
messages: [rawMsg],
has_more_messages: false,
oldest_sequence: 0,
}),
);
const { result } = renderHook(() =>
useLoadMoreMessages({
...BASE_ARGS,
initialPageRawMessages: [{ role: "assistant", content: "response" }],
}),
);
await act(async () => {
await result.current.loadMore();
});
expect(extractToolOutputsFromRaw).toHaveBeenCalled();
});
});
});

View File

@@ -88,4 +88,53 @@ describe("useAutoOpenArtifacts", () => {
expect(s.activeArtifact?.id).toBe("c");
expect(s.history).toEqual([]);
});
// SECRT-2254: "had agent panel open then went to profile then went to home
// and agent panel was still open". Nav-away unmounts the copilot page; if
// the panel state persists in the store, coming back re-renders it open.
it("closes the panel on unmount so nav-away → nav-back doesn't resurrect it (SECRT-2254)", () => {
useCopilotUIStore.getState().openArtifact(makeArtifact(A_ID, "a.txt"));
expect(useCopilotUIStore.getState().artifactPanel.isOpen).toBe(true);
const { unmount } = renderHook(() =>
useAutoOpenArtifacts({ sessionId: "s1" }),
);
act(() => {
unmount();
});
const s = useCopilotUIStore.getState().artifactPanel;
expect(s.isOpen).toBe(false);
expect(s.activeArtifact).toBeNull();
expect(s.history).toEqual([]);
});
// SECRT-2220: "keep closed by default" — a fresh mount (e.g. user returns to
// /copilot) must start with a closed panel even if the store somehow carries
// stale state from a prior life.
it("does not re-open a panel whose store state is stale on fresh mount (SECRT-2220)", () => {
// Simulate the store being left in an open state by a previous page life.
useCopilotUIStore.setState({
artifactPanel: {
isOpen: true,
isMinimized: false,
isMaximized: false,
width: 600,
activeArtifact: makeArtifact(A_ID, "stale.txt"),
history: [],
},
});
const { unmount } = renderHook(() =>
useAutoOpenArtifacts({ sessionId: "s1" }),
);
act(() => {
unmount();
});
// Next mount of the page should see a clean store.
renderHook(() => useAutoOpenArtifacts({ sessionId: "s1" }));
expect(useCopilotUIStore.getState().artifactPanel.isOpen).toBe(false);
});
});

View File

@@ -26,4 +26,13 @@ export function useAutoOpenArtifacts({
resetArtifactPanel();
}
}, [sessionId, resetArtifactPanel]);
// Reset on unmount so navigating away from /copilot (to /profile, /home,
// etc.) can't leave the panel open in the Zustand store, which would then
// render the panel re-open when the user returns. See SECRT-2254/2220.
useEffect(() => {
return () => {
resetArtifactPanel();
};
}, [resetArtifactPanel]);
}

View File

@@ -0,0 +1,147 @@
import { render, screen, cleanup } from "@/tests/integrations/test-utils";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { ChatMessagesContainer } from "../ChatMessagesContainer";
const mockScrollEl = {
scrollHeight: 100,
scrollTop: 0,
clientHeight: 500,
};
vi.mock("use-stick-to-bottom", () => ({
useStickToBottomContext: () => ({ scrollRef: { current: mockScrollEl } }),
Conversation: ({ children }: { children: React.ReactNode }) => (
<div>{children}</div>
),
ConversationContent: ({ children }: { children: React.ReactNode }) => (
<div>{children}</div>
),
ConversationScrollButton: () => null,
}));
vi.mock("@/components/ai-elements/conversation", () => ({
Conversation: ({ children }: { children: React.ReactNode }) => (
<div>{children}</div>
),
ConversationContent: ({ children }: { children: React.ReactNode }) => (
<div>{children}</div>
),
ConversationScrollButton: () => null,
}));
vi.mock("@/components/ai-elements/message", () => ({
Message: ({ children }: { children: React.ReactNode }) => (
<div>{children}</div>
),
MessageContent: ({ children }: { children: React.ReactNode }) => (
<div>{children}</div>
),
MessageActions: ({ children }: { children: React.ReactNode }) => (
<div>{children}</div>
),
}));
vi.mock("../components/AssistantMessageActions", () => ({
AssistantMessageActions: () => null,
}));
vi.mock("../components/CopyButton", () => ({ CopyButton: () => null }));
vi.mock("../components/CollapsedToolGroup", () => ({
CollapsedToolGroup: () => null,
}));
vi.mock("../components/MessageAttachments", () => ({
MessageAttachments: () => null,
}));
vi.mock("../components/MessagePartRenderer", () => ({
MessagePartRenderer: () => null,
}));
vi.mock("../components/ReasoningCollapse", () => ({
ReasoningCollapse: () => null,
}));
vi.mock("../components/ThinkingIndicator", () => ({
ThinkingIndicator: () => null,
}));
vi.mock("../../JobStatsBar/TurnStatsBar", () => ({
TurnStatsBar: () => null,
}));
vi.mock("../../JobStatsBar/useElapsedTimer", () => ({
useElapsedTimer: () => ({ elapsedSeconds: 0 }),
}));
vi.mock("../../CopilotPendingReviews/CopilotPendingReviews", () => ({
CopilotPendingReviews: () => null,
}));
vi.mock("../helpers", () => ({
buildRenderSegments: () => [],
getTurnMessages: () => [],
parseSpecialMarkers: () => ({ markerType: null }),
splitReasoningAndResponse: (parts: unknown[]) => ({
reasoningParts: [],
responseParts: parts,
}),
}));
type ObserverCallback = (entries: { isIntersecting: boolean }[]) => void;
class MockIntersectionObserver {
static lastCallback: ObserverCallback | null = null;
private callback: ObserverCallback;
constructor(cb: ObserverCallback) {
this.callback = cb;
MockIntersectionObserver.lastCallback = cb;
}
observe() {}
disconnect() {}
unobserve() {}
takeRecords() {
return [];
}
root = null;
rootMargin = "";
thresholds = [];
}
const BASE_PROPS = {
messages: [],
status: "ready" as const,
error: undefined,
isLoading: false,
sessionID: "sess-1",
hasMoreMessages: true,
isLoadingMore: false,
onLoadMore: vi.fn(),
onRetry: vi.fn(),
};
describe("ChatMessagesContainer", () => {
beforeEach(() => {
mockScrollEl.scrollHeight = 100;
mockScrollEl.scrollTop = 0;
mockScrollEl.clientHeight = 500;
MockIntersectionObserver.lastCallback = null;
vi.stubGlobal("IntersectionObserver", MockIntersectionObserver);
});
afterEach(() => {
cleanup();
vi.unstubAllGlobals();
});
it("renders top sentinel for backward pagination", () => {
render(<ChatMessagesContainer {...BASE_PROPS} />);
expect(
screen.getByRole("button", { name: /load older messages/i }),
).toBeDefined();
});
it("hides sentinel when hasMoreMessages is false", () => {
render(<ChatMessagesContainer {...BASE_PROPS} hasMoreMessages={false} />);
expect(
screen.queryByRole("button", { name: /load older messages/i }),
).toBeNull();
});
it("hides sentinel when onLoadMore is not provided", () => {
render(<ChatMessagesContainer {...BASE_PROPS} onLoadMore={undefined} />);
expect(
screen.queryByRole("button", { name: /load older messages/i }),
).toBeNull();
});
});
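
The tests above pin down the sentinel's accessible name and its gating on `hasMoreMessages`/`onLoadMore`; a plausible wiring (the real `ChatMessagesContainer` markup may differ):

```tsx
import React from "react";

// Sketch: a sentinel button that triggers onLoadMore when it scrolls into
// view, with a click fallback. Rendered only when more messages exist and
// a loader is provided, which reproduces both queryByRole(...)-is-null cases.
function LoadOlderSentinel({ onLoadMore }: { onLoadMore: () => void }) {
  const ref = React.useRef<HTMLButtonElement>(null);
  React.useEffect(() => {
    const el = ref.current;
    if (!el) return;
    const observer = new IntersectionObserver((entries) => {
      if (entries.some((e) => e.isIntersecting)) onLoadMore();
    });
    observer.observe(el);
    return () => observer.disconnect();
  }, [onLoadMore]);
  return (
    <button
      ref={ref}
      type="button"
      onClick={onLoadMore}
      aria-label="Load older messages"
    />
  );
}
```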


@@ -246,7 +246,7 @@ export function ChatSidebar() {
</SidebarHeader>
)}
{!isCollapsed && (
<SidebarHeader className="shrink-0 px-4 pb-4 pt-4 shadow-[0_4px_6px_-1px_rgba(0,0,0,0.05)]">
<SidebarHeader className="shrink-0 px-4 pb-3 pt-3 shadow-[0_4px_6px_-1px_rgba(0,0,0,0.05)]">
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}


@@ -16,7 +16,7 @@ export function EditNameDialog({ currentName }: Props) {
const [isOpen, setIsOpen] = useState(false);
const [name, setName] = useState(currentName);
const [isSaving, setIsSaving] = useState(false);
const { supabase, refreshSession } = useSupabase();
const { refreshSession } = useSupabase();
const { toast } = useToast();
function handleOpenChange(open: boolean) {
@@ -26,29 +26,31 @@ export function EditNameDialog({ currentName }: Props) {
async function handleSave() {
const trimmed = name.trim();
if (!trimmed || !supabase) return;
if (!trimmed) return;
setIsSaving(true);
try {
const { error } = await supabase.auth.updateUser({
data: { full_name: trimmed },
const res = await fetch("/api/auth/user", {
method: "PUT",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ full_name: trimmed }),
});
if (error) {
if (!res.ok) {
const body = await res.json();
toast({
title: "Failed to update name",
description: error.message,
description: body.error ?? "Unknown error",
variant: "destructive",
});
return;
}
try {
await refreshSession();
} catch (e) {
const session = await refreshSession();
if (session?.error) {
toast({
title: "Name saved, but session refresh failed",
description: e instanceof Error ? e.message : "Please reload.",
description: session.error,
variant: "destructive",
});
setIsOpen(false);


@@ -5,31 +5,37 @@ import {
screen,
waitFor,
} from "@/tests/integrations/test-utils";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse } from "msw";
import { EditNameDialog } from "../EditNameDialog";
const mockToast = vi.hoisted(() => vi.fn());
const mockUseSupabase = vi.hoisted(() => vi.fn());
const mockRefreshSession = vi.hoisted(() => vi.fn());
vi.mock("@/components/molecules/Toast/use-toast", () => ({
useToast: () => ({ toast: mockToast }),
}));
vi.mock("@/lib/supabase/hooks/useSupabase", () => ({
useSupabase: mockUseSupabase,
useSupabase: () => ({
refreshSession: mockRefreshSession,
}),
}));
function setup({
updateUser = vi.fn().mockResolvedValue({ error: null }),
refreshSession = vi.fn().mockResolvedValue(undefined),
}: {
updateUser?: ReturnType<typeof vi.fn>;
refreshSession?: ReturnType<typeof vi.fn>;
} = {}) {
mockUseSupabase.mockReturnValue({
supabase: { auth: { updateUser } },
refreshSession,
});
return { updateUser, refreshSession };
function mockUpdateNameSuccess() {
server.use(
http.put("/api/auth/user", () => {
return HttpResponse.json({ user: { id: "u1" } });
}),
);
}
function mockUpdateNameError(message = "Network error") {
server.use(
http.put("/api/auth/user", () => {
return HttpResponse.json({ error: message }, { status: 400 });
}),
);
}
async function openDialogAndGetInput() {
@@ -49,19 +55,20 @@ function getSaveButton() {
describe("EditNameDialog", () => {
beforeEach(() => {
mockToast.mockReset();
mockUseSupabase.mockReset();
mockRefreshSession.mockReset();
mockRefreshSession.mockResolvedValue({ user: { id: "u1" } });
});
test("opens dialog with current name prefilled", async () => {
setup();
mockUpdateNameSuccess();
render(<EditNameDialog currentName="Alice" />);
const input = await openDialogAndGetInput();
expect(input.value).toBe("Alice");
});
test("saves name successfully and closes dialog", async () => {
const { updateUser, refreshSession } = setup();
test("saves name via API route and closes dialog", async () => {
mockUpdateNameSuccess();
render(<EditNameDialog currentName="Alice" />);
const input = await openDialogAndGetInput();
@@ -69,21 +76,13 @@ describe("EditNameDialog", () => {
fireEvent.click(getSaveButton());
await waitFor(() => {
expect(updateUser).toHaveBeenCalledWith({ data: { full_name: "Bob" } });
});
expect(refreshSession).toHaveBeenCalled();
await waitFor(() => {
expect(mockToast).toHaveBeenCalledWith({ title: "Name updated" });
expect(mockRefreshSession).toHaveBeenCalled();
});
expect(mockToast).toHaveBeenCalledWith({ title: "Name updated" });
});
test("shows error toast when updateUser fails and keeps dialog open", async () => {
const updateUser = vi
.fn()
.mockResolvedValue({ error: { message: "Network error" } });
const refreshSession = vi.fn();
setup({ updateUser, refreshSession });
test("shows error toast when API returns error", async () => {
mockUpdateNameError("Network error");
render(<EditNameDialog currentName="Alice" />);
const input = await openDialogAndGetInput();
@@ -99,15 +98,12 @@ describe("EditNameDialog", () => {
}),
);
});
expect(refreshSession).not.toHaveBeenCalled();
expect(mockRefreshSession).not.toHaveBeenCalled();
});
test("closes dialog and toasts failure when refreshSession throws", async () => {
const updateUser = vi.fn().mockResolvedValue({ error: null });
const refreshSession = vi
.fn()
.mockRejectedValue(new Error("refresh failed"));
setup({ updateUser, refreshSession });
test("shows warning toast when refreshSession returns an error", async () => {
mockUpdateNameSuccess();
mockRefreshSession.mockResolvedValue({ error: "refresh failed" });
render(<EditNameDialog currentName="Alice" />);
@@ -119,6 +115,7 @@ describe("EditNameDialog", () => {
expect(mockToast).toHaveBeenCalledWith(
expect.objectContaining({
title: "Name saved, but session refresh failed",
description: "refresh failed",
variant: "destructive",
}),
);
@@ -126,8 +123,8 @@ describe("EditNameDialog", () => {
expect(mockToast).not.toHaveBeenCalledWith({ title: "Name updated" });
});
test("disables Save button while empty input", async () => {
setup();
test("disables Save button while input is empty", async () => {
mockUpdateNameSuccess();
render(<EditNameDialog currentName="Alice" />);
const input = await openDialogAndGetInput();


@@ -34,7 +34,7 @@ export function PulseChips({ chips, onChipClick }: Props) {
View all <ArrowRightIcon size={12} />
</NextLink>
</div>
<div className="flex gap-2 overflow-x-auto">
<div className="flex gap-2 overflow-x-auto pb-1 scrollbar-thin scrollbar-track-transparent scrollbar-thumb-zinc-300">
{chips.map((chip) => (
<PulseChip key={chip.id} chip={chip} onAsk={onChipClick} />
))}
@@ -56,7 +56,7 @@ function PulseChip({ chip, onAsk }: ChipProps) {
return (
<div
className={`${styles.chip} relative flex shrink-0 flex-col items-start gap-2 rounded-medium border border-zinc-100 bg-white px-3 py-2`}
className={`${styles.chip} relative flex w-[15rem] shrink-0 flex-col items-start gap-2 rounded-medium border border-zinc-100 bg-white px-3 py-2`}
>
<div className={`${styles.chipContent} w-full text-left`}>
{chip.priority === "success" ? (


@@ -0,0 +1,105 @@
import { describe, expect, test, vi } from "vitest";
import { render, screen, fireEvent } from "@/tests/integrations/test-utils";
import { PulseChips } from "../PulseChips";
import type { PulseChipData } from "../types";
function makeChip(overrides: Partial<PulseChipData> = {}): PulseChipData {
return {
id: "chip-1",
agentID: "agent-1",
name: "Test Agent",
status: "running",
priority: "running",
shortMessage: "Doing work…",
...overrides,
};
}
describe("PulseChips", () => {
test("renders nothing when chips array is empty", () => {
const { container } = render(<PulseChips chips={[]} />);
expect(container.innerHTML).toBe("");
});
test("renders chip names and messages", () => {
const chips = [
makeChip({ id: "1", name: "Alpha Bot", shortMessage: "Running task A" }),
makeChip({ id: "2", name: "Beta Bot", shortMessage: "Running task B" }),
];
render(<PulseChips chips={chips} />);
expect(screen.getByText("Alpha Bot")).toBeDefined();
expect(screen.getByText("Running task A")).toBeDefined();
expect(screen.getByText("Beta Bot")).toBeDefined();
expect(screen.getByText("Running task B")).toBeDefined();
});
test("renders section heading and View all link", () => {
render(<PulseChips chips={[makeChip()]} />);
expect(screen.getByText("What's happening with your agents")).toBeDefined();
expect(screen.getByText("View all")).toBeDefined();
});
test("shows Completed badge for success priority chips", () => {
render(
<PulseChips
chips={[makeChip({ priority: "success", status: "idle" })]}
/>,
);
expect(screen.getByText("Completed")).toBeDefined();
});
test("calls onChipClick with generated prompt when Ask is clicked", () => {
const onChipClick = vi.fn();
render(
<PulseChips
chips={[
makeChip({
name: "Error Agent",
status: "error",
priority: "error",
}),
]}
onChipClick={onChipClick}
/>,
);
fireEvent.click(screen.getByText("Ask"));
expect(onChipClick).toHaveBeenCalledWith(
"What happened with Error Agent? It has an error — can you check?",
);
});
test("generates success prompt for completed chips", () => {
const onChipClick = vi.fn();
render(
<PulseChips
chips={[
makeChip({
name: "Done Agent",
priority: "success",
status: "idle",
}),
]}
onChipClick={onChipClick}
/>,
);
fireEvent.click(screen.getByText("Ask"));
expect(onChipClick).toHaveBeenCalledWith(
"Done Agent just finished a run — can you summarize what it did?",
);
});
test("renders See link pointing to agent detail page", () => {
render(<PulseChips chips={[makeChip({ agentID: "agent-xyz" })]} />);
const seeLink = screen.getByText("See").closest("a");
expect(seeLink?.getAttribute("href")).toBe("/library/agents/agent-xyz");
});
});
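
The expected strings in the last three tests imply a small prompt builder behind the Ask button; a hypothetical reconstruction (the error and success strings are pinned verbatim by the tests, the fallback branch is an assumption):

```ts
import type { PulseChipData } from "../types";

function buildAskPrompt(chip: PulseChipData): string {
  // Pinned by the tests above:
  if (chip.priority === "error") {
    return `What happened with ${chip.name}? It has an error — can you check?`;
  }
  if (chip.priority === "success") {
    return `${chip.name} just finished a run — can you summarize what it did?`;
  }
  // Assumed fallback for running/other chips:
  return `What is ${chip.name} doing right now?`;
}
```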


@@ -5,10 +5,12 @@ import { useSitrepItems } from "@/app/(platform)/library/components/SitrepItem/u
import type { PulseChipData } from "./types";
import { useMemo } from "react";
const THREE_DAYS_MS = 3 * 24 * 60 * 60 * 1000;
export function usePulseChips(): PulseChipData[] {
const { agents } = useLibraryAgents();
const sitrepItems = useSitrepItems(agents, 5);
const sitrepItems = useSitrepItems(agents, 5, THREE_DAYS_MS);
return useMemo(() => {
return sitrepItems.map((item) => ({


@@ -0,0 +1,59 @@
import { describe, expect, it } from "vitest";
import { convertChatSessionMessagesToUiMessages } from "../convertChatSessionToUiMessages";
const SESSION_ID = "sess-test";
describe("convertChatSessionMessagesToUiMessages", () => {
it("does not drop user messages with null content", () => {
const result = convertChatSessionMessagesToUiMessages(
SESSION_ID,
[{ role: "user", content: null, sequence: 0 }],
{ isComplete: true },
);
expect(result.messages).toHaveLength(1);
expect(result.messages[0].role).toBe("user");
});
it("does not drop user messages with empty string content", () => {
const result = convertChatSessionMessagesToUiMessages(
SESSION_ID,
[{ role: "user", content: "", sequence: 0 }],
{ isComplete: true },
);
expect(result.messages).toHaveLength(1);
expect(result.messages[0].role).toBe("user");
});
it("still drops non-user messages with null content", () => {
const result = convertChatSessionMessagesToUiMessages(
SESSION_ID,
[{ role: "assistant", content: null, sequence: 0 }],
{ isComplete: true },
);
expect(result.messages).toHaveLength(0);
});
it("still drops non-user messages with empty string content", () => {
const result = convertChatSessionMessagesToUiMessages(
SESSION_ID,
[{ role: "assistant", content: "", sequence: 0 }],
{ isComplete: true },
);
expect(result.messages).toHaveLength(0);
});
it("includes user message with normal content", () => {
const result = convertChatSessionMessagesToUiMessages(
SESSION_ID,
[{ role: "user", content: "hello", sequence: 0 }],
{ isComplete: true },
);
expect(result.messages).toHaveLength(1);
expect(result.messages[0].role).toBe("user");
});
});


@@ -253,6 +253,11 @@ export function convertChatSessionMessagesToUiMessages(
}
}
// User messages must always be rendered, even with empty content, so the
// initial prompt is visible when reloading a session.
if (parts.length === 0 && msg.role === "user") {
parts.push({ type: "text", text: "", state: "done" });
}
if (parts.length === 0) return;
// Merge consecutive assistant messages into a single UIMessage


@@ -84,7 +84,7 @@ export function useCopilotPage() {
copilotModel: isModeToggleEnabled ? copilotLlmModel : undefined,
});
const { olderMessages, hasMore, isLoadingMore, loadMore } =
const { pagedMessages, hasMore, isLoadingMore, loadMore } =
useLoadMoreMessages({
sessionId,
initialOldestSequence: oldestSequence,
@@ -92,10 +92,11 @@ export function useCopilotPage() {
initialPageRawMessages: rawSessionMessages,
});
// Combine older (paginated) messages with current page messages,
// merging consecutive assistant UIMessages at the page boundary so
// reasoning + response parts stay in a single bubble.
const messages = concatWithAssistantMerge(olderMessages, currentMessages);
// Combine paginated messages with current page messages, merging consecutive
// assistant UIMessages at the page boundary so reasoning + response parts
// stay in a single bubble. Paged messages are older history prepended before
// the current page.
const messages = concatWithAssistantMerge(pagedMessages, currentMessages);
useCopilotNotifications(sessionId);
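
`concatWithAssistantMerge` itself is not shown in this diff; a minimal sketch of the boundary merge the comment describes, assuming the AI SDK's `UIMessage` shape (`role` plus a `parts` array):

```ts
import type { UIDataTypes, UIMessage, UITools } from "ai"; // import path assumed

type Msg = UIMessage<unknown, UIDataTypes, UITools>;

// If the last paged message and the first current message are both from the
// assistant, fold their parts into one message so reasoning + response stay
// in a single bubble; otherwise plain concatenation.
function concatWithAssistantMerge(older: Msg[], current: Msg[]): Msg[] {
  const last = older[older.length - 1];
  const first = current[0];
  if (!last || !first || last.role !== "assistant" || first.role !== "assistant") {
    return [...older, ...current];
  }
  const merged: Msg = { ...first, parts: [...last.parts, ...first.parts] };
  return [...older.slice(0, -1), merged, ...current.slice(1)];
}
```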


@@ -23,10 +23,10 @@ export function useLoadMoreMessages({
initialHasMore,
initialPageRawMessages,
}: UseLoadMoreMessagesArgs) {
// Store accumulated raw messages from all older pages (in ascending order).
// Accumulated raw messages from all extra pages (ascending order).
// Re-converting them all together ensures tool outputs are matched across
// inter-page boundaries.
const [olderRawMessages, setOlderRawMessages] = useState<unknown[]>([]);
const [pagedRawMessages, setPagedRawMessages] = useState<unknown[]>([]);
const [oldestSequence, setOldestSequence] = useState<number | null>(
initialOldestSequence,
);
@@ -37,16 +37,14 @@ export function useLoadMoreMessages({
// Epoch counter to discard stale loadMore responses after a reset
const epochRef = useRef(0);
// Track the sessionId and initial cursor to reset state on change
const prevSessionIdRef = useRef(sessionId);
const prevInitialOldestRef = useRef(initialOldestSequence);
// Sync initial values from parent when they change.
//
// The parent's `initialOldestSequence` drifts forward every time the
// session query refetches (e.g. after a stream completes — see
// `useCopilotStream` invalidation on `streaming → ready`). If we
// wiped `olderRawMessages` every time that happened, users who had
// wiped `pagedRawMessages` every time that happened, users who had
// scrolled back would lose their loaded history on each new turn and
// subsequent `loadMore` calls would fetch messages that overlap with
// the AI SDK's retained state in `currentMessages`, producing visible
@@ -62,8 +60,7 @@ export function useLoadMoreMessages({
if (prevSessionIdRef.current !== sessionId) {
// Session changed — full reset
prevSessionIdRef.current = sessionId;
prevInitialOldestRef.current = initialOldestSequence;
setOlderRawMessages([]);
setPagedRawMessages([]);
setOldestSequence(initialOldestSequence);
setHasMore(initialHasMore);
setIsLoadingMore(false);
@@ -73,42 +70,35 @@ export function useLoadMoreMessages({
return;
}
prevInitialOldestRef.current = initialOldestSequence;
// If we haven't paged back yet, mirror the parent so the first
// If we haven't paged yet, mirror the parent so the first
// `loadMore` starts from the correct cursor.
if (olderRawMessages.length === 0) {
if (pagedRawMessages.length === 0) {
setOldestSequence(initialOldestSequence);
setHasMore(initialHasMore);
}
}, [sessionId, initialOldestSequence, initialHasMore]);
// Convert all accumulated raw messages in one pass so tool outputs
// are matched across inter-page boundaries. Initial page tool outputs
// are included via extraToolOutputs to handle the boundary between
// the last older page and the initial/streaming page.
const olderMessages: UIMessage<unknown, UIDataTypes, UITools>[] =
// are matched across inter-page boundaries.
// Include the initial page's tool outputs so earlier pages can match
// tool calls whose outputs landed in the initial page.
const pagedMessages: UIMessage<unknown, UIDataTypes, UITools>[] =
useMemo(() => {
if (!sessionId || olderRawMessages.length === 0) return [];
if (!sessionId || pagedRawMessages.length === 0) return [];
const extraToolOutputs =
initialPageRawMessages.length > 0
? extractToolOutputsFromRaw(initialPageRawMessages)
: undefined;
return convertChatSessionMessagesToUiMessages(
sessionId,
olderRawMessages,
pagedRawMessages,
{ isComplete: true, extraToolOutputs },
).messages;
}, [sessionId, olderRawMessages, initialPageRawMessages]);
}, [sessionId, pagedRawMessages, initialPageRawMessages]);
async function loadMore() {
if (
!sessionId ||
!hasMore ||
isLoadingMoreRef.current ||
oldestSequence === null
)
return;
if (!sessionId || !hasMore || isLoadingMoreRef.current) return;
if (oldestSequence === null) return;
const requestEpoch = epochRef.current;
isLoadingMoreRef.current = true;
@@ -136,15 +126,20 @@ export function useLoadMoreMessages({
consecutiveErrorsRef.current = 0;
const newRaw = (response.data.messages ?? []) as unknown[];
setOlderRawMessages((prev) => {
const estimatedTotal = pagedRawMessages.length + newRaw.length;
setPagedRawMessages((prev) => {
const merged = [...newRaw, ...prev];
if (merged.length > MAX_OLDER_MESSAGES) {
return merged.slice(merged.length - MAX_OLDER_MESSAGES);
}
return merged;
});
// Note: after truncation, oldest_sequence may reference a dropped
// message. This is safe because we also set hasMore=false below,
// preventing further loads with the stale cursor.
setOldestSequence(response.data.oldest_sequence ?? null);
if (newRaw.length + olderRawMessages.length >= MAX_OLDER_MESSAGES) {
if (estimatedTotal >= MAX_OLDER_MESSAGES) {
setHasMore(false);
} else {
setHasMore(!!response.data.has_more_messages);
@@ -164,5 +159,5 @@ export function useLoadMoreMessages({
}
}
return { olderMessages, hasMore, isLoadingMore, loadMore };
return { pagedMessages, hasMore, isLoadingMore, loadMore };
}
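
`extractToolOutputsFromRaw` is only referenced here; a sketch of what it plausibly does, with both the raw message shape and the return type guessed:

```ts
// Raw shape and return type are assumptions; the real helper lives in
// convertChatSessionToUiMessages and its format is not shown in this diff.
type RawToolMessage = { role?: string; tool_call_id?: string; content?: unknown };

// Index tool outputs by call id so pages converted separately can still
// resolve tool calls whose results landed on another page.
export function extractToolOutputsFromRaw(raw: unknown[]): Map<string, unknown> {
  const outputs = new Map<string, unknown>();
  for (const msg of raw as RawToolMessage[]) {
    if (msg?.role === "tool" && msg.tool_call_id !== undefined) {
      outputs.set(msg.tool_call_id, msg.content);
    }
  }
  return outputs;
}
```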


@@ -160,6 +160,20 @@ const TAB_STATUS_LABEL: Record<string, string> = {
idle: "No recent activity",
};
function getAgentStatusLabel(tab: string, agent: LibraryAgent): string {
if (tab === "scheduled" && agent.next_scheduled_run) {
const diff = new Date(agent.next_scheduled_run).getTime() - Date.now();
const minutes = Math.round(diff / 60_000);
if (minutes <= 0) return "Scheduled to run soon";
if (minutes < 60) return `Scheduled to run in ${minutes}m`;
const hours = Math.round(minutes / 60);
if (hours < 24) return `Scheduled to run in ${hours}h`;
const days = Math.round(hours / 24);
return `Scheduled to run in ${days}d`;
}
return TAB_STATUS_LABEL[tab] ?? "";
}
function AgentListSection({
activeTab,
agents,
@@ -204,7 +218,7 @@ function AgentListSection({
agentName: agent.name,
agentImageUrl: agent.image_url,
priority: status,
message: TAB_STATUS_LABEL[activeTab] ?? "",
message: getAgentStatusLabel(activeTab, agent),
status,
}}
/>


@@ -1,6 +1,7 @@
"use client";
import { Text } from "@/components/atoms/Text/Text";
import { OverflowText } from "@/components/atoms/OverflowText/OverflowText";
import { Emoji } from "@/components/atoms/Emoji/Emoji";
import { cn } from "@/lib/utils";
import type { FleetSummary, AgentStatusFilter } from "../../types";
@@ -78,17 +79,19 @@ export function StatsGrid({ summary, activeTab, onTabChange }: Props) {
type="button"
onClick={() => onTabChange(tile.filter)}
className={cn(
"flex flex-col gap-1 rounded-medium border p-3 text-left shadow-md transition-all hover:shadow-lg",
"flex min-w-0 flex-col gap-1 rounded-medium border p-3 text-left shadow-md transition-all hover:shadow-lg",
isActive
? "border-zinc-900 bg-zinc-50"
: "border-zinc-100 bg-white",
)}
>
<div className="flex items-center gap-1.5">
<div className="flex min-w-0 items-center gap-1.5">
<Emoji text={tile.emoji} size={18} />
<Text variant="body" className="text-zinc-800">
{tile.label}
</Text>
<OverflowText
value={tile.label}
variant="body"
className="text-zinc-800"
/>
</div>
<Text variant="h4">{value}</Text>
</button>


@@ -1,7 +1,7 @@
"use client";
import { Select } from "@/components/atoms/Select/Select";
import type { SelectOption } from "@/components/atoms/Select/Select";
import { Select } from "@/components/atoms/Select/Select";
import { FunnelIcon } from "@phosphor-icons/react";
import type { AgentStatusFilter, FleetSummary } from "../../types";
@@ -32,7 +32,9 @@ export function AgentFilterMenu({ value, onChange, summary }: Props) {
return (
<div className="flex items-center" data-testid="agent-filter-dropdown">
<span className="hidden whitespace-nowrap text-sm sm:inline">filter</span>
<span className="hidden whitespace-nowrap text-sm text-zinc-500 sm:inline">
filter
</span>
<FunnelIcon className="ml-1 h-4 w-4 sm:hidden" />
<Select
id="agent-status-filter"
@@ -42,7 +44,7 @@ export function AgentFilterMenu({ value, onChange, summary }: Props) {
onValueChange={handleChange}
options={options}
size="small"
className="ml-1 w-fit border-none px-0 text-sm underline underline-offset-4 shadow-none"
className="ml-1 w-fit border-none !bg-transparent text-sm underline underline-offset-4 shadow-none"
wrapperClassName="mb-0"
/>
</div>

View File

@@ -4,8 +4,6 @@ import {
EyeIcon,
ArrowsClockwiseIcon,
MonitorPlayIcon,
PlayIcon,
ArrowCounterClockwiseIcon,
} from "@phosphor-icons/react";
import { cn } from "@/lib/utils";
import { useRouter } from "next/navigation";
@@ -63,6 +61,6 @@ const ACTION_CONFIG: Record<
error: { label: "View error", icon: EyeIcon },
listening: { label: "Reconnect", icon: ArrowsClockwiseIcon },
running: { label: "Watch live", icon: MonitorPlayIcon },
idle: { label: "Start", icon: PlayIcon },
scheduled: { label: "Start", icon: ArrowCounterClockwiseIcon },
idle: { label: "View", icon: EyeIcon },
scheduled: { label: "View", icon: EyeIcon },
};


@@ -19,11 +19,11 @@ export function LibrarySortMenu({ setLibrarySort }: Props) {
const { handleSortChange } = useLibrarySortMenu({ setLibrarySort });
return (
<div className="flex items-center" data-testid="sort-by-dropdown">
<span className="hidden whitespace-nowrap text-sm sm:inline">
<span className="hidden whitespace-nowrap text-sm text-zinc-500 sm:inline">
sort by
</span>
<Select onValueChange={handleSortChange}>
<SelectTrigger className="ml-1 w-fit space-x-1 border-none px-0 text-sm underline underline-offset-4 shadow-none">
<SelectTrigger className="!m-0 ml-1 w-fit space-x-1 border-none !bg-transparent px-[1rem] text-sm underline underline-offset-4 !shadow-none !ring-offset-transparent">
<ArrowDownNarrowWideIcon className="h-4 w-4 sm:hidden" />
<SelectValue placeholder="Last Modified" />
</SelectTrigger>


@@ -1,5 +1,6 @@
"use client";
import { OverflowText } from "@/components/atoms/OverflowText/OverflowText";
import { Text } from "@/components/atoms/Text/Text";
import {
WarningCircleIcon,
@@ -117,9 +118,11 @@ export function SitrepItem({ item }: Props) {
<Text variant="body-medium" className="leading-tight text-zinc-900">
{item.agentName}
</Text>
<Text variant="small" className="leading-tight text-zinc-500">
{item.message}
</Text>
<OverflowText
value={item.message}
variant="small"
className="leading-tight text-zinc-500"
/>
</div>
</div>


@@ -19,21 +19,32 @@ import {
export function useSitrepItems(
agents: LibraryAgent[],
maxItems: number,
scheduledWithinMs?: number,
): SitrepItemData[] {
const { data: executions } = useGetV1ListAllExecutions({
query: { select: okData },
});
return useMemo(() => {
if (!executions || agents.length === 0) return [];
if (agents.length === 0) return [];
const graphIdToAgent = new Map(agents.map((a) => [a.graph_id, a]));
const agentExecutions = groupByAgent(executions, graphIdToAgent);
const agentExecutions = groupByAgent(executions ?? [], graphIdToAgent);
const items: SitrepItemData[] = [];
const coveredAgentIds = new Set<string>();
for (const [agent, execs] of agentExecutions) {
const item = buildSitrepFromExecutions(agent, execs);
if (item) items.push(item);
if (item) {
items.push(item);
coveredAgentIds.add(agent.id);
}
}
for (const agent of agents) {
if (coveredAgentIds.has(agent.id)) continue;
const configItem = buildSitrepFromConfig(agent, scheduledWithinMs);
if (configItem) items.push(configItem);
}
const order: Record<SitrepPriority, number> = {
@@ -48,7 +59,7 @@ export function useSitrepItems(
items.sort((a, b) => order[a.priority] - order[b.priority]);
return items.slice(0, maxItems);
}, [agents, executions, maxItems]);
}, [agents, executions, maxItems, scheduledWithinMs]);
}
function groupByAgent(
@@ -131,3 +142,57 @@ function buildSitrepFromExecutions(
return null;
}
function buildSitrepFromConfig(
agent: LibraryAgent,
scheduledWithinMs?: number,
): SitrepItemData | null {
if (agent.has_external_trigger) {
return {
id: `${agent.id}-listening`,
agentID: agent.id,
agentName: agent.name,
priority: "listening",
message: "Waiting for trigger event",
status: "listening",
};
}
if (agent.is_scheduled || agent.recommended_schedule_cron) {
if (!isNextRunWithin(agent.next_scheduled_run, scheduledWithinMs)) {
return null;
}
return {
id: `${agent.id}-scheduled`,
agentID: agent.id,
agentName: agent.name,
priority: "scheduled",
message: formatNextRun(agent.next_scheduled_run),
status: "scheduled",
};
}
return null;
}
function isNextRunWithin(
iso: string | undefined | null,
windowMs: number | undefined,
): boolean {
if (windowMs === undefined) return true;
if (!iso) return false;
const diff = new Date(iso).getTime() - Date.now();
return diff <= windowMs;
}
function formatNextRun(iso: string | undefined | null): string {
if (!iso) return "Has a scheduled run";
const diff = new Date(iso).getTime() - Date.now();
const minutes = Math.round(diff / 60_000);
if (minutes <= 0) return "Scheduled to run soon";
if (minutes < 60) return `Scheduled to run in ${minutes}m`;
const hours = Math.round(minutes / 60);
if (hours < 24) return `Scheduled to run in ${hours}h`;
const days = Math.round(hours / 24);
return `Scheduled to run in ${days}d`;
}
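
A few worked values for the rounding in `formatNextRun` (which `getAgentStatusLabel` above duplicates), assuming the helper were exported:

```ts
const at = (ms: number) => new Date(Date.now() + ms).toISOString();

formatNextRun(at(45 * 60_000)); // "Scheduled to run in 45m"
formatNextRun(at(90 * 60_000)); // "Scheduled to run in 2h" (Math.round(90 / 60) === 2)
formatNextRun(at(36 * 3_600_000)); // "Scheduled to run in 2d" (Math.round(36 / 24) === 2)
formatNextRun(null); // "Has a scheduled run"
formatNextRun(at(-5_000)); // "Scheduled to run soon" (past or imminent)
```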


@@ -64,7 +64,7 @@ function computeAgentStatus(
"Execution failed";
} else if (agent.has_external_trigger) {
status = "listening";
} else if (agent.recommended_schedule_cron) {
} else if (agent.is_scheduled || agent.recommended_schedule_cron) {
status = "scheduled";
} else {
status = "idle";
@@ -196,7 +196,7 @@ export function useFleetSummary(agents: LibraryAgent[]): FleetSummary {
counts.error += 1;
} else if (agent.has_external_trigger) {
counts.listening += 1;
} else if (agent.recommended_schedule_cron) {
} else if (agent.is_scheduled || agent.recommended_schedule_cron) {
counts.scheduled += 1;
} else {
counts.idle += 1;


@@ -94,7 +94,7 @@ export function useLibraryFleetSummary(
summary.error += 1;
} else if (agent.has_external_trigger) {
summary.listening += 1;
} else if (agent.recommended_schedule_cron) {
} else if (agent.is_scheduled || agent.recommended_schedule_cron) {
summary.scheduled += 1;
} else {
summary.idle += 1;


@@ -33,7 +33,6 @@ type Props = {
};
export function TimezoneForm({ user, currentTimezone = "not-set" }: Props) {
console.log("currentTimezone", currentTimezone);
// If timezone is not set, try to detect it from the browser
const effectiveTimezone = React.useMemo(() => {
if (currentTimezone === "not-set") {


@@ -15,15 +15,35 @@ export async function GET() {
export async function PUT(request: Request) {
try {
const supabase = await getServerSupabase();
const { email } = await request.json();
if (!email) {
return NextResponse.json({ error: "Email is required" }, { status: 400 });
let body: unknown;
try {
body = await request.json();
} catch {
return NextResponse.json({ error: "Invalid JSON body" }, { status: 400 });
}
const { data, error } = await supabase.auth.updateUser({
email,
});
const { email: rawEmail, full_name: rawFullName } = body as {
email?: unknown;
full_name?: unknown;
};
const email = typeof rawEmail === "string" ? rawEmail.trim() : undefined;
const fullName =
typeof rawFullName === "string" ? rawFullName.trim() : undefined;
if (!email && !fullName) {
return NextResponse.json(
{ error: "Email or full_name is required" },
{ status: 400 },
);
}
const updatePayload: Parameters<typeof supabase.auth.updateUser>[0] = {};
if (email) updatePayload.email = email;
if (fullName) updatePayload.data = { full_name: fullName };
const { data, error } = await supabase.auth.updateUser(updatePayload);
if (error) {
return NextResponse.json({ error: error.message }, { status: 400 });
@@ -32,7 +52,7 @@ export async function PUT(request: Request) {
return NextResponse.json(data);
} catch {
return NextResponse.json(
{ error: "Failed to update user email" },
{ error: "Failed to update user" },
{ status: 500 },
);
}
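
The client-side contract for this route, as exercised by `EditNameDialog` above; a minimal sketch:

```ts
// On failure the route returns { error: string } with a 4xx/5xx status;
// on success it returns Supabase's updateUser data.
async function updateFullName(fullName: string): Promise<void> {
  const res = await fetch("/api/auth/user", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ full_name: fullName }),
  });
  if (!res.ok) {
    const body = (await res.json()) as { error?: string };
    throw new Error(body.error ?? "Failed to update user");
  }
}
```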


@@ -1498,7 +1498,7 @@
"get": {
"tags": ["v2", "chat", "chat"],
"summary": "Get Session",
"description": "Retrieve the details of a specific chat session.\n\nSupports cursor-based pagination via ``limit`` and ``before_sequence``.\nWhen no pagination params are provided, returns the most recent messages.\n\nArgs:\n session_id: The unique identifier for the desired chat session.\n user_id: The authenticated user's ID.\n limit: Maximum number of messages to return (1-200, default 50).\n before_sequence: Return messages with sequence < this value (cursor).\n\nReturns:\n SessionDetailResponse: Details for the requested session, including\n active_stream info and pagination metadata.",
"description": "Retrieve the details of a specific chat session.\n\nSupports cursor-based pagination via ``limit`` and ``before_sequence``.\nWhen no pagination params are provided, returns the most recent messages.",
"operationId": "getV2GetSession",
"security": [{ "HTTPBearerJWT": [] }],
"parameters": [
@@ -10931,6 +10931,17 @@
"anyOf": [{ "type": "string" }, { "type": "null" }],
"title": "Recommended Schedule Cron"
},
"is_scheduled": {
"type": "boolean",
"title": "Is Scheduled",
"description": "Whether this agent has active execution schedules",
"default": false
},
"next_scheduled_run": {
"anyOf": [{ "type": "string" }, { "type": "null" }],
"title": "Next Scheduled Run",
"description": "ISO 8601 timestamp of the next scheduled run, if any"
},
"settings": { "$ref": "#/components/schemas/GraphSettings" },
"marketplace_listing": {
"anyOf": [

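As the two new schema fields would surface in a generated TypeScript client (the type name is illustrative):

```ts
type LibraryAgentScheduleFields = {
  /** Whether this agent has active execution schedules; defaults to false. */
  is_scheduled: boolean;
  /** ISO 8601 timestamp of the next scheduled run, if any. */
  next_scheduled_run: string | null;
};
```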

@@ -96,7 +96,6 @@ export function EditAgentForm({
control={form.control}
name="category"
render={({ field }) => {
console.log("Edit Category field value:", field.value);
return (
<Select
id={field.name}


@@ -60,7 +60,7 @@ export function Navbar() {
<PreviewBanner branchName={previewBranchName} />
) : null}
<nav
className="inline-flex w-full items-center bg-[#FAFAFA]/80 p-3 backdrop-blur-xl"
className="inline-flex w-full items-center border-b border-[#f1f1f1] bg-[#FAFAFA]/80 p-3 backdrop-blur-xl"
style={{ height: NAVBAR_HEIGHT_PX }}
>
{/* Left section */}


@@ -59,8 +59,6 @@ export function WalletRefill() {
resolver: zodResolver(autoRefillSchema),
});
console.log("autoRefillForm");
// Pre-fill the auto-refill form with existing values
useEffect(() => {
if (


@@ -6,8 +6,6 @@ export function useNavbar() {
const { isLoggedIn, isUserLoading } = useSupabase();
const logoutInProgress = isLogoutInProgress();
console.log("isLoggedIn", isLoggedIn);
const {
data: profileResponse,
isLoading: isProfileLoading,


@@ -65,7 +65,7 @@ The result routes data to yes_output or no_output, enabling intelligent branchin
| condition | A plaintext English description of the condition to evaluate | str | Yes |
| yes_value | (Optional) Value to output if the condition is true. If not provided, input_value will be used. | Yes Value | No |
| no_value | (Optional) Value to output if the condition is false. If not provided, input_value will be used. | No Value | No |
| model | The language model to use for evaluating the condition. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for evaluating the condition. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-4.20" \| "x-ai/grok-4.20-multi-agent" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
### Outputs
@@ -103,7 +103,7 @@ The block sends the entire conversation history to the chosen LLM, including sys
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | No |
| messages | List of messages in the conversation. | List[Any] | Yes |
| model | The language model to use for the conversation. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for the conversation. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-4.20" \| "x-ai/grok-4.20-multi-agent" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
| ollama_host | Ollama host for local models | str | No |
@@ -257,7 +257,7 @@ The block formulates a prompt based on the given focus or source data, sends it
|-------|-------------|------|----------|
| focus | The focus of the list to generate. | str | No |
| source_data | The data to generate the list from. | str | No |
| model | The language model to use for generating the list. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for generating the list. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-4.20" \| "x-ai/grok-4.20-multi-agent" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| max_retries | Maximum number of retries for generating a valid list. | int | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
@@ -424,7 +424,7 @@ The block sends the input prompt to a chosen LLM, along with any system prompts
| prompt | The prompt to send to the language model. | str | Yes |
| expected_format | Expected format of the response. If provided, the response will be validated against this format. The keys should be the expected fields in the response, and the values should be the description of the field. | Dict[str, str] | Yes |
| list_result | Whether the response should be a list of objects in the expected format. | bool | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-4.20" \| "x-ai/grok-4.20-multi-agent" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, Any]] | No |
@@ -464,7 +464,7 @@ The block sends the input prompt to a chosen LLM, processes the response, and re
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. You can use any of the {keys} from Prompt Values to fill in the prompt with values from the prompt values dictionary by putting them in curly braces. | str | Yes |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-4.20" \| "x-ai/grok-4.20-multi-agent" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| retry | Number of times to retry the LLM call if the response does not match the expected format. | int | No |
| prompt_values | Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}. | Dict[str, str] | No |
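As a quick illustration of the `{{variable_name}}` templating that `prompt_values` describes, here is a minimal sketch; the plain string replacement is an assumption for illustration, not the platform's actual templating code:

```python
# Sketch of the {{key}} substitution described above; a plain string
# replace stands in for the platform's real templating logic.
prompt = "Write a {{style}} summary of {{topic}}."
prompt_values = {"style": "concise", "topic": "AutoGPT blocks"}

for key, value in prompt_values.items():
    prompt = prompt.replace("{{" + key + "}}", value)

print(prompt)  # Write a concise summary of AutoGPT blocks.
```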
@@ -501,7 +501,7 @@ The block splits the input text into smaller chunks, sends each chunk to an LLM
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| text | The text to summarize. | str | Yes |
| model | The language model to use for summarizing the text. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for summarizing the text. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-4.20" \| "x-ai/grok-4.20-multi-agent" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| focus | The topic to focus on in the summary | str | No |
| style | The style of the summary to generate. | "concise" \| "detailed" \| "bullet points" \| "numbered list" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
@@ -721,7 +721,7 @@ _Add technical explanation here._
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | Yes |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-sonnet-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-2.5-pro" \| "google/gemini-3.1-pro-preview" \| "google/gemini-3-flash-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-3.1-flash-lite-preview" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "mistralai/mistral-large-2512" \| "mistralai/mistral-medium-3.1" \| "mistralai/mistral-small-3.2-24b-instruct" \| "mistralai/codestral-2508" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "cohere/command-a-03-2025" \| "cohere/command-a-translate-08-2025" \| "cohere/command-a-reasoning-08-2025" \| "cohere/command-a-vision-07-2025" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-reasoning-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "microsoft/phi-4" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-3" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-4.20" \| "x-ai/grok-4.20-multi-agent" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "z-ai/glm-4-32b" \| "z-ai/glm-4.5" \| "z-ai/glm-4.5-air" \| "z-ai/glm-4.5-air:free" \| "z-ai/glm-4.5v" \| "z-ai/glm-4.6" \| "z-ai/glm-4.6v" \| "z-ai/glm-4.7" \| "z-ai/glm-4.7-flash" \| "z-ai/glm-5" \| "z-ai/glm-5-turbo" \| "z-ai/glm-5v-turbo" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| multiple_tool_calls | Whether to allow multiple tool calls in a single response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, Any]] | No |
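The table types `conversation_history` as `List[Dict[str, Any]]` without pinning down the dictionary shape; the role/content form below follows the common chat-message convention and should be treated as an assumption, not a confirmed schema for this block:

```python
# Assumed role/content message shape for conversation_history; the
# block's exact accepted schema is not spelled out in the table above.
conversation_history = [
    {"role": "user", "content": "Find recent articles about AutoGPT."},
    {"role": "assistant", "content": "I found three articles; want summaries?"},
]
```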


@@ -58,7 +58,7 @@ Tool and block identifiers provided in `tools` and `blocks` are validated at run
| system_context | Optional additional context prepended to the prompt. Use this to constrain autopilot behavior, provide domain context, or set output format requirements. | str | No |
| session_id | Session ID to continue an existing autopilot conversation. Leave empty to start a new session. Use the session_id output from a previous run to continue. | str | No |
| max_recursion_depth | Maximum nesting depth when the autopilot calls this block recursively (sub-agent pattern). Prevents infinite loops. | int | No |
| tools | Tool names to filter. Works with tools_exclude to form an allow-list or deny-list. Leave empty to apply no tool filter. | List["add_understanding" \| "ask_question" \| "bash_exec" \| "browser_act" \| "browser_navigate" \| "browser_screenshot" \| "connect_integration" \| "continue_run_block" \| "create_agent" \| "create_feature_request" \| "create_folder" \| "customize_agent" \| "delete_folder" \| "delete_workspace_file" \| "edit_agent" \| "find_agent" \| "find_block" \| "find_library_agent" \| "fix_agent_graph" \| "get_agent_building_guide" \| "get_doc_page" \| "get_mcp_guide" \| "list_folders" \| "list_workspace_files" \| "memory_search" \| "memory_store" \| "move_agents_to_folder" \| "move_folder" \| "read_workspace_file" \| "run_agent" \| "run_block" \| "run_mcp_tool" \| "search_docs" \| "search_feature_requests" \| "update_folder" \| "validate_agent_graph" \| "view_agent_output" \| "web_fetch" \| "write_workspace_file" \| "Agent" \| "Edit" \| "Glob" \| "Grep" \| "Read" \| "Task" \| "TodoWrite" \| "WebSearch" \| "Write"] | No |
| tools | Tool names to filter. Works with tools_exclude to form an allow-list or deny-list. Leave empty to apply no tool filter. | List["add_understanding" \| "ask_question" \| "bash_exec" \| "browser_act" \| "browser_navigate" \| "browser_screenshot" \| "connect_integration" \| "continue_run_block" \| "create_agent" \| "create_feature_request" \| "create_folder" \| "customize_agent" \| "delete_folder" \| "delete_workspace_file" \| "edit_agent" \| "find_agent" \| "find_block" \| "find_library_agent" \| "fix_agent_graph" \| "get_agent_building_guide" \| "get_doc_page" \| "get_mcp_guide" \| "list_folders" \| "list_workspace_files" \| "memory_forget_confirm" \| "memory_forget_search" \| "memory_search" \| "memory_store" \| "move_agents_to_folder" \| "move_folder" \| "read_workspace_file" \| "run_agent" \| "run_block" \| "run_mcp_tool" \| "search_docs" \| "search_feature_requests" \| "update_folder" \| "validate_agent_graph" \| "view_agent_output" \| "web_fetch" \| "write_workspace_file" \| "Agent" \| "Edit" \| "Glob" \| "Grep" \| "Read" \| "Task" \| "TodoWrite" \| "WebSearch" \| "Write"] | No |
| tools_exclude | Controls how the 'tools' list is interpreted. True (default): 'tools' is a deny-list — listed tools are blocked, all others are allowed. An empty 'tools' list means allow everything. False: 'tools' is an allow-list — only listed tools are permitted. | bool | No |
| blocks | Block identifiers to filter when the copilot uses run_block. Each entry can be: a block name (e.g. 'HTTP Request'), a full block UUID, or the first 8 hex characters of the UUID (e.g. 'c069dc6b'). Works with blocks_exclude. Leave empty to apply no block filter. | List[str] | No |
| blocks_exclude | Controls how the 'blocks' list is interpreted. True (default): 'blocks' is a deny-list — listed blocks are blocked, all others are allowed. An empty 'blocks' list means allow everything. False: 'blocks' is an allow-list — only listed blocks are permitted. | bool | No |
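The `tools`/`tools_exclude` semantics above reduce to a small predicate. The sketch below restates them in Python; `is_tool_allowed` is a hypothetical helper written for illustration, not a function the platform exports:

```python
# Hypothetical helper illustrating the filter semantics documented above.
def is_tool_allowed(tool_name: str, tools: list[str], tools_exclude: bool = True) -> bool:
    if not tools:
        return True  # an empty list applies no tool filter
    if tools_exclude:
        return tool_name not in tools  # deny-list: listed tools are blocked
    return tool_name in tools  # allow-list: only listed tools are permitted

assert is_tool_allowed("web_fetch", [], tools_exclude=True)                 # no filter
assert not is_tool_allowed("bash_exec", ["bash_exec"], tools_exclude=True)  # denied
assert is_tool_allowed("run_agent", ["run_agent"], tools_exclude=False)     # allowed
```

The same allow/deny logic applies to `blocks`/`blocks_exclude`, with the extra wrinkle that each `blocks` entry may be a block name, a full UUID, or the first 8 hex characters of the UUID.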


@@ -12,36 +12,37 @@ Blocks are reusable components that perform specific tasks in AutoGPT workflows.
First, create a `_config.py` file to configure your provider using the `ProviderBuilder`:
!!! note "Simple API key provider"
    ```python
    from backend.sdk import BlockCostType, ProviderBuilder

    my_provider = (
        ProviderBuilder("my_provider")
        .with_api_key("MY_PROVIDER_API_KEY", "My Provider API Key")
        .with_base_cost(1, BlockCostType.RUN)
        .build()
    )
    ```

For OAuth providers:

!!! note "OAuth provider configuration"
    ```python
    from backend.sdk import BlockCostType, ProviderBuilder
    from ._oauth import MyProviderOAuthHandler

    my_provider = (
        ProviderBuilder("my_provider")
        .with_oauth(
            MyProviderOAuthHandler,
            scopes=["read", "write"],
            client_id_env_var="MY_PROVIDER_CLIENT_ID",
            client_secret_env_var="MY_PROVIDER_CLIENT_SECRET",
        )
        .with_base_cost(1, BlockCostType.RUN)
        .build()
    )
    ```
### 2. Create the Block Class
@@ -65,73 +66,88 @@ from ._config import my_provider
class MyBlock(Block):
    class Input(BlockSchemaInput):
        # Define input fields
        credentials: CredentialsMetaInput = my_provider.credentials_field(
            description="API credentials for My Provider"
        )
        query: str = SchemaField(description="The query to process")
        limit: int = SchemaField(
            description="Number of results",
            default=10,
            ge=1,
            le=100,
        )
        advanced_option: str = SchemaField(
            description="Advanced setting",
            default="",
            advanced=True,
        )

    class Output(BlockSchemaOutput):
        # Define output fields
        results: list = SchemaField(description="List of results")
        count: int = SchemaField(description="Total count")
        # error output pin is already defined on BlockSchemaOutput

    def __init__(self):
        super().__init__(
            id=str(uuid.uuid4()),
            description="Brief description of what this block does",
            categories={BlockCategory.SEARCH},
            input_schema=self.Input,
            output_schema=self.Output,
        )

    async def run(
        self,
        input_data: Input,
        *,
        credentials: APIKeyCredentials,
        **kwargs
    ) -> BlockOutput:
        try:
            # Your block logic here
            results = await self.process_data(
                input_data.query,
                input_data.limit,
                credentials
            )

            # Yield outputs
            yield "results", results
            yield "count", len(results)

        except Exception as e:
            yield "error", str(e)

    async def process_data(self, query, limit, credentials):
        # Implement your logic
        # Use credentials.api_key.get_secret_value() to access the API key
        pass
```
!!! note "Input Schema Fields"
- **`credentials`**: Use `my_provider.credentials_field()` to add provider authentication
- **`query`**: Simple string field with description
- **`limit`**: Integer field with validation constraints (`ge=1`, `le=100`)
- **`advanced_option`**: Marked with `advanced=True` to hide from basic UI
!!! note "Output Schema Fields"
- **`results`**: List of results from the block
- **`count`**: Total count of results
- The `error` output pin is already defined on `BlockSchemaOutput`
!!! note "Block Initialization"
- **`id`**: Generate a unique ID using `uuid.uuid4()`
- **`description`**: Brief description of what the block does
- **`categories`**: Choose from `BlockCategory` enum (e.g., SEARCH, AI, PRODUCTIVITY)
- **`input_schema` / `output_schema`**: Assign the Input and Output classes
!!! note "Run Method"
- Implement your block logic in `process_data()` helper method
- Use `credentials.api_key.get_secret_value()` to access the API key
- Use `yield` to output results
## Key Components Explained
### Provider Configuration
The `ProviderBuilder` allows you to:
- **`.with_api_key()`**: Add API key authentication
- **`.with_oauth()`**: Add OAuth authentication
- **`.with_base_cost()`**: Set resource costs for the block
- **`.with_webhook_manager()`**: Add webhook support
- **`.with_user_password()`**: Add username/password auth
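Putting the list above together, a provider can chain several of these options in one builder. The sketch below assumes `.with_webhook_manager()` takes the manager class directly (its signature is not shown in this guide); the other calls follow the earlier `_config.py` examples:

```python
from backend.sdk import BlockCostType, ProviderBuilder
from ._webhook import MyProviderWebhookManager

my_provider = (
    ProviderBuilder("my_provider")
    .with_api_key("MY_PROVIDER_API_KEY", "My Provider API Key")
    .with_webhook_manager(MyProviderWebhookManager)  # assumed signature
    .with_base_cost(1, BlockCostType.RUN)
    .build()
)
```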
@@ -155,69 +171,72 @@ The `ProviderBuilder` allows you to:
Add test configuration to your block:
!!! note "Test Configuration"
    ```python
    def __init__(self):
        super().__init__(
            # ... other config ...
            test_input={
                "query": "test query",
                "limit": 5,
                "credentials": {
                    "provider": "my_provider",
                    "id": str(uuid.uuid4()),
                    "type": "api_key"
                }
            },
            test_output=[
                ("results", ["result1", "result2"]),
                ("count", 2)
            ],
            test_mock={
                "process_data": lambda *args, **kwargs: ["result1", "result2"]
            }
        )
    ```
### OAuth Support
Create an OAuth handler in `_oauth.py`:
!!! note "OAuth Handler Implementation"
    ```python
    from backend.integrations.oauth.base import BaseOAuthHandler

    class MyProviderOAuthHandler(BaseOAuthHandler):
        PROVIDER_NAME = "my_provider"

        def _get_authorization_url(self, scopes: list[str], state: str) -> str:
            # Implement URL generation for OAuth flow
            pass

        def _exchange_code_for_token(self, code: str, scopes: list[str]) -> dict:
            # Implement token exchange logic
            pass
    ```
### Webhook Support
Create a webhook manager in `_webhook.py`:
!!! note "Webhook Manager Implementation"
    ```python
    from backend.integrations.webhooks._base import BaseWebhooksManager

    class MyProviderWebhookManager(BaseWebhooksManager):
        PROVIDER_NAME = "my_provider"

        async def validate_event(self, event: dict) -> bool:
            # Implement event validation logic
            pass
    ```
## File Organization
```
backend/blocks/my_provider/
├── __init__.py # Export your blocks
├── _config.py # Provider configuration
├── _oauth.py # OAuth handler (optional)
├── _webhook.py # Webhook manager (optional)
├── _api.py # API client wrapper (optional)
@@ -248,13 +267,13 @@ async def run(self, input_data: Input, *, credentials: APIKeyCredentials, **kwar
"Authorization": f"Bearer {credentials.api_key.get_secret_value()}",
"Content-Type": "application/json"
}
response = await Requests().post(
"https://api.example.com/endpoint",
headers=headers,
json={"query": input_data.query}
)
data = response.json()
yield "results", data.get("results", [])
```
@@ -263,20 +282,24 @@ async def run(self, input_data: Input, *, credentials: APIKeyCredentials, **kwar
```python
async def run(
    self,
    input_data: Input,
    *,
    credentials: OAuth2Credentials | APIKeyCredentials,
    **kwargs
):
    if isinstance(credentials, OAuth2Credentials):
        # Handle OAuth credentials
        token = credentials.access_token.get_secret_value()
    else:
        # Handle API key credentials
        token = credentials.api_key.get_secret_value()
```
!!! note "Authentication Types"
- **`OAuth2Credentials`**: Access token via `credentials.access_token.get_secret_value()`
- **`APIKeyCredentials`**: API key via `credentials.api_key.get_secret_value()`
### Handling Files
When your block works with files (images, videos, documents), use `store_media_file()`:
@@ -311,11 +334,16 @@ async def run(
    result = await store_media_file(
        file=generated_url,
        execution_context=execution_context,
        return_format="for_block_output",
    )
    yield "image_url", result
```
!!! note "File Handling Patterns"
- **PROCESSING**: Use `"for_local_processing"` when you need a local file path for tools like ffmpeg, MoviePy, PIL
- **EXTERNAL API**: Use `"for_external_api"` when sending content to APIs like Replicate or OpenAI (returns base64 data URI)
- **OUTPUT**: Use `"for_block_output"` to return results - this automatically adapts: `workspace://` in CoPilot, data URI in graphs
**Return format options:**
- `"for_local_processing"` - Local file path for processing tools
- `"for_external_api"` - Data URI for external APIs needing base64
@@ -350,4 +378,4 @@ poetry run pytest 'backend/blocks/test/test_block.py::test_available_blocks[MyBl
- **OAuth + API**: `/backend/blocks/linear/` - OAuth and API key support
- **Webhooks**: `/backend/blocks/exa/` - Includes webhook manager
Study these examples to understand different patterns and approaches for building blocks.