Compare commits

...

4 Commits

Author SHA1 Message Date
Otto
47d2e1aca1 feat(chat): add input schema to discovery tools + validate unknown fields
- Add 'inputs' field to AgentInfo model for find_agent/find_library_agent
- Fetch input schemas from graphs during agent search
- Reject unknown input fields in run_agent with helpful error message

Closes OPEN-2980
2026-01-30 23:16:42 +00:00
Reinier van der Leer
350ad3591b fix(backend/chat): Filter credentials for graph execution by scopes (#11881)
[SECRT-1842: run_agent tool does not correctly use credentials - agents fail with insufficient auth scopes](https://linear.app/autogpt/issue/SECRT-1842)

### Changes 🏗️

- Include scopes in credentials filter in
`backend.api.features.chat.tools.utils.match_user_credentials_to_graph`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI must pass
- It's currently broken and this is a simple change, so we'll test it in the dev deployment
2026-01-30 11:01:51 +00:00
Bently
de0ec3d388 chore(llm): remove deprecated Claude 3.7 Sonnet model with migration and defensive handling (#11841)
## Summary
Remove `claude-3-7-sonnet-20250219` from LLM model definitions ahead of
Anthropic's API retirement, with comprehensive migration and defensive
error handling.

## Background
Anthropic is retiring Claude 3.7 Sonnet (`claude-3-7-sonnet-20250219`)
on **February 19, 2026 at 9:00 AM PT**. This PR removes the model from
the platform and migrates existing users to prevent service
interruptions.

## Changes

### Code Changes
- Remove `CLAUDE_3_7_SONNET` enum member from `LlmModel` in `llm.py`
- Remove corresponding `ModelMetadata` entry
- Remove `CLAUDE_3_7_SONNET` from `StagehandRecommendedLlmModel` enum
- Remove `CLAUDE_3_7_SONNET` from block cost config
- Add `CLAUDE_4_5_SONNET` to `StagehandRecommendedLlmModel` enum
- Update Stagehand block defaults from `CLAUDE_3_7_SONNET` to
`CLAUDE_4_5_SONNET` (staying in Claude family)
- Add defensive error handling in `CredentialsFieldInfo.discriminate()`
for deprecated model values

### Database Migration
- Adds migration `20260126120000_migrate_claude_3_7_to_4_5_sonnet`
- Migrates `AgentNode.constantInput` model references
- Migrates `AgentNodeExecutionInputOutput.data` preset overrides

### Documentation
- Updated `docs/integrations/block-integrations/llm.md` to remove
deprecated model
- Updated `docs/integrations/block-integrations/stagehand/blocks.md` to
remove deprecated model and add Claude 4.5 Sonnet

## Notes
- Agent JSON files in `autogpt_platform/backend/agents/` still reference
this model in their provider mappings. These are auto-generated and
should be regenerated separately.

## Testing
- [ ] Verify LLM block still functions with remaining models
- [ ] Confirm no import errors in affected files
- [ ] Verify migration runs successfully
- [ ] Verify deprecated model gives helpful error message instead of
KeyError
2026-01-30 08:40:55 +00:00
Otto
7cb1e588b0 fix(frontend): Refocus ChatInput after voice transcription completes (#11893)
## Summary
Refocuses the chat input textarea after voice transcription finishes,
allowing users to immediately use `spacebar+enter` to record and send
their prompt.

## Changes
- Added `inputId` parameter to `useVoiceRecording` hook
- After transcription completes, the input is automatically focused
- This improves the voice input UX flow

## Testing
1. Click mic button or press spacebar to record voice
2. Record a message and stop
3. After transcription completes, the input should be focused
4. User can now press Enter to send or spacebar to record again

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
2026-01-30 14:49:05 +07:00
25 changed files with 1616 additions and 29 deletions

View File

@@ -1,10 +1,11 @@
"""Shared agent search functionality for find_agent and find_library_agent tools."""
import logging
from typing import Literal
from typing import Any, Literal
from backend.api.features.library import db as library_db
from backend.api.features.store import db as store_db
from backend.data.graph import get_graph
from backend.util.exceptions import DatabaseError, NotFoundError
from .models import (
@@ -14,12 +15,39 @@ from .models import (
NoResultsResponse,
ToolResponseBase,
)
from .utils import fetch_graph_from_store_slug
logger = logging.getLogger(__name__)
SearchSource = Literal["marketplace", "library"]
async def _fetch_input_schema_for_store_agent(
creator: str, slug: str
) -> dict[str, Any] | None:
"""Fetch input schema for a marketplace agent. Returns None on error."""
try:
graph, _ = await fetch_graph_from_store_slug(creator, slug)
if graph and graph.input_schema:
return graph.input_schema.get("properties", {})
except Exception as e:
logger.debug(f"Could not fetch input schema for {creator}/{slug}: {e}")
return None
async def _fetch_input_schema_for_library_agent(
graph_id: str, graph_version: int, user_id: str
) -> dict[str, Any] | None:
"""Fetch input schema for a library agent. Returns None on error."""
try:
graph = await get_graph(graph_id, graph_version, user_id=user_id)
if graph and graph.input_schema:
return graph.input_schema.get("properties", {})
except Exception as e:
logger.debug(f"Could not fetch input schema for graph {graph_id}: {e}")
return None
async def search_agents(
query: str,
source: SearchSource,
@@ -55,6 +83,10 @@ async def search_agents(
logger.info(f"Searching marketplace for: {query}")
results = await store_db.get_store_agents(search_query=query, page_size=5)
for agent in results.agents:
# Fetch input schema for this agent
inputs = await _fetch_input_schema_for_store_agent(
agent.creator, agent.slug
)
agents.append(
AgentInfo(
id=f"{agent.creator}/{agent.slug}",
@@ -67,6 +99,7 @@ async def search_agents(
rating=agent.rating,
runs=agent.runs,
is_featured=False,
inputs=inputs,
)
)
else: # library
@@ -77,6 +110,10 @@ async def search_agents(
page_size=10,
)
for agent in results.agents:
# Fetch input schema for this agent
inputs = await _fetch_input_schema_for_library_agent(
agent.graph_id, agent.graph_version, user_id # type: ignore[arg-type]
)
agents.append(
AgentInfo(
id=agent.id,
@@ -90,6 +127,7 @@ async def search_agents(
has_external_trigger=agent.has_external_trigger,
new_output=agent.new_output,
graph_id=agent.graph_id,
inputs=inputs,
)
)
logger.info(f"Found {len(agents)} agents in {source}")

View File

@@ -68,6 +68,10 @@ class AgentInfo(BaseModel):
has_external_trigger: bool | None = None
new_output: bool | None = None
graph_id: str | None = None
inputs: dict[str, Any] | None = Field(
default=None,
description="Input schema for the agent (properties from input_schema)",
)
class AgentsFoundResponse(ToolResponseBase):

View File

@@ -273,6 +273,27 @@ class RunAgentTool(BaseTool):
input_properties = graph.input_schema.get("properties", {})
required_fields = set(graph.input_schema.get("required", []))
provided_inputs = set(params.inputs.keys())
valid_fields = set(input_properties.keys())
# Check for unknown fields - reject early with helpful message
unknown_fields = provided_inputs - valid_fields
if unknown_fields:
valid_list = ", ".join(sorted(valid_fields)) if valid_fields else "none"
return AgentDetailsResponse(
message=(
f"Unknown input field(s) provided: {', '.join(sorted(unknown_fields))}. "
f"Valid fields for '{graph.name}': {valid_list}. "
"Please check the field names and try again."
),
session_id=session_id,
agent=self._build_agent_details(
graph,
extract_credentials_from_schema(graph.credentials_input_schema),
),
user_authenticated=True,
graph_id=graph.id,
graph_version=graph.version,
)
# If agent has inputs but none were provided AND use_defaults is not set,
# always show what's available first so user can decide

View File

@@ -8,7 +8,7 @@ from backend.api.features.library import model as library_model
from backend.api.features.store import db as store_db
from backend.data import graph as graph_db
from backend.data.graph import GraphModel
from backend.data.model import CredentialsFieldInfo, CredentialsMetaInput
from backend.data.model import Credentials, CredentialsFieldInfo, CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.util.exceptions import NotFoundError
@@ -266,13 +266,14 @@ async def match_user_credentials_to_graph(
credential_requirements,
_node_fields,
) in aggregated_creds.items():
# Find first matching credential by provider and type
# Find first matching credential by provider, type, and scopes
matching_cred = next(
(
cred
for cred in available_creds
if cred.provider in credential_requirements.provider
and cred.type in credential_requirements.supported_types
and _credential_has_required_scopes(cred, credential_requirements)
),
None,
)
@@ -296,10 +297,17 @@ async def match_user_credentials_to_graph(
f"{credential_field_name} (validation failed: {e})"
)
else:
# Build a helpful error message including scope requirements
error_parts = [
f"provider in {list(credential_requirements.provider)}",
f"type in {list(credential_requirements.supported_types)}",
]
if credential_requirements.required_scopes:
error_parts.append(
f"scopes including {list(credential_requirements.required_scopes)}"
)
missing_creds.append(
f"{credential_field_name} "
f"(requires provider in {list(credential_requirements.provider)}, "
f"type in {list(credential_requirements.supported_types)})"
f"{credential_field_name} (requires {', '.join(error_parts)})"
)
logger.info(
@@ -309,6 +317,28 @@ async def match_user_credentials_to_graph(
return graph_credentials_inputs, missing_creds
def _credential_has_required_scopes(
credential: Credentials,
requirements: CredentialsFieldInfo,
) -> bool:
"""
Check if a credential has all the scopes required by the block.
For OAuth2 credentials, verifies that the credential's scopes are a superset
of the required scopes. For other credential types, returns True (no scope check).
"""
# Only OAuth2 credentials have scopes to check
if credential.type != "oauth2":
return True
# If no scopes are required, any credential matches
if not requirements.required_scopes:
return True
# Check that credential scopes are a superset of required scopes
return set(credential.scopes).issuperset(requirements.required_scopes)
async def check_user_has_required_credentials(
user_id: str,
required_credentials: list[CredentialsMetaInput],
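The scope check introduced above only applies to OAuth2 credentials: their scopes must be a superset of what the block requires, while other credential types (API keys, etc.) pass unconditionally. A simplified stand-in for the real `Credentials`/`CredentialsFieldInfo` types:

```python
from dataclasses import dataclass, field


@dataclass
class Credential:
    provider: str
    type: str                       # e.g. "oauth2", "api_key"
    scopes: list[str] = field(default_factory=list)


def has_required_scopes(cred: Credential, required: set[str]) -> bool:
    """Mirror the superset check: only OAuth2 credentials carry scopes."""
    if cred.type != "oauth2":
        return True                 # no scope concept for non-OAuth2 credentials
    if not required:
        return True                 # nothing required, any credential matches
    return set(cred.scopes).issuperset(required)


gh = Credential("github", "oauth2", ["repo"])
assert not has_required_scopes(gh, {"repo", "workflow"})  # missing 'workflow'
assert has_required_scopes(Credential("github", "api_key"), {"repo"})
```

This is what turns the SECRT-1842 failure mode (matching the first credential by provider/type and failing later with insufficient scopes) into either a working match or an upfront `missing_creds` message that names the required scopes.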

View File

@@ -115,7 +115,6 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
CLAUDE_4_5_OPUS = "claude-opus-4-5-20251101"
CLAUDE_4_5_SONNET = "claude-sonnet-4-5-20250929"
CLAUDE_4_5_HAIKU = "claude-haiku-4-5-20251001"
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
CLAUDE_3_HAIKU = "claude-3-haiku-20240307"
# AI/ML API models
AIML_API_QWEN2_5_72B = "Qwen/Qwen2.5-72B-Instruct-Turbo"
@@ -280,9 +279,6 @@ MODEL_METADATA = {
LlmModel.CLAUDE_4_5_HAIKU: ModelMetadata(
"anthropic", 200000, 64000, "Claude Haiku 4.5", "Anthropic", "Anthropic", 2
), # claude-haiku-4-5-20251001
LlmModel.CLAUDE_3_7_SONNET: ModelMetadata(
"anthropic", 200000, 64000, "Claude 3.7 Sonnet", "Anthropic", "Anthropic", 2
), # claude-3-7-sonnet-20250219
LlmModel.CLAUDE_3_HAIKU: ModelMetadata(
"anthropic", 200000, 4096, "Claude 3 Haiku", "Anthropic", "Anthropic", 1
), # claude-3-haiku-20240307

View File

@@ -83,7 +83,7 @@ class StagehandRecommendedLlmModel(str, Enum):
GPT41_MINI = "gpt-4.1-mini-2025-04-14"
# Anthropic
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
CLAUDE_4_5_SONNET = "claude-sonnet-4-5-20250929"
@property
def provider_name(self) -> str:
@@ -137,7 +137,7 @@ class StagehandObserveBlock(Block):
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
default=StagehandRecommendedLlmModel.CLAUDE_4_5_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
@@ -230,7 +230,7 @@ class StagehandActBlock(Block):
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
default=StagehandRecommendedLlmModel.CLAUDE_4_5_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
@@ -330,7 +330,7 @@ class StagehandExtractBlock(Block):
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
default=StagehandRecommendedLlmModel.CLAUDE_4_5_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()

View File

@@ -81,7 +81,6 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.CLAUDE_4_5_HAIKU: 4,
LlmModel.CLAUDE_4_5_OPUS: 14,
LlmModel.CLAUDE_4_5_SONNET: 9,
LlmModel.CLAUDE_3_7_SONNET: 5,
LlmModel.CLAUDE_3_HAIKU: 1,
LlmModel.AIML_API_QWEN2_5_72B: 1,
LlmModel.AIML_API_LLAMA3_1_70B: 1,

View File

@@ -666,10 +666,16 @@ class CredentialsFieldInfo(BaseModel, Generic[CP, CT]):
if not (self.discriminator and self.discriminator_mapping):
return self
try:
provider = self.discriminator_mapping[discriminator_value]
except KeyError:
raise ValueError(
f"Model '{discriminator_value}' is not supported. "
"It may have been deprecated. Please update your agent configuration."
)
return CredentialsFieldInfo(
credentials_provider=frozenset(
[self.discriminator_mapping[discriminator_value]]
),
credentials_provider=frozenset([provider]),
credentials_types=self.supported_types,
credentials_scopes=self.required_scopes,
discriminator=self.discriminator,
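The defensive handling above converts a bare `KeyError` on a removed model into a descriptive `ValueError`. A minimal sketch of that lookup pattern (the mapping contents here are illustrative, not the platform's actual discriminator mapping):

```python
# Hypothetical model -> provider discriminator mapping
DISCRIMINATOR_MAPPING = {
    "claude-sonnet-4-5-20250929": "anthropic",
    "gpt-4.1-mini-2025-04-14": "openai",
}


def resolve_provider(model: str) -> str:
    """Look up a model's provider, raising a helpful error for removed models."""
    try:
        return DISCRIMINATOR_MAPPING[model]
    except KeyError:
        raise ValueError(
            f"Model '{model}' is not supported. It may have been deprecated. "
            "Please update your agent configuration."
        )


assert resolve_provider("claude-sonnet-4-5-20250929") == "anthropic"
try:
    resolve_provider("claude-3-7-sonnet-20250219")  # removed model
except ValueError as e:
    print(e)
```

Any agent the migration missed (e.g. an exported JSON re-imported later) then surfaces an actionable message instead of an unhandled `KeyError` deep in credential discrimination.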

View File

@@ -0,0 +1,22 @@
-- Migrate Claude 3.7 Sonnet to Claude 4.5 Sonnet
-- This updates all AgentNode blocks that use the deprecated Claude 3.7 Sonnet model
-- Anthropic is retiring claude-3-7-sonnet-20250219 on February 19, 2026
-- Update AgentNode constant inputs
UPDATE "AgentNode"
SET "constantInput" = JSONB_SET(
"constantInput"::jsonb,
'{model}',
'"claude-sonnet-4-5-20250929"'::jsonb
)
WHERE "constantInput"::jsonb->>'model' = 'claude-3-7-sonnet-20250219';
-- Update AgentPreset input overrides (stored in AgentNodeExecutionInputOutput)
UPDATE "AgentNodeExecutionInputOutput"
SET "data" = JSONB_SET(
"data"::jsonb,
'{model}',
'"claude-sonnet-4-5-20250929"'::jsonb
)
WHERE "agentPresetId" IS NOT NULL
AND "data"::jsonb->>'model' = 'claude-3-7-sonnet-20250219';

View File

@@ -0,0 +1,185 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import { FavoritesSection } from "../FavoritesSection/FavoritesSection";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse } from "msw";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
const mockFavoriteAgent = {
id: "fav-agent-id",
graph_id: "fav-graph-id",
graph_version: 1,
owner_user_id: "test-owner-id",
image_url: null,
creator_name: "Test Creator",
creator_image_url: "https://example.com/avatar.png",
status: "READY",
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
name: "Favorite Agent Name",
description: "Test favorite agent",
input_schema: {},
output_schema: {},
credentials_input_schema: null,
has_external_trigger: false,
has_human_in_the_loop: false,
has_sensitive_action: false,
new_output: false,
can_access_graph: true,
is_latest_version: true,
is_favorite: true,
};
describe("FavoritesSection", () => {
afterEach(() => {
resetAuthState();
});
test("renders favorites section when there are favorites", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [mockFavoriteAgent],
pagination: {
total_items: 1,
total_pages: 1,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<FavoritesSection searchTerm="" />);
await waitFor(() => {
expect(screen.getByText(/favorites/i)).toBeInTheDocument();
});
});
test("renders favorite agent cards", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [mockFavoriteAgent],
pagination: {
total_items: 1,
total_pages: 1,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<FavoritesSection searchTerm="" />);
await waitFor(() => {
expect(screen.getByText("Favorite Agent Name")).toBeInTheDocument();
});
});
test("shows agent count", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [mockFavoriteAgent],
pagination: {
total_items: 1,
total_pages: 1,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<FavoritesSection searchTerm="" />);
await waitFor(() => {
expect(screen.getByTestId("agents-count")).toBeInTheDocument();
});
});
test("does not render when there are no favorites", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
);
const { container } = render(<FavoritesSection searchTerm="" />);
// Wait for loading to complete
await waitFor(() => {
// Component should return null when no favorites
expect(container.textContent).toBe("");
});
});
test("filters favorites based on search term", async () => {
mockAuthenticatedUser();
// Mock that returns different results based on search term
server.use(
http.get("*/api/library/agents/favorites*", ({ request }) => {
const url = new URL(request.url);
const searchTerm = url.searchParams.get("search_term");
if (searchTerm === "nonexistent") {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}
return HttpResponse.json({
agents: [mockFavoriteAgent],
pagination: {
total_items: 1,
total_pages: 1,
current_page: 1,
page_size: 20,
},
});
}),
);
const { rerender } = render(<FavoritesSection searchTerm="" />);
await waitFor(() => {
expect(screen.getByText("Favorite Agent Name")).toBeInTheDocument();
});
// Rerender with search term that yields no results
rerender(<FavoritesSection searchTerm="nonexistent" />);
await waitFor(() => {
expect(screen.queryByText("Favorite Agent Name")).not.toBeInTheDocument();
});
});
});

View File

@@ -0,0 +1,122 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { LibraryAgentCard } from "../LibraryAgentCard/LibraryAgentCard";
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
const mockAgent: LibraryAgent = {
id: "test-agent-id",
graph_id: "test-graph-id",
graph_version: 1,
owner_user_id: "test-owner-id",
image_url: null,
creator_name: "Test Creator",
creator_image_url: "https://example.com/avatar.png",
status: "READY",
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
name: "Test Agent Name",
description: "Test agent description",
input_schema: {},
output_schema: {},
credentials_input_schema: null,
has_external_trigger: false,
has_human_in_the_loop: false,
has_sensitive_action: false,
new_output: false,
can_access_graph: true,
is_latest_version: true,
is_favorite: false,
};
describe("LibraryAgentCard", () => {
afterEach(() => {
resetAuthState();
});
test("renders agent name", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
expect(screen.getByText("Test Agent Name")).toBeInTheDocument();
});
test("renders see runs link", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
expect(screen.getByText(/see runs/i)).toBeInTheDocument();
});
test("renders open in builder link when can_access_graph is true", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
expect(screen.getByText(/open in builder/i)).toBeInTheDocument();
});
test("does not render open in builder link when can_access_graph is false", () => {
mockAuthenticatedUser();
const agentWithoutAccess = { ...mockAgent, can_access_graph: false };
render(<LibraryAgentCard agent={agentWithoutAccess} />);
expect(screen.queryByText(/open in builder/i)).not.toBeInTheDocument();
});
test("shows 'FROM MARKETPLACE' label for marketplace agents", () => {
mockAuthenticatedUser();
const marketplaceAgent = {
...mockAgent,
marketplace_listing: {
id: "listing-id",
name: "Marketplace Agent",
slug: "marketplace-agent",
creator: {
id: "creator-id",
name: "Creator Name",
slug: "creator-slug",
},
},
};
render(<LibraryAgentCard agent={marketplaceAgent} />);
expect(screen.getByText(/from marketplace/i)).toBeInTheDocument();
});
test("shows 'Built by you' label for user's own agents", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
expect(screen.getByText(/built by you/i)).toBeInTheDocument();
});
test("renders favorite button", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
// The favorite button should be present (as a heart icon button)
const card = screen.getByTestId("library-agent-card");
expect(card).toBeInTheDocument();
});
test("links to correct agent detail page", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
const link = screen.getByTestId("library-agent-card-see-runs-link");
expect(link).toHaveAttribute("href", "/library/agents/test-agent-id");
});
test("links to correct builder page", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
const builderLink = screen.getByTestId(
"library-agent-card-open-in-builder-link",
);
expect(builderLink).toHaveAttribute("href", "/build?flowID=test-graph-id");
});
});

View File

@@ -0,0 +1,53 @@
import { describe, expect, test, vi } from "vitest";
import { render, screen, fireEvent, waitFor } from "@/tests/integrations/test-utils";
import { LibrarySearchBar } from "../LibrarySearchBar/LibrarySearchBar";
describe("LibrarySearchBar", () => {
test("renders search input", () => {
const setSearchTerm = vi.fn();
render(<LibrarySearchBar setSearchTerm={setSearchTerm} />);
expect(screen.getByPlaceholderText(/search agents/i)).toBeInTheDocument();
});
test("renders search icon", () => {
const setSearchTerm = vi.fn();
const { container } = render(
<LibrarySearchBar setSearchTerm={setSearchTerm} />,
);
// Check for the magnifying glass icon (SVG element)
const searchIcon = container.querySelector("svg");
expect(searchIcon).toBeInTheDocument();
});
test("calls setSearchTerm on input change", async () => {
const setSearchTerm = vi.fn();
render(<LibrarySearchBar setSearchTerm={setSearchTerm} />);
const input = screen.getByPlaceholderText(/search agents/i);
fireEvent.change(input, { target: { value: "test query" } });
// The search bar uses debouncing, so we need to wait
await waitFor(
() => {
expect(setSearchTerm).toHaveBeenCalled();
},
{ timeout: 1000 },
);
});
test("has correct test id", () => {
const setSearchTerm = vi.fn();
render(<LibrarySearchBar setSearchTerm={setSearchTerm} />);
expect(screen.getByTestId("search-bar")).toBeInTheDocument();
});
test("input has correct test id", () => {
const setSearchTerm = vi.fn();
render(<LibrarySearchBar setSearchTerm={setSearchTerm} />);
expect(screen.getByTestId("library-textbox")).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,53 @@
import { describe, expect, test, vi } from "vitest";
import { render, screen, fireEvent, waitFor } from "@/tests/integrations/test-utils";
import { LibrarySortMenu } from "../LibrarySortMenu/LibrarySortMenu";
describe("LibrarySortMenu", () => {
test("renders sort dropdown", () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
expect(screen.getByTestId("sort-by-dropdown")).toBeInTheDocument();
});
test("shows 'sort by' label on larger screens", () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
expect(screen.getByText(/sort by/i)).toBeInTheDocument();
});
test("shows default placeholder text", () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
expect(screen.getByText(/last modified/i)).toBeInTheDocument();
});
test("opens dropdown when clicked", async () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
const trigger = screen.getByRole("combobox");
fireEvent.click(trigger);
await waitFor(() => {
expect(screen.getByText(/creation date/i)).toBeInTheDocument();
});
});
test("shows both sort options in dropdown", async () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
const trigger = screen.getByRole("combobox");
fireEvent.click(trigger);
await waitFor(() => {
expect(screen.getByText(/creation date/i)).toBeInTheDocument();
expect(
screen.getAllByText(/last modified/i).length,
).toBeGreaterThanOrEqual(1);
});
});
});

View File

@@ -0,0 +1,78 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, fireEvent, waitFor } from "@/tests/integrations/test-utils";
import LibraryUploadAgentDialog from "../LibraryUploadAgentDialog/LibraryUploadAgentDialog";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryUploadAgentDialog", () => {
afterEach(() => {
resetAuthState();
});
test("renders upload button", () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
expect(
screen.getByRole("button", { name: /upload agent/i }),
).toBeInTheDocument();
});
test("opens dialog when upload button is clicked", async () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
const uploadButton = screen.getByRole("button", { name: /upload agent/i });
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByText("Upload Agent")).toBeInTheDocument();
});
});
test("dialog contains agent name input", async () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
const uploadButton = screen.getByRole("button", { name: /upload agent/i });
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByLabelText(/agent name/i)).toBeInTheDocument();
});
});
test("dialog contains agent description input", async () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
const uploadButton = screen.getByRole("button", { name: /upload agent/i });
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByLabelText(/agent description/i)).toBeInTheDocument();
});
});
test("upload button is disabled when form is incomplete", async () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
const triggerButton = screen.getByRole("button", { name: /upload agent/i });
fireEvent.click(triggerButton);
await waitFor(() => {
const submitButton = screen.getByRole("button", { name: /^upload$/i });
expect(submitButton).toBeDisabled();
});
});
test("has correct test id on trigger button", () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
expect(screen.getByTestId("upload-agent-button")).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,40 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import {
mockAuthenticatedUser,
mockUnauthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryPage - Auth State", () => {
afterEach(() => {
resetAuthState();
});
test("renders page correctly when logged in", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
// Wait for upload button text to appear (indicates page is rendered)
expect(
await screen.findByText("Upload agent", { exact: false }),
).toBeInTheDocument();
// Search bar should be visible
expect(screen.getByTestId("search-bar")).toBeInTheDocument();
});
test("renders page correctly when logged out", async () => {
mockUnauthenticatedUser();
render(<LibraryPage />);
// Wait for upload button text to appear (indicates page is rendered)
expect(
await screen.findByText("Upload agent", { exact: false }),
).toBeInTheDocument();
// Search bar should still be visible
expect(screen.getByTestId("search-bar")).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,82 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse } from "msw";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryPage - Empty State", () => {
afterEach(() => {
resetAuthState();
});
test("handles empty agents list gracefully", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<LibraryPage />);
// Page should still render without crashing
// Search bar should be visible even with no agents
expect(
await screen.findByPlaceholderText(/search agents/i),
).toBeInTheDocument();
// Upload button should be visible
expect(
screen.getByRole("button", { name: /upload agent/i }),
).toBeInTheDocument();
});
test("handles empty favorites gracefully", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<LibraryPage />);
// Page should still render without crashing
expect(
await screen.findByPlaceholderText(/search agents/i),
).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,59 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import { server } from "@/mocks/mock-server";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
import { create500Handler } from "@/tests/integrations/helpers/create-500-handler";
import {
getGetV2ListLibraryAgentsMockHandler422,
getGetV2ListFavoriteLibraryAgentsMockHandler422,
} from "@/app/api/__generated__/endpoints/library/library.msw";
describe("LibraryPage - Error Handling", () => {
afterEach(() => {
resetAuthState();
});
test("handles API 422 error gracefully", async () => {
mockAuthenticatedUser();
server.use(getGetV2ListLibraryAgentsMockHandler422());
render(<LibraryPage />);
// Page should still render without crashing
// Search bar should be visible even with error
await waitFor(() => {
expect(screen.getByPlaceholderText(/search agents/i)).toBeInTheDocument();
});
});
test("handles favorites API 422 error gracefully", async () => {
mockAuthenticatedUser();
server.use(getGetV2ListFavoriteLibraryAgentsMockHandler422());
render(<LibraryPage />);
// Page should still render without crashing
await waitFor(() => {
expect(screen.getByPlaceholderText(/search agents/i)).toBeInTheDocument();
});
});
test("handles API 500 error gracefully", async () => {
mockAuthenticatedUser();
server.use(create500Handler("get", "*/api/library/agents*"));
render(<LibraryPage />);
// Page should still render without crashing
await waitFor(() => {
expect(screen.getByPlaceholderText(/search agents/i)).toBeInTheDocument();
});
});
});

View File

@@ -0,0 +1,55 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse, delay } from "msw";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryPage - Loading State", () => {
afterEach(() => {
resetAuthState();
});
test("shows loading spinner while agents are being fetched", async () => {
mockAuthenticatedUser();
// Override handlers to add delay to simulate loading
server.use(
http.get("*/api/library/agents*", async () => {
await delay(500);
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
http.get("*/api/library/agents/favorites*", async () => {
await delay(500);
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
);
const { container } = render(<LibraryPage />);
// Check for loading spinner (LoadingSpinner component)
const loadingElements = container.querySelectorAll(
'[class*="animate-spin"]',
);
expect(loadingElements.length).toBeGreaterThan(0);
});
});


@@ -0,0 +1,65 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryPage - Rendering", () => {
afterEach(() => {
resetAuthState();
});
test("renders search bar", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
expect(
await screen.findByPlaceholderText(/search agents/i),
).toBeInTheDocument();
});
test("renders upload agent button", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
expect(
await screen.findByRole("button", { name: /upload agent/i }),
).toBeInTheDocument();
});
test("renders agent cards when data is loaded", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
// Wait for agent cards to appear (from mock data)
await waitFor(() => {
const agentCards = screen.getAllByTestId("library-agent-card");
expect(agentCards.length).toBeGreaterThan(0);
});
});
test("agent cards display agent name", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
// Wait for agent cards and check they have names
await waitFor(() => {
const agentNames = screen.getAllByTestId("library-agent-card-name");
expect(agentNames.length).toBeGreaterThan(0);
});
});
test("agent cards have see runs link", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
await waitFor(() => {
const seeRunsLinks = screen.getAllByTestId(
"library-agent-card-see-runs-link",
);
expect(seeRunsLinks.length).toBeGreaterThan(0);
});
});
});


@@ -0,0 +1,246 @@
/**
* useChatContainerAiSdk - ChatContainer hook using Vercel AI SDK
*
* This is a drop-in replacement for useChatContainer that uses @ai-sdk/react
* instead of the custom streaming implementation. The API surface is identical
* to enable easy A/B testing and gradual migration.
*/
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import { useEffect, useMemo, useRef } from "react";
import type { UIMessage } from "ai";
import { useAiSdkChat } from "../../useAiSdkChat";
import { usePageContext } from "../../usePageContext";
import type { ChatMessageData } from "../ChatMessage/useChatMessage";
import {
filterAuthMessages,
hasSentInitialPrompt,
markInitialPromptSent,
processInitialMessages,
} from "./helpers";
// Helper to convert backend messages to AI SDK UIMessage format
function convertToUIMessages(
messages: SessionDetailResponse["messages"],
): UIMessage[] {
const result: UIMessage[] = [];
// Track tool names by call ID so tool-role messages can be matched back
// to the tool call that produced them
const toolNamesById = new Map<string, string>();
for (const msg of messages) {
if (!msg.role || !msg.content) continue;
// Create parts based on message type
const parts: UIMessage["parts"] = [];
if (msg.role === "user" || msg.role === "assistant") {
if (typeof msg.content === "string") {
parts.push({ type: "text", text: msg.content });
}
}
// Handle tool calls in assistant messages
if (msg.role === "assistant" && msg.tool_calls) {
for (const toolCall of msg.tool_calls as Array<{
id: string;
type: string;
function: { name: string; arguments: string };
}>) {
if (toolCall.type === "function") {
toolNamesById.set(toolCall.id, toolCall.function.name);
let args = {};
try {
args = JSON.parse(toolCall.function.arguments);
} catch {
// Keep empty args
}
parts.push({
type: `tool-${toolCall.function.name}` as `tool-${string}`,
toolCallId: toolCall.id,
toolName: toolCall.function.name,
state: "input-available",
input: args,
} as UIMessage["parts"][number]);
}
}
}
// Handle tool responses
if (msg.role === "tool" && msg.tool_call_id) {
// Look up the matching tool call to recover the tool name
const toolName = toolNamesById.get(msg.tool_call_id as string) ?? "unknown";
let output: unknown = msg.content;
try {
output =
typeof msg.content === "string"
? JSON.parse(msg.content)
: msg.content;
} catch {
// Keep as string
}
parts.push({
type: `tool-${toolName}` as `tool-${string}`,
toolCallId: msg.tool_call_id as string,
toolName,
state: "output-available",
output,
} as UIMessage["parts"][number]);
}
if (parts.length > 0) {
result.push({
id: msg.id || `msg-${Date.now()}-${Math.random()}`,
role: msg.role === "tool" ? "assistant" : (msg.role as "user" | "assistant"),
parts,
createdAt: msg.created_at ? new Date(msg.created_at as string) : new Date(),
});
}
}
return result;
}
interface Args {
sessionId: string | null;
initialMessages: SessionDetailResponse["messages"];
initialPrompt?: string;
onOperationStarted?: () => void;
}
export function useChatContainerAiSdk({
sessionId,
initialMessages,
initialPrompt,
onOperationStarted,
}: Args) {
const { capturePageContext } = usePageContext();
const sendMessageRef = useRef<
(
content: string,
isUserMessage?: boolean,
context?: { url: string; content: string },
) => Promise<void>
>();
// Convert initial messages to AI SDK format
const uiMessages = useMemo(
() => convertToUIMessages(initialMessages),
[initialMessages],
);
const {
messages: aiSdkMessages,
streamingChunks,
isStreaming,
error,
isRegionBlockedModalOpen,
setIsRegionBlockedModalOpen,
sendMessage,
stopStreaming,
} = useAiSdkChat({
sessionId,
initialMessages: uiMessages,
onOperationStarted,
});
// Keep ref updated for initial prompt handling
sendMessageRef.current = sendMessage;
// Merge AI SDK messages with processed initial messages
// This ensures we show both historical messages and new streaming messages
const allMessages = useMemo(() => {
const processedInitial = processInitialMessages(initialMessages);
// Build a set of message keys for deduplication
const seenKeys = new Set<string>();
const result: ChatMessageData[] = [];
// Add processed initial messages first
for (const msg of processedInitial) {
const key = getMessageKey(msg);
if (!seenKeys.has(key)) {
seenKeys.add(key);
result.push(msg);
}
}
// Add AI SDK messages that aren't duplicates
for (const msg of aiSdkMessages) {
const key = getMessageKey(msg);
if (!seenKeys.has(key)) {
seenKeys.add(key);
result.push(msg);
}
}
return result;
}, [initialMessages, aiSdkMessages]);
// Handle initial prompt
useEffect(
function handleInitialPrompt() {
if (!initialPrompt || !sessionId) return;
if (initialMessages.length > 0) return;
if (hasSentInitialPrompt(sessionId)) return;
markInitialPromptSent(sessionId);
const context = capturePageContext();
sendMessageRef.current?.(initialPrompt, true, context);
},
[initialPrompt, sessionId, initialMessages.length, capturePageContext],
);
// Send message with page context
async function sendMessageWithContext(
content: string,
isUserMessage: boolean = true,
) {
const context = capturePageContext();
await sendMessage(content, isUserMessage, context);
}
function handleRegionModalOpenChange(open: boolean) {
setIsRegionBlockedModalOpen(open);
}
function handleRegionModalClose() {
setIsRegionBlockedModalOpen(false);
}
return {
messages: filterAuthMessages(allMessages),
streamingChunks,
isStreaming,
error,
isRegionBlockedModalOpen,
setIsRegionBlockedModalOpen,
sendMessageWithContext,
handleRegionModalOpenChange,
handleRegionModalClose,
sendMessage,
stopStreaming,
};
}
// Helper to generate deduplication key for a message
function getMessageKey(msg: ChatMessageData): string {
if (msg.type === "message") {
return `msg:${msg.role}:${msg.content}`;
} else if (msg.type === "tool_call") {
return `toolcall:${msg.toolId}`;
} else if (msg.type === "tool_response") {
return `toolresponse:${(msg as { toolId?: string }).toolId}`;
} else if (
msg.type === "operation_started" ||
msg.type === "operation_pending" ||
msg.type === "operation_in_progress"
) {
const typedMsg = msg as {
toolId?: string;
operationId?: string;
toolCallId?: string;
toolName?: string;
};
return `op:${typedMsg.toolId || typedMsg.operationId || typedMsg.toolCallId || ""}:${typedMsg.toolName || ""}`;
} else {
return `${msg.type}:${JSON.stringify(msg).slice(0, 100)}`;
}
}
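The first-wins deduplication performed in `allMessages` can be sketched in isolation. The `SimpleMessage`, `messageKey`, and `mergeUnique` names below are illustrative stand-ins, not part of the hook:

```typescript
// Minimal sketch of the merge above: initial history and AI SDK stream
// messages are concatenated, keeping only the first occurrence of each key.
type SimpleMessage = { type: "message"; role: string; content: string };

function messageKey(msg: SimpleMessage): string {
  return `msg:${msg.role}:${msg.content}`;
}

function mergeUnique(...lists: SimpleMessage[][]): SimpleMessage[] {
  const seen = new Set<string>();
  const result: SimpleMessage[] = [];
  for (const list of lists) {
    for (const msg of list) {
      const key = messageKey(msg);
      if (!seen.has(key)) {
        seen.add(key);
        result.push(msg);
      }
    }
  }
  return result;
}

const initial: SimpleMessage[] = [
  { type: "message", role: "user", content: "hi" },
];
const streamed: SimpleMessage[] = [
  { type: "message", role: "user", content: "hi" }, // duplicate, dropped
  { type: "message", role: "assistant", content: "hello!" },
];
console.log(mergeUnique(initial, streamed).length); // 2
```

Because the seeded AI SDK messages originate from the same `initialMessages`, stable keys are what prevent every historical message from rendering twice.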


@@ -57,6 +57,7 @@ export function ChatInput({
isStreaming,
value,
baseHandleKeyDown,
inputId,
});
return (


@@ -15,6 +15,7 @@ interface Args {
isStreaming?: boolean;
value: string;
baseHandleKeyDown: (event: KeyboardEvent<HTMLTextAreaElement>) => void;
inputId?: string;
}
export function useVoiceRecording({
@@ -23,6 +24,7 @@ export function useVoiceRecording({
isStreaming = false,
value,
baseHandleKeyDown,
inputId,
}: Args) {
const [isRecording, setIsRecording] = useState(false);
const [isTranscribing, setIsTranscribing] = useState(false);
@@ -103,7 +105,7 @@ export function useVoiceRecording({
setIsTranscribing(false);
}
},
[handleTranscription],
[handleTranscription, inputId],
);
const stopRecording = useCallback(() => {
@@ -201,6 +203,15 @@ export function useVoiceRecording({
}
}, [error, toast]);
useEffect(() => {
if (!isTranscribing && inputId) {
const inputElement = document.getElementById(inputId);
if (inputElement) {
inputElement.focus();
}
}
}, [isTranscribing, inputId]);
const handleKeyDown = useCallback(
(event: KeyboardEvent<HTMLTextAreaElement>) => {
if (event.key === " " && !value.trim() && !isTranscribing) {


@@ -0,0 +1,421 @@
"use client";
/**
* useAiSdkChat - Vercel AI SDK integration for CoPilot Chat
*
* This hook wraps @ai-sdk/react's useChat to provide:
* - Streaming chat with the existing Python backend (already AI SDK protocol compatible)
* - Integration with existing session management
* - Custom tool response parsing for AutoGPT-specific types
* - Page context injection
*
* The Python backend already implements the AI SDK Data Stream Protocol (v1),
* so this hook can communicate directly without any backend changes.
*/
import { useChat as useAiSdkChatBase } from "@ai-sdk/react";
import { DefaultChatTransport, type UIMessage } from "ai";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { toast } from "sonner";
import type { ChatMessageData } from "./components/ChatMessage/useChatMessage";
// Tool response types from the backend
type OperationType =
| "operation_started"
| "operation_pending"
| "operation_in_progress";
interface ToolOutputBase {
type: string;
[key: string]: unknown;
}
interface UseAiSdkChatOptions {
sessionId: string | null;
initialMessages?: UIMessage[];
onOperationStarted?: () => void;
onStreamingChange?: (isStreaming: boolean) => void;
}
/**
* Parse tool output from AI SDK message parts into ChatMessageData format
*/
function parseToolOutput(
toolCallId: string,
toolName: string,
output: unknown,
): ChatMessageData | null {
if (!output) return null;
let parsed: ToolOutputBase;
try {
parsed =
typeof output === "string"
? JSON.parse(output)
: (output as ToolOutputBase);
} catch {
return null;
}
const type = parsed.type;
// Handle operation status types
if (
type === "operation_started" ||
type === "operation_pending" ||
type === "operation_in_progress"
) {
return {
type: type as OperationType,
toolId: toolCallId,
toolName: toolName,
operationId: (parsed.operation_id as string) || undefined,
message: (parsed.message as string) || undefined,
timestamp: new Date(),
} as ChatMessageData;
}
// Handle agent carousel
if (type === "agent_carousel" && Array.isArray(parsed.agents)) {
return {
type: "agent_carousel",
toolId: toolCallId,
toolName: toolName,
agents: parsed.agents,
timestamp: new Date(),
} as ChatMessageData;
}
// Handle execution started
if (type === "execution_started") {
return {
type: "execution_started",
toolId: toolCallId,
toolName: toolName,
graphId: parsed.graph_id as string,
graphVersion: parsed.graph_version as number,
graphExecId: parsed.graph_exec_id as string,
nodeExecIds: parsed.node_exec_ids as string[],
timestamp: new Date(),
} as ChatMessageData;
}
// Handle error responses
if (type === "error") {
return {
type: "tool_response",
toolId: toolCallId,
toolName: toolName,
result: parsed,
success: false,
timestamp: new Date(),
} as ChatMessageData;
}
// Handle clarification questions
if (type === "clarification_questions" && Array.isArray(parsed.questions)) {
return {
type: "clarification_questions",
toolId: toolCallId,
toolName: toolName,
questions: parsed.questions,
timestamp: new Date(),
} as ChatMessageData;
}
// Handle credentials needed
if (type === "credentials_needed" || type === "setup_requirements") {
const credentials = parsed.credentials as
| Array<{
provider: string;
provider_name: string;
credential_type: string;
scopes?: string[];
}>
| undefined;
if (credentials && credentials.length > 0) {
return {
type: "credentials_needed",
toolId: toolCallId,
toolName: toolName,
credentials: credentials,
timestamp: new Date(),
} as ChatMessageData;
}
}
// Default: generic tool response
return {
type: "tool_response",
toolId: toolCallId,
toolName: toolName,
result: parsed,
success: true,
timestamp: new Date(),
} as ChatMessageData;
}
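The contract `parseToolOutput` relies on is a JSON payload carrying a `type` discriminator. A standalone sketch of the operation-status branch; the payload field names mirror the function above, while `parseOperationStatus` itself is illustrative:

```typescript
// Map a raw tool-output string to an operation-status record, mirroring the
// operation_* branch of parseToolOutput above.
interface OperationStatus {
  type: "operation_started" | "operation_pending" | "operation_in_progress";
  toolCallId: string;
  operationId?: string;
}

function parseOperationStatus(
  toolCallId: string,
  raw: string,
): OperationStatus | null {
  let payload: { type?: string; operation_id?: string };
  try {
    payload = JSON.parse(raw);
  } catch {
    return null; // non-JSON output falls through to the generic branch instead
  }
  if (
    payload.type === "operation_started" ||
    payload.type === "operation_pending" ||
    payload.type === "operation_in_progress"
  ) {
    return {
      type: payload.type,
      toolCallId,
      operationId: payload.operation_id,
    };
  }
  return null;
}

const status = parseOperationStatus(
  "call_1",
  '{"type":"operation_started","operation_id":"op_9"}',
);
console.log(status?.operationId); // op_9
```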
/**
* Convert AI SDK UIMessage parts to ChatMessageData array
*/
function convertMessageToChatData(message: UIMessage): ChatMessageData[] {
const result: ChatMessageData[] = [];
for (const part of message.parts) {
switch (part.type) {
case "text":
if (part.text.trim()) {
result.push({
type: "message",
role: message.role as "user" | "assistant",
content: part.text,
timestamp: new Date(message.createdAt || Date.now()),
});
}
break;
default:
// Handle tool parts (tool-*)
if (part.type.startsWith("tool-")) {
const toolPart = part as {
type: string;
toolCallId: string;
toolName: string;
state: string;
input?: Record<string, unknown>;
output?: unknown;
};
// Show tool call in progress
if (
toolPart.state === "input-streaming" ||
toolPart.state === "input-available"
) {
result.push({
type: "tool_call",
toolId: toolPart.toolCallId,
toolName: toolPart.toolName,
arguments: toolPart.input || {},
timestamp: new Date(),
});
}
// Parse tool output when available
if (
toolPart.state === "output-available" &&
toolPart.output !== undefined
) {
const parsed = parseToolOutput(
toolPart.toolCallId,
toolPart.toolName,
toolPart.output,
);
if (parsed) {
result.push(parsed);
}
}
// Handle tool errors
if (toolPart.state === "output-error") {
result.push({
type: "tool_response",
toolId: toolPart.toolCallId,
toolName: toolPart.toolName,
// Use "result" to match the shape of other tool_response messages
result: {
type: "error",
message: (toolPart as { errorText?: string }).errorText,
},
success: false,
timestamp: new Date(),
} as ChatMessageData);
}
}
break;
}
}
return result;
}
export function useAiSdkChat({
sessionId,
initialMessages = [],
onOperationStarted,
onStreamingChange,
}: UseAiSdkChatOptions) {
const [isRegionBlockedModalOpen, setIsRegionBlockedModalOpen] =
useState(false);
const previousSessionIdRef = useRef<string | null>(null);
const hasNotifiedOperationRef = useRef<Set<string>>(new Set());
// Create transport with session-specific endpoint
const transport = useMemo(() => {
if (!sessionId) return undefined;
return new DefaultChatTransport({
api: `/api/chat/sessions/${sessionId}/stream`,
headers: {
"Content-Type": "application/json",
},
});
}, [sessionId]);
const {
messages: aiMessages,
status,
error,
stop,
setMessages,
sendMessage: aiSendMessage,
} = useAiSdkChatBase({
transport,
initialMessages,
onError: (err) => {
console.error("[useAiSdkChat] Error:", err);
// Check for region blocking
if (
err.message?.toLowerCase().includes("not available in your region") ||
(err as { code?: string }).code === "MODEL_NOT_AVAILABLE_REGION"
) {
setIsRegionBlockedModalOpen(true);
return;
}
toast.error("Chat Error", {
description: err.message || "An error occurred",
});
},
onFinish: ({ message }) => {
console.info("[useAiSdkChat] Message finished:", {
id: message.id,
partsCount: message.parts.length,
});
},
});
// Track streaming status
const isStreaming = status === "streaming" || status === "submitted";
// Notify parent of streaming changes
useEffect(() => {
onStreamingChange?.(isStreaming);
}, [isStreaming, onStreamingChange]);
// Handle session changes - reset state
useEffect(() => {
if (sessionId === previousSessionIdRef.current) return;
if (previousSessionIdRef.current && status === "streaming") {
stop();
}
previousSessionIdRef.current = sessionId;
hasNotifiedOperationRef.current = new Set();
if (sessionId) {
setMessages(initialMessages);
}
}, [sessionId, status, stop, setMessages, initialMessages]);
// Convert AI SDK messages to ChatMessageData format
const messages = useMemo(() => {
const result: ChatMessageData[] = [];
for (const message of aiMessages) {
const converted = convertMessageToChatData(message);
result.push(...converted);
// Check for operation_started and notify
for (const msg of converted) {
if (
msg.type === "operation_started" &&
!hasNotifiedOperationRef.current.has(
(msg as { toolId?: string }).toolId || "",
)
) {
hasNotifiedOperationRef.current.add(
(msg as { toolId?: string }).toolId || "",
);
onOperationStarted?.();
}
}
}
return result;
}, [aiMessages, onOperationStarted]);
// Get streaming text chunks from the last assistant message
const streamingChunks = useMemo(() => {
if (!isStreaming) return [];
const lastMessage = aiMessages[aiMessages.length - 1];
if (!lastMessage || lastMessage.role !== "assistant") return [];
const chunks: string[] = [];
for (const part of lastMessage.parts) {
if (part.type === "text" && part.text) {
chunks.push(part.text);
}
}
return chunks;
}, [aiMessages, isStreaming]);
// Send message with optional context
const sendMessage = useCallback(
async (
content: string,
isUserMessage: boolean = true,
context?: { url: string; content: string },
) => {
if (!sessionId || !transport) {
console.error("[useAiSdkChat] Cannot send message: no session");
return;
}
setIsRegionBlockedModalOpen(false);
try {
await aiSendMessage(
{ text: content },
{
body: {
is_user_message: isUserMessage,
context: context || null,
},
},
);
} catch (err) {
console.error("[useAiSdkChat] Failed to send message:", err);
if (err instanceof Error && err.name === "AbortError") return;
toast.error("Failed to send message", {
description:
err instanceof Error ? err.message : "Failed to send message",
});
}
},
[sessionId, transport, aiSendMessage],
);
// Stop streaming
const stopStreaming = useCallback(() => {
stop();
}, [stop]);
return {
messages,
streamingChunks,
isStreaming,
error,
status,
isRegionBlockedModalOpen,
setIsRegionBlockedModalOpen,
sendMessage,
stopStreaming,
// Expose raw AI SDK state for advanced use cases
aiMessages,
setAiMessages: setMessages,
};
}
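The `streamingChunks` derivation above is simple enough to sketch standalone: collect the text parts of the last assistant message while a stream is in flight. The `Part` and `Msg` shapes below are simplified stand-ins for the AI SDK types:

```typescript
// Simplified sketch of streamingChunks: only the trailing assistant message
// contributes text while streaming; otherwise no chunks are shown.
type Part = { type: string; text?: string };
type Msg = { role: "user" | "assistant"; parts: Part[] };

function streamingChunksOf(messages: Msg[], isStreaming: boolean): string[] {
  if (!isStreaming) return [];
  const last = messages[messages.length - 1];
  if (!last || last.role !== "assistant") return [];
  const chunks: string[] = [];
  for (const part of last.parts) {
    if (part.type === "text" && part.text) {
      chunks.push(part.text);
    }
  }
  return chunks;
}

const msgs: Msg[] = [
  { role: "user", parts: [{ type: "text", text: "hi" }] },
  { role: "assistant", parts: [{ type: "text", text: "Working on it" }] },
];
console.log(streamingChunksOf(msgs, true)); // ["Working on it"]
console.log(streamingChunksOf(msgs, false)); // []
```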


@@ -65,7 +65,7 @@ The result routes data to yes_output or no_output, enabling intelligent branchin
| condition | A plaintext English description of the condition to evaluate | str | Yes |
| yes_value | (Optional) Value to output if the condition is true. If not provided, input_value will be used. | Yes Value | No |
| no_value | (Optional) Value to output if the condition is false. If not provided, input_value will be used. | No Value | No |
| model | The language model to use for evaluating the condition. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| 
"Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for evaluating the condition. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| 
"Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
### Outputs
@@ -103,7 +103,7 @@ The block sends the entire conversation history to the chosen LLM, including sys
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | No |
| messages | List of messages in the conversation. | List[Any] | Yes |
| model | The language model to use for the conversation. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| 
"Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for the conversation. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| 
"Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
| ollama_host | Ollama host for local models | str | No |
@@ -257,7 +257,7 @@ The block formulates a prompt based on the given focus or source data, sends it
|-------|-------------|------|----------|
| focus | The focus of the list to generate. | str | No |
| source_data | The data to generate the list from. | str | No |
| model | The language model to use for generating the list. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| 
"Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for generating the list. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| max_retries | Maximum number of retries for generating a valid list. | int | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
@@ -424,7 +424,7 @@ The block sends the input prompt to a chosen LLM, along with any system prompts
| prompt | The prompt to send to the language model. | str | Yes |
| expected_format | Expected format of the response. If provided, the response will be validated against this format. The keys should be the expected fields in the response, and the values should be the description of the field. | Dict[str, str] | Yes |
| list_result | Whether the response should be a list of objects in the expected format. | bool | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, Any]] | No |
@@ -464,7 +464,7 @@ The block sends the input prompt to a chosen LLM, processes the response, and re
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. You can use any of the {keys} from Prompt Values to fill in the prompt with values from the prompt values dictionary by putting them in curly braces. | str | Yes |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| retry | Number of times to retry the LLM call if the response does not match the expected format. | int | No |
| prompt_values | Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}. | Dict[str, str] | No |
@@ -501,7 +501,7 @@ The block splits the input text into smaller chunks, sends each chunk to an LLM
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| text | The text to summarize. | str | Yes |
| model | The language model to use for summarizing the text. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for summarizing the text. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| focus | The topic to focus on in the summary | str | No |
| style | The style of the summary to generate. | "concise" \| "detailed" \| "bullet points" \| "numbered list" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
@@ -763,7 +763,7 @@ Configure agent_mode_max_iterations to control loop behavior: 0 for single decis
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | Yes |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| multiple_tool_calls | Whether to allow multiple tool calls in a single response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, Any]] | No |
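The table rows above show `claude-3-7-sonnet-20250219` being dropped from every `model` union. A minimal sketch of the "defensive handling" this kind of removal implies, assuming a simple lookup-based fallback (the function and mapping names here are illustrative, not the platform's actual code):

```python
# Hypothetical fallback for retired model IDs: stored agent graphs may still
# reference a model that was removed from the enum, so substitute a supported
# successor instead of failing the LLM call. The chosen replacement below is
# an assumption for illustration.
RETIRED_MODEL_FALLBACKS = {
    # Retired by Anthropic on 2026-02-19; removed from the model unions above.
    "claude-3-7-sonnet-20250219": "claude-sonnet-4-5-20250929",
}

def resolve_model(model: str) -> str:
    """Return a supported model ID, substituting retired ones."""
    return RETIRED_MODEL_FALLBACKS.get(model, model)

print(resolve_model("claude-3-7-sonnet-20250219"))  # claude-sonnet-4-5-20250929
print(resolve_model("gpt-4o"))                      # gpt-4o (unchanged)
```

Models absent from the mapping pass through untouched, so current configurations are unaffected.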


@@ -20,7 +20,7 @@ Configure timeouts for DOM settlement and page loading. Variables can be passed
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| browserbase_project_id | Browserbase project ID (required if using Browserbase) | str | Yes |
| model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-3-7-sonnet-20250219" | No |
| model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-sonnet-4-5-20250929" | No |
| url | URL to navigate to. | str | Yes |
| action | Action to perform. Suggested actions are: click, fill, type, press, scroll, select from dropdown. For multi-step actions, add an entry for each step. | List[str] | Yes |
| variables | Variables to use in the action. Variables contains data you want the action to use. | Dict[str, str] | No |
@@ -65,7 +65,7 @@ Supports searching within iframes and configurable timeouts for dynamic content
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| browserbase_project_id | Browserbase project ID (required if using Browserbase) | str | Yes |
| model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-3-7-sonnet-20250219" | No |
| model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-sonnet-4-5-20250929" | No |
| url | URL to navigate to. | str | Yes |
| instruction | Natural language description of elements or actions to discover. | str | Yes |
| iframes | Whether to search within iframes. If True, Stagehand will search for actions within iframes. | bool | No |
@@ -106,7 +106,7 @@ Use this to explore a page's interactive elements before building automated work
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| browserbase_project_id | Browserbase project ID (required if using Browserbase) | str | Yes |
| model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-3-7-sonnet-20250219" | No |
| model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-sonnet-4-5-20250929" | No |
| url | URL to navigate to. | str | Yes |
| instruction | Natural language description of elements or actions to discover. | str | Yes |
| iframes | Whether to search within iframes. If True, Stagehand will search for actions within iframes. | bool | No |
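The Stagehand blocks above constrain `model` to three literal values, with `claude-sonnet-4-5-20250929` replacing the retired entry. A sketch of validating that input against the updated set, assuming a plain membership check (names and error text are illustrative, not the platform's actual code):

```python
# Allowed values mirror the updated Stagehand `model` rows above.
STAGEHAND_MODELS = {
    "gpt-4.1-2025-04-14",
    "gpt-4.1-mini-2025-04-14",
    "claude-sonnet-4-5-20250929",  # replaces claude-3-7-sonnet-20250219
}

def check_stagehand_model(model: str) -> str:
    """Reject model IDs outside the Stagehand-supported set."""
    if model not in STAGEHAND_MODELS:
        raise ValueError(
            f"Unsupported Stagehand model {model!r}; "
            f"choose one of {sorted(STAGEHAND_MODELS)}"
        )
    return model

print(check_stagehand_model("claude-sonnet-4-5-20250929"))  # passes through
```

Passing the retired ID would now raise, which surfaces the needed migration instead of a failing browser-automation run.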