Compare commits


7 Commits

Author SHA1 Message Date
claude[bot]
657190e759 fix(frontend): address latest CodeRabbit review suggestions
- Use valid sort value "runs" instead of undefined in MainSearchResultPage
  test defaultProps to match production default and satisfy type contract
- Remove redundant marketplacePage.goto() navigation in E2E test since
  the page is already at /marketplace after login

Co-authored-by: Ubbe <0ubbe@users.noreply.github.com>
2026-02-12 15:02:18 +00:00
claude[bot]
caabee9278 fix(frontend): address CodeRabbit review suggestions for marketplace tests
- Fix filename typo: supress → suppress and update imports
- Replace waitFor + getByText/getByRole with findByText/findByRole (idiomatic RTL async queries)
- Remove unnecessary comments in test files per coding guidelines
- Fix operator precedence with explicit parentheses in suppress helper
- Remove redundant `undefined as undefined` type casts
- Extract inline props to `interface Props` in MockOnboardingProvider
- Widen body type in create-500-handler from Record<string,unknown> to unknown
- Add isValidating reset in mock-supabase-auth helpers
- Add missing creators MSW handler in no-results tests
- Clean up vitest.setup.tsx: replace nested afterAll with module-scoped variable
- Fix lint errors: unused imports (act, matchesUrl) and unused params
- Fix formatting in custom-mutator.ts

Co-authored-by: Ubbe <0ubbe@users.noreply.github.com>
2026-02-12 14:36:34 +00:00
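One of the items above — explicit parentheses in the suppress helper — is easy to illustrate in isolation. The sketch below is hypothetical (names and signature are illustrative, not the actual helper from this PR): in JavaScript/TypeScript, `a && b || c` already parses as `(a && b) || c`, so the parentheses change nothing at runtime, but they make the intended grouping unambiguous to readers and linters.

```typescript
// Hypothetical suppress-style predicate. The parentheses are redundant to the
// parser (&& binds tighter than ||) but document the intended grouping.
function shouldSuppress(
  matchesPattern: boolean,
  isKnownNoise: boolean,
  forceSuppress: boolean,
): boolean {
  return (matchesPattern && isKnownNoise) || forceSuppress;
}
```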
Otto
0fcaa63162 style(frontend): fix formatting in marketplace integration tests 2026-01-30 06:34:39 +00:00
Abhimanyu Yadav
6299045f98 Merge branch 'dev' into abhi/marketplace-integration-tests 2026-01-30 11:42:52 +05:30
Otto
24cd34ed3f refactor(frontend): reorganize marketplace integration tests into file-specific locations
- Split main.test.tsx files into dedicated test files:
  - rendering.test.tsx - Component rendering tests
  - auth-state.test.tsx - Authentication state tests
  - error-handling.test.tsx - API error handling tests

- Add new test files:
  - loading-state.test.tsx - Loading skeleton tests
  - empty-state.test.tsx - Empty data handling tests
  - no-results.test.tsx - Search with no results tests

Test coverage:
- MainMarketplacePage: 14 tests (5 files)
- MainAgentPage: 13 tests (3 files)
- MainCreatorPage: 10 tests (3 files)
- MainSearchResultPage: 11 tests (4 files)
- Total: 48 tests across 15 files
2026-01-30 06:11:53 +00:00
abhi1992002
876c6677de fix(frontend): enhance testing and error handling in marketplace components
### Changes 🏗️
- Updated `MainMarketplacePage` tests to include rendering checks for various sections and error handling for API failures.
- Improved `AgentInfo` component to filter out NaN values from version numbers.
- Modified `customMutator` to conditionally log errors based on the environment.
- Enhanced Vitest configuration for better integration testing setup.
- Refactored existing tests for marketplace agents and creators to focus on cross-page flows.

### Checklist 📋
- [x] Verified that all tests pass with the new changes.
- [x] Ensured comprehensive coverage for error handling scenarios in tests.
- [x] Updated documentation for testing practices in `CLAUDE.md`.
2026-01-23 12:26:00 +05:30
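The NaN-filtering change to `AgentInfo` described above can be sketched as a small pure function. This is an illustrative sketch only — the function name and input shape are assumptions, not the component's actual code:

```typescript
// Hypothetical sketch: parse raw version values and drop anything that does
// not parse to a number, so NaN never reaches the rendered version list.
function validVersionNumbers(raw: Array<string | number>): number[] {
  return raw
    .map((v) => (typeof v === "number" ? v : Number.parseInt(v, 10)))
    .filter((v) => !Number.isNaN(v));
}
```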
abhi1992002
3e3af45456 fix(frontend): update testing setup with @testing-library/jest-dom and happy-dom
### Changes 🏗️
- Removed `happy-dom` from `devDependencies` and added it back in a different section for clarity.
- Added `@testing-library/jest-dom` to `devDependencies` for improved testing assertions.
- Updated `tsconfig.json` to include types for `@testing-library/jest-dom`.
- Configured Vitest to enable global variables for testing.
- Imported `@testing-library/jest-dom` in the Vitest setup file for enhanced testing capabilities.

### Checklist 📋
- [x] Verified that all tests pass with the new setup.
- [x] Ensured that the testing environment is correctly configured for integration tests.
2026-01-23 10:07:36 +05:30
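The setup described in this commit typically involves two pieces: a Vitest config that enables globals and points at a setup file, and the setup file importing `@testing-library/jest-dom` to register DOM matchers (such as `toBeInTheDocument`) on `expect`. The excerpt below is illustrative wiring under those assumptions; exact file names and options in the PR may differ.

```typescript
// vitest.config.ts (illustrative excerpt)
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    globals: true, // expose describe/test/expect without explicit imports
    environment: "happy-dom", // DOM implementation for component tests
    setupFiles: ["./vitest.setup.tsx"], // imports "@testing-library/jest-dom"
  },
});
```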
57 changed files with 1094 additions and 1875 deletions

View File

@@ -1,11 +1,10 @@
"""Shared agent search functionality for find_agent and find_library_agent tools."""
import logging
from typing import Any, Literal
from typing import Literal
from backend.api.features.library import db as library_db
from backend.api.features.store import db as store_db
from backend.data.graph import get_graph
from backend.util.exceptions import DatabaseError, NotFoundError
from .models import (
@@ -15,39 +14,12 @@ from .models import (
NoResultsResponse,
ToolResponseBase,
)
from .utils import fetch_graph_from_store_slug
logger = logging.getLogger(__name__)
SearchSource = Literal["marketplace", "library"]
async def _fetch_input_schema_for_store_agent(
creator: str, slug: str
) -> dict[str, Any] | None:
"""Fetch input schema for a marketplace agent. Returns None on error."""
try:
graph, _ = await fetch_graph_from_store_slug(creator, slug)
if graph and graph.input_schema:
return graph.input_schema.get("properties", {})
except Exception as e:
logger.debug(f"Could not fetch input schema for {creator}/{slug}: {e}")
return None
async def _fetch_input_schema_for_library_agent(
graph_id: str, graph_version: int, user_id: str
) -> dict[str, Any] | None:
"""Fetch input schema for a library agent. Returns None on error."""
try:
graph = await get_graph(graph_id, graph_version, user_id=user_id)
if graph and graph.input_schema:
return graph.input_schema.get("properties", {})
except Exception as e:
logger.debug(f"Could not fetch input schema for graph {graph_id}: {e}")
return None
async def search_agents(
query: str,
source: SearchSource,
@@ -83,10 +55,6 @@ async def search_agents(
logger.info(f"Searching marketplace for: {query}")
results = await store_db.get_store_agents(search_query=query, page_size=5)
for agent in results.agents:
# Fetch input schema for this agent
inputs = await _fetch_input_schema_for_store_agent(
agent.creator, agent.slug
)
agents.append(
AgentInfo(
id=f"{agent.creator}/{agent.slug}",
@@ -99,7 +67,6 @@ async def search_agents(
rating=agent.rating,
runs=agent.runs,
is_featured=False,
inputs=inputs,
)
)
else: # library
@@ -110,10 +77,6 @@ async def search_agents(
page_size=10,
)
for agent in results.agents:
# Fetch input schema for this agent
inputs = await _fetch_input_schema_for_library_agent(
agent.graph_id, agent.graph_version, user_id # type: ignore[arg-type]
)
agents.append(
AgentInfo(
id=agent.id,
@@ -127,7 +90,6 @@ async def search_agents(
has_external_trigger=agent.has_external_trigger,
new_output=agent.new_output,
graph_id=agent.graph_id,
inputs=inputs,
)
)
logger.info(f"Found {len(agents)} agents in {source}")

View File

@@ -68,10 +68,6 @@ class AgentInfo(BaseModel):
has_external_trigger: bool | None = None
new_output: bool | None = None
graph_id: str | None = None
inputs: dict[str, Any] | None = Field(
default=None,
description="Input schema for the agent (properties from input_schema)",
)
class AgentsFoundResponse(ToolResponseBase):

View File

@@ -273,27 +273,6 @@ class RunAgentTool(BaseTool):
input_properties = graph.input_schema.get("properties", {})
required_fields = set(graph.input_schema.get("required", []))
provided_inputs = set(params.inputs.keys())
valid_fields = set(input_properties.keys())
# Check for unknown fields - reject early with helpful message
unknown_fields = provided_inputs - valid_fields
if unknown_fields:
valid_list = ", ".join(sorted(valid_fields)) if valid_fields else "none"
return AgentDetailsResponse(
message=(
f"Unknown input field(s) provided: {', '.join(sorted(unknown_fields))}. "
f"Valid fields for '{graph.name}': {valid_list}. "
"Please check the field names and try again."
),
session_id=session_id,
agent=self._build_agent_details(
graph,
extract_credentials_from_schema(graph.credentials_input_schema),
),
user_authenticated=True,
graph_id=graph.id,
graph_version=graph.version,
)
# If agent has inputs but none were provided AND use_defaults is not set,
# always show what's available first so user can decide

View File

@@ -8,7 +8,7 @@ from backend.api.features.library import model as library_model
from backend.api.features.store import db as store_db
from backend.data import graph as graph_db
from backend.data.graph import GraphModel
from backend.data.model import Credentials, CredentialsFieldInfo, CredentialsMetaInput
from backend.data.model import CredentialsFieldInfo, CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.util.exceptions import NotFoundError
@@ -266,14 +266,13 @@ async def match_user_credentials_to_graph(
credential_requirements,
_node_fields,
) in aggregated_creds.items():
# Find first matching credential by provider, type, and scopes
# Find first matching credential by provider and type
matching_cred = next(
(
cred
for cred in available_creds
if cred.provider in credential_requirements.provider
and cred.type in credential_requirements.supported_types
and _credential_has_required_scopes(cred, credential_requirements)
),
None,
)
@@ -297,17 +296,10 @@ async def match_user_credentials_to_graph(
f"{credential_field_name} (validation failed: {e})"
)
else:
# Build a helpful error message including scope requirements
error_parts = [
f"provider in {list(credential_requirements.provider)}",
f"type in {list(credential_requirements.supported_types)}",
]
if credential_requirements.required_scopes:
error_parts.append(
f"scopes including {list(credential_requirements.required_scopes)}"
)
missing_creds.append(
f"{credential_field_name} (requires {', '.join(error_parts)})"
f"{credential_field_name} "
f"(requires provider in {list(credential_requirements.provider)}, "
f"type in {list(credential_requirements.supported_types)})"
)
logger.info(
@@ -317,28 +309,6 @@ async def match_user_credentials_to_graph(
return graph_credentials_inputs, missing_creds
def _credential_has_required_scopes(
credential: Credentials,
requirements: CredentialsFieldInfo,
) -> bool:
"""
Check if a credential has all the scopes required by the block.
For OAuth2 credentials, verifies that the credential's scopes are a superset
of the required scopes. For other credential types, returns True (no scope check).
"""
# Only OAuth2 credentials have scopes to check
if credential.type != "oauth2":
return True
# If no scopes are required, any credential matches
if not requirements.required_scopes:
return True
# Check that credential scopes are a superset of required scopes
return set(credential.scopes).issuperset(requirements.required_scopes)
async def check_user_has_required_credentials(
user_id: str,
required_credentials: list[CredentialsMetaInput],

View File

@@ -115,6 +115,7 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
CLAUDE_4_5_OPUS = "claude-opus-4-5-20251101"
CLAUDE_4_5_SONNET = "claude-sonnet-4-5-20250929"
CLAUDE_4_5_HAIKU = "claude-haiku-4-5-20251001"
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
CLAUDE_3_HAIKU = "claude-3-haiku-20240307"
# AI/ML API models
AIML_API_QWEN2_5_72B = "Qwen/Qwen2.5-72B-Instruct-Turbo"
@@ -279,6 +280,9 @@ MODEL_METADATA = {
LlmModel.CLAUDE_4_5_HAIKU: ModelMetadata(
"anthropic", 200000, 64000, "Claude Haiku 4.5", "Anthropic", "Anthropic", 2
), # claude-haiku-4-5-20251001
LlmModel.CLAUDE_3_7_SONNET: ModelMetadata(
"anthropic", 200000, 64000, "Claude 3.7 Sonnet", "Anthropic", "Anthropic", 2
), # claude-3-7-sonnet-20250219
LlmModel.CLAUDE_3_HAIKU: ModelMetadata(
"anthropic", 200000, 4096, "Claude 3 Haiku", "Anthropic", "Anthropic", 1
), # claude-3-haiku-20240307

View File

@@ -83,7 +83,7 @@ class StagehandRecommendedLlmModel(str, Enum):
GPT41_MINI = "gpt-4.1-mini-2025-04-14"
# Anthropic
CLAUDE_4_5_SONNET = "claude-sonnet-4-5-20250929"
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
@property
def provider_name(self) -> str:
@@ -137,7 +137,7 @@ class StagehandObserveBlock(Block):
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_4_5_SONNET,
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
@@ -230,7 +230,7 @@ class StagehandActBlock(Block):
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_4_5_SONNET,
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()
@@ -330,7 +330,7 @@ class StagehandExtractBlock(Block):
model: StagehandRecommendedLlmModel = SchemaField(
title="LLM Model",
description="LLM to use for Stagehand (provider is inferred)",
default=StagehandRecommendedLlmModel.CLAUDE_4_5_SONNET,
default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
advanced=False,
)
model_credentials: AICredentials = AICredentialsField()

View File

@@ -81,6 +81,7 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.CLAUDE_4_5_HAIKU: 4,
LlmModel.CLAUDE_4_5_OPUS: 14,
LlmModel.CLAUDE_4_5_SONNET: 9,
LlmModel.CLAUDE_3_7_SONNET: 5,
LlmModel.CLAUDE_3_HAIKU: 1,
LlmModel.AIML_API_QWEN2_5_72B: 1,
LlmModel.AIML_API_LLAMA3_1_70B: 1,

View File

@@ -666,16 +666,10 @@ class CredentialsFieldInfo(BaseModel, Generic[CP, CT]):
if not (self.discriminator and self.discriminator_mapping):
return self
try:
provider = self.discriminator_mapping[discriminator_value]
except KeyError:
raise ValueError(
f"Model '{discriminator_value}' is not supported. "
"It may have been deprecated. Please update your agent configuration."
)
return CredentialsFieldInfo(
credentials_provider=frozenset([provider]),
credentials_provider=frozenset(
[self.discriminator_mapping[discriminator_value]]
),
credentials_types=self.supported_types,
credentials_scopes=self.required_scopes,
discriminator=self.discriminator,

View File

@@ -1,22 +0,0 @@
-- Migrate Claude 3.7 Sonnet to Claude 4.5 Sonnet
-- This updates all AgentNode blocks that use the deprecated Claude 3.7 Sonnet model
-- Anthropic is retiring claude-3-7-sonnet-20250219 on February 19, 2026
-- Update AgentNode constant inputs
UPDATE "AgentNode"
SET "constantInput" = JSONB_SET(
"constantInput"::jsonb,
'{model}',
'"claude-sonnet-4-5-20250929"'::jsonb
)
WHERE "constantInput"::jsonb->>'model' = 'claude-3-7-sonnet-20250219';
-- Update AgentPreset input overrides (stored in AgentNodeExecutionInputOutput)
UPDATE "AgentNodeExecutionInputOutput"
SET "data" = JSONB_SET(
"data"::jsonb,
'{model}',
'"claude-sonnet-4-5-20250929"'::jsonb
)
WHERE "agentPresetId" IS NOT NULL
AND "data"::jsonb->>'model' = 'claude-3-7-sonnet-20250219';

View File

@@ -132,6 +132,7 @@
"@tanstack/eslint-plugin-query": "5.91.2",
"@tanstack/react-query-devtools": "5.90.2",
"@testing-library/dom": "10.4.1",
"@testing-library/jest-dom": "6.9.1",
"@testing-library/react": "16.3.2",
"@types/canvas-confetti": "1.9.0",
"@types/lodash": "4.17.20",

View File

@@ -312,6 +312,9 @@ importers:
'@testing-library/dom':
specifier: 10.4.1
version: 10.4.1
'@testing-library/jest-dom':
specifier: 6.9.1
version: 6.9.1
'@testing-library/react':
specifier: 16.3.2
version: 16.3.2(@testing-library/dom@10.4.1)(@types/react-dom@18.3.5(@types/react@18.3.17))(@types/react@18.3.17)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)

View File

@@ -1,185 +0,0 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import { FavoritesSection } from "../FavoritesSection/FavoritesSection";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse } from "msw";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
const mockFavoriteAgent = {
id: "fav-agent-id",
graph_id: "fav-graph-id",
graph_version: 1,
owner_user_id: "test-owner-id",
image_url: null,
creator_name: "Test Creator",
creator_image_url: "https://example.com/avatar.png",
status: "READY",
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
name: "Favorite Agent Name",
description: "Test favorite agent",
input_schema: {},
output_schema: {},
credentials_input_schema: null,
has_external_trigger: false,
has_human_in_the_loop: false,
has_sensitive_action: false,
new_output: false,
can_access_graph: true,
is_latest_version: true,
is_favorite: true,
};
describe("FavoritesSection", () => {
afterEach(() => {
resetAuthState();
});
test("renders favorites section when there are favorites", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [mockFavoriteAgent],
pagination: {
total_items: 1,
total_pages: 1,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<FavoritesSection searchTerm="" />);
await waitFor(() => {
expect(screen.getByText(/favorites/i)).toBeInTheDocument();
});
});
test("renders favorite agent cards", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [mockFavoriteAgent],
pagination: {
total_items: 1,
total_pages: 1,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<FavoritesSection searchTerm="" />);
await waitFor(() => {
expect(screen.getByText("Favorite Agent Name")).toBeInTheDocument();
});
});
test("shows agent count", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [mockFavoriteAgent],
pagination: {
total_items: 1,
total_pages: 1,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<FavoritesSection searchTerm="" />);
await waitFor(() => {
expect(screen.getByTestId("agents-count")).toBeInTheDocument();
});
});
test("does not render when there are no favorites", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
);
const { container } = render(<FavoritesSection searchTerm="" />);
// Wait for loading to complete
await waitFor(() => {
// Component should return null when no favorites
expect(container.textContent).toBe("");
});
});
test("filters favorites based on search term", async () => {
mockAuthenticatedUser();
// Mock that returns different results based on search term
server.use(
http.get("*/api/library/agents/favorites*", ({ request }) => {
const url = new URL(request.url);
const searchTerm = url.searchParams.get("search_term");
if (searchTerm === "nonexistent") {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}
return HttpResponse.json({
agents: [mockFavoriteAgent],
pagination: {
total_items: 1,
total_pages: 1,
current_page: 1,
page_size: 20,
},
});
}),
);
const { rerender } = render(<FavoritesSection searchTerm="" />);
await waitFor(() => {
expect(screen.getByText("Favorite Agent Name")).toBeInTheDocument();
});
// Rerender with search term that yields no results
rerender(<FavoritesSection searchTerm="nonexistent" />);
await waitFor(() => {
expect(screen.queryByText("Favorite Agent Name")).not.toBeInTheDocument();
});
});
});

View File

@@ -1,122 +0,0 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { LibraryAgentCard } from "../LibraryAgentCard/LibraryAgentCard";
import { LibraryAgent } from "@/app/api/__generated__/models/libraryAgent";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
const mockAgent: LibraryAgent = {
id: "test-agent-id",
graph_id: "test-graph-id",
graph_version: 1,
owner_user_id: "test-owner-id",
image_url: null,
creator_name: "Test Creator",
creator_image_url: "https://example.com/avatar.png",
status: "READY",
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
name: "Test Agent Name",
description: "Test agent description",
input_schema: {},
output_schema: {},
credentials_input_schema: null,
has_external_trigger: false,
has_human_in_the_loop: false,
has_sensitive_action: false,
new_output: false,
can_access_graph: true,
is_latest_version: true,
is_favorite: false,
};
describe("LibraryAgentCard", () => {
afterEach(() => {
resetAuthState();
});
test("renders agent name", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
expect(screen.getByText("Test Agent Name")).toBeInTheDocument();
});
test("renders see runs link", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
expect(screen.getByText(/see runs/i)).toBeInTheDocument();
});
test("renders open in builder link when can_access_graph is true", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
expect(screen.getByText(/open in builder/i)).toBeInTheDocument();
});
test("does not render open in builder link when can_access_graph is false", () => {
mockAuthenticatedUser();
const agentWithoutAccess = { ...mockAgent, can_access_graph: false };
render(<LibraryAgentCard agent={agentWithoutAccess} />);
expect(screen.queryByText(/open in builder/i)).not.toBeInTheDocument();
});
test("shows 'FROM MARKETPLACE' label for marketplace agents", () => {
mockAuthenticatedUser();
const marketplaceAgent = {
...mockAgent,
marketplace_listing: {
id: "listing-id",
name: "Marketplace Agent",
slug: "marketplace-agent",
creator: {
id: "creator-id",
name: "Creator Name",
slug: "creator-slug",
},
},
};
render(<LibraryAgentCard agent={marketplaceAgent} />);
expect(screen.getByText(/from marketplace/i)).toBeInTheDocument();
});
test("shows 'Built by you' label for user's own agents", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
expect(screen.getByText(/built by you/i)).toBeInTheDocument();
});
test("renders favorite button", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
// The favorite button should be present (as a heart icon button)
const card = screen.getByTestId("library-agent-card");
expect(card).toBeInTheDocument();
});
test("links to correct agent detail page", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
const link = screen.getByTestId("library-agent-card-see-runs-link");
expect(link).toHaveAttribute("href", "/library/agents/test-agent-id");
});
test("links to correct builder page", () => {
mockAuthenticatedUser();
render(<LibraryAgentCard agent={mockAgent} />);
const builderLink = screen.getByTestId(
"library-agent-card-open-in-builder-link",
);
expect(builderLink).toHaveAttribute("href", "/build?flowID=test-graph-id");
});
});

View File

@@ -1,53 +0,0 @@
import { describe, expect, test, vi } from "vitest";
import { render, screen, fireEvent, waitFor } from "@/tests/integrations/test-utils";
import { LibrarySearchBar } from "../LibrarySearchBar/LibrarySearchBar";
describe("LibrarySearchBar", () => {
test("renders search input", () => {
const setSearchTerm = vi.fn();
render(<LibrarySearchBar setSearchTerm={setSearchTerm} />);
expect(screen.getByPlaceholderText(/search agents/i)).toBeInTheDocument();
});
test("renders search icon", () => {
const setSearchTerm = vi.fn();
const { container } = render(
<LibrarySearchBar setSearchTerm={setSearchTerm} />,
);
// Check for the magnifying glass icon (SVG element)
const searchIcon = container.querySelector("svg");
expect(searchIcon).toBeInTheDocument();
});
test("calls setSearchTerm on input change", async () => {
const setSearchTerm = vi.fn();
render(<LibrarySearchBar setSearchTerm={setSearchTerm} />);
const input = screen.getByPlaceholderText(/search agents/i);
fireEvent.change(input, { target: { value: "test query" } });
// The search bar uses debouncing, so we need to wait
await waitFor(
() => {
expect(setSearchTerm).toHaveBeenCalled();
},
{ timeout: 1000 },
);
});
test("has correct test id", () => {
const setSearchTerm = vi.fn();
render(<LibrarySearchBar setSearchTerm={setSearchTerm} />);
expect(screen.getByTestId("search-bar")).toBeInTheDocument();
});
test("input has correct test id", () => {
const setSearchTerm = vi.fn();
render(<LibrarySearchBar setSearchTerm={setSearchTerm} />);
expect(screen.getByTestId("library-textbox")).toBeInTheDocument();
});
});

View File

@@ -1,53 +0,0 @@
import { describe, expect, test, vi } from "vitest";
import { render, screen, fireEvent, waitFor } from "@/tests/integrations/test-utils";
import { LibrarySortMenu } from "../LibrarySortMenu/LibrarySortMenu";
describe("LibrarySortMenu", () => {
test("renders sort dropdown", () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
expect(screen.getByTestId("sort-by-dropdown")).toBeInTheDocument();
});
test("shows 'sort by' label on larger screens", () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
expect(screen.getByText(/sort by/i)).toBeInTheDocument();
});
test("shows default placeholder text", () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
expect(screen.getByText(/last modified/i)).toBeInTheDocument();
});
test("opens dropdown when clicked", async () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
const trigger = screen.getByRole("combobox");
fireEvent.click(trigger);
await waitFor(() => {
expect(screen.getByText(/creation date/i)).toBeInTheDocument();
});
});
test("shows both sort options in dropdown", async () => {
const setLibrarySort = vi.fn();
render(<LibrarySortMenu setLibrarySort={setLibrarySort} />);
const trigger = screen.getByRole("combobox");
fireEvent.click(trigger);
await waitFor(() => {
expect(screen.getByText(/creation date/i)).toBeInTheDocument();
expect(
screen.getAllByText(/last modified/i).length,
).toBeGreaterThanOrEqual(1);
});
});
});

View File

@@ -1,78 +0,0 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, fireEvent, waitFor } from "@/tests/integrations/test-utils";
import LibraryUploadAgentDialog from "../LibraryUploadAgentDialog/LibraryUploadAgentDialog";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryUploadAgentDialog", () => {
afterEach(() => {
resetAuthState();
});
test("renders upload button", () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
expect(
screen.getByRole("button", { name: /upload agent/i }),
).toBeInTheDocument();
});
test("opens dialog when upload button is clicked", async () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
const uploadButton = screen.getByRole("button", { name: /upload agent/i });
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByText("Upload Agent")).toBeInTheDocument();
});
});
test("dialog contains agent name input", async () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
const uploadButton = screen.getByRole("button", { name: /upload agent/i });
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByLabelText(/agent name/i)).toBeInTheDocument();
});
});
test("dialog contains agent description input", async () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
const uploadButton = screen.getByRole("button", { name: /upload agent/i });
fireEvent.click(uploadButton);
await waitFor(() => {
expect(screen.getByLabelText(/agent description/i)).toBeInTheDocument();
});
});
test("upload button is disabled when form is incomplete", async () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
const triggerButton = screen.getByRole("button", { name: /upload agent/i });
fireEvent.click(triggerButton);
await waitFor(() => {
const submitButton = screen.getByRole("button", { name: /^upload$/i });
expect(submitButton).toBeDisabled();
});
});
test("has correct test id on trigger button", () => {
mockAuthenticatedUser();
render(<LibraryUploadAgentDialog />);
expect(screen.getByTestId("upload-agent-button")).toBeInTheDocument();
});
});

View File

@@ -1,40 +0,0 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import {
mockAuthenticatedUser,
mockUnauthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryPage - Auth State", () => {
afterEach(() => {
resetAuthState();
});
test("renders page correctly when logged in", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
// Wait for upload button text to appear (indicates page is rendered)
expect(
await screen.findByText("Upload agent", { exact: false }),
).toBeInTheDocument();
// Search bar should be visible
expect(screen.getByTestId("search-bar")).toBeInTheDocument();
});
test("renders page correctly when logged out", async () => {
mockUnauthenticatedUser();
render(<LibraryPage />);
// Wait for upload button text to appear (indicates page is rendered)
expect(
await screen.findByText("Upload agent", { exact: false }),
).toBeInTheDocument();
// Search bar should still be visible
expect(screen.getByTestId("search-bar")).toBeInTheDocument();
});
});

View File

@@ -1,82 +0,0 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse } from "msw";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryPage - Empty State", () => {
afterEach(() => {
resetAuthState();
});
test("handles empty agents list gracefully", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<LibraryPage />);
// Page should still render without crashing
// Search bar should be visible even with no agents
expect(
await screen.findByPlaceholderText(/search agents/i),
).toBeInTheDocument();
// Upload button should be visible
expect(
screen.getByRole("button", { name: /upload agent/i }),
).toBeInTheDocument();
});
test("handles empty favorites gracefully", async () => {
mockAuthenticatedUser();
server.use(
http.get("*/api/library/agents/favorites*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
);
render(<LibraryPage />);
// Page should still render without crashing
expect(
await screen.findByPlaceholderText(/search agents/i),
).toBeInTheDocument();
});
});

View File

@@ -1,59 +0,0 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import { server } from "@/mocks/mock-server";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
import { create500Handler } from "@/tests/integrations/helpers/create-500-handler";
import {
getGetV2ListLibraryAgentsMockHandler422,
getGetV2ListFavoriteLibraryAgentsMockHandler422,
} from "@/app/api/__generated__/endpoints/library/library.msw";
describe("LibraryPage - Error Handling", () => {
afterEach(() => {
resetAuthState();
});
test("handles API 422 error gracefully", async () => {
mockAuthenticatedUser();
server.use(getGetV2ListLibraryAgentsMockHandler422());
render(<LibraryPage />);
// Page should still render without crashing
// Search bar should be visible even with error
await waitFor(() => {
expect(screen.getByPlaceholderText(/search agents/i)).toBeInTheDocument();
});
});
test("handles favorites API 422 error gracefully", async () => {
mockAuthenticatedUser();
server.use(getGetV2ListFavoriteLibraryAgentsMockHandler422());
render(<LibraryPage />);
// Page should still render without crashing
await waitFor(() => {
expect(screen.getByPlaceholderText(/search agents/i)).toBeInTheDocument();
});
});
test("handles API 500 error gracefully", async () => {
mockAuthenticatedUser();
server.use(create500Handler("get", "*/api/library/agents*"));
render(<LibraryPage />);
// Page should still render without crashing
await waitFor(() => {
expect(screen.getByPlaceholderText(/search agents/i)).toBeInTheDocument();
});
});
});

View File

@@ -1,55 +0,0 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse, delay } from "msw";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryPage - Loading State", () => {
afterEach(() => {
resetAuthState();
});
test("shows loading spinner while agents are being fetched", async () => {
mockAuthenticatedUser();
// Override handlers to add delay to simulate loading
server.use(
http.get("*/api/library/agents*", async () => {
await delay(500);
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
http.get("*/api/library/agents/favorites*", async () => {
await delay(500);
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 20,
},
});
}),
);
const { container } = render(<LibraryPage />);
// Check for loading spinner (LoadingSpinner component)
const loadingElements = container.querySelectorAll(
'[class*="animate-spin"]',
);
expect(loadingElements.length).toBeGreaterThan(0);
});
});

View File

@@ -1,65 +0,0 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import LibraryPage from "../../page";
import {
mockAuthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("LibraryPage - Rendering", () => {
afterEach(() => {
resetAuthState();
});
test("renders search bar", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
expect(
await screen.findByPlaceholderText(/search agents/i),
).toBeInTheDocument();
});
test("renders upload agent button", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
expect(
await screen.findByRole("button", { name: /upload agent/i }),
).toBeInTheDocument();
});
test("renders agent cards when data is loaded", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
// Wait for agent cards to appear (from mock data)
await waitFor(() => {
const agentCards = screen.getAllByTestId("library-agent-card");
expect(agentCards.length).toBeGreaterThan(0);
});
});
test("agent cards display agent name", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
// Wait for agent cards and check they have names
await waitFor(() => {
const agentNames = screen.getAllByTestId("library-agent-card-name");
expect(agentNames.length).toBeGreaterThan(0);
});
});
test("agent cards have see runs link", async () => {
mockAuthenticatedUser();
render(<LibraryPage />);
await waitFor(() => {
const seeRunsLinks = screen.getAllByTestId(
"library-agent-card-see-runs-link",
);
expect(seeRunsLinks.length).toBeGreaterThan(0);
});
});
});

View File

@@ -80,6 +80,7 @@ export const AgentInfo = ({
const allVersions = storeData?.versions
? storeData.versions
.map((versionStr: string) => parseInt(versionStr, 10))
.filter((versionNum: number) => !isNaN(versionNum))
.sort((a: number, b: number) => b - a)
.map((versionNum: number) => ({
version: versionNum,

View File
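The `.filter` guard added in the AgentInfo hunk above drops version strings that do not parse to a number before sorting. In isolation, the pattern behaves like this sketch (the `parseVersions` helper name is illustrative, not from the diff):

```typescript
// Illustrative standalone version of the parse → filter → sort chain from
// the AgentInfo hunk above. Unparseable version strings become NaN via
// parseInt and are dropped by the filter; the rest sort descending.
function parseVersions(versions: string[]): number[] {
  return versions
    .map((versionStr) => parseInt(versionStr, 10))
    .filter((versionNum) => !Number.isNaN(versionNum))
    .sort((a, b) => b - a);
}
```

For example, `parseVersions(["3", "draft", "10"])` yields `[10, 3]`; without the filter, the stray `NaN` would poison the sort and the mapped version objects.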

@@ -0,0 +1,62 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import { MainAgentPage } from "../MainAgentPage";
import {
mockAuthenticatedUser,
mockUnauthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
const defaultParams = {
creator: "test-creator",
slug: "test-agent",
};
describe("MainAgentPage - Auth State", () => {
afterEach(() => {
resetAuthState();
});
test("shows add to library button when authenticated", async () => {
mockAuthenticatedUser();
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(
screen.getByTestId("agent-add-library-button"),
).toBeInTheDocument();
});
});
test("hides add to library button when not authenticated", async () => {
mockUnauthenticatedUser();
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(screen.getByTestId("agent-title")).toBeInTheDocument();
});
expect(
screen.queryByTestId("agent-add-library-button"),
).not.toBeInTheDocument();
});
test("renders page correctly when logged out", async () => {
mockUnauthenticatedUser();
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(screen.getByTestId("agent-title")).toBeInTheDocument();
});
expect(screen.getByTestId("agent-download-button")).toBeInTheDocument();
});
test("renders page correctly when logged in", async () => {
mockAuthenticatedUser();
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(screen.getByTestId("agent-title")).toBeInTheDocument();
});
expect(screen.getByTestId("agent-add-library-button")).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,57 @@
import { describe, expect, test } from "vitest";
import { render, screen, waitFor, act } from "@/tests/integrations/test-utils";
import { MainAgentPage } from "../MainAgentPage";
import { server } from "@/mocks/mock-server";
import { getGetV2GetSpecificAgentMockHandler422 } from "@/app/api/__generated__/endpoints/store/store.msw";
import { create500Handler } from "@/tests/integrations/helpers/create-500-handler";
const defaultParams = {
creator: "test-creator",
slug: "test-agent",
};
describe("MainAgentPage - Error Handling", () => {
test("displays error when agent API returns 422", async () => {
server.use(getGetV2GetSpecificAgentMockHandler422());
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(
screen.getByText("Failed to load agent data", { exact: false }),
).toBeInTheDocument();
});
await act(async () => {});
});
test("displays error when API returns 500", async () => {
server.use(
create500Handler("get", "*/api/store/agents/test-creator/test-agent"),
);
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(
screen.getByText("Failed to load agent data", { exact: false }),
).toBeInTheDocument();
});
await act(async () => {});
});
test("retry button is visible on error", async () => {
server.use(getGetV2GetSpecificAgentMockHandler422());
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(
screen.getByRole("button", { name: /try again/i }),
).toBeInTheDocument();
});
await act(async () => {});
});
});

View File

@@ -0,0 +1,61 @@
import { describe, expect, test } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import { MainAgentPage } from "../MainAgentPage";
const defaultParams = {
creator: "test-creator",
slug: "test-agent",
};
describe("MainAgentPage - Rendering", () => {
test("renders agent info with title", async () => {
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(screen.getByTestId("agent-title")).toBeInTheDocument();
});
});
test("renders agent creator info", async () => {
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(screen.getByTestId("agent-creator")).toBeInTheDocument();
});
});
test("renders agent description", async () => {
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(screen.getByTestId("agent-description")).toBeInTheDocument();
});
});
test("renders breadcrumbs with marketplace link", async () => {
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(
screen.getByRole("link", { name: /marketplace/i }),
).toBeInTheDocument();
});
});
test("renders download button", async () => {
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(screen.getByTestId("agent-download-button")).toBeInTheDocument();
});
});
test("renders similar agents section", async () => {
render(<MainAgentPage params={defaultParams} />);
await waitFor(() => {
expect(
screen.getByText("Similar agents", { exact: false }),
).toBeInTheDocument();
});
});
});

View File

@@ -0,0 +1,36 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen, waitFor } from "@/tests/integrations/test-utils";
import { MainCreatorPage } from "../MainCreatorPage";
import {
mockAuthenticatedUser,
mockUnauthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
const defaultParams = {
creator: "test-creator",
};
describe("MainCreatorPage - Auth State", () => {
afterEach(() => {
resetAuthState();
});
test("renders page correctly when logged out", async () => {
mockUnauthenticatedUser();
render(<MainCreatorPage params={defaultParams} />);
await waitFor(() => {
expect(screen.getByTestId("creator-description")).toBeInTheDocument();
});
});
test("renders page correctly when logged in", async () => {
mockAuthenticatedUser();
render(<MainCreatorPage params={defaultParams} />);
await waitFor(() => {
expect(screen.getByTestId("creator-description")).toBeInTheDocument();
});
});
});

View File

@@ -0,0 +1,55 @@
import { describe, expect, test } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { MainCreatorPage } from "../MainCreatorPage";
import { server } from "@/mocks/mock-server";
import {
getGetV2GetCreatorDetailsMockHandler422,
getGetV2ListStoreAgentsMockHandler422,
} from "@/app/api/__generated__/endpoints/store/store.msw";
import { create500Handler } from "@/tests/integrations/helpers/create-500-handler";
const defaultParams = {
creator: "test-creator",
};
describe("MainCreatorPage - Error Handling", () => {
test("displays error when creator details API returns 422", async () => {
server.use(getGetV2GetCreatorDetailsMockHandler422());
render(<MainCreatorPage params={defaultParams} />);
expect(
await screen.findByText("Failed to load creator data", { exact: false }),
).toBeInTheDocument();
});
test("displays error when creator agents API returns 422", async () => {
server.use(getGetV2ListStoreAgentsMockHandler422());
render(<MainCreatorPage params={defaultParams} />);
expect(
await screen.findByText("Failed to load creator data", { exact: false }),
).toBeInTheDocument();
});
test("displays error when API returns 500", async () => {
server.use(create500Handler("get", "*/api/store/creator/test-creator"));
render(<MainCreatorPage params={defaultParams} />);
expect(
await screen.findByText("Failed to load creator data", { exact: false }),
).toBeInTheDocument();
});
test("retry button is visible on error", async () => {
server.use(getGetV2GetCreatorDetailsMockHandler422());
render(<MainCreatorPage params={defaultParams} />);
expect(
await screen.findByRole("button", { name: /try again/i }),
).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,38 @@
import { describe, expect, test } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { MainCreatorPage } from "../MainCreatorPage";
const defaultParams = {
creator: "test-creator",
};
describe("MainCreatorPage - Rendering", () => {
test("renders creator description", async () => {
render(<MainCreatorPage params={defaultParams} />);
expect(
await screen.findByTestId("creator-description"),
).toBeInTheDocument();
});
test("renders breadcrumbs with marketplace link", async () => {
render(<MainCreatorPage params={defaultParams} />);
expect(
await screen.findByRole("link", { name: /marketplace/i }),
).toBeInTheDocument();
});
test("renders about section", async () => {
render(<MainCreatorPage params={defaultParams} />);
expect(await screen.findByText("About")).toBeInTheDocument();
});
test("renders agents by creator section", async () => {
render(<MainCreatorPage params={defaultParams} />);
expect(
await screen.findByText(/Agents by/i),
).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,38 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { MainMarkeplacePage } from "../MainMarketplacePage";
import {
mockAuthenticatedUser,
mockUnauthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("MainMarketplacePage - Auth State", () => {
afterEach(() => {
resetAuthState();
});
test("renders page correctly when logged out", async () => {
mockUnauthenticatedUser();
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Featured agents", { exact: false }),
).toBeInTheDocument();
expect(
screen.getByText("Top Agents", { exact: false }),
).toBeInTheDocument();
});
test("renders page correctly when logged in", async () => {
mockAuthenticatedUser();
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Featured agents", { exact: false }),
).toBeInTheDocument();
expect(
screen.getByText("Top Agents", { exact: false }),
).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,82 @@
import { describe, expect, test } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { MainMarkeplacePage } from "../MainMarketplacePage";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse } from "msw";
describe("MainMarketplacePage - Empty State", () => {
test("handles empty featured agents gracefully", async () => {
server.use(
http.get("*/api/store/agents*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 10,
},
});
}),
);
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Featured creators", { exact: false }),
).toBeInTheDocument();
});
test("handles empty creators gracefully", async () => {
server.use(
http.get("*/api/store/creators*", () => {
return HttpResponse.json({
creators: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 10,
},
});
}),
);
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Featured agents", { exact: false }),
).toBeInTheDocument();
});
test("handles all empty data gracefully", async () => {
server.use(
http.get("*/api/store/agents*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 10,
},
});
}),
http.get("*/api/store/creators*", () => {
return HttpResponse.json({
creators: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 10,
},
});
}),
);
render(<MainMarkeplacePage />);
expect(await screen.findByPlaceholderText(/search/i)).toBeInTheDocument();
});
});

View File
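The empty agents/creators handlers above repeat the same zeroed pagination block verbatim. A small hypothetical helper (not part of this diff) could produce it:

```typescript
// Hypothetical factory for the zeroed pagination block repeated in the MSW
// handlers above; only the page size varies per endpoint.
function emptyPagination(pageSize: number) {
  return {
    total_items: 0,
    total_pages: 0,
    current_page: 1,
    page_size: pageSize,
  };
}
```

With it, a handler body would shrink to `HttpResponse.json({ agents: [], pagination: emptyPagination(10) })`.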

@@ -0,0 +1,57 @@
import { describe, expect, test } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { MainMarkeplacePage } from "../MainMarketplacePage";
import { server } from "@/mocks/mock-server";
import {
getGetV2ListStoreAgentsMockHandler422,
getGetV2ListStoreCreatorsMockHandler422,
} from "@/app/api/__generated__/endpoints/store/store.msw";
import { create500Handler } from "@/tests/integrations/helpers/create-500-handler";
describe("MainMarketplacePage - Error Handling", () => {
test("displays error when featured agents API returns 422", async () => {
server.use(getGetV2ListStoreAgentsMockHandler422());
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Failed to load marketplace data", {
exact: false,
}),
).toBeInTheDocument();
});
test("displays error when creators API returns 422", async () => {
server.use(getGetV2ListStoreCreatorsMockHandler422());
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Failed to load marketplace data", {
exact: false,
}),
).toBeInTheDocument();
});
test("displays error when API returns 500", async () => {
server.use(create500Handler("get", "*/api/store/agents*"));
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Failed to load marketplace data", {
exact: false,
}),
).toBeInTheDocument();
});
test("retry button is visible on error", async () => {
server.use(getGetV2ListStoreAgentsMockHandler422());
render(<MainMarkeplacePage />);
expect(
await screen.findByRole("button", { name: /try again/i }),
).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,45 @@
import { describe, expect, test } from "vitest";
import { render, waitFor } from "@/tests/integrations/test-utils";
import { MainMarkeplacePage } from "../MainMarketplacePage";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse, delay } from "msw";
describe("MainMarketplacePage - Loading State", () => {
test("shows loading skeleton while data is being fetched", async () => {
server.use(
http.get("*/api/store/agents*", async () => {
await delay(500);
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 10,
},
});
}),
http.get("*/api/store/creators*", async () => {
await delay(500);
return HttpResponse.json({
creators: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 10,
},
});
}),
);
const { container } = render(<MainMarkeplacePage />);
await waitFor(() => {
const loadingElements = container.querySelectorAll(
'[class*="animate-pulse"]',
);
expect(loadingElements.length).toBeGreaterThan(0);
});
});
});

View File

@@ -1,15 +0,0 @@
import { expect, test } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { MainMarkeplacePage } from "../MainMarketplacePage";
import { server } from "@/mocks/mock-server";
import { getDeleteV2DeleteStoreSubmissionMockHandler422 } from "@/app/api/__generated__/endpoints/store/store.msw";
// Only for CI testing purposes; will be removed in a future PR
test("MainMarketplacePage", async () => {
server.use(getDeleteV2DeleteStoreSubmissionMockHandler422());
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Featured agents", { exact: false }),
).toBeDefined();
});

View File

@@ -0,0 +1,39 @@
import { describe, expect, test } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { MainMarkeplacePage } from "../MainMarketplacePage";
describe("MainMarketplacePage - Rendering", () => {
test("renders hero section with search bar", async () => {
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Featured agents", { exact: false }),
).toBeInTheDocument();
expect(screen.getByPlaceholderText(/search/i)).toBeInTheDocument();
});
test("renders featured agents section", async () => {
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Featured agents", { exact: false }),
).toBeInTheDocument();
});
test("renders top agents section", async () => {
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Top Agents", { exact: false }),
).toBeInTheDocument();
});
test("renders featured creators section", async () => {
render(<MainMarkeplacePage />);
expect(
await screen.findByText("Featured creators", { exact: false }),
).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,33 @@
import { describe, expect, test, afterEach } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { MainSearchResultPage } from "../MainSearchResultPage";
import {
mockAuthenticatedUser,
mockUnauthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
const defaultProps = {
searchTerm: "test-search",
sort: "runs" as const,
};
describe("MainSearchResultPage - Auth State", () => {
afterEach(() => {
resetAuthState();
});
test("renders page correctly when logged out", async () => {
mockUnauthenticatedUser();
render(<MainSearchResultPage {...defaultProps} />);
expect(await screen.findByText("Results for:")).toBeInTheDocument();
});
test("renders page correctly when logged in", async () => {
mockAuthenticatedUser();
render(<MainSearchResultPage {...defaultProps} />);
expect(await screen.findByText("Results for:")).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,62 @@
import { describe, expect, test } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { MainSearchResultPage } from "../MainSearchResultPage";
import { server } from "@/mocks/mock-server";
import {
getGetV2ListStoreAgentsMockHandler422,
getGetV2ListStoreCreatorsMockHandler422,
} from "@/app/api/__generated__/endpoints/store/store.msw";
import { create500Handler } from "@/tests/integrations/helpers/create-500-handler";
const defaultProps = {
searchTerm: "test-search",
sort: "runs" as const,
};
describe("MainSearchResultPage - Error Handling", () => {
test("displays error when agents API returns 422", async () => {
server.use(getGetV2ListStoreAgentsMockHandler422());
render(<MainSearchResultPage {...defaultProps} />);
expect(
await screen.findByText("Failed to load marketplace data", {
exact: false,
}),
).toBeInTheDocument();
});
test("displays error when creators API returns 422", async () => {
server.use(getGetV2ListStoreCreatorsMockHandler422());
render(<MainSearchResultPage {...defaultProps} />);
expect(
await screen.findByText("Failed to load marketplace data", {
exact: false,
}),
).toBeInTheDocument();
});
test("displays error when API returns 500", async () => {
server.use(create500Handler("get", "*/api/store/agents*"));
render(<MainSearchResultPage {...defaultProps} />);
expect(
await screen.findByText("Failed to load marketplace data", {
exact: false,
}),
).toBeInTheDocument();
});
test("retry button is visible on error", async () => {
server.use(getGetV2ListStoreAgentsMockHandler422());
render(<MainSearchResultPage {...defaultProps} />);
expect(
await screen.findByRole("button", { name: /try again/i }),
).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,108 @@
import { describe, expect, test } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { MainSearchResultPage } from "../MainSearchResultPage";
import { server } from "@/mocks/mock-server";
import { http, HttpResponse } from "msw";
const defaultProps = {
searchTerm: "nonexistent-search-term-xyz",
sort: "runs" as const,
};
describe("MainSearchResultPage - No Results", () => {
test("shows empty state when no agents match search", async () => {
server.use(
http.get("*/api/store/agents*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 10,
},
});
}),
http.get("*/api/store/creators*", () => {
return HttpResponse.json({
creators: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 10,
},
});
}),
);
render(<MainSearchResultPage {...defaultProps} />);
expect(await screen.findByText("Results for:")).toBeInTheDocument();
expect(screen.getByText("nonexistent-search-term-xyz")).toBeInTheDocument();
});
test("displays search term even with no results", async () => {
server.use(
http.get("*/api/store/agents*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 10,
},
});
}),
http.get("*/api/store/creators*", () => {
return HttpResponse.json({
creators: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 10,
},
});
}),
);
render(<MainSearchResultPage {...defaultProps} />);
expect(
await screen.findByText("nonexistent-search-term-xyz"),
).toBeInTheDocument();
});
test("search bar is present with no results", async () => {
server.use(
http.get("*/api/store/agents*", () => {
return HttpResponse.json({
agents: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 10,
},
});
}),
http.get("*/api/store/creators*", () => {
return HttpResponse.json({
creators: [],
pagination: {
total_items: 0,
total_pages: 0,
current_page: 1,
page_size: 10,
},
});
}),
);
render(<MainSearchResultPage {...defaultProps} />);
expect(await screen.findByPlaceholderText(/search/i)).toBeInTheDocument();
});
});

View File

@@ -0,0 +1,23 @@
import { describe, expect, test } from "vitest";
import { render, screen } from "@/tests/integrations/test-utils";
import { MainSearchResultPage } from "../MainSearchResultPage";
const defaultProps = {
searchTerm: "test-search",
sort: "runs" as const,
};
describe("MainSearchResultPage - Rendering", () => {
test("renders search results header with search term", async () => {
render(<MainSearchResultPage {...defaultProps} />);
expect(await screen.findByText("Results for:")).toBeInTheDocument();
expect(await screen.findByText("test-search")).toBeInTheDocument();
});
test("renders search bar", async () => {
render(<MainSearchResultPage {...defaultProps} />);
expect(await screen.findByPlaceholderText(/search/i)).toBeInTheDocument();
});
});

View File

@@ -136,16 +136,19 @@ export const customMutator = async <
response.statusText ||
`HTTP ${response.status}`;
console.error(
`Request failed ${environment.isServerSide() ? "on server" : "on client"}`,
{
status: response.status,
method,
url: fullUrl.replace(baseUrl, ""), // Show relative URL for cleaner logs
errorMessage,
responseData: responseData || "No response data",
},
);
const isTestEnv = process.env.NODE_ENV === "test";
if (!isTestEnv) {
console.error(
`Request failed ${environment.isServerSide() ? "on server" : "on client"}`,
{
status: response.status,
method,
url: fullUrl.replace(baseUrl, ""), // Show relative URL for cleaner logs
errorMessage,
responseData: responseData || "No response data",
},
);
}
throw new ApiError(errorMessage, response.status, responseData);
}

View File
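The custom-mutator change above gates the error log on `NODE_ENV` so expected failures in MSW-driven tests stay quiet. The gating logic in isolation, as a sketch (the `logUnlessTest` name and its callback parameter are illustrative, not from the source):

```typescript
// Illustrative extraction of the NODE_ENV gate from the custom-mutator hunk
// above: the logger callback is only invoked outside of test runs.
function logUnlessTest(
  log: (message: string, details: Record<string, unknown>) => void,
  status: number,
  url: string,
): void {
  const isTestEnv = process.env.NODE_ENV === "test";
  if (!isTestEnv) {
    log("Request failed", { status, url });
  }
}
```

Injecting the logger as a parameter keeps the gate trivially testable without stubbing the global `console`.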

@@ -1,246 +0,0 @@
/**
* useChatContainerAiSdk - ChatContainer hook using Vercel AI SDK
*
* This is a drop-in replacement for useChatContainer that uses @ai-sdk/react
* instead of the custom streaming implementation. The API surface is identical
* to enable easy A/B testing and gradual migration.
*/
import type { SessionDetailResponse } from "@/app/api/__generated__/models/sessionDetailResponse";
import { useEffect, useMemo, useRef } from "react";
import type { UIMessage } from "ai";
import { useAiSdkChat } from "../../useAiSdkChat";
import { usePageContext } from "../../usePageContext";
import type { ChatMessageData } from "../ChatMessage/useChatMessage";
import {
filterAuthMessages,
hasSentInitialPrompt,
markInitialPromptSent,
processInitialMessages,
} from "./helpers";
// Helper to convert backend messages to AI SDK UIMessage format
function convertToUIMessages(
messages: SessionDetailResponse["messages"],
): UIMessage[] {
const result: UIMessage[] = [];
for (const msg of messages) {
if (!msg.role || !msg.content) continue;
// Create parts based on message type
const parts: UIMessage["parts"] = [];
if (msg.role === "user" || msg.role === "assistant") {
if (typeof msg.content === "string") {
parts.push({ type: "text", text: msg.content });
}
}
// Handle tool calls in assistant messages
if (msg.role === "assistant" && msg.tool_calls) {
for (const toolCall of msg.tool_calls as Array<{
id: string;
type: string;
function: { name: string; arguments: string };
}>) {
if (toolCall.type === "function") {
let args = {};
try {
args = JSON.parse(toolCall.function.arguments);
} catch {
// Keep empty args
}
parts.push({
type: `tool-${toolCall.function.name}` as `tool-${string}`,
toolCallId: toolCall.id,
toolName: toolCall.function.name,
state: "input-available",
input: args,
} as UIMessage["parts"][number]);
}
}
}
// Handle tool responses
if (msg.role === "tool" && msg.tool_call_id) {
// The original tool name is not recoverable from the tool message alone,
// so fall back to a placeholder
const toolName = "unknown";
let output: unknown = msg.content;
try {
output =
typeof msg.content === "string"
? JSON.parse(msg.content)
: msg.content;
} catch {
// Keep as string
}
parts.push({
type: `tool-${toolName}` as `tool-${string}`,
toolCallId: msg.tool_call_id as string,
toolName,
state: "output-available",
output,
} as UIMessage["parts"][number]);
}
if (parts.length > 0) {
result.push({
id: msg.id || `msg-${Date.now()}-${Math.random()}`,
role: msg.role === "tool" ? "assistant" : (msg.role as "user" | "assistant"),
parts,
createdAt: msg.created_at ? new Date(msg.created_at as string) : new Date(),
});
}
}
return result;
}
interface Args {
sessionId: string | null;
initialMessages: SessionDetailResponse["messages"];
initialPrompt?: string;
onOperationStarted?: () => void;
}
export function useChatContainerAiSdk({
sessionId,
initialMessages,
initialPrompt,
onOperationStarted,
}: Args) {
const { capturePageContext } = usePageContext();
const sendMessageRef = useRef<
(
content: string,
isUserMessage?: boolean,
context?: { url: string; content: string },
) => Promise<void>
>();
// Convert initial messages to AI SDK format
const uiMessages = useMemo(
() => convertToUIMessages(initialMessages),
[initialMessages],
);
const {
messages: aiSdkMessages,
streamingChunks,
isStreaming,
error,
isRegionBlockedModalOpen,
setIsRegionBlockedModalOpen,
sendMessage,
stopStreaming,
} = useAiSdkChat({
sessionId,
initialMessages: uiMessages,
onOperationStarted,
});
// Keep ref updated for initial prompt handling
sendMessageRef.current = sendMessage;
// Merge AI SDK messages with processed initial messages
// This ensures we show both historical messages and new streaming messages
const allMessages = useMemo(() => {
const processedInitial = processInitialMessages(initialMessages);
// Build a set of message keys for deduplication
const seenKeys = new Set<string>();
const result: ChatMessageData[] = [];
// Add processed initial messages first
for (const msg of processedInitial) {
const key = getMessageKey(msg);
if (!seenKeys.has(key)) {
seenKeys.add(key);
result.push(msg);
}
}
// Add AI SDK messages that aren't duplicates
for (const msg of aiSdkMessages) {
const key = getMessageKey(msg);
if (!seenKeys.has(key)) {
seenKeys.add(key);
result.push(msg);
}
}
return result;
}, [initialMessages, aiSdkMessages]);
// Handle initial prompt
useEffect(
function handleInitialPrompt() {
if (!initialPrompt || !sessionId) return;
if (initialMessages.length > 0) return;
if (hasSentInitialPrompt(sessionId)) return;
markInitialPromptSent(sessionId);
const context = capturePageContext();
sendMessageRef.current?.(initialPrompt, true, context);
},
[initialPrompt, sessionId, initialMessages.length, capturePageContext],
);
// Send message with page context
async function sendMessageWithContext(
content: string,
isUserMessage: boolean = true,
) {
const context = capturePageContext();
await sendMessage(content, isUserMessage, context);
}
function handleRegionModalOpenChange(open: boolean) {
setIsRegionBlockedModalOpen(open);
}
function handleRegionModalClose() {
setIsRegionBlockedModalOpen(false);
}
return {
messages: filterAuthMessages(allMessages),
streamingChunks,
isStreaming,
error,
isRegionBlockedModalOpen,
setIsRegionBlockedModalOpen,
sendMessageWithContext,
handleRegionModalOpenChange,
handleRegionModalClose,
sendMessage,
stopStreaming,
};
}
// Helper to generate deduplication key for a message
function getMessageKey(msg: ChatMessageData): string {
if (msg.type === "message") {
return `msg:${msg.role}:${msg.content}`;
} else if (msg.type === "tool_call") {
return `toolcall:${msg.toolId}`;
} else if (msg.type === "tool_response") {
return `toolresponse:${(msg as { toolId?: string }).toolId}`;
} else if (
msg.type === "operation_started" ||
msg.type === "operation_pending" ||
msg.type === "operation_in_progress"
) {
const typedMsg = msg as {
toolId?: string;
operationId?: string;
toolCallId?: string;
toolName?: string;
};
return `op:${typedMsg.toolId || typedMsg.operationId || typedMsg.toolCallId || ""}:${typedMsg.toolName || ""}`;
} else {
return `${msg.type}:${JSON.stringify(msg).slice(0, 100)}`;
}
}

View File
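The `allMessages` memo in the removed hook merges processed initial messages with AI SDK messages while deduplicating on a per-message key. That merge step, as a standalone generic sketch (the `dedupeByKey` name is not from the source):

```typescript
// Generic form of the seen-key merge used by allMessages above: iterate in
// order, keep the first occurrence of each key, drop later duplicates.
function dedupeByKey<T>(items: T[], getKey: (item: T) => string): T[] {
  const seen = new Set<string>();
  const result: T[] = [];
  for (const item of items) {
    const key = getKey(item);
    if (!seen.has(key)) {
      seen.add(key);
      result.push(item);
    }
  }
  return result;
}
```

The hook's two-loop merge is equivalent to `dedupeByKey([...processedInitial, ...aiSdkMessages], getMessageKey)`: initial messages come first, so they win ties against streamed duplicates.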

@@ -57,7 +57,6 @@ export function ChatInput({
isStreaming,
value,
baseHandleKeyDown,
inputId,
});
return (

View File

@@ -15,7 +15,6 @@ interface Args {
isStreaming?: boolean;
value: string;
baseHandleKeyDown: (event: KeyboardEvent<HTMLTextAreaElement>) => void;
inputId?: string;
}
export function useVoiceRecording({
@@ -24,7 +23,6 @@ export function useVoiceRecording({
isStreaming = false,
value,
baseHandleKeyDown,
inputId,
}: Args) {
const [isRecording, setIsRecording] = useState(false);
const [isTranscribing, setIsTranscribing] = useState(false);
@@ -105,7 +103,7 @@ export function useVoiceRecording({
setIsTranscribing(false);
}
},
[handleTranscription, inputId],
[handleTranscription],
);
const stopRecording = useCallback(() => {
@@ -203,15 +201,6 @@ export function useVoiceRecording({
}
}, [error, toast]);
useEffect(() => {
if (!isTranscribing && inputId) {
const inputElement = document.getElementById(inputId);
if (inputElement) {
inputElement.focus();
}
}
}, [isTranscribing, inputId]);
const handleKeyDown = useCallback(
(event: KeyboardEvent<HTMLTextAreaElement>) => {
if (event.key === " " && !value.trim() && !isTranscribing) {

View File

@@ -1,421 +0,0 @@
"use client";
/**
* useAiSdkChat - Vercel AI SDK integration for CoPilot Chat
*
* This hook wraps @ai-sdk/react's useChat to provide:
* - Streaming chat with the existing Python backend (already AI SDK protocol compatible)
* - Integration with existing session management
* - Custom tool response parsing for AutoGPT-specific types
* - Page context injection
*
* The Python backend already implements the AI SDK Data Stream Protocol (v1),
* so this hook can communicate directly without any backend changes.
*/
import { useChat as useAiSdkChatBase } from "@ai-sdk/react";
import { DefaultChatTransport, type UIMessage } from "ai";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { toast } from "sonner";
import type { ChatMessageData } from "./components/ChatMessage/useChatMessage";
// Tool response types from the backend
type OperationType =
| "operation_started"
| "operation_pending"
| "operation_in_progress";
interface ToolOutputBase {
type: string;
[key: string]: unknown;
}
interface UseAiSdkChatOptions {
sessionId: string | null;
initialMessages?: UIMessage[];
onOperationStarted?: () => void;
onStreamingChange?: (isStreaming: boolean) => void;
}
/**
* Parse tool output from AI SDK message parts into ChatMessageData format
*/
function parseToolOutput(
toolCallId: string,
toolName: string,
output: unknown,
): ChatMessageData | null {
if (!output) return null;
let parsed: ToolOutputBase;
try {
parsed =
typeof output === "string"
? JSON.parse(output)
: (output as ToolOutputBase);
} catch {
return null;
}
const type = parsed.type;
// Handle operation status types
if (
type === "operation_started" ||
type === "operation_pending" ||
type === "operation_in_progress"
) {
return {
type: type as OperationType,
toolId: toolCallId,
toolName: toolName,
operationId: (parsed.operation_id as string) || undefined,
message: (parsed.message as string) || undefined,
timestamp: new Date(),
} as ChatMessageData;
}
// Handle agent carousel
if (type === "agent_carousel" && Array.isArray(parsed.agents)) {
return {
type: "agent_carousel",
toolId: toolCallId,
toolName: toolName,
agents: parsed.agents,
timestamp: new Date(),
} as ChatMessageData;
}
// Handle execution started
if (type === "execution_started") {
return {
type: "execution_started",
toolId: toolCallId,
toolName: toolName,
graphId: parsed.graph_id as string,
graphVersion: parsed.graph_version as number,
graphExecId: parsed.graph_exec_id as string,
nodeExecIds: parsed.node_exec_ids as string[],
timestamp: new Date(),
} as ChatMessageData;
}
// Handle error responses
if (type === "error") {
return {
type: "tool_response",
toolId: toolCallId,
toolName: toolName,
result: parsed,
success: false,
timestamp: new Date(),
} as ChatMessageData;
}
// Handle clarification questions
if (type === "clarification_questions" && Array.isArray(parsed.questions)) {
return {
type: "clarification_questions",
toolId: toolCallId,
toolName: toolName,
questions: parsed.questions,
timestamp: new Date(),
} as ChatMessageData;
}
// Handle credentials needed
if (type === "credentials_needed" || type === "setup_requirements") {
const credentials = parsed.credentials as
| Array<{
provider: string;
provider_name: string;
credential_type: string;
scopes?: string[];
}>
| undefined;
if (credentials && credentials.length > 0) {
return {
type: "credentials_needed",
toolId: toolCallId,
toolName: toolName,
credentials: credentials,
timestamp: new Date(),
} as ChatMessageData;
}
}
// Default: generic tool response
return {
type: "tool_response",
toolId: toolCallId,
toolName: toolName,
result: parsed,
success: true,
timestamp: new Date(),
} as ChatMessageData;
}
/**
* Convert AI SDK UIMessage parts to ChatMessageData array
*/
function convertMessageToChatData(message: UIMessage): ChatMessageData[] {
const result: ChatMessageData[] = [];
for (const part of message.parts) {
switch (part.type) {
case "text":
if (part.text.trim()) {
result.push({
type: "message",
role: message.role as "user" | "assistant",
content: part.text,
timestamp: new Date(message.createdAt || Date.now()),
});
}
break;
default:
// Handle tool parts (tool-*)
if (part.type.startsWith("tool-")) {
const toolPart = part as {
type: string;
toolCallId: string;
toolName: string;
state: string;
input?: Record<string, unknown>;
output?: unknown;
};
// Show tool call in progress
if (
toolPart.state === "input-streaming" ||
toolPart.state === "input-available"
) {
result.push({
type: "tool_call",
toolId: toolPart.toolCallId,
toolName: toolPart.toolName,
arguments: toolPart.input || {},
timestamp: new Date(),
});
}
// Parse tool output when available
if (
toolPart.state === "output-available" &&
toolPart.output !== undefined
) {
const parsed = parseToolOutput(
toolPart.toolCallId,
toolPart.toolName,
toolPart.output,
);
if (parsed) {
result.push(parsed);
}
}
// Handle tool errors
if (toolPart.state === "output-error") {
result.push({
type: "tool_response",
toolId: toolPart.toolCallId,
toolName: toolPart.toolName,
response: {
type: "error",
message: (toolPart as { errorText?: string }).errorText,
},
success: false,
timestamp: new Date(),
} as ChatMessageData);
}
}
break;
}
}
return result;
}
export function useAiSdkChat({
sessionId,
initialMessages = [],
onOperationStarted,
onStreamingChange,
}: UseAiSdkChatOptions) {
const [isRegionBlockedModalOpen, setIsRegionBlockedModalOpen] =
useState(false);
const previousSessionIdRef = useRef<string | null>(null);
const hasNotifiedOperationRef = useRef<Set<string>>(new Set());
// Create transport with session-specific endpoint
const transport = useMemo(() => {
if (!sessionId) return undefined;
return new DefaultChatTransport({
api: `/api/chat/sessions/${sessionId}/stream`,
headers: {
"Content-Type": "application/json",
},
});
}, [sessionId]);
const {
messages: aiMessages,
status,
error,
stop,
setMessages,
sendMessage: aiSendMessage,
} = useAiSdkChatBase({
transport,
initialMessages,
onError: (err) => {
console.error("[useAiSdkChat] Error:", err);
// Check for region blocking
if (
err.message?.toLowerCase().includes("not available in your region") ||
(err as { code?: string }).code === "MODEL_NOT_AVAILABLE_REGION"
) {
setIsRegionBlockedModalOpen(true);
return;
}
toast.error("Chat Error", {
description: err.message || "An error occurred",
});
},
onFinish: ({ message }) => {
console.info("[useAiSdkChat] Message finished:", {
id: message.id,
partsCount: message.parts.length,
});
},
});
// Track streaming status
const isStreaming = status === "streaming" || status === "submitted";
// Notify parent of streaming changes
useEffect(() => {
onStreamingChange?.(isStreaming);
}, [isStreaming, onStreamingChange]);
// Handle session changes - reset state
useEffect(() => {
if (sessionId === previousSessionIdRef.current) return;
if (previousSessionIdRef.current && status === "streaming") {
stop();
}
previousSessionIdRef.current = sessionId;
hasNotifiedOperationRef.current = new Set();
if (sessionId) {
setMessages(initialMessages);
}
}, [sessionId, status, stop, setMessages, initialMessages]);
// Convert AI SDK messages to ChatMessageData format
const messages = useMemo(() => {
const result: ChatMessageData[] = [];
for (const message of aiMessages) {
const converted = convertMessageToChatData(message);
result.push(...converted);
// Check for operation_started and notify
for (const msg of converted) {
if (
msg.type === "operation_started" &&
!hasNotifiedOperationRef.current.has(
(msg as { toolId?: string }).toolId || "",
)
) {
hasNotifiedOperationRef.current.add(
(msg as { toolId?: string }).toolId || "",
);
onOperationStarted?.();
}
}
}
return result;
}, [aiMessages, onOperationStarted]);
// Get streaming text chunks from the last assistant message
const streamingChunks = useMemo(() => {
if (!isStreaming) return [];
const lastMessage = aiMessages[aiMessages.length - 1];
if (!lastMessage || lastMessage.role !== "assistant") return [];
const chunks: string[] = [];
for (const part of lastMessage.parts) {
if (part.type === "text" && part.text) {
chunks.push(part.text);
}
}
return chunks;
}, [aiMessages, isStreaming]);
// Send message with optional context
const sendMessage = useCallback(
async (
content: string,
isUserMessage: boolean = true,
context?: { url: string; content: string },
) => {
if (!sessionId || !transport) {
console.error("[useAiSdkChat] Cannot send message: no session");
return;
}
setIsRegionBlockedModalOpen(false);
try {
await aiSendMessage(
{ text: content },
{
body: {
is_user_message: isUserMessage,
context: context || null,
},
},
);
} catch (err) {
console.error("[useAiSdkChat] Failed to send message:", err);
if (err instanceof Error && err.name === "AbortError") return;
toast.error("Failed to send message", {
description:
err instanceof Error ? err.message : "Failed to send message",
});
}
},
[sessionId, transport, aiSendMessage],
);
// Stop streaming
const stopStreaming = useCallback(() => {
stop();
}, [stop]);
return {
messages,
streamingChunks,
isStreaming,
error,
status,
isRegionBlockedModalOpen,
setIsRegionBlockedModalOpen,
sendMessage,
stopStreaming,
// Expose raw AI SDK state for advanced use cases
aiMessages,
setAiMessages: setMessages,
};
}

View File

@@ -218,3 +218,61 @@ test("shows error when deletion fails", async () => {
4. **Co-locate integration tests** - Keep `__tests__/` folder next to the component
5. **E2E is expensive** - Only for critical happy paths; prefer integration tests
6. **AI agents are good at writing integration tests** - Start with these when adding test coverage
---
## Testing 500 Server Errors
Orval auto-generates 422 validation error handlers, but 500 errors must be created manually. Use the helper:
```tsx
import { create500Handler } from "@/tests/integrations/helpers/create-500-handler";
test("handles server error", async () => {
server.use(create500Handler("get", "*/api/store/agents"));
render(<Component />);
expect(await screen.findByText("Failed to load")).toBeInTheDocument();
});
```
Options:
- `delayMs`: Add delay before response (for testing loading states)
- `body`: Custom error response body
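The two options compose in the obvious order: wait, then respond with the custom (or default) body. A minimal standalone sketch of that defaulting and delay behaviour — using a simplified responder shape rather than the real msw handler types, so it can run without msw:

```typescript
interface Create500Options {
  delayMs?: number;
  body?: unknown;
}

// Simplified stand-in for the msw handler: resolves to a 500 status
// plus the configured body after an optional delay.
function makeErrorResponder(options?: Create500Options) {
  const { delayMs = 0, body } = options ?? {};
  const responseBody = body ?? { detail: "Internal Server Error" };
  return async () => {
    if (delayMs > 0) {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
    return { status: 500, body: responseBody };
  };
}

// Default body, no delay
const respond = makeErrorResponder();
respond().then((res) => console.log(res.status));
```

In a real test the same options go straight into `create500Handler`, e.g. `create500Handler("get", "*/api/store/agents", { delayMs: 300 })` to assert a loading state appears before the error UI.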
---
## Testing Auth-Dependent Components
For components that behave differently based on login state:
```tsx
import {
mockAuthenticatedUser,
mockUnauthenticatedUser,
resetAuthState,
} from "@/tests/integrations/helpers/mock-supabase-auth";
describe("MyComponent", () => {
afterEach(() => {
resetAuthState();
});
test("shows feature when logged in", async () => {
mockAuthenticatedUser();
render(<MyComponent />);
expect(await screen.findByText("Premium Feature")).toBeInTheDocument();
});
test("hides feature when logged out", async () => {
mockUnauthenticatedUser();
render(<MyComponent />);
expect(screen.queryByText("Premium Feature")).not.toBeInTheDocument();
});
test("with custom user data", async () => {
mockAuthenticatedUser({ email: "custom@test.com" });
// ...
});
});
```

View File

@@ -0,0 +1,31 @@
import { http, HttpResponse, delay } from "msw";
type HttpMethod = "get" | "post" | "put" | "patch" | "delete";
interface Create500HandlerOptions {
delayMs?: number;
body?: unknown;
}
export function create500Handler(
method: HttpMethod,
url: string,
options?: Create500HandlerOptions,
) {
const { delayMs = 0, body } = options ?? {};
const responseBody = body ?? {
detail: "Internal Server Error",
};
return http[method](url, async () => {
if (delayMs > 0) {
await delay(delayMs);
}
return HttpResponse.json(responseBody, {
status: 500,
headers: { "Content-Type": "application/json" },
});
});
}

View File

@@ -0,0 +1,46 @@
import { createContext, useContext, ReactNode } from "react";
import { UserOnboarding } from "@/lib/autogpt-server-api";
import { PostV1CompleteOnboardingStepStep } from "@/app/api/__generated__/models/postV1CompleteOnboardingStepStep";
import type { LocalOnboardingStateUpdate } from "@/providers/onboarding/helpers";
const MockOnboardingContext = createContext<{
state: UserOnboarding | null;
updateState: (state: LocalOnboardingStateUpdate) => void;
step: number;
setStep: (step: number) => void;
completeStep: (step: PostV1CompleteOnboardingStepStep) => void;
}>({
state: null,
updateState: () => {},
step: 1,
setStep: () => {},
completeStep: () => {},
});
export function useOnboarding(
_step?: number,
_completeStep?: PostV1CompleteOnboardingStepStep,
) {
const context = useContext(MockOnboardingContext);
return context;
}
interface Props {
children: ReactNode;
}
export function MockOnboardingProvider({ children }: Props) {
return (
<MockOnboardingContext.Provider
value={{
state: null,
updateState: () => {},
step: 1,
setStep: () => {},
completeStep: () => {},
}}
>
{children}
</MockOnboardingContext.Provider>
);
}

View File

@@ -0,0 +1,43 @@
import type { User } from "@supabase/supabase-js";
import { useSupabaseStore } from "@/lib/supabase/hooks/useSupabaseStore";
export const mockUser: User = {
id: "test-user-id",
email: "test@example.com",
aud: "authenticated",
role: "authenticated",
created_at: new Date().toISOString(),
app_metadata: {},
user_metadata: {},
};
export function mockAuthenticatedUser(user: Partial<User> = {}): User {
const mergedUser = { ...mockUser, ...user };
useSupabaseStore.setState({
user: mergedUser,
isUserLoading: false,
hasLoadedUser: true,
isValidating: false,
});
return mergedUser;
}
export function mockUnauthenticatedUser(): void {
useSupabaseStore.setState({
user: null,
isUserLoading: false,
hasLoadedUser: true,
isValidating: false,
});
}
export function resetAuthState(): void {
useSupabaseStore.setState({
user: null,
isUserLoading: true,
hasLoadedUser: false,
isValidating: false,
});
}

View File

@@ -0,0 +1,34 @@
export function suppressReactQueryUpdateWarning() {
const originalError = console.error;
console.error = (...args: unknown[]) => {
const isActWarning = args.some(
(arg) =>
typeof arg === "string" &&
(arg.includes("not wrapped in act(...)") ||
(arg.includes("An update to") && arg.includes("inside a test"))),
);
if (isActWarning) {
const fullMessage = args
.map((arg) => String(arg))
.join("\n")
.toLowerCase();
const isReactQueryRelated =
fullMessage.includes("queryclientprovider") ||
fullMessage.includes("react query") ||
fullMessage.includes("@tanstack/react-query");
if (isReactQueryRelated || fullMessage.includes("testproviders")) {
return;
}
}
originalError(...args);
};
return () => {
console.error = originalError;
};
}

View File

@@ -1,14 +1,25 @@
import { BackendAPIProvider } from "@/lib/autogpt-server-api/context";
import OnboardingProvider from "@/providers/onboarding/onboarding-provider";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import { render, RenderOptions } from "@testing-library/react";
import { ReactElement, ReactNode } from "react";
import {
MockOnboardingProvider,
useOnboarding as mockUseOnboarding,
} from "./helpers/mock-onboarding-provider";
vi.mock("@/providers/onboarding/onboarding-provider", () => ({
useOnboarding: mockUseOnboarding,
default: vi.fn(),
}));
function createTestQueryClient() {
return new QueryClient({
defaultOptions: {
queries: {
retry: false,
refetchOnWindowFocus: false,
refetchOnMount: false,
refetchOnReconnect: false,
},
},
});
@@ -19,7 +30,7 @@ function TestProviders({ children }: { children: ReactNode }) {
return (
<QueryClientProvider client={queryClient}>
<BackendAPIProvider>
<OnboardingProvider>{children}</OnboardingProvider>
<MockOnboardingProvider>{children}</MockOnboardingProvider>
</BackendAPIProvider>
</QueryClientProvider>
);

View File

@@ -2,11 +2,23 @@ import { beforeAll, afterAll, afterEach } from "vitest";
import { server } from "@/mocks/mock-server";
import { mockNextjsModules } from "./setup-nextjs-mocks";
import { mockSupabaseRequest } from "./mock-supabase-request";
import "@testing-library/jest-dom";
import { suppressReactQueryUpdateWarning } from "./helpers/suppress-react-query-update-warning";
let restoreConsoleError: (() => void) | null = null;
beforeAll(() => {
mockNextjsModules();
mockSupabaseRequest(); // If you need user's data - please mock supabase actions in your specific test - it sends null user [It's only to avoid cookies() call]
return server.listen({ onUnhandledRequest: "error" });
mockSupabaseRequest();
restoreConsoleError = suppressReactQueryUpdateWarning();
server.listen({ onUnhandledRequest: "error" });
});
afterEach(() => {
server.resetHandlers();
});
afterAll(() => {
restoreConsoleError?.();
server.close();
});
afterEach(() => server.resetHandlers());
afterAll(() => server.close());

View File

@@ -2,85 +2,14 @@ import { expect, test } from "@playwright/test";
import { getTestUserWithLibraryAgents } from "./credentials";
import { LoginPage } from "./pages/login.page";
import { MarketplacePage } from "./pages/marketplace.page";
import { hasUrl, isVisible, matchesUrl } from "./utils/assertion";
import { hasUrl, isVisible } from "./utils/assertion";
import { getSelectors } from "./utils/selectors";
function escapeRegExp(value: string) {
return value.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}
test.describe("Marketplace Agent Page - Basic Functionality", () => {
test("User can access agent page when logged out", async ({ page }) => {
const marketplacePage = new MarketplacePage(page);
await marketplacePage.goto(page);
await hasUrl(page, "/marketplace");
const firstStoreCard = await marketplacePage.getFirstTopAgent();
await firstStoreCard.click();
await page.waitForURL("**/marketplace/agent/**");
await matchesUrl(page, /\/marketplace\/agent\/.+/);
});
test("User can access agent page when logged in", async ({ page }) => {
const loginPage = new LoginPage(page);
const marketplacePage = new MarketplacePage(page);
await loginPage.goto();
const richUser = getTestUserWithLibraryAgents();
await loginPage.login(richUser.email, richUser.password);
await hasUrl(page, "/marketplace");
await marketplacePage.goto(page);
await hasUrl(page, "/marketplace");
const firstStoreCard = await marketplacePage.getFirstTopAgent();
await firstStoreCard.click();
await page.waitForURL("**/marketplace/agent/**");
await matchesUrl(page, /\/marketplace\/agent\/.+/);
});
test("Agent page details are visible", async ({ page }) => {
const { getId } = getSelectors(page);
const marketplacePage = new MarketplacePage(page);
await marketplacePage.goto(page);
const firstStoreCard = await marketplacePage.getFirstTopAgent();
await firstStoreCard.click();
await page.waitForURL("**/marketplace/agent/**");
const agentTitle = getId("agent-title");
await isVisible(agentTitle);
const agentDescription = getId("agent-description");
await isVisible(agentDescription);
const creatorInfo = getId("agent-creator");
await isVisible(creatorInfo);
});
test("Download button functionality works", async ({ page }) => {
const { getId, getText } = getSelectors(page);
const marketplacePage = new MarketplacePage(page);
await marketplacePage.goto(page);
const firstStoreCard = await marketplacePage.getFirstTopAgent();
await firstStoreCard.click();
await page.waitForURL("**/marketplace/agent/**");
const downloadButton = getId("agent-download-button");
await isVisible(downloadButton);
await downloadButton.click();
const downloadSuccessMessage = getText(
"Your agent has been successfully downloaded.",
);
await isVisible(downloadSuccessMessage);
});
test.describe("Marketplace Agent Page - Cross-Page Flows", () => {
test("Add to library button works and agent appears in library", async ({
page,
}) => {
@@ -93,7 +22,6 @@ test.describe("Marketplace Agent Page - Basic Functionality", () => {
const richUser = getTestUserWithLibraryAgents();
await loginPage.login(richUser.email, richUser.password);
await hasUrl(page, "/marketplace");
await marketplacePage.goto(page);
const firstStoreCard = await marketplacePage.getFirstTopAgent();
await firstStoreCard.click();

View File

@@ -1,64 +1,8 @@
import { test } from "@playwright/test";
import { getTestUserWithLibraryAgents } from "./credentials";
import { LoginPage } from "./pages/login.page";
import { MarketplacePage } from "./pages/marketplace.page";
import { hasUrl, isVisible, matchesUrl } from "./utils/assertion";
import { getSelectors } from "./utils/selectors";
test.describe("Marketplace Creator Page Basic Functionality", () => {
test("User can access creator's page when logged out", async ({ page }) => {
const marketplacePage = new MarketplacePage(page);
await marketplacePage.goto(page);
await hasUrl(page, "/marketplace");
const firstCreatorProfile =
await marketplacePage.getFirstCreatorProfile(page);
await firstCreatorProfile.click();
await page.waitForURL("**/marketplace/creator/**");
await matchesUrl(page, /\/marketplace\/creator\/.+/);
});
test("User can access creator's page when logged in", async ({ page }) => {
const loginPage = new LoginPage(page);
const marketplacePage = new MarketplacePage(page);
await loginPage.goto();
const richUser = getTestUserWithLibraryAgents();
await loginPage.login(richUser.email, richUser.password);
await hasUrl(page, "/marketplace");
await marketplacePage.goto(page);
await hasUrl(page, "/marketplace");
const firstCreatorProfile =
await marketplacePage.getFirstCreatorProfile(page);
await firstCreatorProfile.click();
await page.waitForURL("**/marketplace/creator/**");
await matchesUrl(page, /\/marketplace\/creator\/.+/);
});
test("Creator page details are visible", async ({ page }) => {
const { getId } = getSelectors(page);
const marketplacePage = new MarketplacePage(page);
await marketplacePage.goto(page);
await hasUrl(page, "/marketplace");
const firstCreatorProfile =
await marketplacePage.getFirstCreatorProfile(page);
await firstCreatorProfile.click();
await page.waitForURL("**/marketplace/creator/**");
const creatorTitle = getId("creator-title");
await isVisible(creatorTitle);
const creatorDescription = getId("creator-description");
await isVisible(creatorDescription);
});
import { hasUrl, matchesUrl } from "./utils/assertion";
test.describe("Marketplace Creator Page Cross-Page Flows", () => {
test("Agents in agent by sections navigation works", async ({ page }) => {
const marketplacePage = new MarketplacePage(page);

View File

@@ -1,74 +1,8 @@
import { expect, test } from "@playwright/test";
import { getTestUserWithLibraryAgents } from "./credentials";
import { LoginPage } from "./pages/login.page";
import { MarketplacePage } from "./pages/marketplace.page";
import { hasMinCount, hasUrl, isVisible, matchesUrl } from "./utils/assertion";
// Marketplace tests for store agent search functionality
test.describe("Marketplace Basic Functionality", () => {
test("User can access marketplace page when logged out", async ({ page }) => {
const marketplacePage = new MarketplacePage(page);
await marketplacePage.goto(page);
await hasUrl(page, "/marketplace");
const marketplaceTitle = await marketplacePage.getMarketplaceTitle(page);
await isVisible(marketplaceTitle);
console.log(
"User can access marketplace page when logged out test passed ✅",
);
});
test("User can access marketplace page when logged in", async ({ page }) => {
const loginPage = new LoginPage(page);
const marketplacePage = new MarketplacePage(page);
await loginPage.goto();
const richUser = getTestUserWithLibraryAgents();
await loginPage.login(richUser.email, richUser.password);
await hasUrl(page, "/marketplace");
await marketplacePage.goto(page);
await hasUrl(page, "/marketplace");
const marketplaceTitle = await marketplacePage.getMarketplaceTitle(page);
await isVisible(marketplaceTitle);
console.log(
"User can access marketplace page when logged in test passed ✅",
);
});
test("Featured agents, top agents, and featured creators are visible", async ({
page,
}) => {
const marketplacePage = new MarketplacePage(page);
await marketplacePage.goto(page);
const featuredAgentsSection =
await marketplacePage.getFeaturedAgentsSection(page);
await isVisible(featuredAgentsSection);
const featuredAgentCards =
await marketplacePage.getFeaturedAgentCards(page);
await hasMinCount(featuredAgentCards, 1);
const topAgentsSection = await marketplacePage.getTopAgentsSection(page);
await isVisible(topAgentsSection);
const topAgentCards = await marketplacePage.getTopAgentCards(page);
await hasMinCount(topAgentCards, 1);
const featuredCreatorsSection =
await marketplacePage.getFeaturedCreatorsSection(page);
await isVisible(featuredCreatorsSection);
const creatorProfiles = await marketplacePage.getCreatorProfiles(page);
await hasMinCount(creatorProfiles, 1);
console.log(
"Featured agents, top agents, and featured creators are visible test passed ✅",
);
});
import { isVisible, matchesUrl } from "./utils/assertion";
test.describe("Marketplace Navigation", () => {
test("Can navigate and interact with marketplace elements", async ({
page,
}) => {
@@ -95,7 +29,7 @@ test.describe("Marketplace Basic Functionality", () => {
await matchesUrl(page, /\/marketplace\/creator\/.+/);
console.log(
"Can navigate and interact with marketplace elements test passed",
"Can navigate and interact with marketplace elements test passed",
);
});
@@ -128,32 +62,6 @@ test.describe("Marketplace Basic Functionality", () => {
const results = await marketplacePage.getSearchResultsCount(page);
expect(results).toBeGreaterThan(0);
console.log("Complete search flow works correctly test passed");
});
// We need to add a test search with filters, but the current business logic for filters doesn't work as expected. We'll add it once we modify that.
});
test.describe("Marketplace Edge Cases", () => {
test("Search for non-existent item shows no results", async ({ page }) => {
const marketplacePage = new MarketplacePage(page);
await marketplacePage.goto(page);
await marketplacePage.searchAndNavigate("xyznonexistentitemxyz123", page);
await marketplacePage.waitForSearchResults();
await matchesUrl(page, /\/marketplace\/search\?searchTerm=/);
const resultsHeading = page.getByText("Results for:");
await isVisible(resultsHeading);
const searchTerm = page.getByText("xyznonexistentitemxyz123");
await isVisible(searchTerm);
const results = await marketplacePage.getSearchResultsCount(page);
expect(results).toBe(0);
console.log("Search for non-existent item shows no results test passed ✅");
console.log("Complete search flow works correctly test passed");
});
});

View File

@@ -14,6 +14,7 @@
"jsx": "preserve",
"incremental": true,
"plugins": [{ "name": "next" }],
"types": ["vitest/globals", "@testing-library/jest-dom/vitest"],
"paths": {
"@/*": ["./src/*"]
}

View File

@@ -8,5 +8,6 @@ export default defineConfig({
environment: "happy-dom",
include: ["src/**/*.test.tsx"],
setupFiles: ["./src/tests/integrations/vitest.setup.tsx"],
globals: true,
},
});

View File

@@ -65,7 +65,7 @@ The result routes data to yes_output or no_output, enabling intelligent branchin
| condition | A plaintext English description of the condition to evaluate | str | Yes |
| yes_value | (Optional) Value to output if the condition is true. If not provided, input_value will be used. | Yes Value | No |
| no_value | (Optional) Value to output if the condition is false. If not provided, input_value will be used. | No Value | No |
| model | The language model to use for evaluating the condition. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for evaluating the condition. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
### Outputs
@@ -103,7 +103,7 @@ The block sends the entire conversation history to the chosen LLM, including sys
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | No |
| messages | List of messages in the conversation. | List[Any] | Yes |
| model | The language model to use for the conversation. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for the conversation. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
| ollama_host | Ollama host for local models | str | No |
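As a hedged illustration of the table above, the conversation block's inputs can be sketched as a plain payload. Field names mirror the table; the specific model value and message shapes are illustrative assumptions, not the platform's canonical serialization:

```python
# Hypothetical inputs payload for the conversation block; field names
# mirror the documented inputs, values are illustrative only.
conversation_inputs = {
    "prompt": "Continue the conversation.",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "model": "gpt-4o-mini",  # any value from the model enum above
    "max_tokens": 256,
}
```

Only `messages` is required per the table; the other fields fall back to block defaults when omitted.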
@@ -257,7 +257,7 @@ The block formulates a prompt based on the given focus or source data, sends it
|-------|-------------|------|----------|
| focus | The focus of the list to generate. | str | No |
| source_data | The data to generate the list from. | str | No |
| model | The language model to use for generating the list. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for generating the list. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| max_retries | Maximum number of retries for generating a valid list. | int | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
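The `max_retries` input above caps how often the block re-prompts until it gets a valid list back. A minimal sketch of that retry-and-validate loop, assuming a hypothetical `llm_call` callable (the block's real internals may differ):

```python
import json

def generate_list(llm_call, focus, source_data, max_retries=3):
    # Re-prompt until the model returns a valid JSON array,
    # giving up after max_retries attempts.
    prompt = (
        f"Generate a list focused on {focus} from this data:\n{source_data}\n"
        "Respond with a JSON array only."
    )
    last_error = None
    for _ in range(max_retries):
        raw = llm_call(prompt)
        try:
            parsed = json.loads(raw)
            if isinstance(parsed, list):
                return parsed
            last_error = "response was valid JSON but not a list"
        except json.JSONDecodeError as exc:
            last_error = str(exc)
    raise ValueError(f"No valid list after {max_retries} attempts: {last_error}")

# Fake model for illustration: fails once, then returns valid JSON.
responses = iter(["not json", '["a", "b"]'])
result = generate_list(lambda p: next(responses), "letters", "a b", max_retries=3)
```

`force_json_output` trades the model's freedom to reason in prose for a higher chance that the first attempt already parses.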
@@ -424,7 +424,7 @@ The block sends the input prompt to a chosen LLM, along with any system prompts
| prompt | The prompt to send to the language model. | str | Yes |
| expected_format | Expected format of the response. If provided, the response will be validated against this format. The keys should be the expected fields in the response, and the values should be the description of the field. | Dict[str, str] | Yes |
| list_result | Whether the response should be a list of objects in the expected format. | bool | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, Any]] | No |
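The `expected_format` / `list_result` pair above describes response validation: every expected key must appear in the response (or in every item when a list is requested). A minimal sketch of that check, under the assumption that validation is key-presence only (the real block may validate more):

```python
def matches_expected_format(response, expected_format, list_result=False):
    # expected_format maps field name -> human-readable description;
    # a response is valid when every expected field is present.
    def has_keys(obj):
        return isinstance(obj, dict) and all(k in obj for k in expected_format)

    if list_result:
        return isinstance(response, list) and all(has_keys(item) for item in response)
    return has_keys(response)

expected = {"name": "The person's name", "age": "The person's age"}
ok = matches_expected_format({"name": "Ada", "age": 36}, expected)
bad = matches_expected_format([{"name": "Ada"}], expected, list_result=True)
```

A failed check is what triggers the block's retry behaviour rather than an immediate error.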
@@ -464,7 +464,7 @@ The block sends the input prompt to a chosen LLM, processes the response, and re
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. You can use any of the {keys} from Prompt Values to fill in the prompt with values from the prompt values dictionary by putting them in curly braces. | str | Yes |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| retry | Number of times to retry the LLM call if the response does not match the expected format. | int | No |
| prompt_values | Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}. | Dict[str, str] | No |
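The `prompt_values` row above describes `{{variable_name}}` substitution. A minimal sketch of that templating, assuming unknown placeholders are simply left in place (the block's actual handling of missing keys may differ):

```python
import re

def fill_prompt(prompt, prompt_values):
    # Replace each {{variable_name}} placeholder with its value from
    # prompt_values; placeholders without a value are left untouched.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: prompt_values.get(m.group(1), m.group(0)),
        prompt,
    )

filled = fill_prompt(
    "Summarize {{topic}} for a {{audience}}.",
    {"topic": "LLMs", "audience": "beginner"},
)
```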
@@ -501,7 +501,7 @@ The block splits the input text into smaller chunks, sends each chunk to an LLM
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| text | The text to summarize. | str | Yes |
| model | The language model to use for summarizing the text. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for summarizing the text. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| focus | The topic to focus on in the summary | str | No |
| style | The style of the summary to generate. | "concise" \| "detailed" \| "bullet points" \| "numbered list" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
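Per the hunk context above, the summarizer "splits the input text into smaller chunks, sends each chunk to an LLM". A hedged map-reduce sketch of that flow, with a hypothetical `summarize` callable standing in for the model call (chunking by character count is an assumption; the block may chunk by tokens):

```python
def summarize_long_text(text, summarize, chunk_size=1000):
    # Map: summarize each fixed-size chunk.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    partials = [summarize(chunk) for chunk in chunks]
    # Reduce: a single chunk needs no second pass; otherwise
    # summarize the concatenated partial summaries.
    if len(partials) == 1:
        return partials[0]
    return summarize(" ".join(partials))

# Fake summarizer for illustration: keep the first word of its input.
def first_word(s):
    return s.split()[0]

summary = summarize_long_text("alpha beta gamma", first_word, chunk_size=6)
```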
@@ -763,7 +763,7 @@ Configure agent_mode_max_iterations to control loop behavior: 0 for single decis
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | Yes |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| multiple_tool_calls | Whether to allow multiple tool calls in a single response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, Any]] | No |
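The hunk context above says to "configure agent_mode_max_iterations to control loop behavior: 0 for single decision". A hedged sketch of such a decide-then-act loop, with hypothetical `decide` and `execute_tool` callables (the block's real control flow and decision schema are internal details):

```python
def run_agent(decide, execute_tool, prompt, agent_mode_max_iterations=0):
    # 0 allows a single decision; larger values allow that many
    # additional tool-call iterations before the loop stops.
    history = [{"role": "user", "content": prompt}]
    for _ in range(agent_mode_max_iterations + 1):
        decision = decide(history)
        if decision.get("finished"):
            return decision["response"]
        result = execute_tool(decision["tool"], decision.get("args", {}))
        history.append({"role": "tool", "content": str(result)})
    return None  # iteration budget exhausted without a final answer

# Fake decider for illustration: call one tool, then finish with its result.
def decide(history):
    if history[-1]["role"] == "tool":
        return {"finished": True, "response": history[-1]["content"]}
    return {"tool": "add", "args": {"a": 2, "b": 3}}

answer = run_agent(decide, lambda name, args: args["a"] + args["b"],
                   "What is 2 + 3?", agent_mode_max_iterations=2)
```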

View File

@@ -20,7 +20,7 @@ Configure timeouts for DOM settlement and page loading. Variables can be passed
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| browserbase_project_id | Browserbase project ID (required if using Browserbase) | str | Yes |
| model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-sonnet-4-5-20250929" | No |
| model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-3-7-sonnet-20250219" | No |
| url | URL to navigate to. | str | Yes |
| action | Action to perform. Suggested actions are: click, fill, type, press, scroll, select from dropdown. For multi-step actions, add an entry for each step. | List[str] | Yes |
| variables | Variables to use in the action. Variables contains data you want the action to use. | Dict[str, str] | No |
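As a hedged illustration of the table above, a multi-step action with variables might be configured like this. Field names mirror the table; the URL, values, and the `%placeholder%` convention inside actions are illustrative assumptions:

```python
# Hypothetical inputs payload for the multi-step browser action block;
# field names mirror the documented inputs, values are illustrative only.
stagehand_inputs = {
    "browserbase_project_id": "proj_example",
    "model": "gpt-4.1-mini-2025-04-14",
    "url": "https://example.com/login",
    "action": [  # one entry per step, per the description above
        "fill the username field with %username%",
        "fill the password field with %password%",
        "click the sign-in button",
    ],
    "variables": {"username": "demo", "password": "example-password"},
}
```

Keeping credentials in `variables` rather than inline in `action` keeps sensitive data out of the natural-language instructions.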
@@ -65,7 +65,7 @@ Supports searching within iframes and configurable timeouts for dynamic content
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| browserbase_project_id | Browserbase project ID (required if using Browserbase) | str | Yes |
| model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-sonnet-4-5-20250929" | No |
| model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-3-7-sonnet-20250219" | No |
| url | URL to navigate to. | str | Yes |
| instruction | Natural language description of elements or actions to discover. | str | Yes |
| iframes | Whether to search within iframes. If True, Stagehand will search for actions within iframes. | bool | No |
@@ -106,7 +106,7 @@ Use this to explore a page's interactive elements before building automated work
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| browserbase_project_id | Browserbase project ID (required if using Browserbase) | str | Yes |
| model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-sonnet-4-5-20250929" | No |
| model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-3-7-sonnet-20250219" | No |
| url | URL to navigate to. | str | Yes |
| instruction | Natural language description of elements or actions to discover. | str | Yes |
| iframes | Whether to search within iframes. If True, Stagehand will search for actions within iframes. | bool | No |