Compare commits


48 Commits

Author SHA1 Message Date
Zamil Majdy
d1723760aa fix: determine clarification answered state from conversation
Instead of localStorage, check if there's a user message after the
clarification request in the conversation. This works across browsers
since the conversation is stored in the backend.
2026-01-30 15:31:32 -06:00
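A minimal sketch of the detection logic this commit describes, assuming the conversation is a list of message dicts with `type` and `role` fields (the real implementation lives in the frontend; all names here are illustrative):

```python
from typing import Any


def is_clarification_answered(messages: list[dict[str, Any]]) -> bool:
    """True if any user message follows the last clarification request."""
    last_request = max(
        (i for i, m in enumerate(messages) if m.get("type") == "clarification_request"),
        default=-1,
    )
    if last_request == -1:
        return False
    # A user message after the request means the questions were answered.
    return any(m.get("role") == "user" for m in messages[last_request + 1:])
```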
Zamil Majdy
4362aaab75 fix: populate user_id in AgentExecutorBlock nodes at save time
Generated agents have empty user_id in AgentExecutorBlock nodes.
This fix populates the actual user_id when saving the agent to the
library, ensuring sub-agents run with correct permissions.
2026-01-30 15:24:32 -06:00
Zamil Majdy
e4e6128065 fix: correct AgentExecutorBlock handle IDs to match link sink_names
- CustomNode: Use inputs_#_<field> format for AGENT block handles
  to match standard dictionary input pattern and link sink_names
- core.py: Wrap UUID lookup in try/except for graceful degradation
2026-01-30 15:11:19 -06:00
Zamil Majdy
68722cb199 fix: add DatabaseError handling and remove inline comments
- Re-raise DatabaseError in get_library_agents_for_generation for proper
  error propagation
- Remove inline comments from LibraryAgentStatus enum and LibraryAgent
  fields per no-comments guideline
2026-01-30 14:13:26 -06:00
Zamil Majdy
3a3355befb fix: use graph_id from current_agent for exclusion in edit_agent
The agent_id parameter might be a library agent ID, not a graph_id,
which could cause incorrect agent exclusion when fetching library agents.
Always use the graph_id from the fetched current_agent.
2026-01-30 14:00:01 -06:00
Zamil Majdy
a042c3285d fix: add defensive null checks and validation to agent generator
- Add isinstance(str) checks to prevent AttributeError when agent name is None
- Add AgentJsonValidationError for better error messages on invalid agent JSON
- Add validation in json_to_graph for missing required fields (block_id, source_id, etc.)
- Remove try-except that was swallowing DatabaseError in get_all_relevant_agents_for_generation
- Add DatabaseError handler in enrich_library_agents_from_steps to properly re-raise
- Export AgentJsonValidationError from __init__.py
2026-01-30 13:55:29 -06:00
Zamil Majdy
82a9612192 chore: remove inline comments per no-comments guideline
Address CodeRabbit PR review comments by removing inline comments
from core.py and edit_agent.py that violate the no-comments guideline.
The code is self-explanatory without these comments.
2026-01-30 13:44:27 -06:00
Zamil Majdy
b2777aa5fc fix: handle empty input_schema.properties in CustomNode
Add defensive check to prevent crash when inputSchema.properties
is undefined (e.g., for sub-agent blocks with empty schemas).
2026-01-30 13:25:51 -06:00
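The defensive pattern, sketched in Python for brevity (the actual fix is in the CustomNode TypeScript component; `get_input_fields` is a hypothetical helper):

```python
def get_input_fields(input_schema: dict | None) -> dict:
    # Tolerate a missing schema and a schema with no "properties" key.
    return (input_schema or {}).get("properties") or {}


assert get_input_fields(None) == {}
assert get_input_fields({}) == {}
assert get_input_fields({"properties": {"url": {"type": "string"}}}) == {
    "url": {"type": "string"}
}
```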
Zamil Majdy
4053deecf8 fix: prevent stale writes on session change and drop useCallback
Use ref-based guard to skip persistence when sessionId changes before
hydration completes. Replace useCallback with plain function declaration
since the handler is wrapped in an inline arrow function anyway.
2026-01-30 13:11:08 -06:00
Zamil Majdy
b993820e8f refactor: remove more inline comments per no-comments guideline
- Remove inline comment from DecompositionResult.type field
- Remove "Convert typed dicts" comments from decompose_goal, generate_agent, generate_agent_patch
- Remove inline comments from _sanitize_error_details regex operations
- Remove inline comments from get_user_message_for_error error handling
2026-01-30 13:02:16 -06:00
Zamil Majdy
67f5213339 refactor: remove inline comments per no-comments guideline
- Remove inline comments from ExecutionSummary and LibraryAgentSummary TypedDicts
- Remove inline comment on error_details parameter in edit_agent.py
2026-01-30 12:29:44 -06:00
Zamil Majdy
18abe18999 fix: clear storage when answers blank and remove inline comments
- Clear localStorage when all answers are deleted to prevent stale data on reload
- Remove inline comments to comply with no-comments guideline
2026-01-30 12:07:33 -06:00
Zamil Majdy
a04c1fa978 fix: simplify settings parsing and reset widget state on session change
- Simplify _parse_settings to handle dict/str/None types only
- Reset ClarificationQuestionsWidget answers and isSubmitted when sessionId changes
2026-01-30 11:49:57 -06:00
Zamil Majdy
d37edad901 fix: remove trailing newline from snapshot file 2026-01-30 11:36:00 -06:00
Zamil Majdy
0beb4cf351 fix: address test failures and PR review comments
- Fix executionStatus.value crash when status is a string not enum
- Update snapshot with new LibraryAgent fields (execution_count, etc.)
- Update test mocks to include include_executions=True parameter
- Fix potential KeyError/AttributeError in agent deduplication by using
  .get() with defaults instead of direct dict access
2026-01-30 10:03:36 -06:00
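The `executionStatus.value` fix reduces to tolerating both enum and plain-string statuses; a minimal sketch of that pattern (the types are illustrative, not the platform's actual models):

```python
from enum import Enum


class ExecutionStatus(Enum):
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"


def status_to_str(status: ExecutionStatus | str) -> str:
    # .value would raise AttributeError on a plain string, so branch first.
    return status.value if isinstance(status, ExecutionStatus) else status
```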
Zamil Majdy
f8980ad7dc refactor: use util/json instead of stdlib json in LibraryAgent.from_db
Uses the existing backend.util.json.loads utility which wraps orjson
for better performance and type handling.
2026-01-30 09:57:51 -06:00
Zamil Majdy
ea08a630cb fix: handle string settings in LibraryAgent.from_db
Fixes AUTOGPT-SERVER-7N6 - Some DB records have settings stored as
a JSON string instead of a dict, causing GraphSettings.model_validate
to fail with 'str' object has no attribute 'value'.
2026-01-30 09:40:44 -06:00
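A sketch of the tolerant parsing this fix describes, using stdlib `json` where the platform wraps orjson (per the f8980ad7dc commit above); `parse_settings` is a hypothetical stand-in for the model's parsing step:

```python
import json
from typing import Any


def parse_settings(raw: dict[str, Any] | str | None) -> dict[str, Any]:
    """Normalize settings stored as a dict, a JSON string, or NULL."""
    if raw is None:
        return {}
    if isinstance(raw, str):
        parsed = json.loads(raw)
        return parsed if isinstance(parsed, dict) else {}
    return raw
```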
Zamil Majdy
786d1e0d97 refactor: remove inline comments per code review guidelines
Removed unnecessary inline comments from:
- agent_generator/core.py
- agent_search.py
- create_agent.py
- library/db.py
- library/model.py
2026-01-30 09:36:49 -06:00
Zamil Majdy
29e1a1b002 chore: update openapi.json with RecentExecution schema
Auto-generated from backend API changes for library agent quality metrics.
2026-01-30 09:30:58 -06:00
Zamil Majdy
c4f722cbfd fix: persist clarification widget answers in localStorage
Answers are now saved to localStorage while user fills the form
and restored on page refresh. Storage is cleared after submission.
2026-01-30 09:30:00 -06:00
Zamil Majdy
de57c99286 feat: add recent_executions and improve error messages for agent generation
- Add RecentExecution model with status, correctness_score, and activity_summary
- Expose recent_executions in LibraryAgent for quality assessment
- Always pass error_details to user-facing messages for better debugging
- Update ExecutionSummary TypedDict for sub-agent composition
2026-01-30 09:15:58 -06:00
Zamil Majdy
1ad8fde75d fix: address PR review comments for agent generator
- Re-raise DatabaseError in get_library_agent_by_id to not swallow DB failures
- Add error details sanitization to strip sensitive info (paths, URLs, etc.)
- Clean up redundant inline comments in edit_agent.py
2026-01-30 07:59:11 -06:00
Zamil Majdy
aef705007b refactor: remove aggressive ERROR status filter from library agent search
The ERROR status filter was too aggressive - a single failed execution
would exclude an agent from sub-agent composition, even if it had many
successful runs. Removed the filter for now.

Future enhancement: Add quality filtering based on execution success rate
or correctness_score (stored in AgentGraphExecution stats) rather than
the binary ERROR status.
2026-01-30 07:45:00 -06:00
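One possible shape for that future filter, assuming executions carry a status string and the threshold is tunable (both assumptions; this is not implemented in the PR):

```python
def passes_quality_bar(executions: list[dict], min_success_rate: float = 0.5) -> bool:
    """Keep agents whose recent runs mostly succeeded; keep unproven agents too."""
    if not executions:
        return True
    successes = sum(1 for ex in executions if ex.get("status") == "COMPLETED")
    return successes / len(executions) >= min_success_rate
```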
Zamil Majdy
be7e1ad9b6 feat: add quality filtering to exclude ERROR status library agents
Filter out library agents with ERROR status when searching for
sub-agent composition candidates. This prevents recommending broken
or draft agents that have failed executions.
2026-01-30 07:40:17 -06:00
Zamil Majdy
ce050abff9 feat: add include_library parameter to get_all_relevant_agents_for_generation
Add configurable include_library parameter (default True) to allow
controlling whether user's library agents are included in the search
results for sub-agent composition.
2026-01-30 07:36:39 -06:00
Zamil Majdy
79eb2889ab style: fix formatting in agent_generator/service.py 2026-01-30 07:29:32 -06:00
Zamil Majdy
5bc5e02dcb Merge branch 'dev' into feat/sub-agent-support 2026-01-30 07:24:08 -06:00
Zamil Majdy
f83366d08d fix: address PR review comments - remove inline comments, add stripInternalReasoning
- Remove remaining inline comments per style guidelines
- Add stripInternalReasoning to error case in formatToolResponse
2026-01-30 07:23:08 -06:00
Reinier van der Leer
350ad3591b fix(backend/chat): Filter credentials for graph execution by scopes (#11881)
[SECRT-1842: run_agent tool does not correctly use credentials - agents
fail with insufficient auth
scopes](https://linear.app/autogpt/issue/SECRT-1842)

### Changes 🏗️

- Include scopes in credentials filter in
`backend.api.features.chat.tools.utils.match_user_credentials_to_graph`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI must pass
- It's broken now and this is a simple change, so we'll test in the dev deployment
2026-01-30 11:01:51 +00:00
Bently
de0ec3d388 chore(llm): remove deprecated Claude 3.7 Sonnet model with migration and defensive handling (#11841)
## Summary
Remove `claude-3-7-sonnet-20250219` from LLM model definitions ahead of
Anthropic's API retirement, with comprehensive migration and defensive
error handling.

## Background
Anthropic is retiring Claude 3.7 Sonnet (`claude-3-7-sonnet-20250219`)
on **February 19, 2026 at 9:00 AM PT**. This PR removes the model from
the platform and migrates existing users to prevent service
interruptions.

## Changes

### Code Changes
- Remove `CLAUDE_3_7_SONNET` enum member from `LlmModel` in `llm.py`
- Remove corresponding `ModelMetadata` entry
- Remove `CLAUDE_3_7_SONNET` from `StagehandRecommendedLlmModel` enum
- Remove `CLAUDE_3_7_SONNET` from block cost config
- Add `CLAUDE_4_5_SONNET` to `StagehandRecommendedLlmModel` enum
- Update Stagehand block defaults from `CLAUDE_3_7_SONNET` to
`CLAUDE_4_5_SONNET` (staying in Claude family)
- Add defensive error handling in `CredentialsFieldInfo.discriminate()`
for deprecated model values

### Database Migration
- Adds migration `20260126120000_migrate_claude_3_7_to_4_5_sonnet`
- Migrates `AgentNode.constantInput` model references
- Migrates `AgentNodeExecutionInputOutput.data` preset overrides

### Documentation
- Updated `docs/integrations/block-integrations/llm.md` to remove
deprecated model
- Updated `docs/integrations/block-integrations/stagehand/blocks.md` to
remove deprecated model and add Claude 4.5 Sonnet

## Notes
- Agent JSON files in `autogpt_platform/backend/agents/` still reference
this model in their provider mappings. These are auto-generated and
should be regenerated separately.

## Testing
- [ ] Verify LLM block still functions with remaining models
- [ ] Confirm no import errors in affected files
- [ ] Verify migration runs successfully
- [ ] Verify deprecated model gives helpful error message instead of
KeyError
2026-01-30 08:40:55 +00:00
Otto
7cb1e588b0 fix(frontend): Refocus ChatInput after voice transcription completes (#11893)
## Summary
Refocuses the chat input textarea after voice transcription finishes,
allowing users to immediately use `spacebar+enter` to record and send
their prompt.

## Changes
- Added `inputId` parameter to `useVoiceRecording` hook
- After transcription completes, the input is automatically focused
- This improves the voice input UX flow

## Testing
1. Click mic button or press spacebar to record voice
2. Record a message and stop
3. After transcription completes, the input should be focused
4. User can now press Enter to send or spacebar to record again

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
2026-01-30 14:49:05 +07:00
Zamil Majdy
16ae8ddbe0 fix: correct library agent link path from /library to /library/agents
The "View in Library" link was returning 404 because the path was
missing the /agents/ segment. Fixed in both create_agent.py and
edit_agent.py to match the correct route used elsewhere.
2026-01-29 23:44:54 -06:00
Zamil Majdy
4b04ae2147 fix: address PR review comments
- Add null checks for .lower() on agent names that could be None
- Add isinstance guard for non-string step values in extract_search_terms
- Re-raise DatabaseError instead of swallowing it in agent_search
- Remove inline comments per style guidelines
2026-01-29 23:37:11 -06:00
Zamil Majdy
de71d6134a fix: display user-friendly error message instead of error code
Swap priority to check message field before error field so users see
helpful error messages instead of technical codes
2026-01-29 23:31:29 -06:00
Zamil Majdy
e6eb8a3f57 fix: improve error messages and LLM continuation for agent generation
- Add LLM continuation call when background tool execution fails with
  exception (previously users saw no explanation for errors)
- Improve validation error messages with more helpful guidance
- Add error_details parameter to include technical context in error
  responses when needed
- Update create_agent to pass error details for validation failures
2026-01-29 23:15:53 -06:00
Otto
582c6cad36 fix(e2e): Make E2E test data deterministic and fix flaky tests (#11890)
## Summary
Fixes flaky E2E marketplace and library tests that were causing PRs to
be removed from the merge queue.

## Root Cause
1. **Test data was probabilistic** - `e2e_test_data.py` used random
chances (40% approve, then 20-50% feature), which could result in 0
featured agents
2. **Library pagination threshold wrong** - Checked `>= 10`, but page
size is 20
3. **Fixed timeouts** - Used `waitForTimeout(2000)` /
`waitForTimeout(10000)` instead of proper waits

## Changes

### Backend (`e2e_test_data.py`)
- Add guaranteed minimums: 8 featured agents, 5 featured creators, 10
top agents
- First N submissions are deterministically approved and featured
- Increase agents per user from 15 → 25 (for pagination with
page_size=20)
- Fix library agent creation to use constants instead of hardcoded `10`

### Frontend Tests
- `library.spec.ts`: Fix pagination threshold to `PAGE_SIZE` (20)
- `library.page.ts`: Replace 2s timeout with `networkidle` +
`waitForFunction`
- `marketplace.page.ts`: Add `networkidle` wait, 30s waits in
`getFirst*` methods
- `marketplace.spec.ts`: Replace 10s timeout with `waitForFunction`
- `marketplace-creator.spec.ts`: Add `networkidle` + element waits

## Related
- Closes SECRT-1848, SECRT-1849
- Should unblock #11841 and other PRs in merge queue

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
2026-01-30 05:12:35 +00:00
Zamil Majdy
0d1d275e8d fix: improve library search to match any word instead of exact phrase
Previously, searching for "flight price drop alert" required that exact
phrase to be in the agent name/description. Now it splits into individual
words and matches agents containing ANY of: flight, price, drop, alert.

This fixes the issue where "flight price tracker" wasn't found when
searching for "flight price drop alert" even though they share keywords.
2026-01-29 22:28:49 -06:00
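In spirit, the change replaces one exact-phrase filter with an OR across per-word filters; a simplified in-memory sketch (the real matching happens in the database query):

```python
def matches_any_word(query: str, name: str, description: str) -> bool:
    """Match if ANY whitespace-separated query word appears in name/description."""
    haystack = f"{name} {description}".lower()
    return any(word in haystack for word in query.lower().split())


# "Flight Price Tracker" now matches the query "flight price drop alert".
assert matches_any_word("flight price drop alert", "Flight Price Tracker", "")
```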
Zamil Majdy
dc92a7b520 chore: add debug logging for find_library_agent tool
Added logging to help diagnose library search issues:
- Log the query and user_id when tool is called
- Log the number of results returned from database
2026-01-29 22:15:19 -06:00
Zamil Majdy
d4047b5439 fix: support UUID lookup in find_library_agent tool
When users paste a library URL or agent UUID, the find_library_agent
tool now does direct ID lookup first (both by graph_id and library
agent ID) before falling back to text search.

This fixes the issue where searching by UUID would fail because
it was only doing text matching on agent names/descriptions.
2026-01-29 22:07:42 -06:00
Zamil Majdy
f00678fd1c fix: support lookup by library agent ID in addition to graph_id
When users paste library URLs (e.g., /library/agents/{id}), the ID is
the LibraryAgent primary key, not the graph_id. The previous code only
looked up by graph_id, causing "agent not found" errors.

Now get_library_agent_by_id() tries both lookup strategies:
1. First by graph_id (AgentGraph primary key)
2. Then by library agent ID (LibraryAgent primary key)

This fixes the issue where users couldn't reference agents by pasting
their library URLs in chat.
2026-01-29 22:02:46 -06:00
Zamil Majdy
aa175e0f4e feat: extract UUIDs from user input to fetch explicitly mentioned agents
When users mention agents by UUID in their goal description, we now:
1. Extract UUID v4 patterns from the search_query text
2. Fetch those agents directly by graph_id
3. Include them in the library_agents list for the LLM

This ensures explicitly referenced agents are always available to the
Agent Generator, even if text search wouldn't find them.

Added:
- extract_uuids_from_text(): extracts UUID v4 patterns from text
- get_library_agent_by_graph_id(): fetches a single agent by graph_id
- Integration in get_all_relevant_agents_for_generation()
2026-01-29 21:26:08 -06:00
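Example use of the helper added here (its regex appears in the core.py diff below; the UUID is made up for illustration):

```python
extract_uuids_from_text(
    "Reuse agent 1b9fc4a2-7c31-4d6e-9a5b-2f8e0c1d3a4f for the summary step"
)
# -> ["1b9fc4a2-7c31-4d6e-9a5b-2f8e0c1d3a4f"]
```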
Zamil Majdy
9a8838c69a refactor: move internal imports to top-level in core.py
- Move store_db, get_graph, get_graph_all_versions imports to top-level
- Catch specific NotFoundError instead of generic Exception
- Cleaner code organization following standard Python conventions
2026-01-29 21:18:47 -06:00
Zamil Majdy
41beae1122 fix: resolve library agent IDs to graph IDs in get_agent_as_json
get_agent_as_json claimed to accept both graph IDs and library agent IDs
but only tried direct graph lookup. When a library agent ID was passed,
the function would return None (agent_not_found error).

Now the function:
1. First tries direct graph lookup with the provided ID
2. If not found, resolves the ID as a library agent ID to get the graph_id
3. Then fetches the graph using the resolved graph_id
2026-01-29 21:16:20 -06:00
Zamil Majdy
e810f7b0d7 Merge branch 'dev' into feat/sub-agent-support 2026-01-29 19:13:37 -06:00
Zamil Majdy
9c3822fffe chore: remove obvious comments and alphabetize __all__ 2026-01-29 19:03:25 -06:00
Zamil Majdy
c039a2e3ad feat: add two-phase library search for better sub-agent discovery
- Add TypedDict types for agent summaries (LibraryAgentSummary, MarketplaceAgentSummary, DecompositionResult)
- Add extract_search_terms_from_steps() to extract keywords from decomposed instructions
- Add enrich_library_agents_from_steps() for two-phase search after decomposition
- Integrate enrichment into create_agent.py flow
- Add comprehensive tests for new functionality
2026-01-29 18:51:07 -06:00
Zamil Majdy
a3fe1ede55 fix: address PR review comments
- Add try/except error handling to get_library_agents_for_generation
  for graceful degradation (consistent with marketplace search)
- Add null checks when deduplicating agents by name to prevent
  AttributeError if agent name is None
- Use actual graph ID from current_agent in edit_agent.py to properly
  exclude the agent being edited (agent_id might be a library agent ID)
2026-01-29 18:22:12 -06:00
Zamil Majdy
552d069a9d feat: add search-based library agent fetching for sub-agent support
- Add get_library_agents_for_generation() with search_term support
- Add search_marketplace_agents_for_generation() for marketplace search
- Add get_all_relevant_agents_for_generation() combining both sources
- Update service.py to pass library_agents in all requests
- Update create_agent.py to fetch and pass relevant library agents
- Update edit_agent.py to fetch and pass relevant library agents
- Add tests for library agent fetching and passthrough
2026-01-29 17:10:42 -06:00
35 changed files with 2339 additions and 292 deletions

View File

@@ -1834,6 +1834,11 @@ async def _execute_long_running_tool(
                 tool_call_id=tool_call_id,
                 result=error_response.model_dump_json(),
             )
+            # Generate LLM continuation so user sees explanation even for errors
+            try:
+                await _generate_llm_continuation(session_id=session_id, user_id=user_id)
+            except Exception as llm_err:
+                logger.warning(f"Failed to generate LLM continuation for error: {llm_err}")
     finally:
         await _mark_operation_completed(tool_call_id)

View File

@@ -2,30 +2,54 @@
 from .core import (
     AgentGeneratorNotConfiguredError,
+    AgentJsonValidationError,
+    AgentSummary,
+    DecompositionResult,
+    DecompositionStep,
+    LibraryAgentSummary,
+    MarketplaceAgentSummary,
     decompose_goal,
+    enrich_library_agents_from_steps,
+    extract_search_terms_from_steps,
+    extract_uuids_from_text,
     generate_agent,
     generate_agent_patch,
     get_agent_as_json,
+    get_all_relevant_agents_for_generation,
+    get_library_agent_by_graph_id,
+    get_library_agent_by_id,
+    get_library_agents_for_generation,
     json_to_graph,
     save_agent_to_library,
+    search_marketplace_agents_for_generation,
 )
 from .errors import get_user_message_for_error
 from .service import health_check as check_external_service_health
 from .service import is_external_service_configured

 __all__ = [
-    # Core functions
+    "AgentGeneratorNotConfiguredError",
+    "AgentJsonValidationError",
+    "AgentSummary",
+    "DecompositionResult",
+    "DecompositionStep",
+    "LibraryAgentSummary",
+    "MarketplaceAgentSummary",
+    "check_external_service_health",
     "decompose_goal",
+    "enrich_library_agents_from_steps",
+    "extract_search_terms_from_steps",
+    "extract_uuids_from_text",
     "generate_agent",
     "generate_agent_patch",
-    "save_agent_to_library",
     "get_agent_as_json",
-    "json_to_graph",
-    # Exceptions
-    "AgentGeneratorNotConfiguredError",
-    # Service
-    "is_external_service_configured",
-    "check_external_service_health",
-    # Error handling
+    "get_all_relevant_agents_for_generation",
+    "get_library_agent_by_graph_id",
+    "get_library_agent_by_id",
+    "get_library_agents_for_generation",
     "get_user_message_for_error",
+    "is_external_service_configured",
+    "json_to_graph",
+    "save_agent_to_library",
+    "search_marketplace_agents_for_generation",
 ]

View File

@@ -1,11 +1,21 @@
 """Core agent generation functions."""

 import logging
+import re
 import uuid
-from typing import Any
+from typing import Any, NotRequired, TypedDict

 from backend.api.features.library import db as library_db
-from backend.data.graph import Graph, Link, Node, create_graph
+from backend.api.features.store import db as store_db
+from backend.data.graph import (
+    Graph,
+    Link,
+    Node,
+    create_graph,
+    get_graph,
+    get_graph_all_versions,
+)
+from backend.util.exceptions import DatabaseError, NotFoundError

 from .service import (
     decompose_goal_external,
@@ -16,6 +26,75 @@ from .service import (

 logger = logging.getLogger(__name__)

+# Block ID for AgentExecutorBlock - used to identify sub-agent nodes
+AGENT_EXECUTOR_BLOCK_ID = "e189baac-8c20-45a1-94a7-55177ea42565"
+
+
+class ExecutionSummary(TypedDict):
+    """Summary of a single execution for quality assessment."""
+
+    status: str
+    correctness_score: NotRequired[float]
+    activity_summary: NotRequired[str]
+
+
+class LibraryAgentSummary(TypedDict):
+    """Summary of a library agent for sub-agent composition.
+
+    Includes recent executions to help the LLM decide whether to use this agent.
+    Each execution shows status, correctness_score (0-1), and activity_summary.
+    """
+
+    graph_id: str
+    graph_version: int
+    name: str
+    description: str
+    input_schema: dict[str, Any]
+    output_schema: dict[str, Any]
+    recent_executions: NotRequired[list[ExecutionSummary]]
+
+
+class MarketplaceAgentSummary(TypedDict):
+    """Summary of a marketplace agent for sub-agent composition."""
+
+    name: str
+    description: str
+    sub_heading: str
+    creator: str
+    is_marketplace_agent: bool
+
+
+class DecompositionStep(TypedDict, total=False):
+    """A single step in decomposed instructions."""
+
+    description: str
+    action: str
+    block_name: str
+    tool: str
+    name: str
+
+
+class DecompositionResult(TypedDict, total=False):
+    """Result from decompose_goal - can be instructions, questions, or error."""
+
+    type: str
+    steps: list[DecompositionStep]
+    questions: list[dict[str, Any]]
+    error: str
+    error_type: str
+
+
+AgentSummary = LibraryAgentSummary | MarketplaceAgentSummary | dict[str, Any]
+
+
+def _to_dict_list(
+    agents: list[AgentSummary] | list[dict[str, Any]] | None,
+) -> list[dict[str, Any]] | None:
+    """Convert typed agent summaries to plain dicts for external service calls."""
+    if agents is None:
+        return None
+    return [dict(a) for a in agents]
+
+
 class AgentGeneratorNotConfiguredError(Exception):
     """Raised when the external Agent Generator service is not configured."""
@@ -36,15 +115,421 @@ def _check_service_configured() -> None:
     )


-async def decompose_goal(description: str, context: str = "") -> dict[str, Any] | None:
+_UUID_PATTERN = re.compile(
+    r"[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89ab][a-f0-9]{3}-[a-f0-9]{12}",
+    re.IGNORECASE,
+)
+
+
+def extract_uuids_from_text(text: str) -> list[str]:
+    """Extract all UUID v4 strings from text.
+
+    Args:
+        text: Text that may contain UUIDs (e.g., user's goal description)
+
+    Returns:
+        List of unique UUIDs found in the text (lowercase)
+    """
+    matches = _UUID_PATTERN.findall(text)
+    return list({m.lower() for m in matches})
+
+
+async def get_library_agent_by_id(
+    user_id: str, agent_id: str
+) -> LibraryAgentSummary | None:
+    """Fetch a specific library agent by its ID (library agent ID or graph_id).
+
+    This function tries multiple lookup strategies:
+    1. First tries to find by graph_id (AgentGraph primary key)
+    2. If not found, tries to find by library agent ID (LibraryAgent primary key)
+
+    This handles both cases:
+    - User provides graph_id (e.g., from AgentExecutorBlock)
+    - User provides library agent ID (e.g., from library URL)
+
+    Args:
+        user_id: The user ID
+        agent_id: The ID to look up (can be graph_id or library agent ID)
+
+    Returns:
+        LibraryAgentSummary if found, None otherwise
+    """
+    try:
+        agent = await library_db.get_library_agent_by_graph_id(user_id, agent_id)
+        if agent:
+            logger.debug(f"Found library agent by graph_id: {agent.name}")
+            return LibraryAgentSummary(
+                graph_id=agent.graph_id,
+                graph_version=agent.graph_version,
+                name=agent.name,
+                description=agent.description,
+                input_schema=agent.input_schema,
+                output_schema=agent.output_schema,
+            )
+    except DatabaseError:
+        raise
+    except Exception as e:
+        logger.debug(f"Could not fetch library agent by graph_id {agent_id}: {e}")
+
+    try:
+        agent = await library_db.get_library_agent(agent_id, user_id)
+        if agent:
+            logger.debug(f"Found library agent by library_id: {agent.name}")
+            return LibraryAgentSummary(
+                graph_id=agent.graph_id,
+                graph_version=agent.graph_version,
+                name=agent.name,
+                description=agent.description,
+                input_schema=agent.input_schema,
+                output_schema=agent.output_schema,
+            )
+    except NotFoundError:
+        logger.debug(f"Library agent not found by library_id: {agent_id}")
+    except DatabaseError:
+        raise
+    except Exception as e:
+        logger.warning(
+            f"Could not fetch library agent by library_id {agent_id}: {e}",
+            exc_info=True,
+        )
+
+    return None
+
+
+get_library_agent_by_graph_id = get_library_agent_by_id
+
+
+async def get_library_agents_for_generation(
+    user_id: str,
+    search_query: str | None = None,
+    exclude_graph_id: str | None = None,
+    max_results: int = 15,
+) -> list[LibraryAgentSummary]:
+    """Fetch user's library agents formatted for Agent Generator.
+
+    Uses search-based fetching to return relevant agents instead of all agents.
+    This is more scalable for users with large libraries.
+
+    Includes recent_executions list to help the LLM assess agent quality:
+    - Each execution has status, correctness_score (0-1), and activity_summary
+    - This gives the LLM concrete examples of recent performance
+
+    Args:
+        user_id: The user ID
+        search_query: Optional search term to find relevant agents (user's goal/description)
+        exclude_graph_id: Optional graph ID to exclude (prevents circular references)
+        max_results: Maximum number of agents to return (default 15)
+
+    Returns:
+        List of LibraryAgentSummary with schemas and recent executions for sub-agent composition
+    """
+    try:
+        response = await library_db.list_library_agents(
+            user_id=user_id,
+            search_term=search_query,
+            page=1,
+            page_size=max_results,
+            include_executions=True,
+        )
+        results: list[LibraryAgentSummary] = []
+        for agent in response.agents:
+            if exclude_graph_id is not None and agent.graph_id == exclude_graph_id:
+                continue
+            summary = LibraryAgentSummary(
+                graph_id=agent.graph_id,
+                graph_version=agent.graph_version,
+                name=agent.name,
+                description=agent.description,
+                input_schema=agent.input_schema,
+                output_schema=agent.output_schema,
+            )
+            if agent.recent_executions:
+                exec_summaries: list[ExecutionSummary] = []
+                for ex in agent.recent_executions:
+                    exec_sum = ExecutionSummary(status=ex.status)
+                    if ex.correctness_score is not None:
+                        exec_sum["correctness_score"] = ex.correctness_score
+                    if ex.activity_summary:
+                        exec_sum["activity_summary"] = ex.activity_summary
+                    exec_summaries.append(exec_sum)
+                summary["recent_executions"] = exec_summaries
+            results.append(summary)
+        return results
+    except DatabaseError:
+        raise
+    except Exception as e:
+        logger.warning(f"Failed to fetch library agents: {e}")
+        return []
+
+
+async def search_marketplace_agents_for_generation(
+    search_query: str,
+    max_results: int = 10,
+) -> list[MarketplaceAgentSummary]:
+    """Search marketplace agents formatted for Agent Generator.
+
+    Note: This returns basic agent info. Full input/output schemas would require
+    additional graph fetches and is a potential future enhancement.
+
+    Args:
+        search_query: Search term to find relevant public agents
+        max_results: Maximum number of agents to return (default 10)
+
+    Returns:
+        List of MarketplaceAgentSummary (without detailed schemas for now)
+    """
+    try:
+        response = await store_db.get_store_agents(
+            search_query=search_query,
+            page=1,
+            page_size=max_results,
+        )
+        results: list[MarketplaceAgentSummary] = []
+        for agent in response.agents:
+            results.append(
+                MarketplaceAgentSummary(
+                    name=agent.agent_name,
+                    description=agent.description,
+                    sub_heading=agent.sub_heading,
+                    creator=agent.creator,
+                    is_marketplace_agent=True,
+                )
+            )
+        return results
+    except Exception as e:
+        logger.warning(f"Failed to search marketplace agents: {e}")
+        return []
+
+
+async def get_all_relevant_agents_for_generation(
+    user_id: str,
+    search_query: str | None = None,
+    exclude_graph_id: str | None = None,
+    include_library: bool = True,
+    include_marketplace: bool = True,
+    max_library_results: int = 15,
+    max_marketplace_results: int = 10,
+) -> list[AgentSummary]:
+    """Fetch relevant agents from library and/or marketplace.
+
+    Searches both user's library and marketplace by default.
+    Explicitly mentioned UUIDs in the search query are always looked up.
+
+    Args:
+        user_id: The user ID
+        search_query: Search term to find relevant agents (user's goal/description)
+        exclude_graph_id: Optional graph ID to exclude (prevents circular references)
+        include_library: Whether to search user's library (default True)
+        include_marketplace: Whether to also search marketplace (default True)
+        max_library_results: Max library agents to return (default 15)
+        max_marketplace_results: Max marketplace agents to return (default 10)
+
+    Returns:
+        List of AgentSummary, library agents first (with full schemas),
+        then marketplace agents (basic info only)
+    """
+    agents: list[AgentSummary] = []
+    seen_graph_ids: set[str] = set()
+
+    if search_query:
+        mentioned_uuids = extract_uuids_from_text(search_query)
+        for graph_id in mentioned_uuids:
+            if graph_id == exclude_graph_id:
+                continue
+            try:
+                agent = await get_library_agent_by_graph_id(user_id, graph_id)
+            except Exception as e:
+                logger.warning(
+                    f"Failed to fetch explicitly mentioned agent {graph_id}: {e}",
+                    exc_info=True,
+                )
+                continue
+            agent_graph_id = agent.get("graph_id") if agent else None
+            if agent and agent_graph_id and agent_graph_id not in seen_graph_ids:
+                agents.append(agent)
+                seen_graph_ids.add(agent_graph_id)
+                logger.debug(
+                    f"Found explicitly mentioned agent: {agent.get('name') or 'Unknown'}"
+                )
+
+    if include_library:
+        library_agents = await get_library_agents_for_generation(
+            user_id=user_id,
+            search_query=search_query,
+            exclude_graph_id=exclude_graph_id,
+            max_results=max_library_results,
+        )
+        for agent in library_agents:
+            graph_id = agent.get("graph_id")
+            if graph_id and graph_id not in seen_graph_ids:
+                agents.append(agent)
+                seen_graph_ids.add(graph_id)
+
+    if include_marketplace and search_query:
+        marketplace_agents = await search_marketplace_agents_for_generation(
+            search_query=search_query,
+            max_results=max_marketplace_results,
+        )
+        library_names: set[str] = set()
+        for a in agents:
+            name = a.get("name")
+            if name and isinstance(name, str):
+                library_names.add(name.lower())
+        for agent in marketplace_agents:
+            agent_name = agent.get("name")
+            if agent_name and isinstance(agent_name, str):
+                if agent_name.lower() not in library_names:
+                    agents.append(agent)
+
+    return agents
+
+
+def extract_search_terms_from_steps(
+    decomposition_result: DecompositionResult | dict[str, Any],
+) -> list[str]:
+    """Extract search terms from decomposed instruction steps.
+
+    Analyzes the decomposition result to extract relevant keywords
+    for additional library agent searches.
+
+    Args:
+        decomposition_result: Result from decompose_goal containing steps
+
+    Returns:
+        List of unique search terms extracted from steps
+    """
+    search_terms: list[str] = []
+
+    if decomposition_result.get("type") != "instructions":
+        return search_terms
+
+    steps = decomposition_result.get("steps", [])
+    if not steps:
+        return search_terms
+
+    step_keys: list[str] = ["description", "action", "block_name", "tool", "name"]
+    for step in steps:
+        for key in step_keys:
+            value = step.get(key)  # type: ignore[union-attr]
+            if isinstance(value, str) and len(value) > 3:
+                search_terms.append(value)
+
+    seen: set[str] = set()
+    unique_terms: list[str] = []
+    for term in search_terms:
+        term_lower = term.lower()
+        if term_lower not in seen:
+            seen.add(term_lower)
+            unique_terms.append(term)
+    return unique_terms
+
+
+async def enrich_library_agents_from_steps(
+    user_id: str,
+    decomposition_result: DecompositionResult | dict[str, Any],
+    existing_agents: list[AgentSummary] | list[dict[str, Any]],
+    exclude_graph_id: str | None = None,
+    include_marketplace: bool = True,
+    max_additional_results: int = 10,
+) -> list[AgentSummary] | list[dict[str, Any]]:
+    """Enrich library agents list with additional searches based on decomposed steps.
+
+    This implements two-phase search: after decomposition, we search for additional
+    relevant agents based on the specific steps identified.
+
+    Args:
+        user_id: The user ID
+        decomposition_result: Result from decompose_goal containing steps
+        existing_agents: Already fetched library agents from initial search
+        exclude_graph_id: Optional graph ID to exclude
+        include_marketplace: Whether to also search marketplace
+        max_additional_results: Max additional agents per search term (default 10)
+
+    Returns:
+        Combined list of library agents (existing + newly discovered)
+    """
+    search_terms = extract_search_terms_from_steps(decomposition_result)
+    if not search_terms:
+        return existing_agents
+
+    existing_ids: set[str] = set()
+    existing_names: set[str] = set()
+    for agent in existing_agents:
+        agent_name = agent.get("name")
+        if agent_name and isinstance(agent_name, str):
+            existing_names.add(agent_name.lower())
+        graph_id = agent.get("graph_id")  # type: ignore[call-overload]
+        if graph_id and isinstance(graph_id, str):
+            existing_ids.add(graph_id)
+
+    all_agents: list[AgentSummary] | list[dict[str, Any]] = list(existing_agents)
+    for term in search_terms[:3]:
+        try:
+            additional_agents = await get_all_relevant_agents_for_generation(
+                user_id=user_id,
+                search_query=term,
+                exclude_graph_id=exclude_graph_id,
+                include_marketplace=include_marketplace,
+                max_library_results=max_additional_results,
+                max_marketplace_results=5,
+            )
+            for agent in additional_agents:
+                agent_name = agent.get("name")
+                if not agent_name or not isinstance(agent_name, str):
+                    continue
+                agent_name_lower = agent_name.lower()
+                if agent_name_lower in existing_names:
+                    continue
+                graph_id = agent.get("graph_id")  # type: ignore[call-overload]
+                if graph_id and graph_id in existing_ids:
+                    continue
+                all_agents.append(agent)
+                existing_names.add(agent_name_lower)
+                if graph_id and isinstance(graph_id, str):
+                    existing_ids.add(graph_id)
+        except DatabaseError:
+            logger.error(f"Database error searching for agents with term '{term}'")
+            raise
+        except Exception as e:
+            logger.warning(
+                f"Failed to search for additional agents with term '{term}': {e}"
+            )
+
+    logger.debug(
+        f"Enriched library agents: {len(existing_agents)} initial + "
+        f"{len(all_agents) - len(existing_agents)} additional = {len(all_agents)} total"
+    )
+    return all_agents
+
+
+async def decompose_goal(
+    description: str,
+    context: str = "",
+    library_agents: list[AgentSummary] | None = None,
+) -> DecompositionResult | None:
     """Break down a goal into steps or return clarifying questions.

     Args:
         description: Natural language goal description
         context: Additional context (e.g., answers to previous questions)
+        library_agents: User's library agents available for sub-agent composition

     Returns:
-        Dict with either:
+        DecompositionResult with either:
         - {"type": "clarifying_questions", "questions": [...]}
         - {"type": "instructions", "steps": [...]}
         Or None on error
@@ -54,14 +539,21 @@ async def decompose_goal(description: str, context: str = "") -> dict[str, Any]
     """
     _check_service_configured()
     logger.info("Calling external Agent Generator service for decompose_goal")
-    return await decompose_goal_external(description, context)
+    result = await decompose_goal_external(
+        description, context, _to_dict_list(library_agents)
+    )
+    return result  # type: ignore[return-value]


-async def generate_agent(instructions: dict[str, Any]) -> dict[str, Any] | None:
+async def generate_agent(
+    instructions: DecompositionResult | dict[str, Any],
+    library_agents: list[AgentSummary] | list[dict[str, Any]] | None = None,
+) -> dict[str, Any] | None:
     """Generate agent JSON from instructions.

     Args:
         instructions: Structured instructions from decompose_goal
+        library_agents: User's library agents available for sub-agent composition

     Returns:
         Agent JSON dict, error dict {"type": "error", ...}, or None on error
@@ -71,12 +563,12 @@ async def generate_agent(instructions: dict[str, Any]) -> dict[str, Any] | None:
     """
    _check_service_configured()
     logger.info("Calling external Agent Generator service for generate_agent")
-    result = await generate_agent_external(instructions)
+    result = await generate_agent_external(
+        dict(instructions), _to_dict_list(library_agents)
+    )
     if result:
-        # Check if it's an error response - pass through as-is
         if isinstance(result, dict) and result.get("type") == "error":
             return result
-        # Ensure required fields for successful agent generation
         if "id" not in result:
             result["id"] = str(uuid.uuid4())
         if "version" not in result:
@@ -86,6 +578,12 @@ async def generate_agent(instructions: dict[str, Any]) -> dict[str, Any] | None:
     return result


+class AgentJsonValidationError(Exception):
+    """Raised when agent JSON is invalid or missing required fields."""
+
+    pass
+
+
 def json_to_graph(agent_json: dict[str, Any]) -> Graph:
     """Convert agent JSON dict to Graph model.
@@ -94,25 +592,55 @@ def json_to_graph(agent_json: dict[str, Any]) -> Graph:
     Returns:
         Graph ready for saving
+
+    Raises:
+        AgentJsonValidationError: If required fields are missing from nodes or links
     """
     nodes = []
-    for n in agent_json.get("nodes", []):
+    for idx, n in enumerate(agent_json.get("nodes", [])):
+        block_id = n.get("block_id")
+        if not block_id:
+            node_id = n.get("id", f"index_{idx}")
+            raise AgentJsonValidationError(
+                f"Node '{node_id}' is missing required field 'block_id'"
+            )
         node = Node(
             id=n.get("id", str(uuid.uuid4())),
-            block_id=n["block_id"],
+            block_id=block_id,
             input_default=n.get("input_default", {}),
             metadata=n.get("metadata", {}),
         )
         nodes.append(node)

     links = []
-    for link_data in agent_json.get("links", []):
+    for idx, link_data in enumerate(agent_json.get("links", [])):
+        source_id = link_data.get("source_id")
+        sink_id = link_data.get("sink_id")
+        source_name = link_data.get("source_name")
+        sink_name = link_data.get("sink_name")
+        missing_fields = []
+        if not source_id:
+            missing_fields.append("source_id")
+        if not sink_id:
+            missing_fields.append("sink_id")
+        if not source_name:
+            missing_fields.append("source_name")
+        if not sink_name:
+            missing_fields.append("sink_name")
+        if missing_fields:
+            link_id = link_data.get("id", f"index_{idx}")
+            raise AgentJsonValidationError(
+                f"Link '{link_id}' is missing required fields: {', '.join(missing_fields)}"
+            )
         link = Link(
             id=link_data.get("id", str(uuid.uuid4())),
-            source_id=link_data["source_id"],
-            sink_id=link_data["sink_id"],
-            source_name=link_data["source_name"],
-            sink_name=link_data["sink_name"],
+            source_id=source_id,
+            sink_id=sink_id,
+            source_name=source_name,
+            sink_name=sink_name,
             is_static=link_data.get("is_static", False),
         )
         links.append(link)
@@ -133,22 +661,40 @@ def _reassign_node_ids(graph: Graph) -> None:
     This is needed when creating a new version to avoid unique constraint violations.
     """
-    # Create mapping from old node IDs to new UUIDs
     id_map = {node.id: str(uuid.uuid4()) for node in graph.nodes}

-    # Reassign node IDs
     for node in graph.nodes:
         node.id = id_map[node.id]

-    # Update link references to use new node IDs
     for link in graph.links:
-        link.id = str(uuid.uuid4())  # Also give links new IDs
+        link.id = str(uuid.uuid4())
         if link.source_id in id_map:
             link.source_id = id_map[link.source_id]
         if link.sink_id in id_map:
             link.sink_id = id_map[link.sink_id]


+def _populate_agent_executor_user_ids(agent_json: dict[str, Any], user_id: str) -> None:
+    """Populate user_id in AgentExecutorBlock nodes.
+
+    The external agent generator creates AgentExecutorBlock nodes with empty user_id.
+    This function fills in the actual user_id so sub-agents run with correct permissions.
+
+    Args:
+        agent_json: Agent JSON dict (modified in place)
+        user_id: User ID to set
+    """
+    for node in agent_json.get("nodes", []):
+        if node.get("block_id") == AGENT_EXECUTOR_BLOCK_ID:
+            input_default = node.get("input_default", {})
+            if not input_default.get("user_id"):
+                input_default["user_id"] = user_id
+                node["input_default"] = input_default
+                logger.debug(
+                    f"Set user_id for AgentExecutorBlock node {node.get('id')}"
+                )
+
+
 async def save_agent_to_library(
     agent_json: dict[str, Any], user_id: str, is_update: bool = False
 ) -> tuple[Graph, Any]:
@@ -162,33 +708,27 @@ async def save_agent_to_library(
     Returns:
         Tuple of (created Graph, LibraryAgent)
     """
-    from backend.data.graph import get_graph_all_versions
+    # Populate user_id in AgentExecutorBlock nodes before conversion
+    _populate_agent_executor_user_ids(agent_json, user_id)

     graph = json_to_graph(agent_json)

     if is_update:
-        # For updates, keep the same graph ID but increment version
-        # and reassign node/link IDs to avoid conflicts
         if graph.id:
             existing_versions = await get_graph_all_versions(graph.id, user_id)
             if existing_versions:
                 latest_version = max(v.version for v in existing_versions)
                 graph.version = latest_version + 1
-        # Reassign node IDs (but keep graph ID the same)
         _reassign_node_ids(graph)
         logger.info(f"Updating agent {graph.id} to version {graph.version}")
     else:
-        # For new agents, always generate a fresh UUID to avoid collisions
         graph.id = str(uuid.uuid4())
         graph.version = 1
-        # Reassign all node IDs as well
         _reassign_node_ids(graph)
         logger.info(f"Creating new agent with ID {graph.id}")

-    # Save to database
     created_graph = await create_graph(graph, user_id)

-    # Add to user's library (or update existing library agent)
     library_agents = await library_db.create_library_agent(
         graph=created_graph,
         user_id=user_id,
async def get_agent_as_json( async def get_agent_as_json(
graph_id: str, user_id: str | None agent_id: str, user_id: str | None
) -> dict[str, Any] | None: ) -> dict[str, Any] | None:
"""Fetch an agent and convert to JSON format for editing. """Fetch an agent and convert to JSON format for editing.
Args: Args:
graph_id: Graph ID or library agent ID agent_id: Graph ID or library agent ID
user_id: User ID user_id: User ID
Returns: Returns:
Agent as JSON dict or None if not found Agent as JSON dict or None if not found
""" """
from backend.data.graph import get_graph graph = await get_graph(agent_id, version=None, user_id=user_id)
if not graph and user_id:
try:
library_agent = await library_db.get_library_agent(agent_id, user_id)
graph = await get_graph(
library_agent.graph_id, version=None, user_id=user_id
)
except NotFoundError:
pass
# Try to get the graph (version=None gets the active version)
graph = await get_graph(graph_id, version=None, user_id=user_id)
if not graph: if not graph:
return None return None
# Convert to JSON format
nodes = [] nodes = []
for node in graph.nodes: for node in graph.nodes:
nodes.append( nodes.append(
@@ -256,7 +802,9 @@ async def get_agent_as_json(
async def generate_agent_patch( async def generate_agent_patch(
update_request: str, current_agent: dict[str, Any] update_request: str,
current_agent: dict[str, Any],
library_agents: list[AgentSummary] | None = None,
) -> dict[str, Any] | None: ) -> dict[str, Any] | None:
"""Update an existing agent using natural language. """Update an existing agent using natural language.
@@ -268,6 +816,7 @@ async def generate_agent_patch(
Args: Args:
update_request: Natural language description of changes update_request: Natural language description of changes
current_agent: Current agent JSON current_agent: Current agent JSON
library_agents: User's library agents available for sub-agent composition
Returns: Returns:
Updated agent JSON, clarifying questions dict {"type": "clarifying_questions", ...}, Updated agent JSON, clarifying questions dict {"type": "clarifying_questions", ...},
@@ -278,4 +827,6 @@ async def generate_agent_patch(
""" """
_check_service_configured() _check_service_configured()
logger.info("Calling external Agent Generator service for generate_agent_patch") logger.info("Calling external Agent Generator service for generate_agent_patch")
return await generate_agent_patch_external(update_request, current_agent) return await generate_agent_patch_external(
update_request, current_agent, _to_dict_list(library_agents)
)

View File

@@ -1,11 +1,43 @@
 """Error handling utilities for agent generator."""

+import re
+
+
+def _sanitize_error_details(details: str) -> str:
+    """Sanitize error details to remove sensitive information.
+
+    Strips common patterns that could expose internal system info:
+    - File paths (Unix and Windows)
+    - Database connection strings
+    - URLs with credentials
+    - Stack trace internals
+
+    Args:
+        details: Raw error details string
+
+    Returns:
+        Sanitized error details safe for user display
+    """
+    sanitized = re.sub(
+        r"/[a-zA-Z0-9_./\-]+\.(py|js|ts|json|yaml|yml)", "[path]", details
+    )
+    sanitized = re.sub(r"[A-Z]:\\[a-zA-Z0-9_\\.\\-]+", "[path]", sanitized)
+    sanitized = re.sub(
+        r"(postgres|mysql|mongodb|redis)://[^\s]+", "[database_url]", sanitized
+    )
+    sanitized = re.sub(r"https?://[^:]+:[^@]+@[^\s]+", "[url]", sanitized)
+    sanitized = re.sub(r", line \d+", "", sanitized)
+    sanitized = re.sub(r'File "[^"]+",?', "", sanitized)
+    return sanitized.strip()
+
+
 def get_user_message_for_error(
     error_type: str,
     operation: str = "process the request",
     llm_parse_message: str | None = None,
     validation_message: str | None = None,
+    error_details: str | None = None,
 ) -> str:
     """Get a user-friendly error message based on error type.
@@ -19,25 +51,45 @@ def get_user_message_for_error(
             message (e.g., "analyze the goal", "generate the agent")
         llm_parse_message: Custom message for llm_parse_error type
         validation_message: Custom message for validation_error type
+        error_details: Optional additional details about the error

     Returns:
         User-friendly error message suitable for display to the user
     """
+    base_message = ""
     if error_type == "llm_parse_error":
-        return (
+        base_message = (
             llm_parse_message
             or "The AI had trouble processing this request. Please try again."
         )
     elif error_type == "validation_error":
-        return (
+        base_message = (
             validation_message
-            or "The request failed validation. Please try rephrasing."
+            or "The generated agent failed validation. "
+            "This usually happens when the agent structure doesn't match "
+            "what the platform expects. Please try simplifying your goal "
+            "or breaking it into smaller parts."
         )
     elif error_type == "patch_error":
-        return "Failed to apply the changes. Please try a different approach."
+        base_message = (
+            "Failed to apply the changes. The modification couldn't be "
+            "validated. Please try a different approach or simplify the change."
+        )
     elif error_type in ("timeout", "llm_timeout"):
-        return "The request took too long. Please try again."
+        base_message = (
+            "The request took too long to process. This can happen with "
+            "complex agents. Please try again or simplify your goal."
+        )
     elif error_type in ("rate_limit", "llm_rate_limit"):
-        return "The service is currently busy. Please try again in a moment."
+        base_message = "The service is currently busy. Please try again in a moment."
     else:
-        return f"Failed to {operation}. Please try again."
+        base_message = f"Failed to {operation}. Please try again."
+
+    if error_details:
+        details = _sanitize_error_details(error_details)
+        if len(details) > 200:
+            details = details[:200] + "..."
+        base_message += f"\n\nTechnical details: {details}"
+
+    return base_message
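For illustration, the sanitizer above turns a typical traceback fragment into a safe message (expected output shown in the comment):

```python
raw = (
    'File "/app/backend/api/core.py", line 42: lookup failed for '
    "postgres://user:secret@db:5432/main"
)
print(_sanitize_error_details(raw))
# -> : lookup failed for [database_url]
```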

View File

@@ -117,13 +117,16 @@ def _get_client() -> httpx.AsyncClient:
 async def decompose_goal_external(
-    description: str, context: str = ""
+    description: str,
+    context: str = "",
+    library_agents: list[dict[str, Any]] | None = None,
 ) -> dict[str, Any] | None:
     """Call the external service to decompose a goal.

     Args:
         description: Natural language goal description
         context: Additional context (e.g., answers to previous questions)
+        library_agents: User's library agents available for sub-agent composition

     Returns:
         Dict with either:
@@ -141,6 +144,8 @@ async def decompose_goal_external(
     if context:
         # The external service uses user_instruction for additional context
         payload["user_instruction"] = context
+    if library_agents:
+        payload["library_agents"] = library_agents

     try:
         response = await client.post("/api/decompose-description", json=payload)
@@ -207,21 +212,25 @@ async def decompose_goal_external(
 async def generate_agent_external(
     instructions: dict[str, Any],
+    library_agents: list[dict[str, Any]] | None = None,
 ) -> dict[str, Any] | None:
     """Call the external service to generate an agent from instructions.

     Args:
         instructions: Structured instructions from decompose_goal
+        library_agents: User's library agents available for sub-agent composition

     Returns:
         Agent JSON dict on success, or error dict {"type": "error", ...} on error
     """
     client = _get_client()

+    payload: dict[str, Any] = {"instructions": instructions}
+    if library_agents:
+        payload["library_agents"] = library_agents
+
     try:
-        response = await client.post(
-            "/api/generate-agent", json={"instructions": instructions}
-        )
+        response = await client.post("/api/generate-agent", json=payload)
         response.raise_for_status()

         data = response.json()
@@ -229,8 +238,7 @@ async def generate_agent_external(
             error_msg = data.get("error", "Unknown error from Agent Generator")
             error_type = data.get("error_type", "unknown")
             logger.error(
-                f"Agent Generator generation failed: {error_msg} "
-                f"(type: {error_type})"
+                f"Agent Generator generation failed: {error_msg} (type: {error_type})"
             )
             return _create_error_response(error_msg, error_type)
@@ -251,27 +259,31 @@ async def generate_agent_external(
 async def generate_agent_patch_external(
-    update_request: str, current_agent: dict[str, Any]
+    update_request: str,
+    current_agent: dict[str, Any],
+    library_agents: list[dict[str, Any]] | None = None,
 ) -> dict[str, Any] | None:
     """Call the external service to generate a patch for an existing agent.

     Args:
         update_request: Natural language description of changes
         current_agent: Current agent JSON
+        library_agents: User's library agents available for sub-agent composition

     Returns:
         Updated agent JSON, clarifying questions dict, or error dict on error
     """
     client = _get_client()

+    payload: dict[str, Any] = {
+        "update_request": update_request,
+        "current_agent_json": current_agent,
+    }
+    if library_agents:
+        payload["library_agents"] = library_agents
+
     try:
-        response = await client.post(
-            "/api/update-agent",
-            json={
-                "update_request": update_request,
-                "current_agent_json": current_agent,
-            },
-        )
+        response = await client.post("/api/update-agent", json=payload)
         response.raise_for_status()

         data = response.json()

View File

@@ -1,6 +1,7 @@
 """Shared agent search functionality for find_agent and find_library_agent tools."""

 import logging
+import re
 from typing import Literal

 from backend.api.features.library import db as library_db
@@ -19,6 +20,85 @@ logger = logging.getLogger(__name__)
 SearchSource = Literal["marketplace", "library"]

+_UUID_PATTERN = re.compile(
+    r"^[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89ab][a-f0-9]{3}-[a-f0-9]{12}$",
+    re.IGNORECASE,
+)
+
+
+def _is_uuid(text: str) -> bool:
+    """Check if text is a valid UUID v4."""
+    return bool(_UUID_PATTERN.match(text.strip()))
+
+
+async def _get_library_agent_by_id(user_id: str, agent_id: str) -> AgentInfo | None:
+    """Fetch a library agent by ID (library agent ID or graph_id).
+
+    Tries multiple lookup strategies:
+    1. First by graph_id (AgentGraph primary key)
+    2. Then by library agent ID (LibraryAgent primary key)
+
+    Args:
+        user_id: The user ID
+        agent_id: The ID to look up (can be graph_id or library agent ID)
+
+    Returns:
+        AgentInfo if found, None otherwise
+    """
+    try:
+        agent = await library_db.get_library_agent_by_graph_id(user_id, agent_id)
+        if agent:
+            logger.debug(f"Found library agent by graph_id: {agent.name}")
+            return AgentInfo(
+                id=agent.id,
+                name=agent.name,
+                description=agent.description or "",
+                source="library",
+                in_library=True,
+                creator=agent.creator_name,
+                status=agent.status.value,
+                can_access_graph=agent.can_access_graph,
+                has_external_trigger=agent.has_external_trigger,
+                new_output=agent.new_output,
+                graph_id=agent.graph_id,
+            )
+    except DatabaseError:
+        raise
+    except Exception as e:
+        logger.warning(
+            f"Could not fetch library agent by graph_id {agent_id}: {e}",
+            exc_info=True,
+        )
+
+    try:
+        agent = await library_db.get_library_agent(agent_id, user_id)
+        if agent:
+            logger.debug(f"Found library agent by library_id: {agent.name}")
+            return AgentInfo(
+                id=agent.id,
+                name=agent.name,
+                description=agent.description or "",
+                source="library",
+                in_library=True,
+                creator=agent.creator_name,
+                status=agent.status.value,
+                can_access_graph=agent.can_access_graph,
+                has_external_trigger=agent.has_external_trigger,
+                new_output=agent.new_output,
+                graph_id=agent.graph_id,
+            )
+    except NotFoundError:
+        logger.debug(f"Library agent not found by library_id: {agent_id}")
+    except DatabaseError:
+        raise
+    except Exception as e:
+        logger.warning(
+            f"Could not fetch library agent by library_id {agent_id}: {e}",
+            exc_info=True,
+        )
+
+    return None

 async def search_agents(
     query: str,
@@ -69,29 +149,37 @@ async def search_agents(
                         is_featured=False,
                     )
                 )
-        else:  # library
-            logger.info(f"Searching user library for: {query}")
-            results = await library_db.list_library_agents(
-                user_id=user_id,  # type: ignore[arg-type]
-                search_term=query,
-                page_size=10,
-            )
-            for agent in results.agents:
-                agents.append(
-                    AgentInfo(
-                        id=agent.id,
-                        name=agent.name,
-                        description=agent.description or "",
-                        source="library",
-                        in_library=True,
-                        creator=agent.creator_name,
-                        status=agent.status.value,
-                        can_access_graph=agent.can_access_graph,
-                        has_external_trigger=agent.has_external_trigger,
-                        new_output=agent.new_output,
-                        graph_id=agent.graph_id,
-                    )
-                )
+        else:
+            if _is_uuid(query):
+                logger.info(f"Query looks like UUID, trying direct lookup: {query}")
+                agent = await _get_library_agent_by_id(user_id, query)  # type: ignore[arg-type]
+                if agent:
+                    agents.append(agent)
+                    logger.info(f"Found agent by direct ID lookup: {agent.name}")
+
+            if not agents:
+                logger.info(f"Searching user library for: {query}")
+                results = await library_db.list_library_agents(
+                    user_id=user_id,  # type: ignore[arg-type]
+                    search_term=query,
+                    page_size=10,
+                )
+                for agent in results.agents:
+                    agents.append(
+                        AgentInfo(
+                            id=agent.id,
+                            name=agent.name,
+                            description=agent.description or "",
+                            source="library",
+                            in_library=True,
+                            creator=agent.creator_name,
+                            status=agent.status.value,
+                            can_access_graph=agent.can_access_graph,
+                            has_external_trigger=agent.has_external_trigger,
+                            new_output=agent.new_output,
+                            graph_id=agent.graph_id,
+                        )
+                    )

         logger.info(f"Found {len(agents)} agents in {source}")
     except NotFoundError:
         pass
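For reference, the UUID fast-path only fires on strings matching the version-4 shape (third group starts with 4, fourth with 8, 9, a, or b), so ordinary search queries fall through to the text search. A runnable illustration using the same regex; the example values are taken from the tests later in this PR:

import re

UUID_V4 = re.compile(
    r"^[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89ab][a-f0-9]{3}-[a-f0-9]{12}$",
    re.IGNORECASE,
)

assert UUID_V4.match("46631191-e8a8-486f-ad90-84f89738321d")      # valid v4
assert not UUID_V4.match("12345678-1234-1234-1234-123456789abc")  # wrong version nibble
assert not UUID_V4.match("send an email to my team")              # plain query text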

View File

@@ -8,7 +8,9 @@ from backend.api.features.chat.model import ChatSession
 from .agent_generator import (
     AgentGeneratorNotConfiguredError,
     decompose_goal,
+    enrich_library_agents_from_steps,
     generate_agent,
+    get_all_relevant_agents_for_generation,
     get_user_message_for_error,
     save_agent_to_library,
 )
@@ -103,9 +105,24 @@ class CreateAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Step 1: Decompose goal into steps
+        library_agents = None
+        if user_id:
+            try:
+                library_agents = await get_all_relevant_agents_for_generation(
+                    user_id=user_id,
+                    search_query=description,
+                    include_marketplace=True,
+                )
+                logger.debug(
+                    f"Found {len(library_agents)} relevant agents for sub-agent composition"
+                )
+            except Exception as e:
+                logger.warning(f"Failed to fetch library agents: {e}")
+
         try:
-            decomposition_result = await decompose_goal(description, context)
+            decomposition_result = await decompose_goal(
+                description, context, library_agents
+            )
         except AgentGeneratorNotConfiguredError:
             return ErrorResponse(
                 message=(
@@ -124,7 +141,6 @@ class CreateAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Check if the result is an error from the external service
         if decomposition_result.get("type") == "error":
             error_msg = decomposition_result.get("error", "Unknown error")
             error_type = decomposition_result.get("error_type", "unknown")
@@ -144,7 +160,6 @@ class CreateAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Check if LLM returned clarifying questions
         if decomposition_result.get("type") == "clarifying_questions":
             questions = decomposition_result.get("questions", [])
             return ClarificationNeededResponse(
@@ -163,7 +178,6 @@ class CreateAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Check for unachievable/vague goals
         if decomposition_result.get("type") == "unachievable_goal":
             suggested = decomposition_result.get("suggested_goal", "")
             reason = decomposition_result.get("reason", "")
@@ -190,9 +204,22 @@ class CreateAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Step 2: Generate agent JSON (external service handles fixing and validation)
+        if user_id and library_agents is not None:
+            try:
+                library_agents = await enrich_library_agents_from_steps(
+                    user_id=user_id,
+                    decomposition_result=decomposition_result,
+                    existing_agents=library_agents,
+                    include_marketplace=True,
+                )
+                logger.debug(
+                    f"After enrichment: {len(library_agents)} total agents for sub-agent composition"
+                )
+            except Exception as e:
+                logger.warning(f"Failed to enrich library agents from steps: {e}")
+
         try:
-            agent_json = await generate_agent(decomposition_result)
+            agent_json = await generate_agent(decomposition_result, library_agents)
         except AgentGeneratorNotConfiguredError:
             return ErrorResponse(
                 message=(
@@ -211,7 +238,6 @@ class CreateAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Check if the result is an error from the external service
         if isinstance(agent_json, dict) and agent_json.get("type") == "error":
             error_msg = agent_json.get("error", "Unknown error")
             error_type = agent_json.get("error_type", "unknown")
@@ -219,7 +245,12 @@ class CreateAgentTool(BaseTool):
                 error_type,
                 operation="generate the agent",
                 llm_parse_message="The AI had trouble generating the agent. Please try again or simplify your goal.",
-                validation_message="The generated agent failed validation. Please try rephrasing your goal.",
+                validation_message=(
+                    "I wasn't able to create a valid agent for this request. "
+                    "The generated workflow had some structural issues. "
+                    "Please try simplifying your goal or breaking it into smaller steps."
+                ),
+                error_details=error_msg,
             )

             return ErrorResponse(
                 message=user_message,
@@ -237,7 +268,6 @@ class CreateAgentTool(BaseTool):
         node_count = len(agent_json.get("nodes", []))
         link_count = len(agent_json.get("links", []))

-        # Step 3: Preview or save
         if not save:
             return AgentPreviewResponse(
                 message=(
@@ -252,7 +282,6 @@ class CreateAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Save to library
         if not user_id:
             return ErrorResponse(
                 message="You must be logged in to save agents.",
@@ -270,7 +299,7 @@ class CreateAgentTool(BaseTool):
             agent_id=created_graph.id,
             agent_name=created_graph.name,
             library_agent_id=library_agent.id,
-            library_agent_link=f"/library/{library_agent.id}",
+            library_agent_link=f"/library/agents/{library_agent.id}",
             agent_page_link=f"/build?flowID={created_graph.id}",
             session_id=session_id,
         )
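Taken together, the create flow now gathers sub-agent context in two passes: an up-front search keyed on the user's goal, then an enrichment pass keyed on the decomposed steps. A condensed sketch of that control flow with all error handling elided (the imported names are the ones added to this file's imports; treat this as an outline, not the exact tool code):

from backend.api.features.chat.tools.agent_generator import (
    enrich_library_agents_from_steps,
    get_all_relevant_agents_for_generation,
)


async def gather_agent_context(user_id, description, decomposition_result):
    # Pass 1: search by the raw goal text
    agents = await get_all_relevant_agents_for_generation(
        user_id=user_id, search_query=description, include_marketplace=True
    )
    # Pass 2: widen the pool using terms from the decomposed steps
    return await enrich_library_agents_from_steps(
        user_id=user_id,
        decomposition_result=decomposition_result,
        existing_agents=agents,
        include_marketplace=True,
    )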

View File

@@ -9,6 +9,7 @@ from .agent_generator import (
     AgentGeneratorNotConfiguredError,
     generate_agent_patch,
     get_agent_as_json,
+    get_all_relevant_agents_for_generation,
     get_user_message_for_error,
     save_agent_to_library,
 )
@@ -117,7 +118,6 @@ class EditAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Step 1: Fetch current agent
         current_agent = await get_agent_as_json(agent_id, user_id)

         if current_agent is None:
@@ -127,14 +127,30 @@ class EditAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Build the update request with context
+        library_agents = None
+        if user_id:
+            try:
+                graph_id = current_agent.get("id")
+                library_agents = await get_all_relevant_agents_for_generation(
+                    user_id=user_id,
+                    search_query=changes,
+                    exclude_graph_id=graph_id,
+                    include_marketplace=True,
+                )
+                logger.debug(
+                    f"Found {len(library_agents)} relevant agents for sub-agent composition"
+                )
+            except Exception as e:
+                logger.warning(f"Failed to fetch library agents: {e}")
+
         update_request = changes
         if context:
             update_request = f"{changes}\n\nAdditional context:\n{context}"

-        # Step 2: Generate updated agent (external service handles fixing and validation)
         try:
-            result = await generate_agent_patch(update_request, current_agent)
+            result = await generate_agent_patch(
+                update_request, current_agent, library_agents
+            )
         except AgentGeneratorNotConfiguredError:
             return ErrorResponse(
                 message=(
@@ -153,7 +169,6 @@ class EditAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Check if the result is an error from the external service
         if isinstance(result, dict) and result.get("type") == "error":
             error_msg = result.get("error", "Unknown error")
             error_type = result.get("error_type", "unknown")
@@ -162,6 +177,7 @@ class EditAgentTool(BaseTool):
operation="generate the changes", operation="generate the changes",
llm_parse_message="The AI had trouble generating the changes. Please try again or simplify your request.", llm_parse_message="The AI had trouble generating the changes. Please try again or simplify your request.",
validation_message="The generated changes failed validation. Please try rephrasing your request.", validation_message="The generated changes failed validation. Please try rephrasing your request.",
error_details=error_msg,
) )
return ErrorResponse( return ErrorResponse(
message=user_message, message=user_message,
@@ -175,7 +191,6 @@ class EditAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Check if LLM returned clarifying questions
         if result.get("type") == "clarifying_questions":
             questions = result.get("questions", [])
             return ClarificationNeededResponse(
@@ -194,7 +209,6 @@ class EditAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Result is the updated agent JSON
         updated_agent = result
         agent_name = updated_agent.get("name", "Updated Agent")
@@ -202,7 +216,6 @@ class EditAgentTool(BaseTool):
         node_count = len(updated_agent.get("nodes", []))
         link_count = len(updated_agent.get("links", []))

-        # Step 3: Preview or save
         if not save:
             return AgentPreviewResponse(
                 message=(
@@ -218,7 +231,6 @@ class EditAgentTool(BaseTool):
                 session_id=session_id,
             )

-        # Save to library (creates a new version)
         if not user_id:
             return ErrorResponse(
                 message="You must be logged in to save agents.",
@@ -236,7 +248,7 @@ class EditAgentTool(BaseTool):
             agent_id=created_graph.id,
             agent_name=created_graph.name,
             library_agent_id=library_agent.id,
-            library_agent_link=f"/library/{library_agent.id}",
+            library_agent_link=f"/library/agents/{library_agent.id}",
             agent_page_link=f"/build?flowID={created_graph.id}",
             session_id=session_id,
         )
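One detail worth noting: the edit flow passes exclude_graph_id taken from the fetched current_agent (its graph_id), not from the raw agent_id parameter, which may be a library agent ID. Per the tests added in this PR, the excluded graph_id is filtered out of the candidate pool; a reduced illustration of that invariant (the IDs are placeholders):

agents = [
    {"graph_id": "agent-123", "name": "Agent 1"},
    {"graph_id": "agent-456", "name": "Agent 2"},
]
exclude_graph_id = "agent-123"

# The agent being edited must never be offered to itself as a sub-agent
remaining = [a for a in agents if a["graph_id"] != exclude_graph_id]
assert [a["name"] for a in remaining] == ["Agent 2"]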

View File

@@ -8,7 +8,7 @@ from backend.api.features.library import model as library_model
 from backend.api.features.store import db as store_db
 from backend.data import graph as graph_db
 from backend.data.graph import GraphModel
-from backend.data.model import CredentialsFieldInfo, CredentialsMetaInput
+from backend.data.model import Credentials, CredentialsFieldInfo, CredentialsMetaInput
 from backend.integrations.creds_manager import IntegrationCredentialsManager
 from backend.util.exceptions import NotFoundError
@@ -266,13 +266,14 @@ async def match_user_credentials_to_graph(
         credential_requirements,
         _node_fields,
     ) in aggregated_creds.items():
-        # Find first matching credential by provider and type
+        # Find first matching credential by provider, type, and scopes
         matching_cred = next(
             (
                 cred
                 for cred in available_creds
                 if cred.provider in credential_requirements.provider
                 and cred.type in credential_requirements.supported_types
+                and _credential_has_required_scopes(cred, credential_requirements)
             ),
             None,
         )
@@ -296,10 +297,17 @@ async def match_user_credentials_to_graph(
f"{credential_field_name} (validation failed: {e})" f"{credential_field_name} (validation failed: {e})"
) )
else: else:
# Build a helpful error message including scope requirements
error_parts = [
f"provider in {list(credential_requirements.provider)}",
f"type in {list(credential_requirements.supported_types)}",
]
if credential_requirements.required_scopes:
error_parts.append(
f"scopes including {list(credential_requirements.required_scopes)}"
)
missing_creds.append( missing_creds.append(
f"{credential_field_name} " f"{credential_field_name} (requires {', '.join(error_parts)})"
f"(requires provider in {list(credential_requirements.provider)}, "
f"type in {list(credential_requirements.supported_types)})"
) )
logger.info( logger.info(
@@ -309,6 +317,28 @@ async def match_user_credentials_to_graph(
     return graph_credentials_inputs, missing_creds


+def _credential_has_required_scopes(
+    credential: Credentials,
+    requirements: CredentialsFieldInfo,
+) -> bool:
+    """
+    Check if a credential has all the scopes required by the block.
+
+    For OAuth2 credentials, verifies that the credential's scopes are a superset
+    of the required scopes. For other credential types, returns True (no scope check).
+    """
+    # Only OAuth2 credentials have scopes to check
+    if credential.type != "oauth2":
+        return True
+
+    # If no scopes are required, any credential matches
+    if not requirements.required_scopes:
+        return True
+
+    # Check that credential scopes are a superset of required scopes
+    return set(credential.scopes).issuperset(requirements.required_scopes)
+
+
 async def check_user_has_required_credentials(
     user_id: str,
     required_credentials: list[CredentialsMetaInput],
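The scope check reduces to a plain set-superset test over OAuth2 scopes; non-OAuth2 credentials skip it entirely. A standalone illustration (the scope strings are made up):

granted = {"repo", "read:user", "user:email"}

assert set(granted).issuperset({"repo", "read:user"})   # credential qualifies
assert not set(granted).issuperset({"admin:org"})       # missing scope; matcher skips it
assert set(granted).issuperset(set())                   # no required scopes: anything matches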

View File

@@ -39,6 +39,7 @@ async def list_library_agents(
     sort_by: library_model.LibraryAgentSort = library_model.LibraryAgentSort.UPDATED_AT,
     page: int = 1,
     page_size: int = 50,
+    include_executions: bool = False,
 ) -> library_model.LibraryAgentResponse:
     """
     Retrieves a paginated list of LibraryAgent records for a given user.
@@ -49,6 +50,9 @@ async def list_library_agents(
         sort_by: Sorting field (createdAt, updatedAt, isFavorite, isCreatedByUser).
         page: Current page (1-indexed).
         page_size: Number of items per page.
+        include_executions: Whether to include execution data for status calculation.
+            Defaults to False for performance (UI fetches status separately).
+            Set to True when accurate status/metrics are needed (e.g., agent generator).

     Returns:
         A LibraryAgentResponse containing the list of agents and pagination details.
@@ -76,24 +80,32 @@ async def list_library_agents(
"isArchived": False, "isArchived": False,
} }
# Build search filter if applicable
if search_term: if search_term:
where_clause["OR"] = [ words = [w.strip() for w in search_term.split() if len(w.strip()) >= 3]
{ if words:
"AgentGraph": { or_conditions: list[prisma.types.LibraryAgentWhereInput] = []
"is": {"name": {"contains": search_term, "mode": "insensitive"}} for word in words:
} or_conditions.append(
}, {
{ "AgentGraph": {
"AgentGraph": { "is": {"name": {"contains": word, "mode": "insensitive"}}
"is": { }
"description": {"contains": search_term, "mode": "insensitive"}
} }
} )
}, or_conditions.append(
] {
"AgentGraph": {
"is": {
"description": {
"contains": word,
"mode": "insensitive",
}
}
}
}
)
where_clause["OR"] = or_conditions
# Determine sorting
order_by: prisma.types.LibraryAgentOrderByInput | None = None order_by: prisma.types.LibraryAgentOrderByInput | None = None
if sort_by == library_model.LibraryAgentSort.CREATED_AT: if sort_by == library_model.LibraryAgentSort.CREATED_AT:
@@ -105,7 +117,7 @@ async def list_library_agents(
     library_agents = await prisma.models.LibraryAgent.prisma().find_many(
         where=where_clause,
         include=library_agent_include(
-            user_id, include_nodes=False, include_executions=False
+            user_id, include_nodes=False, include_executions=include_executions
         ),
         order=order_by,
         skip=(page - 1) * page_size,
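Under the new filter, every word of at least three characters contributes two OR conditions (name and description), so a two-word query expands to four clauses and short stopwords drop out. A minimal sketch of the expansion, detached from the Prisma types:

def build_or_conditions(search_term: str) -> list[dict]:
    words = [w.strip() for w in search_term.split() if len(w.strip()) >= 3]
    conditions: list[dict] = []
    for word in words:
        for field in ("name", "description"):
            conditions.append(
                {"AgentGraph": {"is": {field: {"contains": word, "mode": "insensitive"}}}}
            )
    return conditions


assert len(build_or_conditions("email sender")) == 4
assert build_or_conditions("an to") == []  # words under three characters are dropped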

View File

@@ -9,6 +9,7 @@ import pydantic
 from backend.data.block import BlockInput
 from backend.data.graph import GraphModel, GraphSettings, GraphTriggerInfo
 from backend.data.model import CredentialsMetaInput, is_credentials_field_name
+from backend.util.json import loads as json_loads
 from backend.util.models import Pagination

 if TYPE_CHECKING:
@@ -16,10 +17,10 @@ if TYPE_CHECKING:
 class LibraryAgentStatus(str, Enum):
-    COMPLETED = "COMPLETED"  # All runs completed
-    HEALTHY = "HEALTHY"  # Agent is running (not all runs have completed)
-    WAITING = "WAITING"  # Agent is queued or waiting to start
-    ERROR = "ERROR"  # Agent is in an error state
+    COMPLETED = "COMPLETED"
+    HEALTHY = "HEALTHY"
+    WAITING = "WAITING"
+    ERROR = "ERROR"


 class MarketplaceListingCreator(pydantic.BaseModel):
@@ -39,6 +40,30 @@ class MarketplaceListing(pydantic.BaseModel):
     creator: MarketplaceListingCreator


+class RecentExecution(pydantic.BaseModel):
+    """Summary of a recent execution for quality assessment.
+
+    Used by the LLM to understand the agent's recent performance with specific examples
+    rather than just aggregate statistics.
+    """
+
+    status: str
+    correctness_score: float | None = None
+    activity_summary: str | None = None
+
+
+def _parse_settings(settings: dict | str | None) -> GraphSettings:
+    """Parse settings from database, handling both dict and string formats."""
+    if settings is None:
+        return GraphSettings()
+    try:
+        if isinstance(settings, str):
+            settings = json_loads(settings)
+        return GraphSettings.model_validate(settings)
+    except Exception:
+        return GraphSettings()
+
+
 class LibraryAgent(pydantic.BaseModel):
     """
     Represents an agent in the library, including metadata for display and
@@ -48,7 +73,7 @@ class LibraryAgent(pydantic.BaseModel):
     id: str
     graph_id: str
     graph_version: int
-    owner_user_id: str  # ID of user who owns/created this agent graph
+    owner_user_id: str

     image_url: str | None
@@ -64,7 +89,7 @@ class LibraryAgent(pydantic.BaseModel):
     description: str
     instructions: str | None = None

-    input_schema: dict[str, Any]  # Should be BlockIOObjectSubSchema in frontend
+    input_schema: dict[str, Any]
     output_schema: dict[str, Any]
     credentials_input_schema: dict[str, Any] | None = pydantic.Field(
         description="Input schema for credentials required by the agent",
@@ -81,25 +106,19 @@ class LibraryAgent(pydantic.BaseModel):
     )
     trigger_setup_info: Optional[GraphTriggerInfo] = None

-    # Indicates whether there's a new output (based on recent runs)
     new_output: bool
+    execution_count: int = 0
+    success_rate: float | None = None
+    avg_correctness_score: float | None = None
+    recent_executions: list[RecentExecution] = pydantic.Field(
+        default_factory=list,
+        description="List of recent executions with status, score, and summary",
+    )

-    # Whether the user can access the underlying graph
     can_access_graph: bool

-    # Indicates if this agent is the latest version
     is_latest_version: bool

-    # Whether the agent is marked as favorite by the user
     is_favorite: bool

-    # Recommended schedule cron (from marketplace agents)
     recommended_schedule_cron: str | None = None

-    # User-specific settings for this library agent
     settings: GraphSettings = pydantic.Field(default_factory=GraphSettings)

-    # Marketplace listing information if the agent has been published
     marketplace_listing: Optional["MarketplaceListing"] = None

     @staticmethod
@@ -123,7 +142,6 @@ class LibraryAgent(pydantic.BaseModel):
         agent_updated_at = agent.AgentGraph.updatedAt
         lib_agent_updated_at = agent.updatedAt

-        # Compute updated_at as the latest between library agent and graph
         updated_at = (
             max(agent_updated_at, lib_agent_updated_at)
             if agent_updated_at
@@ -136,7 +154,6 @@ class LibraryAgent(pydantic.BaseModel):
             creator_name = agent.Creator.name or "Unknown"
             creator_image_url = agent.Creator.avatarUrl or ""

-        # Logic to calculate status and new_output
         week_ago = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(
             days=7
         )
@@ -145,13 +162,55 @@ class LibraryAgent(pydantic.BaseModel):
         status = status_result.status
         new_output = status_result.new_output

-        # Check if user can access the graph
-        can_access_graph = agent.AgentGraph.userId == agent.userId
-
-        # Hard-coded to True until a method to check is implemented
+        execution_count = len(executions)
+        success_rate: float | None = None
+        avg_correctness_score: float | None = None
+        if execution_count > 0:
+            success_count = sum(
+                1
+                for e in executions
+                if e.executionStatus == prisma.enums.AgentExecutionStatus.COMPLETED
+            )
+            success_rate = (success_count / execution_count) * 100
+
+            correctness_scores = []
+            for e in executions:
+                if e.stats and isinstance(e.stats, dict):
+                    score = e.stats.get("correctness_score")
+                    if score is not None and isinstance(score, (int, float)):
+                        correctness_scores.append(float(score))
+            if correctness_scores:
+                avg_correctness_score = sum(correctness_scores) / len(
+                    correctness_scores
+                )
+
+        recent_executions: list[RecentExecution] = []
+        for e in executions:
+            exec_score: float | None = None
+            exec_summary: str | None = None
+            if e.stats and isinstance(e.stats, dict):
+                score = e.stats.get("correctness_score")
+                if score is not None and isinstance(score, (int, float)):
+                    exec_score = float(score)
+                summary = e.stats.get("activity_status")
+                if summary is not None and isinstance(summary, str):
+                    exec_summary = summary
+            exec_status = (
+                e.executionStatus.value
+                if hasattr(e.executionStatus, "value")
+                else str(e.executionStatus)
+            )
+            recent_executions.append(
+                RecentExecution(
+                    status=exec_status,
+                    correctness_score=exec_score,
+                    activity_summary=exec_summary,
+                )
+            )
+
+        can_access_graph = agent.AgentGraph.userId == agent.userId
         is_latest_version = True

-        # Build marketplace_listing if available
         marketplace_listing_data = None
         if store_listing and store_listing.ActiveVersion and profile:
             creator_data = MarketplaceListingCreator(
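For intuition on the new metrics: given four tracked executions with statuses COMPLETED, COMPLETED, FAILED, COMPLETED and correctness scores 0.9, 0.7, None, 0.8, the fields come out as execution_count=4, success_rate=75.0, and avg_correctness_score=0.8 (a missing score is skipped, not counted as zero). A toy reproduction of the aggregation:

statuses = ["COMPLETED", "COMPLETED", "FAILED", "COMPLETED"]
scores = [0.9, 0.7, None, 0.8]

execution_count = len(statuses)
success_rate = sum(s == "COMPLETED" for s in statuses) / execution_count * 100
valid = [s for s in scores if s is not None]
avg_correctness_score = sum(valid) / len(valid) if valid else None

assert execution_count == 4
assert success_rate == 75.0
assert avg_correctness_score is not None and round(avg_correctness_score, 6) == 0.8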
@@ -190,11 +249,15 @@ class LibraryAgent(pydantic.BaseModel):
             has_sensitive_action=graph.has_sensitive_action,
             trigger_setup_info=graph.trigger_setup_info,
             new_output=new_output,
+            execution_count=execution_count,
+            success_rate=success_rate,
+            avg_correctness_score=avg_correctness_score,
+            recent_executions=recent_executions,
             can_access_graph=can_access_graph,
             is_latest_version=is_latest_version,
             is_favorite=agent.isFavorite,
             recommended_schedule_cron=agent.AgentGraph.recommendedScheduleCron,
-            settings=GraphSettings.model_validate(agent.settings),
+            settings=_parse_settings(agent.settings),
             marketplace_listing=marketplace_listing_data,
         )
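The switch to _parse_settings matters because the settings column may surface as a dict, a JSON string, or None depending on how it was stored, and a malformed value should degrade to defaults rather than fail model construction. A self-contained approximation using the stdlib json module and a stand-in model (GraphSettings itself lives in backend.data.graph; the field here is hypothetical):

import json

import pydantic


class Settings(pydantic.BaseModel):  # stand-in for GraphSettings
    notify_on_completion: bool = False  # hypothetical field


def parse_settings(settings: dict | str | None) -> Settings:
    if settings is None:
        return Settings()
    try:
        if isinstance(settings, str):
            settings = json.loads(settings)
        return Settings.model_validate(settings)
    except Exception:
        return Settings()


assert parse_settings(None) == Settings()
assert parse_settings('{"notify_on_completion": true}').notify_on_completion
assert parse_settings("not valid json") == Settings()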
@@ -220,18 +283,15 @@ def _calculate_agent_status(
     if not executions:
         return AgentStatusResult(status=LibraryAgentStatus.COMPLETED, new_output=False)

-    # Track how many times each execution status appears
     status_counts = {status: 0 for status in prisma.enums.AgentExecutionStatus}
     new_output = False

     for execution in executions:
-        # Check if there's a completed run more recent than `recent_threshold`
         if execution.createdAt >= recent_threshold:
             if execution.executionStatus == prisma.enums.AgentExecutionStatus.COMPLETED:
                 new_output = True
         status_counts[execution.executionStatus] += 1

-    # Determine the final status based on counts
     if status_counts[prisma.enums.AgentExecutionStatus.FAILED] > 0:
         return AgentStatusResult(status=LibraryAgentStatus.ERROR, new_output=new_output)
     elif status_counts[prisma.enums.AgentExecutionStatus.QUEUED] > 0:
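The status ladder resolves by precedence: any FAILED execution marks the agent ERROR before QUEUED (and the remaining states, which this hunk cuts off) are considered. A reduced sketch of the visible logic, using plain strings for the enum; the QUEUED-to-WAITING mapping is inferred from the LibraryAgentStatus values above, not shown in this hunk:

from collections import Counter


def calculate_status(statuses: list[str]) -> str:
    counts = Counter(statuses)
    if counts["FAILED"] > 0:
        return "ERROR"
    if counts["QUEUED"] > 0:
        return "WAITING"  # assumed mapping; this branch is truncated in the diff
    return "COMPLETED"


assert calculate_status(["COMPLETED", "FAILED"]) == "ERROR"
assert calculate_status(["QUEUED", "COMPLETED"]) == "WAITING"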

View File

@@ -115,7 +115,6 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
     CLAUDE_4_5_OPUS = "claude-opus-4-5-20251101"
     CLAUDE_4_5_SONNET = "claude-sonnet-4-5-20250929"
     CLAUDE_4_5_HAIKU = "claude-haiku-4-5-20251001"
-    CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
     CLAUDE_3_HAIKU = "claude-3-haiku-20240307"

     # AI/ML API models
     AIML_API_QWEN2_5_72B = "Qwen/Qwen2.5-72B-Instruct-Turbo"
@@ -280,9 +279,6 @@ MODEL_METADATA = {
     LlmModel.CLAUDE_4_5_HAIKU: ModelMetadata(
         "anthropic", 200000, 64000, "Claude Haiku 4.5", "Anthropic", "Anthropic", 2
     ),  # claude-haiku-4-5-20251001
-    LlmModel.CLAUDE_3_7_SONNET: ModelMetadata(
-        "anthropic", 200000, 64000, "Claude 3.7 Sonnet", "Anthropic", "Anthropic", 2
-    ),  # claude-3-7-sonnet-20250219
     LlmModel.CLAUDE_3_HAIKU: ModelMetadata(
         "anthropic", 200000, 4096, "Claude 3 Haiku", "Anthropic", "Anthropic", 1
     ),  # claude-3-haiku-20240307

View File

@@ -83,7 +83,7 @@ class StagehandRecommendedLlmModel(str, Enum):
     GPT41_MINI = "gpt-4.1-mini-2025-04-14"

     # Anthropic
-    CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
+    CLAUDE_4_5_SONNET = "claude-sonnet-4-5-20250929"

     @property
     def provider_name(self) -> str:
@@ -137,7 +137,7 @@ class StagehandObserveBlock(Block):
         model: StagehandRecommendedLlmModel = SchemaField(
             title="LLM Model",
             description="LLM to use for Stagehand (provider is inferred)",
-            default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
+            default=StagehandRecommendedLlmModel.CLAUDE_4_5_SONNET,
             advanced=False,
         )
         model_credentials: AICredentials = AICredentialsField()
@@ -230,7 +230,7 @@ class StagehandActBlock(Block):
         model: StagehandRecommendedLlmModel = SchemaField(
             title="LLM Model",
             description="LLM to use for Stagehand (provider is inferred)",
-            default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
+            default=StagehandRecommendedLlmModel.CLAUDE_4_5_SONNET,
             advanced=False,
         )
         model_credentials: AICredentials = AICredentialsField()
@@ -330,7 +330,7 @@ class StagehandExtractBlock(Block):
         model: StagehandRecommendedLlmModel = SchemaField(
             title="LLM Model",
             description="LLM to use for Stagehand (provider is inferred)",
-            default=StagehandRecommendedLlmModel.CLAUDE_3_7_SONNET,
+            default=StagehandRecommendedLlmModel.CLAUDE_4_5_SONNET,
             advanced=False,
         )
         model_credentials: AICredentials = AICredentialsField()

View File

@@ -81,7 +81,6 @@ MODEL_COST: dict[LlmModel, int] = {
     LlmModel.CLAUDE_4_5_HAIKU: 4,
     LlmModel.CLAUDE_4_5_OPUS: 14,
     LlmModel.CLAUDE_4_5_SONNET: 9,
-    LlmModel.CLAUDE_3_7_SONNET: 5,
     LlmModel.CLAUDE_3_HAIKU: 1,
     LlmModel.AIML_API_QWEN2_5_72B: 1,
     LlmModel.AIML_API_LLAMA3_1_70B: 1,

View File

@@ -666,10 +666,16 @@ class CredentialsFieldInfo(BaseModel, Generic[CP, CT]):
         if not (self.discriminator and self.discriminator_mapping):
             return self

+        try:
+            provider = self.discriminator_mapping[discriminator_value]
+        except KeyError:
+            raise ValueError(
+                f"Model '{discriminator_value}' is not supported. "
+                "It may have been deprecated. Please update your agent configuration."
+            )
+
         return CredentialsFieldInfo(
-            credentials_provider=frozenset(
-                [self.discriminator_mapping[discriminator_value]]
-            ),
+            credentials_provider=frozenset([provider]),
             credentials_types=self.supported_types,
             credentials_scopes=self.required_scopes,
             discriminator=self.discriminator,
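The net effect of the guard: a node that still references a retired model id now fails with an actionable ValueError instead of a bare KeyError from the mapping lookup. A reduced, runnable sketch (the mapping contents are illustrative):

def resolve_provider(mapping: dict[str, str], model_id: str) -> str:
    try:
        return mapping[model_id]
    except KeyError:
        raise ValueError(
            f"Model '{model_id}' is not supported. "
            "It may have been deprecated. Please update your agent configuration."
        )


mapping = {"claude-sonnet-4-5-20250929": "anthropic"}
assert resolve_provider(mapping, "claude-sonnet-4-5-20250929") == "anthropic"
try:
    resolve_provider(mapping, "claude-3-7-sonnet-20250219")
except ValueError as e:
    assert "not supported" in str(e)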

View File

@@ -0,0 +1,22 @@
-- Migrate Claude 3.7 Sonnet to Claude 4.5 Sonnet
-- This updates all AgentNode blocks that use the deprecated Claude 3.7 Sonnet model
-- Anthropic is retiring claude-3-7-sonnet-20250219 on February 19, 2026
-- Update AgentNode constant inputs
UPDATE "AgentNode"
SET "constantInput" = JSONB_SET(
"constantInput"::jsonb,
'{model}',
'"claude-sonnet-4-5-20250929"'::jsonb
)
WHERE "constantInput"::jsonb->>'model' = 'claude-3-7-sonnet-20250219';
-- Update AgentPreset input overrides (stored in AgentNodeExecutionInputOutput)
UPDATE "AgentNodeExecutionInputOutput"
SET "data" = JSONB_SET(
"data"::jsonb,
'{model}',
'"claude-sonnet-4-5-20250929"'::jsonb
)
WHERE "agentPresetId" IS NOT NULL
AND "data"::jsonb->>'model' = 'claude-3-7-sonnet-20250219';

View File

@@ -31,6 +31,10 @@
"has_sensitive_action": false, "has_sensitive_action": false,
"trigger_setup_info": null, "trigger_setup_info": null,
"new_output": false, "new_output": false,
"execution_count": 0,
"success_rate": null,
"avg_correctness_score": null,
"recent_executions": [],
"can_access_graph": true, "can_access_graph": true,
"is_latest_version": true, "is_latest_version": true,
"is_favorite": false, "is_favorite": false,
@@ -72,6 +76,10 @@
"has_sensitive_action": false, "has_sensitive_action": false,
"trigger_setup_info": null, "trigger_setup_info": null,
"new_output": false, "new_output": false,
"execution_count": 0,
"success_rate": null,
"avg_correctness_score": null,
"recent_executions": [],
"can_access_graph": false, "can_access_graph": false,
"is_latest_version": true, "is_latest_version": true,
"is_favorite": false, "is_favorite": false,

View File

@@ -57,7 +57,8 @@ class TestDecomposeGoal:
         result = await core.decompose_goal("Build a chatbot")

-        mock_external.assert_called_once_with("Build a chatbot", "")
+        # library_agents defaults to None
+        mock_external.assert_called_once_with("Build a chatbot", "", None)
         assert result == expected_result

     @pytest.mark.asyncio
@@ -74,7 +75,8 @@ class TestDecomposeGoal:
         await core.decompose_goal("Build a chatbot", "Use Python")

-        mock_external.assert_called_once_with("Build a chatbot", "Use Python")
+        # library_agents defaults to None
+        mock_external.assert_called_once_with("Build a chatbot", "Use Python", None)

     @pytest.mark.asyncio
     async def test_returns_none_on_service_failure(self):
@@ -109,7 +111,8 @@ class TestGenerateAgent:
         instructions = {"type": "instructions", "steps": ["Step 1"]}

         result = await core.generate_agent(instructions)

-        mock_external.assert_called_once_with(instructions)
+        # library_agents defaults to None
+        mock_external.assert_called_once_with(instructions, None)

         # Result should have id, version, is_active added if not present
         assert result is not None
         assert result["name"] == "Test Agent"
@@ -174,7 +177,8 @@ class TestGenerateAgentPatch:
         current_agent = {"nodes": [], "links": []}

         result = await core.generate_agent_patch("Add a node", current_agent)

-        mock_external.assert_called_once_with("Add a node", current_agent)
+        # library_agents defaults to None
+        mock_external.assert_called_once_with("Add a node", current_agent, None)
         assert result == expected_result

     @pytest.mark.asyncio

View File

@@ -0,0 +1,841 @@
"""
Tests for library agent fetching functionality in agent generator.
This test suite verifies the search-based library agent fetching,
including the combination of library and marketplace agents.
"""
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from backend.api.features.chat.tools.agent_generator import core
class TestGetLibraryAgentsForGeneration:
"""Test get_library_agents_for_generation function."""
@pytest.mark.asyncio
async def test_fetches_agents_with_search_term(self):
"""Test that search_term is passed to the library db."""
# Create a mock agent with proper attribute values
mock_agent = MagicMock()
mock_agent.graph_id = "agent-123"
mock_agent.graph_version = 1
mock_agent.name = "Email Agent"
mock_agent.description = "Sends emails"
mock_agent.input_schema = {"properties": {}}
mock_agent.output_schema = {"properties": {}}
mock_agent.recent_executions = []
mock_response = MagicMock()
mock_response.agents = [mock_agent]
with patch.object(
core.library_db,
"list_library_agents",
new_callable=AsyncMock,
return_value=mock_response,
) as mock_list:
result = await core.get_library_agents_for_generation(
user_id="user-123",
search_query="send email",
)
mock_list.assert_called_once_with(
user_id="user-123",
search_term="send email",
page=1,
page_size=15,
include_executions=True,
)
# Verify result format
assert len(result) == 1
assert result[0]["graph_id"] == "agent-123"
assert result[0]["name"] == "Email Agent"
@pytest.mark.asyncio
async def test_excludes_specified_graph_id(self):
"""Test that agents with excluded graph_id are filtered out."""
mock_response = MagicMock()
mock_response.agents = [
MagicMock(
graph_id="agent-123",
graph_version=1,
name="Agent 1",
description="First agent",
input_schema={},
output_schema={},
recent_executions=[],
),
MagicMock(
graph_id="agent-456",
graph_version=1,
name="Agent 2",
description="Second agent",
input_schema={},
output_schema={},
recent_executions=[],
),
]
with patch.object(
core.library_db,
"list_library_agents",
new_callable=AsyncMock,
return_value=mock_response,
):
result = await core.get_library_agents_for_generation(
user_id="user-123",
exclude_graph_id="agent-123",
)
# Verify the excluded agent is not in results
assert len(result) == 1
assert result[0]["graph_id"] == "agent-456"
@pytest.mark.asyncio
async def test_respects_max_results(self):
"""Test that max_results parameter limits the page_size."""
mock_response = MagicMock()
mock_response.agents = []
with patch.object(
core.library_db,
"list_library_agents",
new_callable=AsyncMock,
return_value=mock_response,
) as mock_list:
await core.get_library_agents_for_generation(
user_id="user-123",
max_results=5,
)
mock_list.assert_called_once_with(
user_id="user-123",
search_term=None,
page=1,
page_size=5,
include_executions=True,
)
class TestSearchMarketplaceAgentsForGeneration:
"""Test search_marketplace_agents_for_generation function."""
@pytest.mark.asyncio
async def test_searches_marketplace_with_query(self):
"""Test that marketplace is searched with the query."""
mock_response = MagicMock()
mock_response.agents = [
MagicMock(
agent_name="Public Agent",
description="A public agent",
sub_heading="Does something useful",
creator="creator-1",
)
]
# The store_db is dynamically imported, so patch the import path
with patch(
"backend.api.features.store.db.get_store_agents",
new_callable=AsyncMock,
return_value=mock_response,
) as mock_search:
result = await core.search_marketplace_agents_for_generation(
search_query="automation",
max_results=10,
)
mock_search.assert_called_once_with(
search_query="automation",
page=1,
page_size=10,
)
assert len(result) == 1
assert result[0]["name"] == "Public Agent"
assert result[0]["is_marketplace_agent"] is True
@pytest.mark.asyncio
async def test_handles_marketplace_error_gracefully(self):
"""Test that marketplace errors don't crash the function."""
with patch(
"backend.api.features.store.db.get_store_agents",
new_callable=AsyncMock,
side_effect=Exception("Marketplace unavailable"),
):
result = await core.search_marketplace_agents_for_generation(
search_query="test"
)
# Should return empty list, not raise exception
assert result == []
class TestGetAllRelevantAgentsForGeneration:
"""Test get_all_relevant_agents_for_generation function."""
@pytest.mark.asyncio
async def test_combines_library_and_marketplace_agents(self):
"""Test that agents from both sources are combined."""
library_agents = [
{
"graph_id": "lib-123",
"graph_version": 1,
"name": "Library Agent",
"description": "From library",
"input_schema": {},
"output_schema": {},
}
]
marketplace_agents = [
{
"name": "Market Agent",
"description": "From marketplace",
"sub_heading": "Sub heading",
"creator": "creator-1",
"is_marketplace_agent": True,
}
]
with patch.object(
core,
"get_library_agents_for_generation",
new_callable=AsyncMock,
return_value=library_agents,
):
with patch.object(
core,
"search_marketplace_agents_for_generation",
new_callable=AsyncMock,
return_value=marketplace_agents,
):
result = await core.get_all_relevant_agents_for_generation(
user_id="user-123",
search_query="test query",
include_marketplace=True,
)
# Library agents should come first
assert len(result) == 2
assert result[0]["name"] == "Library Agent"
assert result[1]["name"] == "Market Agent"
@pytest.mark.asyncio
async def test_deduplicates_by_name(self):
"""Test that marketplace agents with same name as library are excluded."""
library_agents = [
{
"graph_id": "lib-123",
"graph_version": 1,
"name": "Shared Agent",
"description": "From library",
"input_schema": {},
"output_schema": {},
}
]
marketplace_agents = [
{
"name": "Shared Agent", # Same name, should be deduplicated
"description": "From marketplace",
"sub_heading": "Sub heading",
"creator": "creator-1",
"is_marketplace_agent": True,
},
{
"name": "Unique Agent",
"description": "Only in marketplace",
"sub_heading": "Sub heading",
"creator": "creator-2",
"is_marketplace_agent": True,
},
]
with patch.object(
core,
"get_library_agents_for_generation",
new_callable=AsyncMock,
return_value=library_agents,
):
with patch.object(
core,
"search_marketplace_agents_for_generation",
new_callable=AsyncMock,
return_value=marketplace_agents,
):
result = await core.get_all_relevant_agents_for_generation(
user_id="user-123",
search_query="test",
include_marketplace=True,
)
# Shared Agent from marketplace should be excluded
assert len(result) == 2
names = [a["name"] for a in result]
assert "Shared Agent" in names
assert "Unique Agent" in names
@pytest.mark.asyncio
async def test_skips_marketplace_when_disabled(self):
"""Test that marketplace is not searched when include_marketplace=False."""
library_agents = [
{
"graph_id": "lib-123",
"graph_version": 1,
"name": "Library Agent",
"description": "From library",
"input_schema": {},
"output_schema": {},
}
]
with patch.object(
core,
"get_library_agents_for_generation",
new_callable=AsyncMock,
return_value=library_agents,
):
with patch.object(
core,
"search_marketplace_agents_for_generation",
new_callable=AsyncMock,
) as mock_marketplace:
result = await core.get_all_relevant_agents_for_generation(
user_id="user-123",
search_query="test",
include_marketplace=False,
)
# Marketplace should not be called
mock_marketplace.assert_not_called()
assert len(result) == 1
@pytest.mark.asyncio
async def test_skips_marketplace_when_no_search_query(self):
"""Test that marketplace is not searched without a search query."""
library_agents = [
{
"graph_id": "lib-123",
"graph_version": 1,
"name": "Library Agent",
"description": "From library",
"input_schema": {},
"output_schema": {},
}
]
with patch.object(
core,
"get_library_agents_for_generation",
new_callable=AsyncMock,
return_value=library_agents,
):
with patch.object(
core,
"search_marketplace_agents_for_generation",
new_callable=AsyncMock,
) as mock_marketplace:
result = await core.get_all_relevant_agents_for_generation(
user_id="user-123",
search_query=None, # No search query
include_marketplace=True,
)
# Marketplace should not be called without search query
mock_marketplace.assert_not_called()
assert len(result) == 1
class TestExtractSearchTermsFromSteps:
"""Test extract_search_terms_from_steps function."""
def test_extracts_terms_from_instructions_type(self):
"""Test extraction from valid instructions decomposition result."""
decomposition_result = {
"type": "instructions",
"steps": [
{
"description": "Send an email notification",
"block_name": "GmailSendBlock",
},
{"description": "Fetch weather data", "action": "Get weather API"},
],
}
result = core.extract_search_terms_from_steps(decomposition_result)
assert "Send an email notification" in result
assert "GmailSendBlock" in result
assert "Fetch weather data" in result
assert "Get weather API" in result
def test_returns_empty_for_non_instructions_type(self):
"""Test that non-instructions types return empty list."""
decomposition_result = {
"type": "clarifying_questions",
"questions": [{"question": "What email?"}],
}
result = core.extract_search_terms_from_steps(decomposition_result)
assert result == []
def test_deduplicates_terms_case_insensitively(self):
"""Test that duplicate terms are removed (case-insensitive)."""
decomposition_result = {
"type": "instructions",
"steps": [
{"description": "Send Email", "name": "send email"},
{"description": "Other task"},
],
}
result = core.extract_search_terms_from_steps(decomposition_result)
# Should only have one "send email" variant
email_terms = [t for t in result if "email" in t.lower()]
assert len(email_terms) == 1
def test_filters_short_terms(self):
"""Test that terms with 3 or fewer characters are filtered out."""
decomposition_result = {
"type": "instructions",
"steps": [
{"description": "ab", "action": "xyz"}, # Both too short
{"description": "Valid term here"},
],
}
result = core.extract_search_terms_from_steps(decomposition_result)
assert "ab" not in result
assert "xyz" not in result
assert "Valid term here" in result
def test_handles_empty_steps(self):
"""Test handling of empty steps list."""
decomposition_result = {
"type": "instructions",
"steps": [],
}
result = core.extract_search_terms_from_steps(decomposition_result)
assert result == []
class TestEnrichLibraryAgentsFromSteps:
"""Test enrich_library_agents_from_steps function."""
@pytest.mark.asyncio
async def test_enriches_with_additional_agents(self):
"""Test that additional agents are found based on steps."""
existing_agents = [
{
"graph_id": "existing-123",
"graph_version": 1,
"name": "Existing Agent",
"description": "Already fetched",
"input_schema": {},
"output_schema": {},
}
]
additional_agents = [
{
"graph_id": "new-456",
"graph_version": 1,
"name": "Email Agent",
"description": "For sending emails",
"input_schema": {},
"output_schema": {},
}
]
decomposition_result = {
"type": "instructions",
"steps": [
{"description": "Send email notification"},
],
}
with patch.object(
core,
"get_all_relevant_agents_for_generation",
new_callable=AsyncMock,
return_value=additional_agents,
):
result = await core.enrich_library_agents_from_steps(
user_id="user-123",
decomposition_result=decomposition_result,
existing_agents=existing_agents,
)
# Should have both existing and new agents
assert len(result) == 2
names = [a["name"] for a in result]
assert "Existing Agent" in names
assert "Email Agent" in names
@pytest.mark.asyncio
async def test_deduplicates_by_graph_id(self):
"""Test that agents with same graph_id are not duplicated."""
existing_agents = [
{
"graph_id": "agent-123",
"graph_version": 1,
"name": "Existing Agent",
"description": "Already fetched",
"input_schema": {},
"output_schema": {},
}
]
# Additional search returns same agent
additional_agents = [
{
"graph_id": "agent-123", # Same ID
"graph_version": 1,
"name": "Existing Agent Copy",
"description": "Same agent different name",
"input_schema": {},
"output_schema": {},
}
]
decomposition_result = {
"type": "instructions",
"steps": [{"description": "Some action"}],
}
with patch.object(
core,
"get_all_relevant_agents_for_generation",
new_callable=AsyncMock,
return_value=additional_agents,
):
result = await core.enrich_library_agents_from_steps(
user_id="user-123",
decomposition_result=decomposition_result,
existing_agents=existing_agents,
)
# Should not duplicate
assert len(result) == 1
@pytest.mark.asyncio
async def test_deduplicates_by_name(self):
"""Test that agents with same name are not duplicated."""
existing_agents = [
{
"graph_id": "agent-123",
"graph_version": 1,
"name": "Email Agent",
"description": "Already fetched",
"input_schema": {},
"output_schema": {},
}
]
# Additional search returns agent with same name but different ID
additional_agents = [
{
"graph_id": "agent-456", # Different ID
"graph_version": 1,
"name": "Email Agent", # Same name
"description": "Different agent same name",
"input_schema": {},
"output_schema": {},
}
]
decomposition_result = {
"type": "instructions",
"steps": [{"description": "Send email"}],
}
with patch.object(
core,
"get_all_relevant_agents_for_generation",
new_callable=AsyncMock,
return_value=additional_agents,
):
result = await core.enrich_library_agents_from_steps(
user_id="user-123",
decomposition_result=decomposition_result,
existing_agents=existing_agents,
)
# Should not duplicate by name
assert len(result) == 1
assert result[0].get("graph_id") == "agent-123" # Original kept
@pytest.mark.asyncio
async def test_returns_existing_when_no_steps(self):
"""Test that existing agents are returned when no search terms extracted."""
existing_agents = [
{
"graph_id": "existing-123",
"graph_version": 1,
"name": "Existing Agent",
"description": "Already fetched",
"input_schema": {},
"output_schema": {},
}
]
decomposition_result = {
"type": "clarifying_questions", # Not instructions type
"questions": [],
}
result = await core.enrich_library_agents_from_steps(
user_id="user-123",
decomposition_result=decomposition_result,
existing_agents=existing_agents,
)
# Should return existing unchanged
assert result == existing_agents
@pytest.mark.asyncio
async def test_limits_search_terms_to_three(self):
"""Test that only first 3 search terms are used."""
existing_agents = []
decomposition_result = {
"type": "instructions",
"steps": [
{"description": "First action"},
{"description": "Second action"},
{"description": "Third action"},
{"description": "Fourth action"},
{"description": "Fifth action"},
],
}
call_count = 0
async def mock_get_agents(*args, **kwargs):
nonlocal call_count
call_count += 1
return []
with patch.object(
core,
"get_all_relevant_agents_for_generation",
side_effect=mock_get_agents,
):
await core.enrich_library_agents_from_steps(
user_id="user-123",
decomposition_result=decomposition_result,
existing_agents=existing_agents,
)
# Should only make 3 calls (limited to first 3 terms)
assert call_count == 3
class TestExtractUuidsFromText:
"""Test extract_uuids_from_text function."""
def test_extracts_single_uuid(self):
"""Test extraction of a single UUID from text."""
text = "Use my agent 46631191-e8a8-486f-ad90-84f89738321d for this task"
result = core.extract_uuids_from_text(text)
assert len(result) == 1
assert "46631191-e8a8-486f-ad90-84f89738321d" in result
def test_extracts_multiple_uuids(self):
"""Test extraction of multiple UUIDs from text."""
text = (
"Combine agents 11111111-1111-4111-8111-111111111111 "
"and 22222222-2222-4222-9222-222222222222"
)
result = core.extract_uuids_from_text(text)
assert len(result) == 2
assert "11111111-1111-4111-8111-111111111111" in result
assert "22222222-2222-4222-9222-222222222222" in result
def test_deduplicates_uuids(self):
"""Test that duplicate UUIDs are deduplicated."""
text = (
"Use 46631191-e8a8-486f-ad90-84f89738321d twice: "
"46631191-e8a8-486f-ad90-84f89738321d"
)
result = core.extract_uuids_from_text(text)
assert len(result) == 1
def test_normalizes_to_lowercase(self):
"""Test that UUIDs are normalized to lowercase."""
text = "Use 46631191-E8A8-486F-AD90-84F89738321D"
result = core.extract_uuids_from_text(text)
assert result[0] == "46631191-e8a8-486f-ad90-84f89738321d"
def test_returns_empty_for_no_uuids(self):
"""Test that empty list is returned when no UUIDs found."""
text = "Create an email agent that sends notifications"
result = core.extract_uuids_from_text(text)
assert result == []
def test_ignores_invalid_uuids(self):
"""Test that invalid UUID-like strings are ignored."""
text = "Not a valid UUID: 12345678-1234-1234-1234-123456789abc"
result = core.extract_uuids_from_text(text)
# UUID v4 requires specific patterns (4 in third group, 8/9/a/b in fourth)
assert len(result) == 0
class TestGetLibraryAgentById:
"""Test get_library_agent_by_id function (and its alias get_library_agent_by_graph_id)."""
@pytest.mark.asyncio
async def test_returns_agent_when_found_by_graph_id(self):
"""Test that agent is returned when found by graph_id."""
mock_agent = MagicMock()
mock_agent.graph_id = "agent-123"
mock_agent.graph_version = 1
mock_agent.name = "Test Agent"
mock_agent.description = "Test description"
mock_agent.input_schema = {"properties": {}}
mock_agent.output_schema = {"properties": {}}
with patch.object(
core.library_db,
"get_library_agent_by_graph_id",
new_callable=AsyncMock,
return_value=mock_agent,
):
result = await core.get_library_agent_by_id("user-123", "agent-123")
assert result is not None
assert result["graph_id"] == "agent-123"
assert result["name"] == "Test Agent"
@pytest.mark.asyncio
async def test_falls_back_to_library_agent_id(self):
"""Test that lookup falls back to library agent ID when graph_id not found."""
mock_agent = MagicMock()
mock_agent.graph_id = "graph-456" # Different from the lookup ID
mock_agent.graph_version = 1
mock_agent.name = "Library Agent"
mock_agent.description = "Found by library ID"
mock_agent.input_schema = {"properties": {}}
mock_agent.output_schema = {"properties": {}}
with (
patch.object(
core.library_db,
"get_library_agent_by_graph_id",
new_callable=AsyncMock,
return_value=None, # Not found by graph_id
),
patch.object(
core.library_db,
"get_library_agent",
new_callable=AsyncMock,
return_value=mock_agent, # Found by library ID
),
):
result = await core.get_library_agent_by_id("user-123", "library-id-123")
assert result is not None
assert result["graph_id"] == "graph-456"
assert result["name"] == "Library Agent"
@pytest.mark.asyncio
async def test_returns_none_when_not_found_by_either_method(self):
"""Test that None is returned when agent not found by either method."""
with (
patch.object(
core.library_db,
"get_library_agent_by_graph_id",
new_callable=AsyncMock,
return_value=None,
),
patch.object(
core.library_db,
"get_library_agent",
new_callable=AsyncMock,
side_effect=core.NotFoundError("Not found"),
),
):
result = await core.get_library_agent_by_id("user-123", "nonexistent")
assert result is None
@pytest.mark.asyncio
async def test_returns_none_on_exception(self):
"""Test that None is returned when exception occurs in both lookups."""
with (
patch.object(
core.library_db,
"get_library_agent_by_graph_id",
new_callable=AsyncMock,
side_effect=Exception("Database error"),
),
patch.object(
core.library_db,
"get_library_agent",
new_callable=AsyncMock,
side_effect=Exception("Database error"),
),
):
result = await core.get_library_agent_by_id("user-123", "agent-123")
assert result is None
@pytest.mark.asyncio
async def test_alias_works(self):
"""Test that get_library_agent_by_graph_id is an alias for get_library_agent_by_id."""
assert core.get_library_agent_by_graph_id is core.get_library_agent_by_id
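
# A minimal sketch of the two-step lookup the tests above pin down; the real
# core.library_db call signatures are not shown here, so the zero-argument
# callables are an assumption. The graph-ID lookup runs first, the
# library-agent-ID lookup is the fallback, and any failure degrades to None
# instead of raising.
async def _lookup_fallback_sketch(lookup_by_graph_id, lookup_by_library_id):
    try:
        agent = await lookup_by_graph_id()
        if agent is not None:
            return agent
        return await lookup_by_library_id()
    except Exception:
        return None
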
class TestGetAllRelevantAgentsWithUuids:
"""Test UUID extraction in get_all_relevant_agents_for_generation."""
@pytest.mark.asyncio
async def test_fetches_explicitly_mentioned_agents(self):
"""Test that agents mentioned by UUID are fetched directly."""
mock_agent = MagicMock()
mock_agent.graph_id = "46631191-e8a8-486f-ad90-84f89738321d"
mock_agent.graph_version = 1
mock_agent.name = "Mentioned Agent"
mock_agent.description = "Explicitly mentioned"
mock_agent.input_schema = {}
mock_agent.output_schema = {}
mock_response = MagicMock()
mock_response.agents = []
with (
patch.object(
core.library_db,
"get_library_agent_by_graph_id",
new_callable=AsyncMock,
return_value=mock_agent,
),
patch.object(
core.library_db,
"list_library_agents",
new_callable=AsyncMock,
return_value=mock_response,
),
):
result = await core.get_all_relevant_agents_for_generation(
user_id="user-123",
search_query="Use agent 46631191-e8a8-486f-ad90-84f89738321d",
include_marketplace=False,
)
assert len(result) == 1
assert result[0].get("graph_id") == "46631191-e8a8-486f-ad90-84f89738321d"
assert result[0].get("name") == "Mentioned Agent"
if __name__ == "__main__":
pytest.main([__file__, "-v"])

View File

@@ -433,5 +433,139 @@ class TestGetBlocksExternal:
assert result is None
class TestLibraryAgentsPassthrough:
"""Test that library_agents are passed correctly in all requests."""
def setup_method(self):
"""Reset client singleton before each test."""
service._settings = None
service._client = None
@pytest.mark.asyncio
async def test_decompose_goal_passes_library_agents(self):
"""Test that library_agents are included in decompose goal payload."""
library_agents = [
{
"graph_id": "agent-123",
"graph_version": 1,
"name": "Email Sender",
"description": "Sends emails",
"input_schema": {"properties": {"to": {"type": "string"}}},
"output_schema": {"properties": {"sent": {"type": "boolean"}}},
},
]
mock_response = MagicMock()
mock_response.json.return_value = {
"success": True,
"type": "instructions",
"steps": ["Step 1"],
}
mock_response.raise_for_status = MagicMock()
mock_client = AsyncMock()
mock_client.post.return_value = mock_response
with patch.object(service, "_get_client", return_value=mock_client):
await service.decompose_goal_external(
"Send an email",
library_agents=library_agents,
)
# Verify library_agents was passed in the payload
call_args = mock_client.post.call_args
assert call_args[1]["json"]["library_agents"] == library_agents
@pytest.mark.asyncio
async def test_generate_agent_passes_library_agents(self):
"""Test that library_agents are included in generate agent payload."""
library_agents = [
{
"graph_id": "agent-456",
"graph_version": 2,
"name": "Data Fetcher",
"description": "Fetches data from API",
"input_schema": {"properties": {"url": {"type": "string"}}},
"output_schema": {"properties": {"data": {"type": "object"}}},
},
]
mock_response = MagicMock()
mock_response.json.return_value = {
"success": True,
"agent_json": {"name": "Test Agent", "nodes": []},
}
mock_response.raise_for_status = MagicMock()
mock_client = AsyncMock()
mock_client.post.return_value = mock_response
with patch.object(service, "_get_client", return_value=mock_client):
await service.generate_agent_external(
{"steps": ["Step 1"]},
library_agents=library_agents,
)
# Verify library_agents was passed in the payload
call_args = mock_client.post.call_args
assert call_args[1]["json"]["library_agents"] == library_agents
@pytest.mark.asyncio
async def test_generate_agent_patch_passes_library_agents(self):
"""Test that library_agents are included in patch generation payload."""
library_agents = [
{
"graph_id": "agent-789",
"graph_version": 1,
"name": "Slack Notifier",
"description": "Sends Slack messages",
"input_schema": {"properties": {"message": {"type": "string"}}},
"output_schema": {"properties": {"success": {"type": "boolean"}}},
},
]
mock_response = MagicMock()
mock_response.json.return_value = {
"success": True,
"agent_json": {"name": "Updated Agent", "nodes": []},
}
mock_response.raise_for_status = MagicMock()
mock_client = AsyncMock()
mock_client.post.return_value = mock_response
with patch.object(service, "_get_client", return_value=mock_client):
await service.generate_agent_patch_external(
"Add error handling",
{"name": "Original Agent", "nodes": []},
library_agents=library_agents,
)
# Verify library_agents was passed in the payload
call_args = mock_client.post.call_args
assert call_args[1]["json"]["library_agents"] == library_agents
@pytest.mark.asyncio
async def test_decompose_goal_without_library_agents(self):
"""Test that decompose goal works without library_agents."""
mock_response = MagicMock()
mock_response.json.return_value = {
"success": True,
"type": "instructions",
"steps": ["Step 1"],
}
mock_response.raise_for_status = MagicMock()
mock_client = AsyncMock()
mock_client.post.return_value = mock_response
with patch.object(service, "_get_client", return_value=mock_client):
await service.decompose_goal_external("Build a workflow")
# Verify library_agents was NOT passed when not provided
call_args = mock_client.post.call_args
assert "library_agents" not in call_args[1]["json"]
if __name__ == "__main__": if __name__ == "__main__":
pytest.main([__file__, "-v"]) pytest.main([__file__, "-v"])

View File

@@ -43,19 +43,24 @@ faker = Faker()
# Constants for data generation limits (reduced for E2E tests)
NUM_USERS = 15
NUM_AGENT_BLOCKS = 30
-MIN_GRAPHS_PER_USER = 15
MIN_GRAPHS_PER_USER = 25
-MAX_GRAPHS_PER_USER = 15
MAX_GRAPHS_PER_USER = 25
MIN_NODES_PER_GRAPH = 3
MAX_NODES_PER_GRAPH = 6
MIN_PRESETS_PER_USER = 2
MAX_PRESETS_PER_USER = 3
-MIN_AGENTS_PER_USER = 15
MIN_AGENTS_PER_USER = 25
-MAX_AGENTS_PER_USER = 15
MAX_AGENTS_PER_USER = 25
MIN_EXECUTIONS_PER_GRAPH = 2
MAX_EXECUTIONS_PER_GRAPH = 8
MIN_REVIEWS_PER_VERSION = 2
MAX_REVIEWS_PER_VERSION = 5
# Guaranteed minimums for marketplace tests (deterministic)
GUARANTEED_FEATURED_AGENTS = 8
GUARANTEED_FEATURED_CREATORS = 5
GUARANTEED_TOP_AGENTS = 10
def get_image():
"""Generate a consistent image URL using picsum.photos service."""
@@ -385,7 +390,7 @@ class TestDataCreator:
library_agents = []
for user in self.users:
-num_agents = 10  # Create exactly 10 agents per user
num_agents = random.randint(MIN_AGENTS_PER_USER, MAX_AGENTS_PER_USER)
# Get available graphs for this user
user_graphs = [
@@ -507,14 +512,17 @@ class TestDataCreator:
existing_profiles, min(num_creators, len(existing_profiles))
)
-# Mark about 50% of creators as featured (more for testing)
# Guarantee at least GUARANTEED_FEATURED_CREATORS featured creators
-num_featured = max(2, int(num_creators * 0.5))
num_featured = max(GUARANTEED_FEATURED_CREATORS, int(num_creators * 0.5))
num_featured = min(
num_featured, len(selected_profiles)
)  # Don't exceed available profiles
featured_profile_ids = set(
random.sample([p.id for p in selected_profiles], num_featured)
)
print(
f"🎯 Creating {num_featured} featured creators (min: {GUARANTEED_FEATURED_CREATORS})"
)
for profile in selected_profiles:
try:
@@ -545,21 +553,25 @@ class TestDataCreator:
return profiles
async def create_test_store_submissions(self) -> List[Dict[str, Any]]:
-"""Create test store submissions using the API function."""
"""Create test store submissions using the API function.
DETERMINISTIC: Guarantees minimum featured agents for E2E tests.
"""
print("Creating test store submissions...") print("Creating test store submissions...")
submissions = [] submissions = []
approved_submissions = [] approved_submissions = []
featured_count = 0
submission_counter = 0
-# Create a special test submission for test123@gmail.com
# Create a special test submission for test123@gmail.com (ALWAYS approved + featured)
test_user = next(
(user for user in self.users if user["email"] == "test123@gmail.com"), None
)
-if test_user:
if test_user and self.agent_graphs:
-# Special test data for consistent testing
test_submission_data = {
"user_id": test_user["id"],
-"agent_id": self.agent_graphs[0]["id"],  # Use first available graph
"agent_id": self.agent_graphs[0]["id"],
"agent_version": 1,
"slug": "test-agent-submission",
"name": "Test Agent Submission",
@@ -580,37 +592,24 @@ class TestDataCreator:
submissions.append(test_submission.model_dump())
print("✅ Created special test store submission for test123@gmail.com")
-# Randomly approve, reject, or leave pending the test submission
# ALWAYS approve and feature the test submission
if test_submission.store_listing_version_id:
-random_value = random.random()
-if random_value < 0.4:  # 40% chance to approve
approved_submission = await review_store_submission(
store_listing_version_id=test_submission.store_listing_version_id,
is_approved=True,
external_comments="Test submission approved",
internal_comments="Auto-approved test submission",
reviewer_id=test_user["id"],
)
approved_submissions.append(approved_submission.model_dump())
print("✅ Approved test store submission")
-# Mark approved submission as featured
await prisma.storelistingversion.update(
where={"id": test_submission.store_listing_version_id},
data={"isFeatured": True},
)
featured_count += 1
print("🌟 Marked test agent as FEATURED")
-elif random_value < 0.7:  # 30% chance to reject (40% to 70%)
-await review_store_submission(
-store_listing_version_id=test_submission.store_listing_version_id,
-is_approved=False,
-external_comments="Test submission rejected - needs improvements",
-internal_comments="Auto-rejected test submission for E2E testing",
-reviewer_id=test_user["id"],
-)
-print("❌ Rejected test store submission")
-else:  # 30% chance to leave pending (70% to 100%)
-print("⏳ Left test submission pending for review")
except Exception as e:
print(f"Error creating test store submission: {e}")
@@ -620,7 +619,6 @@ class TestDataCreator:
# Create regular submissions for all users
for user in self.users:
-# Get available graphs for this specific user
user_graphs = [
g for g in self.agent_graphs if g.get("userId") == user["id"]
]
@@ -631,18 +629,17 @@ class TestDataCreator:
)
continue
-# Create exactly 4 store submissions per user
for submission_index in range(4):
graph = random.choice(user_graphs)
submission_counter += 1
try:
print(
-f"Creating store submission for user {user['id']} with graph {graph['id']} (owner: {graph.get('userId')})"
f"Creating store submission for user {user['id']} with graph {graph['id']}"
)
-# Use the API function to create store submission with correct parameters
submission = await create_store_submission(
-user_id=user["id"],  # Must match graph's userId
user_id=user["id"],
agent_id=graph["id"],
agent_version=graph.get("version", 1),
slug=faker.slug(),
@@ -651,22 +648,24 @@ class TestDataCreator:
video_url=get_video_url() if random.random() < 0.3 else None,
image_urls=[get_image() for _ in range(3)],
description=faker.text(),
-categories=[
-get_category()
-],  # Single category from predefined list
categories=[get_category()],
changes_summary="Initial E2E test submission",
)
submissions.append(submission.model_dump())
print(f"✅ Created store submission: {submission.name}")
-# Randomly approve, reject, or leave pending the submission
if submission.store_listing_version_id:
-random_value = random.random()
-if random_value < 0.4:  # 40% chance to approve
-try:
-# Pick a random user as the reviewer (admin)
-reviewer_id = random.choice(self.users)["id"]
# DETERMINISTIC: First N submissions are always approved
# First GUARANTEED_FEATURED_AGENTS of those are always featured
should_approve = (
submission_counter <= GUARANTEED_TOP_AGENTS
or random.random() < 0.4
)
should_feature = featured_count < GUARANTEED_FEATURED_AGENTS
if should_approve:
try:
reviewer_id = random.choice(self.users)["id"]
approved_submission = await review_store_submission(
store_listing_version_id=submission.store_listing_version_id,
is_approved=True,
@@ -681,16 +680,7 @@ class TestDataCreator:
f"✅ Approved store submission: {submission.name}" f"✅ Approved store submission: {submission.name}"
) )
# Mark some agents as featured during creation (30% chance) if should_feature:
# More likely for creators and first submissions
is_creator = user["id"] in [
p.get("userId") for p in self.profiles
]
feature_chance = (
0.5 if is_creator else 0.2
) # 50% for creators, 20% for others
if random.random() < feature_chance:
try: try:
await prisma.storelistingversion.update( await prisma.storelistingversion.update(
where={ where={
@@ -698,8 +688,25 @@ class TestDataCreator:
},
data={"isFeatured": True},
)
featured_count += 1
print(
-f"🌟 Marked agent as FEATURED: {submission.name}"
f"🌟 Marked agent as FEATURED ({featured_count}/{GUARANTEED_FEATURED_AGENTS}): {submission.name}"
)
except Exception as e:
print(
f"Warning: Could not mark submission as featured: {e}"
)
elif random.random() < 0.2:
try:
await prisma.storelistingversion.update(
where={
"id": submission.store_listing_version_id
},
data={"isFeatured": True},
)
featured_count += 1
print(
f"🌟 Marked agent as FEATURED (bonus): {submission.name}"
)
except Exception as e:
print(
@@ -710,11 +717,9 @@ class TestDataCreator:
print(
f"Warning: Could not approve submission {submission.name}: {e}"
)
-elif random_value < 0.7:  # 30% chance to reject (40% to 70%)
elif random.random() < 0.5:
try:
-# Pick a random user as the reviewer (admin)
reviewer_id = random.choice(self.users)["id"]
await review_store_submission(
store_listing_version_id=submission.store_listing_version_id,
is_approved=False,
@@ -729,7 +734,7 @@ class TestDataCreator:
print(
f"Warning: Could not reject submission {submission.name}: {e}"
)
-else:  # 30% chance to leave pending (70% to 100%)
else:
print(
f"⏳ Left submission pending for review: {submission.name}"
)
@@ -743,9 +748,13 @@ class TestDataCreator:
traceback.print_exc()
continue
print("\n📊 Store Submissions Summary:")
print(f" Created: {len(submissions)}")
print(f" Approved: {len(approved_submissions)}")
print(
-f"Created {len(submissions)} store submissions, approved {len(approved_submissions)}"
f" Featured: {featured_count} (guaranteed min: {GUARANTEED_FEATURED_AGENTS})"
)
self.store_submissions = submissions
return submissions
@@ -825,12 +834,15 @@ class TestDataCreator:
print(f"✅ Agent blocks available: {len(self.agent_blocks)}") print(f"✅ Agent blocks available: {len(self.agent_blocks)}")
print(f"✅ Agent graphs created: {len(self.agent_graphs)}") print(f"✅ Agent graphs created: {len(self.agent_graphs)}")
print(f"✅ Library agents created: {len(self.library_agents)}") print(f"✅ Library agents created: {len(self.library_agents)}")
print(f"✅ Creator profiles updated: {len(self.profiles)} (some featured)") print(f"✅ Creator profiles updated: {len(self.profiles)}")
print( print(f"✅ Store submissions created: {len(self.store_submissions)}")
f"✅ Store submissions created: {len(self.store_submissions)} (some marked as featured during creation)"
)
print(f"✅ API keys created: {len(self.api_keys)}") print(f"✅ API keys created: {len(self.api_keys)}")
print(f"✅ Presets created: {len(self.presets)}") print(f"✅ Presets created: {len(self.presets)}")
print("\n🎯 Deterministic Guarantees:")
print(f" • Featured agents: >= {GUARANTEED_FEATURED_AGENTS}")
print(f" • Featured creators: >= {GUARANTEED_FEATURED_CREATORS}")
print(f" • Top agents (approved): >= {GUARANTEED_TOP_AGENTS}")
print(f" • Library agents per user: >= {MIN_AGENTS_PER_USER}")
print("\n🚀 Your E2E test database is ready to use!") print("\n🚀 Your E2E test database is ready to use!")

View File

@@ -528,6 +528,10 @@ export const CustomNode = React.memo(
const getInputPropKey = (key: string) =>
nodeType == BlockUIType.AGENT ? `inputs.${key}` : key;
// For AGENT blocks, handle IDs need the inputs_#_ prefix to match link sink_names
const getHandleKey = (key: string) =>
nodeType == BlockUIType.AGENT ? `inputs_#_${key}` : key;
return keys.map(([propKey, propSchema]) => {
const isRequired = data.inputSchema.required?.includes(propKey);
const isAdvanced = propSchema.advanced;
@@ -544,7 +548,8 @@ export const CustomNode = React.memo(
!propKey.endsWith("_credentials") && !propKey.endsWith("_credentials") &&
// For OUTPUT blocks, only show the 'value' (hides 'name') input connection handle // For OUTPUT blocks, only show the 'value' (hides 'name') input connection handle
!(nodeType == BlockUIType.OUTPUT && propKey == "name"); !(nodeType == BlockUIType.OUTPUT && propKey == "name");
const isConnected = isInputHandleConnected(propKey); const handleKey = getHandleKey(propKey);
const isConnected = isInputHandleConnected(handleKey);
return ( return (
!isHidden && !isHidden &&
@@ -562,12 +567,12 @@ export const CustomNode = React.memo(
propSchema.discriminator
) ? (
<NodeHandle
-keyName={propKey}
keyName={handleKey}
isConnected={isConnected}
isRequired={isRequired}
schema={propSchema}
side="left"
-isBroken={isInputHandleBroken(propKey)}
isBroken={isInputHandleBroken(handleKey)}
/>
) : (
propKey !== "credentials" &&
@@ -857,7 +862,7 @@ export const CustomNode = React.memo(
})();
const hasAdvancedFields =
-data.inputSchema &&
data.inputSchema?.properties &&
Object.entries(data.inputSchema.properties).some(([key, value]) => {
return (
value.advanced === true && !data.inputSchema.required?.includes(key)

View File

@@ -7981,6 +7981,25 @@
]
},
"new_output": { "type": "boolean", "title": "New Output" },
"execution_count": {
"type": "integer",
"title": "Execution Count",
"default": 0
},
"success_rate": {
"anyOf": [{ "type": "number" }, { "type": "null" }],
"title": "Success Rate"
},
"avg_correctness_score": {
"anyOf": [{ "type": "number" }, { "type": "null" }],
"title": "Avg Correctness Score"
},
"recent_executions": {
"items": { "$ref": "#/components/schemas/RecentExecution" },
"type": "array",
"title": "Recent Executions",
"description": "List of recent executions with status, score, and summary"
},
"can_access_graph": { "can_access_graph": {
"type": "boolean", "type": "boolean",
"title": "Can Access Graph" "title": "Can Access Graph"
@@ -9374,6 +9393,23 @@
"required": ["providers", "pagination"], "required": ["providers", "pagination"],
"title": "ProviderResponse" "title": "ProviderResponse"
}, },
"RecentExecution": {
"properties": {
"status": { "type": "string", "title": "Status" },
"correctness_score": {
"anyOf": [{ "type": "number" }, { "type": "null" }],
"title": "Correctness Score"
},
"activity_summary": {
"anyOf": [{ "type": "string" }, { "type": "null" }],
"title": "Activity Summary"
}
},
"type": "object",
"required": ["status"],
"title": "RecentExecution",
"description": "Summary of a recent execution for quality assessment.\n\nUsed by the LLM to understand the agent's recent performance with specific examples\nrather than just aggregate statistics."
},
"RefundRequest": { "RefundRequest": {
"properties": { "properties": {
"id": { "type": "string", "title": "Id" }, "id": { "type": "string", "title": "Id" },

View File

@@ -57,6 +57,7 @@ export function ChatInput({
isStreaming,
value,
baseHandleKeyDown,
inputId,
});
return (

View File

@@ -15,6 +15,7 @@ interface Args {
isStreaming?: boolean;
value: string;
baseHandleKeyDown: (event: KeyboardEvent<HTMLTextAreaElement>) => void;
inputId?: string;
}
export function useVoiceRecording({
@@ -23,6 +24,7 @@ export function useVoiceRecording({
isStreaming = false,
value,
baseHandleKeyDown,
inputId,
}: Args) {
const [isRecording, setIsRecording] = useState(false);
const [isTranscribing, setIsTranscribing] = useState(false);
@@ -103,7 +105,7 @@ export function useVoiceRecording({
setIsTranscribing(false);
}
},
-[handleTranscription],
[handleTranscription, inputId],
);
const stopRecording = useCallback(() => {
@@ -201,6 +203,15 @@ export function useVoiceRecording({
}
}, [error, toast]);
useEffect(() => {
if (!isTranscribing && inputId) {
const inputElement = document.getElementById(inputId);
if (inputElement) {
inputElement.focus();
}
}
}, [isTranscribing, inputId]);
const handleKeyDown = useCallback(
(event: KeyboardEvent<HTMLTextAreaElement>) => {
if (event.key === " " && !value.trim() && !isTranscribing) {

View File

@@ -156,11 +156,20 @@ export function ChatMessage({
}
if (isClarificationNeeded && message.type === "clarification_needed") {
// Check if user already replied after this clarification (answered)
const hasUserReplyAfter =
index >= 0 &&
messages
.slice(index + 1)
.some((m) => m.type === "message" && m.role === "user");
return (
<ClarificationQuestionsWidget
questions={message.questions}
message={message.message}
sessionId={message.sessionId}
onSubmitAnswers={handleClarificationAnswers}
isAnswered={hasUserReplyAfter}
className={className}
/>
);

View File

@@ -6,7 +6,7 @@ import { Input } from "@/components/atoms/Input/Input";
import { Text } from "@/components/atoms/Text/Text"; import { Text } from "@/components/atoms/Text/Text";
import { cn } from "@/lib/utils"; import { cn } from "@/lib/utils";
import { CheckCircleIcon, QuestionIcon } from "@phosphor-icons/react"; import { CheckCircleIcon, QuestionIcon } from "@phosphor-icons/react";
import { useState } from "react"; import { useState, useEffect, useRef } from "react";
export interface ClarifyingQuestion { export interface ClarifyingQuestion {
question: string; question: string;
@@ -17,39 +17,97 @@ export interface ClarifyingQuestion {
interface Props {
questions: ClarifyingQuestion[];
message: string;
sessionId?: string;
onSubmitAnswers: (answers: Record<string, string>) => void;
onCancel?: () => void;
isAnswered?: boolean;
className?: string;
}
function getStorageKey(sessionId?: string): string | null {
if (!sessionId) return null;
return `clarification_answers_${sessionId}`;
}
export function ClarificationQuestionsWidget({
questions,
message,
sessionId,
onSubmitAnswers,
onCancel,
isAnswered = false,
className,
}: Props) {
const [answers, setAnswers] = useState<Record<string, string>>({});
const [isSubmitted, setIsSubmitted] = useState(false);
const lastSessionIdRef = useRef<string | undefined>(undefined);
useEffect(() => {
const storageKey = getStorageKey(sessionId);
if (!storageKey) {
setAnswers({});
setIsSubmitted(false);
lastSessionIdRef.current = sessionId;
return;
}
try {
const saved = localStorage.getItem(storageKey);
if (saved) {
const parsed = JSON.parse(saved) as Record<string, string>;
setAnswers(parsed);
} else {
setAnswers({});
}
setIsSubmitted(false);
} catch {
setAnswers({});
setIsSubmitted(false);
}
lastSessionIdRef.current = sessionId;
}, [sessionId]);
useEffect(() => {
if (lastSessionIdRef.current !== sessionId) {
return;
}
const storageKey = getStorageKey(sessionId);
if (!storageKey) return;
const hasAnswers = Object.values(answers).some((v) => v.trim());
try {
if (hasAnswers) {
localStorage.setItem(storageKey, JSON.stringify(answers));
} else {
localStorage.removeItem(storageKey);
}
} catch {}
}, [answers, sessionId]);
function handleAnswerChange(keyword: string, value: string) {
setAnswers((prev) => ({ ...prev, [keyword]: value }));
}
function handleSubmit() {
-// Check if all questions are answered
const allAnswered = questions.every((q) => answers[q.keyword]?.trim());
if (!allAnswered) {
return;
}
setIsSubmitted(true);
onSubmitAnswers(answers);
const storageKey = getStorageKey(sessionId);
try {
if (storageKey) {
localStorage.removeItem(storageKey);
}
} catch {}
}
const allAnswered = questions.every((q) => answers[q.keyword]?.trim());
-// Show submitted state after answers are submitted
// Show submitted state if answered from conversation or just submitted
-if (isSubmitted) {
if (isAnswered || isSubmitted) {
return (
<div
className={cn(

View File

@@ -30,9 +30,9 @@ export function getErrorMessage(result: unknown): string {
}
if (typeof result === "object" && result !== null) {
const response = result as Record<string, unknown>;
-if (response.error) return stripInternalReasoning(String(response.error));
if (response.message)
return stripInternalReasoning(String(response.message));
if (response.error) return stripInternalReasoning(String(response.error));
}
return "An error occurred";
}
@@ -363,8 +363,8 @@ export function formatToolResponse(result: unknown, toolName: string): string {
case "error": case "error":
const errorMsg = const errorMsg =
(response.error as string) || response.message || "An error occurred"; (response.message as string) || response.error || "An error occurred";
return `Error: ${errorMsg}`; return stripInternalReasoning(String(errorMsg));
case "no_results": case "no_results":
const suggestions = (response.suggestions as string[]) || []; const suggestions = (response.suggestions as string[]) || [];

View File

@@ -59,12 +59,13 @@ test.describe("Library", () => {
});
test("pagination works correctly", async ({ page }, testInfo) => {
-test.setTimeout(testInfo.timeout * 3); // Increase timeout for pagination operations
test.setTimeout(testInfo.timeout * 3);
await page.goto("/library");
const PAGE_SIZE = 20;
const paginationResult = await libraryPage.testPagination();
-if (paginationResult.initialCount >= 10) {
if (paginationResult.initialCount >= PAGE_SIZE) {
expect(paginationResult.finalCount).toBeGreaterThanOrEqual(
paginationResult.initialCount,
);
@@ -133,7 +134,10 @@ test.describe("Library", () => {
test.expect(clearedSearchValue).toBe(""); test.expect(clearedSearchValue).toBe("");
}); });
test("pagination while searching works correctly", async ({ page }) => { test("pagination while searching works correctly", async ({
page,
}, testInfo) => {
test.setTimeout(testInfo.timeout * 3);
await page.goto("/library"); await page.goto("/library");
const allAgents = await libraryPage.getAgents(); const allAgents = await libraryPage.getAgents();
@@ -152,9 +156,10 @@ test.describe("Library", () => {
);
expect(matchingResults.length).toEqual(initialSearchResults.length);
const PAGE_SIZE = 20;
const searchPaginationResult = await libraryPage.testPagination();
-if (searchPaginationResult.initialCount >= 10) {
if (searchPaginationResult.initialCount >= PAGE_SIZE) {
expect(searchPaginationResult.finalCount).toBeGreaterThanOrEqual(
searchPaginationResult.initialCount,
);

View File

@@ -69,9 +69,12 @@ test.describe("Marketplace Creator Page Basic Functionality", () => {
await marketplacePage.getFirstCreatorProfile(page);
await firstCreatorProfile.click();
await page.waitForURL("**/marketplace/creator/**");
await page.waitForLoadState("networkidle").catch(() => {});
const firstAgent = page
.locator('[data-testid="store-card"]:visible')
.first();
await firstAgent.waitFor({ state: "visible", timeout: 30000 });
await firstAgent.click();
await page.waitForURL("**/marketplace/agent/**");

View File

@@ -77,7 +77,6 @@ test.describe("Marketplace Basic Functionality", () => {
const firstFeaturedAgent =
await marketplacePage.getFirstFeaturedAgent(page);
-await firstFeaturedAgent.waitFor({ state: "visible" });
await firstFeaturedAgent.click();
await page.waitForURL("**/marketplace/agent/**");
await matchesUrl(page, /\/marketplace\/agent\/.+/);
@@ -116,7 +115,15 @@ test.describe("Marketplace Basic Functionality", () => {
const searchTerm = page.getByText("DummyInput").first(); const searchTerm = page.getByText("DummyInput").first();
await isVisible(searchTerm); await isVisible(searchTerm);
await page.waitForTimeout(10000); await page.waitForLoadState("networkidle").catch(() => {});
await page
.waitForFunction(
() =>
document.querySelectorAll('[data-testid="store-card"]').length > 0,
{ timeout: 15000 },
)
.catch(() => console.log("No search results appeared within timeout"));
const results = await marketplacePage.getSearchResultsCount(page);
expect(results).toBeGreaterThan(0);

View File

@@ -300,21 +300,27 @@ export class LibraryPage extends BasePage {
async scrollToLoadMore(): Promise<void> {
console.log(`scrolling to load more agents`);
-// Get initial agent count
-const initialCount = await this.getAgentCount();
-console.log(`Initial agent count: ${initialCount}`);
const initialCount = await this.getAgentCountByListLength();
console.log(`Initial agent count (DOM cards): ${initialCount}`);
-// Scroll down to trigger pagination
await this.scrollToBottom();
-// Wait for potential new agents to load
-await this.page.waitForTimeout(2000);
await this.page
.waitForLoadState("networkidle", { timeout: 10000 })
.catch(() => console.log("Network idle timeout, continuing..."));
-// Check if more agents loaded
-const newCount = await this.getAgentCount();
-console.log(`New agent count after scroll: ${newCount}`);
await this.page
.waitForFunction(
(prevCount) =>
document.querySelectorAll('[data-testid="library-agent-card"]')
.length > prevCount,
initialCount,
{ timeout: 5000 },
)
.catch(() => {});
-return;
const newCount = await this.getAgentCountByListLength();
console.log(`New agent count after scroll (DOM cards): ${newCount}`);
}
async testPagination(): Promise<{ async testPagination(): Promise<{

View File

@@ -9,6 +9,7 @@ export class MarketplacePage extends BasePage {
async goto(page: Page) {
await page.goto("/marketplace");
await page.waitForLoadState("networkidle").catch(() => {});
}
async getMarketplaceTitle(page: Page) {
@@ -109,16 +110,24 @@ export class MarketplacePage extends BasePage {
async getFirstFeaturedAgent(page: Page) {
const { getId } = getSelectors(page);
-return getId("featured-store-card").first();
const card = getId("featured-store-card").first();
await card.waitFor({ state: "visible", timeout: 30000 });
return card;
}
async getFirstTopAgent() {
-return this.page.locator('[data-testid="store-card"]:visible').first();
const card = this.page
.locator('[data-testid="store-card"]:visible')
.first();
await card.waitFor({ state: "visible", timeout: 30000 });
return card;
}
async getFirstCreatorProfile(page: Page) {
const { getId } = getSelectors(page);
-return getId("creator-card").first();
const card = getId("creator-card").first();
await card.waitFor({ state: "visible", timeout: 30000 });
return card;
}
async getSearchResultsCount(page: Page) {

View File

@@ -65,7 +65,7 @@ The result routes data to yes_output or no_output, enabling intelligent branchin
| condition | A plaintext English description of the condition to evaluate | str | Yes |
| yes_value | (Optional) Value to output if the condition is true. If not provided, input_value will be used. | Yes Value | No |
| no_value | (Optional) Value to output if the condition is false. If not provided, input_value will be used. | No Value | No |
| model | The language model to use for evaluating the condition. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| ~~"claude-3-7-sonnet-20250219"~~ \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
### Outputs
@@ -103,7 +103,7 @@ The block sends the entire conversation history to the chosen LLM, including sys
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | No |
| messages | List of messages in the conversation. | List[Any] | Yes |
| model | The language model to use for the conversation. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| ~~"claude-3-7-sonnet-20250219"~~ \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
| ollama_host | Ollama host for local models | str | No |
@@ -257,7 +257,7 @@ The block formulates a prompt based on the given focus or source data, sends it
|-------|-------------|------|----------|
| focus | The focus of the list to generate. | str | No |
| source_data | The data to generate the list from. | str | No |
| model | The language model to use for generating the list. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| ~~"claude-3-7-sonnet-20250219"~~ \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| max_retries | Maximum number of retries for generating a valid list. | int | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
@@ -424,7 +424,7 @@ The block sends the input prompt to a chosen LLM, along with any system prompts
| prompt | The prompt to send to the language model. | str | Yes | | prompt | The prompt to send to the language model. | str | Yes |
| expected_format | Expected format of the response. If provided, the response will be validated against this format. The keys should be the expected fields in the response, and the values should be the description of the field. | Dict[str, str] | Yes | | expected_format | Expected format of the response. If provided, the response will be validated against this format. The keys should be the expected fields in the response, and the values should be the description of the field. | Dict[str, str] | Yes |
| list_result | Whether the response should be a list of objects in the expected format. | bool | No | | list_result | Whether the response should be a list of objects in the expected format. | bool | No |
- | model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
+ | model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, Any]] | No |
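
The expected_format contract in the table above is easier to see with a concrete value. Below is a minimal sketch of the validation behavior the description implies, in Python; the helper name and the exact check are illustrative assumptions, not the block's actual internals:

```python
import json

# Hypothetical expected_format value: keys are the required response
# fields, values are human-readable hints shown to the LLM.
expected_format = {
    "title": "A short headline for the article",
    "sentiment": "One of: positive, negative, neutral",
}

def matches_expected_format(raw: str, fmt: dict[str, str]) -> bool:
    # Sketch of the check the docs describe: the parsed JSON response
    # must be an object containing every key declared in expected_format.
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(parsed, dict) and all(key in parsed for key in fmt)

print(matches_expected_format('{"title": "Hi", "sentiment": "positive"}', expected_format))  # True
print(matches_expected_format('{"title": "Hi"}', expected_format))  # False -> would trigger a retry
```

When list_result is enabled, the same per-object check would presumably apply to each element of a returned JSON array.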
@@ -464,7 +464,7 @@ The block sends the input prompt to a chosen LLM, processes the response, and re
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. You can use any of the {keys} from Prompt Values to fill in the prompt with values from the prompt values dictionary by putting them in curly braces. | str | Yes |
- | model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
+ | model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| retry | Number of times to retry the LLM call if the response does not match the expected format. | int | No |
| prompt_values | Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}. | Dict[str, str] | No |
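
The prompt_values row above is terse; the following is a sketch of the {{placeholder}} substitution it describes. The regex-based approach and the behavior for unknown keys are assumptions, since the block's real implementation is not shown in this diff:

```python
import re

prompt = "Write a {{tone}} product blurb for {{product}}."
prompt_values = {"tone": "playful", "product": "a solar-powered lantern"}

def fill_prompt(template: str, values: dict[str, str]) -> str:
    # Replace each {{key}} with its entry from prompt_values; unknown
    # keys are left in place rather than raising an error.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values.get(m.group(1), m.group(0)), template)

print(fill_prompt(prompt, prompt_values))
# -> Write a playful product blurb for a solar-powered lantern.
```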
@@ -501,7 +501,7 @@ The block splits the input text into smaller chunks, sends each chunk to an LLM
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| text | The text to summarize. | str | Yes |
- | model | The language model to use for summarizing the text. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
+ | model | The language model to use for summarizing the text. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| focus | The topic to focus on in the summary | str | No |
| style | The style of the summary to generate. | "concise" \| "detailed" \| "bullet points" \| "numbered list" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
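
The hunk header notes that this block splits long input into chunks before summarizing each one. A rough illustration of that idea, assuming a word-based splitter with a small overlap; the block's real tokenizer and chunk sizes are not specified in this table:

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    # Illustrative splitter: fixed-size word windows with an overlap so
    # context at chunk boundaries is not lost between LLM calls.
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i : i + chunk_size]) for i in range(0, len(words), step)]

chunks = chunk_text("word " * 2500)
print(len(chunks))  # 3 windows of up to 1000 words, each overlapping the last by 100
```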
@@ -763,7 +763,7 @@ Configure agent_mode_max_iterations to control loop behavior: 0 for single decis
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | Yes |
- | model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-7-sonnet-20250219" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
+ | model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| multiple_tool_calls | Whether to allow multiple tool calls in a single response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, Any]] | No |
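
conversation_history carries the same List[Dict[str, Any]] type in several of these tables. A plausible shape, assuming the common role/content message format (the exact keys are not confirmed by this diff):

```python
# Hypothetical conversation_history value for the block above.
conversation_history = [
    {"role": "user", "content": "Find the cheapest flight to Lisbon."},
    {"role": "assistant", "content": "I found three options; the cheapest departs at 06:40."},
    {"role": "user", "content": "Book the cheapest one."},
]
```

With multiple_tool_calls enabled, a single model response acting on this history could carry more than one tool invocation rather than one per iteration.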


@@ -20,7 +20,7 @@ Configure timeouts for DOM settlement and page loading. Variables can be passed
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| browserbase_project_id | Browserbase project ID (required if using Browserbase) | str | Yes |
- | model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-3-7-sonnet-20250219" | No |
+ | model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-sonnet-4-5-20250929" | No |
| url | URL to navigate to. | str | Yes |
| action | Action to perform. Suggested actions are: click, fill, type, press, scroll, select from dropdown. For multi-step actions, add an entry for each step. | List[str] | Yes |
| variables | Variables to use in the action. Variables contains data you want the action to use. | Dict[str, str] | No |
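
Taken together, the inputs above form a small payload. A hedged sketch of one act-style request; the project ID is a placeholder, and the %name% syntax for referencing variables inside action strings is an assumption drawn from Stagehand's conventions, not from this diff:

```python
# Hypothetical input payload for the Stagehand action block above.
stagehand_act_input = {
    "browserbase_project_id": "proj_abc123",  # placeholder, not a real ID
    "model": "gpt-4.1-2025-04-14",
    "url": "https://example.com/login",
    "action": [  # one entry per step, per the table above
        "fill the username field with %username%",
        "fill the password field with %password%",
        "click the sign-in button",
    ],
    "variables": {"username": "demo", "password": "s3cret"},
}
```

Keeping secrets in variables rather than inline in action strings also keeps them out of the natural-language instructions sent to the LLM.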
@@ -65,7 +65,7 @@ Supports searching within iframes and configurable timeouts for dynamic content
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| browserbase_project_id | Browserbase project ID (required if using Browserbase) | str | Yes |
- | model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-3-7-sonnet-20250219" | No |
+ | model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-sonnet-4-5-20250929" | No |
| url | URL to navigate to. | str | Yes |
| instruction | Natural language description of elements or actions to discover. | str | Yes |
| iframes | Whether to search within iframes. If True, Stagehand will search for actions within iframes. | bool | No |
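
The observe-style inputs differ from the action block mainly in taking a natural-language instruction instead of an action list. A minimal sketch under the same placeholder assumptions as above:

```python
# Hypothetical input payload for the Stagehand observe block above.
stagehand_observe_input = {
    "browserbase_project_id": "proj_abc123",  # placeholder, not a real ID
    "model": "claude-sonnet-4-5-20250929",
    "url": "https://example.com/pricing",
    "instruction": "find every button that starts a checkout flow",
    "iframes": True,  # also search embedded iframes, per the table above
}
```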
@@ -106,7 +106,7 @@ Use this to explore a page's interactive elements before building automated work
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| browserbase_project_id | Browserbase project ID (required if using Browserbase) | str | Yes |
- | model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-3-7-sonnet-20250219" | No |
+ | model | LLM to use for Stagehand (provider is inferred) | "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "claude-sonnet-4-5-20250929" | No |
| url | URL to navigate to. | str | Yes |
| instruction | Natural language description of elements or actions to discover. | str | Yes |
| iframes | Whether to search within iframes. If True, Stagehand will search for actions within iframes. | bool | No |