Compare commits

..

7 Commits

Author SHA1 Message Date
Bentlybro
4b3611ca43 add migration and docs update for deprecated models
- Add migration to update existing graphs using gpt-4-turbo -> gpt-4o
- Add migration to update existing graphs using gpt-3.5-turbo -> gpt-4o-mini (both remaps sketched below)
- Update llm.md docs to remove deprecated models from model lists
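
A minimal sketch of the remapping these migrations apply, in illustrative Python (the shipped change is the raw SQL migration shown in the diff below; the names here are hypothetical):

```python
# Illustrative only: the real migration is SQL against AgentNode.constantInput.
# Mapping of deprecated models to their replacements, per the commit message.
DEPRECATED_MODEL_REPLACEMENTS = {
    "gpt-4-turbo": "gpt-4o",         # same capability tier
    "gpt-3.5-turbo": "gpt-4o-mini",  # lightweight-tier replacement
}


def remap_deprecated_model(constant_input: dict) -> dict:
    """Return a copy of a node's constantInput with deprecated models swapped."""
    model = constant_input.get("model")
    replacement = DEPRECATED_MODEL_REPLACEMENTS.get(model)
    if replacement is None:
        return constant_input
    return {**constant_input, "model": replacement}
```
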
2026-02-16 12:03:59 +00:00
Bentlybro
cd6271b787 chore(backend): remove deprecated OpenAI GPT-4-turbo and GPT-3.5-turbo models
Remove deprecated OpenAI models that are being sunset in 2026:
- GPT-4-turbo (shutdown: March 26, 2026)
- GPT-3.5-turbo (shutdown: September 28, 2026)

Users should migrate to the newer GPT-4.1, GPT-4o, or GPT-5 model families
which are already available in the platform.

Reference: https://platform.openai.com/docs/deprecations
2026-02-16 11:58:11 +00:00
Ubbe
223df9d3da feat(frontend): improve create/edit copilot UX (#12117)
## Changes 🏗️

Make the UX nicer when running long tasks in Copilot, like creating an
agent, editing it or running a task.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and play the game!


<details><summary><h3>Greptile Summary</h3></summary>

This PR replaces the static progress bar and idle wait screens with an
interactive mini-game across the Create, Edit, and Run Agent copilot
tools. The existing mini-game (a simple runner with projectile-dodge
boss encounters) is significantly overhauled into a two-mode game: a
runner mode with animated tree obstacles and a duel mode featuring a
melee boss fight with attack, guard, and movement mechanics.
Sprite-based rendering replaces the previous shape-drawing approach.

- **Create/Edit/Run Agent UX**: All three tool views now show the
mini-game with contextual overlays during long-running operations,
replacing the progress bar in EditAgent and adding the game to RunAgent
- **Game mechanics overhaul**: Boss encounters changed from
projectile-dodging to melee duel with attack (Z), block (X), movement
(arrows), and jump (Space) controls
- **Sprite rendering**: Added 9 sprite sheet assets for characters,
trees, and boss animations with fallback to shape rendering if images
fail to load
- **UI overlays**: Added React-managed overlay states for idle,
boss-intro, boss-defeated, and game-over screens with continue/retry
buttons
- **Minor issues found**: Unused `isRunActive` variable in
`MiniGame.tsx`, unreachable "leaving" boss phase in `useMiniGame.ts`,
and a missing `expanded` property in `getAccordionMeta` return type
annotation in `EditAgent.tsx`
- **Unused asset**: `archer-shoot.png` is included in the PR but never
imported or referenced in any code
</details>


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- This PR is safe to merge — it only affects the copilot mini-game UX
with no backend or data model changes.
- The changes are entirely frontend/cosmetic, scoped to the copilot
tools' waiting UX. The mini-game logic is self-contained in a
canvas-based hook and doesn't affect any application state, API calls,
or routing. The issues found are minor (unused variable, dead code, type
annotation gap, unused asset) and don't impact runtime behavior.
- `useMiniGame.ts` has the most complex logic changes (boss AI, death
animations, sprite rendering) and contains unreachable dead code in the
"leaving" phase handler. `EditAgent.tsx` has a return type annotation
that doesn't include `expanded`.
</details>


<details><summary><h3>Flowchart</h3></summary>

```mermaid
flowchart TD
    A[Game Idle] -->|"Start button"| B[Run Mode]
    B -->|"Jump over trees"| C{Score >= Threshold?}
    C -->|No| B
    C -->|"Yes, obstacles clear"| D[Boss Intro Overlay]
    D -->|"Continue button"| E[Duel Mode]
    E -->|"Attack Z / Guard X / Move ←→"| F{Boss HP <= 0?}
    F -->|No| G{Player hit & not guarding?}
    G -->|No| E
    G -->|Yes| H[Player Death Animation]
    H --> I[Game Over Overlay]
    I -->|"Retry button"| B
    F -->|Yes| J[Boss Death Animation]
    J --> K[Boss Defeated Overlay]
    K -->|"Continue button"| L[Reset Boss & Resume Run]
    L --> B
```
</details>


<sub>Last reviewed commit: ad80e24</sub>

2026-02-16 10:53:08 +00:00
Ubbe
187ab04745 refactor(frontend): remove OldAgentLibraryView and NEW_AGENT_RUNS flag (#12088)
## Summary
- Removes the deprecated `OldAgentLibraryView` directory (13 files,
~2200 lines deleted)
- Removes the `NEW_AGENT_RUNS` feature flag from the `Flag` enum and
defaults
- Removes the legacy agent library page at `library/legacy/[id]`
- Moves shared `CronScheduler` components to
`src/components/contextual/CronScheduler/`
- Moves `agent-run-draft-view` and `agent-status-chip` to
`legacy-builder/` (co-located with their only consumer)
- Updates all import paths in consuming files (`AgentInfoStep`,
`SaveControl`, `RunnerInputUI`, `useRunGraph`)

## Test plan
- [x] `pnpm format` passes
- [x] `pnpm types` passes (no TypeScript errors)
- [x] No remaining references to `OldAgentLibraryView`,
`NEW_AGENT_RUNS`, or `new-agent-runs` in the codebase
- [x] Verify `RunnerInputUI` dialog still works in the legacy builder
- [x] Verify `AgentInfoStep` cron scheduling works in the publish modal
- [x] Verify `SaveControl` cron scheduling works in the legacy builder

🤖 Generated with [Claude Code](https://claude.com/claude-code)


<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

This PR removes deprecated code from the legacy agent library view
system and consolidates the codebase to use the new agent runs
implementation exclusively. The refactor successfully removes ~2200
lines of code across 13 deleted files while properly relocating shared
components.

**Key changes:**
- Removed the entire `OldAgentLibraryView` directory and its 13
component files
- Removed the `NEW_AGENT_RUNS` feature flag from the `Flag` enum and
defaults
- Deleted the legacy agent library page route at `library/legacy/[id]`
- Moved `CronScheduler` components to
`src/components/contextual/CronScheduler/` for shared use across the
application
- Moved `agent-run-draft-view` and `agent-status-chip` to
`legacy-builder/` directory, co-locating them with their only consumer
- Updated `useRunGraph.ts` to import `GraphExecutionMeta` from the
generated API models instead of the deleted custom type definition
- Updated all import paths in consuming components (`AgentInfoStep`,
`SaveControl`, `RunnerInputUI`)

**Technical notes:**
- The new import path for `GraphExecutionMeta`
(`@/app/api/__generated__/models/graphExecutionMeta`) will be generated
when running `pnpm generate:api` from the OpenAPI spec
- All references to the old code have been cleanly removed from the
codebase
- The refactor maintains proper separation of concerns by moving shared
components to contextual locations
</details>


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- This PR is safe to merge with minimal risk, pending manual
verification of the UI components mentioned in the test plan
- The refactor is well-structured and all code changes are correct. The
score of 4 (rather than 5) reflects that the PR author has marked three
manual testing items as incomplete in the test plan: verifying
`RunnerInputUI` dialog, `AgentInfoStep` cron scheduling, and
`SaveControl` cron scheduling. While the code changes are sound, these
UI components should be manually tested before merging to ensure the
moved components work correctly in their new locations.
- No files require special attention. The author should complete the
manual testing checklist items for `RunnerInputUI`, `AgentInfoStep`, and
`SaveControl` as noted in the test plan.
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant FE as Frontend Build
    participant API as Backend API
    participant Gen as Generated Types

    Note over Dev,Gen: Refactor: Remove OldAgentLibraryView & NEW_AGENT_RUNS flag

    Dev->>FE: Delete OldAgentLibraryView (13 files, ~2200 lines)
    Dev->>FE: Remove NEW_AGENT_RUNS from Flag enum
    Dev->>FE: Delete library/legacy/[id]/page.tsx
    
    Dev->>FE: Move CronScheduler → src/components/contextual/
    Dev->>FE: Move agent-run-draft-view → legacy-builder/
    Dev->>FE: Move agent-status-chip → legacy-builder/
    
    Dev->>FE: Update RunnerInputUI import path
    Dev->>FE: Update SaveControl import path
    Dev->>FE: Update AgentInfoStep import path
    
    Dev->>FE: Update useRunGraph.ts
    FE->>Gen: Import GraphExecutionMeta from generated models
    Note over Gen: Type available after pnpm generate:api
    
    Gen-->>API: Uses OpenAPI spec schema
    API-->>FE: Type-safe GraphExecutionMeta model
```
</details>



Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 18:29:59 +08:00
Abhimanyu Yadav
e2d3c8a217 fix(frontend): Prevent node drag when selecting text in object editor key input (#11955)
## Summary
- Add `nodrag` class to the key name input wrapper in
`WrapIfAdditionalTemplate.tsx`
- This prevents the node from being dragged when users try to select
text in the key name input field
- Follows the same pattern used by other input components like
`TextWidget.tsx`

## Test plan
- [x] Open the new builder
- [x] Add a custom node with an Object input field
- [x] Try to select text in the key name input by clicking and dragging
- [x] Verify that text selection works without moving the block

Co-authored-by: Claude <noreply@anthropic.com>
2026-02-16 06:59:33 +00:00
Eve
647c8ed8d4 feat(backend/blocks): enhance list concatenation with advanced operations (#12105)
## Summary

Enhances the existing `ConcatenateListsBlock` and adds five new
companion blocks for comprehensive list manipulation, addressing issue
#11139 ("Implement block to concatenate lists").

### Changes

- **Enhanced `ConcatenateListsBlock`** with optional deduplication
(`deduplicate`) and None-value filtering (`remove_none`), plus an output
`length` field
- **New `FlattenListBlock`**: Recursively flattens nested list
structures with configurable `max_depth`
- **New `InterleaveListsBlock`**: Round-robin interleaving of elements
from multiple lists
- **New `ZipListsBlock`**: Zips corresponding elements from multiple
lists with support for padding to longest or truncating to shortest
- **New `ListDifferenceBlock`**: Computes set difference between two
lists (regular or symmetric)
- **New `ListIntersectionBlock`**: Finds common elements between two
lists, preserving order

### Helper Utilities

Extracted reusable helper functions for validation, flattening,
deduplication, interleaving, chunking, and statistics computation to
support the blocks and enable future reuse.
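
As an illustration of the deduplication approach (the full helpers appear in the diff further down), a minimal order-preserving dedupe that first converts unhashable values to a deterministic hashable form; names are simplified from the PR:

```python
from typing import Any


def make_hashable(item: Any):
    """Convert dicts/lists/sets into deterministic tuples so structural duplicates compare equal."""
    if isinstance(item, dict):
        return tuple(
            sorted(((make_hashable(k), make_hashable(v)) for k, v in item.items()), key=repr)
        )
    if isinstance(item, (list, tuple)):
        return tuple(make_hashable(i) for i in item)
    if isinstance(item, set):
        return frozenset(make_hashable(i) for i in item)
    return item


def deduplicate(items: list) -> list:
    """Remove duplicates while preserving the order of first occurrences."""
    seen: set = set()
    result: list = []
    for item in items:
        key = make_hashable(item)
        if key not in seen:
            seen.add(key)
            result.append(item)
    return result


# Structurally identical dicts are treated as duplicates:
assert deduplicate([{"a": 1}, {"a": 1}, {"a": 2}]) == [{"a": 1}, {"a": 2}]
```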

### Test Coverage

Comprehensive test suite with 188 test functions across 29 test classes
covering:
- Built-in block test harness validation for all 6 blocks
- Manual edge-case tests for each block (empty inputs, large lists,
mixed types, nested structures)
- Internal method tests for all block classes
- Unit tests for all helper utility functions

Closes #11139

## Test plan

- [x] All files pass Python syntax validation (`ast.parse`)
- [x] Built-in `test_input`/`test_output` tests defined for all blocks
- [x] Manual tests cover edge cases: empty lists, large lists, mixed
types, nested structures, deduplication, None removal
- [x] Helper function tests validate all utility functions independently
- [x] All block IDs are valid UUID4
- [x] Block categories set to `BlockCategory.BASIC` for consistency with
existing list blocks



<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Enhanced `ConcatenateListsBlock` with deduplication and None-filtering
options, and added five new list manipulation blocks
(`FlattenListBlock`, `InterleaveListsBlock`, `ZipListsBlock`,
`ListDifferenceBlock`, `ListIntersectionBlock`) with comprehensive
helper functions and test coverage.

**Key Changes:**
- Enhanced `ConcatenateListsBlock` with `deduplicate` and `remove_none`
options, plus `length` output field
- Added `FlattenListBlock` for recursively flattening nested lists with
configurable `max_depth`
- Added `InterleaveListsBlock` for round-robin element interleaving
- Added `ZipListsBlock` with support for padding/truncation
- Added `ListDifferenceBlock` and `ListIntersectionBlock` for set
operations
- Extracted 12 reusable helper functions for validation, flattening,
deduplication, etc.
- Comprehensive test suite with 188 test functions covering edge cases

**Minor Issues:**
- Helper function `_deduplicate_list` has redundant logic in the `else`
branch that duplicates the `if` branch
- Three helper functions (`_filter_empty_collections`,
`_compute_list_statistics`, `_chunk_list`) are defined but unused -
consider removing unless planned for future use
- The `_make_hashable` function converts unhashable types (dicts, lists,
sets) into deterministic tuple structures, so structurally identical
dicts/lists are treated as duplicates
</details>


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- Safe to merge with minor style improvements recommended
- The implementation is well-structured with comprehensive test coverage
(188 tests), proper error handling, and follows existing block patterns.
All blocks use valid UUID4 IDs and correct categories. The helper
functions provide good code reuse. The minor issues are purely stylistic
(redundant code, unused helpers) and don't affect functionality or
safety.
- No files require special attention - both files are well-tested and
follow project conventions
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant Block as List Block
    participant Helper as Helper Functions
    participant Output
    
    User->>Block: Input (lists/parameters)
    Block->>Helper: _validate_all_lists()
    Helper-->>Block: validation result
    
    alt validation fails
        Block->>Output: error message
    else validation succeeds
        Block->>Helper: _concatenate_lists_simple() / _flatten_nested_list() / etc.
        Helper-->>Block: processed result
        
        opt deduplicate enabled
            Block->>Helper: _deduplicate_list()
            Helper-->>Block: deduplicated result
        end
        
        opt remove_none enabled
            Block->>Helper: _filter_none_values()
            Helper-->>Block: filtered result
        end
        
        Block->>Output: result + length
    end
    
    Output-->>User: Block outputs
```
</details>


<sub>Last reviewed commit: a6d5445</sub>


---------

Co-authored-by: Otto <otto@agpt.co>
2026-02-16 05:39:53 +00:00
Zamil Majdy
27d94e395c feat(backend/sdk): enable WebSearch, block WebFetch, consolidate tool constants (#12108)
## Summary
- Enable Claude Agent SDK built-in **WebSearch** tool (Brave Search via
Anthropic API) for the CoPilot SDK agent
- Explicitly **block WebFetch** via `SDK_DISALLOWED_TOOLS`. The agent
uses the SSRF-protected `mcp__copilot__web_fetch` MCP tool instead
- **Consolidate** all tool security constants (`BLOCKED_TOOLS`,
`WORKSPACE_SCOPED_TOOLS`, `DANGEROUS_PATTERNS`, `SDK_DISALLOWED_TOOLS`)
into `tool_adapter.py` as a single source of truth — previously
scattered across `tool_adapter.py`, `security_hooks.py`, and inline in
`service.py`

## Changes
- `tool_adapter.py`: Add `WebSearch` to `_SDK_BUILTIN_TOOLS`, add
`SDK_DISALLOWED_TOOLS`, move security constants here
- `security_hooks.py`: Import constants from `tool_adapter.py` instead
of defining locally
- `service.py`: Use `SDK_DISALLOWED_TOOLS` instead of inline `["Bash"]` (hook-side usage sketched below)
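
A hedged sketch of how a pre-tool-use security hook might consume these consolidated constants. The constant values mirror `tool_adapter.py` in this PR; the check function and its denial shape are illustrative, not the actual `security_hooks.py` code:

```python
import re
from typing import Any

# Constant values mirror tool_adapter.py in this PR; the patterns list is truncated.
SDK_DISALLOWED_TOOLS = ["Bash", "WebFetch"]
BLOCKED_TOOLS = {*SDK_DISALLOWED_TOOLS, "bash", "shell", "exec", "terminal", "command"}
DANGEROUS_PATTERNS = [r"sudo", r"rm\s+-rf", r"curl\s+.*\|.*sh"]


def check_tool_use(tool_name: str, tool_input: str) -> dict[str, Any] | None:
    """Illustrative pre-tool-use check; the denial dict shape is assumed, not the real _deny()."""
    if tool_name in BLOCKED_TOOLS:
        return {"decision": "deny", "reason": f"tool {tool_name!r} is blocked"}
    for pattern in DANGEROUS_PATTERNS:
        if re.search(pattern, tool_input):
            return {"decision": "deny", "reason": f"dangerous pattern {pattern!r} in input"}
    return None  # allow


assert check_tool_use("WebFetch", "") is not None           # blocked via SDK_DISALLOWED_TOOLS
assert check_tool_use("Read", "sudo rm -rf /") is not None  # dangerous pattern
assert check_tool_use("Read", "notes.txt") is None
```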

## Test plan
- [x] All 21 security hooks tests pass
- [x] Ruff lint clean
- [x] All pre-commit hooks pass
- [ ] Verify WebSearch works in CoPilot chat (manual test)


<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Consolidates tool security constants into `tool_adapter.py` as single
source of truth, enables WebSearch (Brave via Anthropic API), and
explicitly blocks WebFetch to prevent SSRF attacks. The change improves
security by ensuring the agent uses the SSRF-protected
`mcp__copilot__web_fetch` tool instead of the built-in WebFetch which
can access internal networks like `localhost:8006`.
</details>
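
For context on the SSRF protection referenced above, a generic private-address guard of the kind such a fetch tool typically applies, using only the Python standard library; this is a sketch, not the actual `mcp__copilot__web_fetch` implementation:

```python
import ipaddress
import socket
from urllib.parse import urlparse


def is_ssrf_safe(url: str) -> bool:
    """Reject URLs whose host resolves to a private, loopback, link-local, or reserved address."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        try:
            addr = ipaddress.ip_address(info[4][0])
        except ValueError:
            return False
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True


# The built-in WebFetch applies no such guard, so e.g. http://localhost:8006 stays reachable.
assert is_ssrf_safe("http://localhost:8006/") is False
```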


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with minimal risk
- The changes improve security by blocking WebFetch (SSRF risk) while
enabling safe WebSearch. The consolidation of constants into a single
source of truth improves maintainability. All existing tests pass (21
security hooks tests), and the refactoring is straightforward with no
behavioral changes to existing security logic. The only suggestions are
minor improvements: adding a test for WebFetch blocking and considering
a lowercase alias for consistency.
- No files require special attention
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant Agent as SDK Agent
    participant Hooks as Security Hooks
    participant TA as tool_adapter.py
    participant MCP as MCP Tools
    
    Note over TA: SDK_DISALLOWED_TOOLS = ["Bash", "WebFetch"]
    Note over TA: _SDK_BUILTIN_TOOLS includes WebSearch
    
    Agent->>Hooks: Request WebSearch (Brave API)
    Hooks->>TA: Check BLOCKED_TOOLS
    TA-->>Hooks: Not blocked
    Hooks-->>Agent: Allowed ✓
    Agent->>Agent: Execute via Anthropic API
    
    Agent->>Hooks: Request WebFetch (SSRF risk)
    Hooks->>TA: Check BLOCKED_TOOLS
    Note over TA: WebFetch in SDK_DISALLOWED_TOOLS
    TA-->>Hooks: Blocked
    Hooks-->>Agent: Denied ✗
    Note over Agent: Use mcp__copilot__web_fetch instead
    
    Agent->>Hooks: Request mcp__copilot__web_fetch
    Hooks->>MCP: Validate (MCP tool, not SDK builtin)
    MCP-->>Hooks: Has SSRF protection
    Hooks-->>Agent: Allowed ✓
    Agent->>MCP: Execute with SSRF checks
```
</details>


<sub>Last reviewed commit: 2d9975f</sub>

2026-02-15 06:51:25 +00:00
54 changed files with 3082 additions and 2695 deletions

View File

@@ -11,45 +11,15 @@ import re
from collections.abc import Callable
from typing import Any, cast
from backend.api.features.chat.sdk.tool_adapter import MCP_TOOL_PREFIX
from backend.api.features.chat.sdk.tool_adapter import (
BLOCKED_TOOLS,
DANGEROUS_PATTERNS,
MCP_TOOL_PREFIX,
WORKSPACE_SCOPED_TOOLS,
)
logger = logging.getLogger(__name__)
# Tools that are blocked entirely (CLI/system access).
# "Bash" (capital) is the SDK built-in — it's NOT in allowed_tools but blocked
# here as defence-in-depth. The agent uses mcp__copilot__bash_exec instead,
# which has kernel-level network isolation (unshare --net).
BLOCKED_TOOLS = {
"Bash",
"bash",
"shell",
"exec",
"terminal",
"command",
}
# Tools allowed only when their path argument stays within the SDK workspace.
# The SDK uses these to handle oversized tool results (writes to tool-results/
# files, then reads them back) and for workspace file operations.
WORKSPACE_SCOPED_TOOLS = {"Read", "Write", "Edit", "Glob", "Grep"}
# Dangerous patterns in tool inputs
DANGEROUS_PATTERNS = [
r"sudo",
r"rm\s+-rf",
r"dd\s+if=",
r"/etc/passwd",
r"/etc/shadow",
r"chmod\s+777",
r"curl\s+.*\|.*sh",
r"wget\s+.*\|.*sh",
r"eval\s*\(",
r"exec\s*\(",
r"__import__",
r"os\.system",
r"subprocess",
]
def _deny(reason: str) -> dict[str, Any]:
"""Return a hook denial response."""

View File

@@ -41,6 +41,7 @@ from .response_adapter import SDKResponseAdapter
from .security_hooks import create_security_hooks
from .tool_adapter import (
COPILOT_TOOL_NAMES,
SDK_DISALLOWED_TOOLS,
LongRunningCallback,
create_copilot_mcp_server,
set_execution_context,
@@ -543,7 +544,7 @@ async def stream_chat_completion_sdk(
"system_prompt": system_prompt,
"mcp_servers": {"copilot": mcp_server},
"allowed_tools": COPILOT_TOOL_NAMES,
"disallowed_tools": ["Bash"],
"disallowed_tools": SDK_DISALLOWED_TOOLS,
"hooks": security_hooks,
"cwd": sdk_cwd,
"max_buffer_size": config.claude_agent_max_buffer_size,

View File

@@ -310,7 +310,48 @@ def create_copilot_mcp_server():
# Bash is NOT included — use the sandboxed MCP bash_exec tool instead,
# which provides kernel-level network isolation via unshare --net.
# Task allows spawning sub-agents (rate-limited by security hooks).
_SDK_BUILTIN_TOOLS = ["Read", "Write", "Edit", "Glob", "Grep", "Task"]
# WebSearch uses Brave Search via Anthropic's API — safe, no SSRF risk.
_SDK_BUILTIN_TOOLS = ["Read", "Write", "Edit", "Glob", "Grep", "Task", "WebSearch"]
# SDK built-in tools that must be explicitly blocked.
# Bash: dangerous — agent uses mcp__copilot__bash_exec with kernel-level
# network isolation (unshare --net) instead.
# WebFetch: SSRF risk — can reach internal network (localhost, 10.x, etc.).
# Agent uses the SSRF-protected mcp__copilot__web_fetch tool instead.
SDK_DISALLOWED_TOOLS = ["Bash", "WebFetch"]
# Tools that are blocked entirely in security hooks (defence-in-depth).
# Includes SDK_DISALLOWED_TOOLS plus common aliases/synonyms.
BLOCKED_TOOLS = {
*SDK_DISALLOWED_TOOLS,
"bash",
"shell",
"exec",
"terminal",
"command",
}
# Tools allowed only when their path argument stays within the SDK workspace.
# The SDK uses these to handle oversized tool results (writes to tool-results/
# files, then reads them back) and for workspace file operations.
WORKSPACE_SCOPED_TOOLS = {"Read", "Write", "Edit", "Glob", "Grep"}
# Dangerous patterns in tool inputs
DANGEROUS_PATTERNS = [
r"sudo",
r"rm\s+-rf",
r"dd\s+if=",
r"/etc/passwd",
r"/etc/shadow",
r"chmod\s+777",
r"curl\s+.*\|.*sh",
r"wget\s+.*\|.*sh",
r"eval\s*\(",
r"exec\s*\(",
r"__import__",
r"os\.system",
r"subprocess",
]
# List of tool names for allowed_tools configuration
# Include MCP tools, the MCP Read tool for oversized results,

View File

@@ -682,17 +682,219 @@ class ListIsEmptyBlock(Block):
yield "is_empty", len(input_data.list) == 0
# =============================================================================
# List Concatenation Helpers
# =============================================================================
def _validate_list_input(item: Any, index: int) -> str | None:
"""Validate that an item is a list. Returns error message or None."""
if item is None:
return None # None is acceptable, will be skipped
if not isinstance(item, list):
return (
f"Invalid input at index {index}: expected a list, "
f"got {type(item).__name__}. "
f"All items in 'lists' must be lists (e.g., [[1, 2], [3, 4]])."
)
return None
def _validate_all_lists(lists: List[Any]) -> str | None:
"""Validate that all items in a sequence are lists. Returns first error or None."""
for idx, item in enumerate(lists):
error = _validate_list_input(item, idx)
if error is not None and item is not None:
return error
return None
def _concatenate_lists_simple(lists: List[List[Any]]) -> List[Any]:
"""Concatenate a sequence of lists into a single list, skipping None values."""
result: List[Any] = []
for lst in lists:
if lst is None:
continue
result.extend(lst)
return result
def _flatten_nested_list(nested: List[Any], max_depth: int = -1) -> List[Any]:
"""
Recursively flatten a nested list structure.
Args:
nested: The list to flatten.
max_depth: Maximum recursion depth. -1 means unlimited.
Returns:
A flat list with all nested elements extracted.
"""
result: List[Any] = []
_flatten_recursive(nested, result, current_depth=0, max_depth=max_depth)
return result
_MAX_FLATTEN_DEPTH = 1000
def _flatten_recursive(
items: List[Any],
result: List[Any],
current_depth: int,
max_depth: int,
) -> None:
"""Internal recursive helper for flattening nested lists."""
if current_depth > _MAX_FLATTEN_DEPTH:
raise RecursionError(
f"Flattening exceeded maximum depth of {_MAX_FLATTEN_DEPTH} levels. "
"Input may be too deeply nested."
)
for item in items:
if isinstance(item, list) and (max_depth == -1 or current_depth < max_depth):
_flatten_recursive(item, result, current_depth + 1, max_depth)
else:
result.append(item)
def _deduplicate_list(items: List[Any]) -> List[Any]:
"""
Remove duplicate elements from a list, preserving order of first occurrences.
Args:
items: The list to deduplicate.
Returns:
A list with duplicates removed, maintaining original order.
"""
seen: set = set()
result: List[Any] = []
for item in items:
item_id = _make_hashable(item)
if item_id not in seen:
seen.add(item_id)
result.append(item)
return result
def _make_hashable(item: Any):
"""
Create a hashable representation of any item for deduplication.
Converts unhashable types (dicts, lists) into deterministic tuple structures.
"""
if isinstance(item, dict):
return tuple(
sorted(
((_make_hashable(k), _make_hashable(v)) for k, v in item.items()),
key=lambda x: (str(type(x[0])), str(x[0])),
)
)
if isinstance(item, (list, tuple)):
return tuple(_make_hashable(i) for i in item)
if isinstance(item, set):
return frozenset(_make_hashable(i) for i in item)
return item
def _filter_none_values(items: List[Any]) -> List[Any]:
"""Remove None values from a list."""
return [item for item in items if item is not None]
def _compute_nesting_depth(
items: Any, current: int = 0, max_depth: int = _MAX_FLATTEN_DEPTH
) -> int:
"""
Compute the maximum nesting depth of a list structure using iteration to avoid RecursionError.
Uses a stack-based approach to handle deeply nested structures without hitting Python's
recursion limit (~1000 levels).
"""
if not isinstance(items, list):
return current
# Stack contains tuples of (item, depth)
stack = [(items, current)]
max_observed_depth = current
while stack:
item, depth = stack.pop()
if depth > max_depth:
return depth
if not isinstance(item, list):
max_observed_depth = max(max_observed_depth, depth)
continue
if len(item) == 0:
max_observed_depth = max(max_observed_depth, depth + 1)
continue
# Add all children to stack with incremented depth
for child in item:
stack.append((child, depth + 1))
return max_observed_depth
def _interleave_lists(lists: List[List[Any]]) -> List[Any]:
"""
Interleave elements from multiple lists in round-robin fashion.
Example: [[1,2,3], [a,b], [x,y,z]] -> [1, a, x, 2, b, y, 3, z]
"""
if not lists:
return []
filtered = [lst for lst in lists if lst is not None]
if not filtered:
return []
result: List[Any] = []
max_len = max(len(lst) for lst in filtered)
for i in range(max_len):
for lst in filtered:
if i < len(lst):
result.append(lst[i])
return result
# =============================================================================
# List Concatenation Blocks
# =============================================================================
class ConcatenateListsBlock(Block):
"""
Concatenates two or more lists into a single list.
This block accepts a list of lists and combines all their elements
in order into one flat output list. It supports options for
deduplication and None-filtering to provide flexible list merging
capabilities for workflow pipelines.
"""
class Input(BlockSchemaInput):
lists: List[List[Any]] = SchemaField(
description="A list of lists to concatenate together. All lists will be combined in order into a single list.",
placeholder="e.g., [[1, 2], [3, 4], [5, 6]]",
)
deduplicate: bool = SchemaField(
description="If True, remove duplicate elements from the concatenated result while preserving order.",
default=False,
advanced=True,
)
remove_none: bool = SchemaField(
description="If True, remove None values from the concatenated result.",
default=False,
advanced=True,
)
class Output(BlockSchemaOutput):
concatenated_list: List[Any] = SchemaField(
description="The concatenated list containing all elements from all input lists in order."
)
length: int = SchemaField(
description="The total number of elements in the concatenated list."
)
error: str = SchemaField(
description="Error message if concatenation failed due to invalid input types."
)
@@ -700,7 +902,7 @@ class ConcatenateListsBlock(Block):
def __init__(self):
super().__init__(
id="3cf9298b-5817-4141-9d80-7c2cc5199c8e",
description="Concatenates multiple lists into a single list. All elements from all input lists are combined in order.",
description="Concatenates multiple lists into a single list. All elements from all input lists are combined in order. Supports optional deduplication and None removal.",
categories={BlockCategory.BASIC},
input_schema=ConcatenateListsBlock.Input,
output_schema=ConcatenateListsBlock.Output,
@@ -709,29 +911,497 @@ class ConcatenateListsBlock(Block):
{"lists": [["a", "b"], ["c"], ["d", "e", "f"]]},
{"lists": [[1, 2], []]},
{"lists": []},
{"lists": [[1, 2, 2, 3], [3, 4]], "deduplicate": True},
{"lists": [[1, None, 2], [None, 3]], "remove_none": True},
],
test_output=[
("concatenated_list", [1, 2, 3, 4, 5, 6]),
("length", 6),
("concatenated_list", ["a", "b", "c", "d", "e", "f"]),
("length", 6),
("concatenated_list", [1, 2]),
("length", 2),
("concatenated_list", []),
("length", 0),
("concatenated_list", [1, 2, 3, 4]),
("length", 4),
("concatenated_list", [1, 2, 3]),
("length", 3),
],
)
def _validate_inputs(self, lists: List[Any]) -> str | None:
return _validate_all_lists(lists)
def _perform_concatenation(self, lists: List[List[Any]]) -> List[Any]:
return _concatenate_lists_simple(lists)
def _apply_deduplication(self, items: List[Any]) -> List[Any]:
return _deduplicate_list(items)
def _apply_none_removal(self, items: List[Any]) -> List[Any]:
return _filter_none_values(items)
def _post_process(
self, items: List[Any], deduplicate: bool, remove_none: bool
) -> List[Any]:
"""Apply all post-processing steps to the concatenated result."""
result = items
if remove_none:
result = self._apply_none_removal(result)
if deduplicate:
result = self._apply_deduplication(result)
return result
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
concatenated = []
for idx, lst in enumerate(input_data.lists):
if lst is None:
# Skip None values to avoid errors
continue
if not isinstance(lst, list):
# Type validation: each item must be a list
# Strings are iterable and would cause extend() to iterate character-by-character
# Non-iterable types would raise TypeError
yield "error", (
f"Invalid input at index {idx}: expected a list, got {type(lst).__name__}. "
f"All items in 'lists' must be lists (e.g., [[1, 2], [3, 4]])."
)
return
concatenated.extend(lst)
yield "concatenated_list", concatenated
# Validate all inputs are lists
validation_error = self._validate_inputs(input_data.lists)
if validation_error is not None:
yield "error", validation_error
return
# Perform concatenation
concatenated = self._perform_concatenation(input_data.lists)
# Apply post-processing
result = self._post_process(
concatenated, input_data.deduplicate, input_data.remove_none
)
yield "concatenated_list", result
yield "length", len(result)
class FlattenListBlock(Block):
"""
Flattens a nested list structure into a single flat list.
This block takes a list that may contain nested lists at any depth
and produces a single-level list with all leaf elements. Useful
for normalizing data structures from multiple sources that may
have varying levels of nesting.
"""
class Input(BlockSchemaInput):
nested_list: List[Any] = SchemaField(
description="A potentially nested list to flatten into a single-level list.",
placeholder="e.g., [[1, [2, 3]], [4, [5, [6]]]]",
)
max_depth: int = SchemaField(
description="Maximum depth to flatten. -1 means flatten completely. 1 means flatten only one level.",
default=-1,
advanced=True,
)
class Output(BlockSchemaOutput):
flattened_list: List[Any] = SchemaField(
description="The flattened list with all nested elements extracted."
)
length: int = SchemaField(
description="The number of elements in the flattened list."
)
original_depth: int = SchemaField(
description="The maximum nesting depth of the original input list."
)
error: str = SchemaField(description="Error message if flattening failed.")
def __init__(self):
super().__init__(
id="cc45bb0f-d035-4756-96a7-fe3e36254b4d",
description="Flattens a nested list structure into a single flat list. Supports configurable maximum flattening depth.",
categories={BlockCategory.BASIC},
input_schema=FlattenListBlock.Input,
output_schema=FlattenListBlock.Output,
test_input=[
{"nested_list": [[1, 2], [3, [4, 5]]]},
{"nested_list": [1, [2, [3, [4]]]]},
{"nested_list": [1, [2, [3, [4]]], 5], "max_depth": 1},
{"nested_list": []},
{"nested_list": [1, 2, 3]},
],
test_output=[
("flattened_list", [1, 2, 3, 4, 5]),
("length", 5),
("original_depth", 3),
("flattened_list", [1, 2, 3, 4]),
("length", 4),
("original_depth", 4),
("flattened_list", [1, 2, [3, [4]], 5]),
("length", 4),
("original_depth", 4),
("flattened_list", []),
("length", 0),
("original_depth", 1),
("flattened_list", [1, 2, 3]),
("length", 3),
("original_depth", 1),
],
)
def _compute_depth(self, items: List[Any]) -> int:
"""Compute the nesting depth of the input list."""
return _compute_nesting_depth(items)
def _flatten(self, items: List[Any], max_depth: int) -> List[Any]:
"""Flatten the list to the specified depth."""
return _flatten_nested_list(items, max_depth=max_depth)
def _validate_max_depth(self, max_depth: int) -> str | None:
"""Validate the max_depth parameter."""
if max_depth < -1:
return f"max_depth must be -1 (unlimited) or a non-negative integer, got {max_depth}"
return None
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# Validate max_depth
depth_error = self._validate_max_depth(input_data.max_depth)
if depth_error is not None:
yield "error", depth_error
return
original_depth = self._compute_depth(input_data.nested_list)
flattened = self._flatten(input_data.nested_list, input_data.max_depth)
yield "flattened_list", flattened
yield "length", len(flattened)
yield "original_depth", original_depth
class InterleaveListsBlock(Block):
"""
Interleaves elements from multiple lists in round-robin fashion.
Given multiple input lists, this block takes one element from each
list in turn, producing an output where elements alternate between
sources. Lists of different lengths are handled gracefully - shorter
lists simply stop contributing once exhausted.
"""
class Input(BlockSchemaInput):
lists: List[List[Any]] = SchemaField(
description="A list of lists to interleave. Elements will be taken in round-robin order.",
placeholder="e.g., [[1, 2, 3], ['a', 'b', 'c']]",
)
class Output(BlockSchemaOutput):
interleaved_list: List[Any] = SchemaField(
description="The interleaved list with elements alternating from each input list."
)
length: int = SchemaField(
description="The total number of elements in the interleaved list."
)
error: str = SchemaField(description="Error message if interleaving failed.")
def __init__(self):
super().__init__(
id="9f616084-1d9f-4f8e-bc00-5b9d2a75cd75",
description="Interleaves elements from multiple lists in round-robin fashion, alternating between sources.",
categories={BlockCategory.BASIC},
input_schema=InterleaveListsBlock.Input,
output_schema=InterleaveListsBlock.Output,
test_input=[
{"lists": [[1, 2, 3], ["a", "b", "c"]]},
{"lists": [[1, 2, 3], ["a", "b"], ["x", "y", "z"]]},
{"lists": [[1], [2], [3]]},
{"lists": []},
],
test_output=[
("interleaved_list", [1, "a", 2, "b", 3, "c"]),
("length", 6),
("interleaved_list", [1, "a", "x", 2, "b", "y", 3, "z"]),
("length", 8),
("interleaved_list", [1, 2, 3]),
("length", 3),
("interleaved_list", []),
("length", 0),
],
)
def _validate_inputs(self, lists: List[Any]) -> str | None:
return _validate_all_lists(lists)
def _interleave(self, lists: List[List[Any]]) -> List[Any]:
return _interleave_lists(lists)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
validation_error = self._validate_inputs(input_data.lists)
if validation_error is not None:
yield "error", validation_error
return
result = self._interleave(input_data.lists)
yield "interleaved_list", result
yield "length", len(result)
class ZipListsBlock(Block):
"""
Zips multiple lists together into a list of grouped tuples/lists.
Takes two or more input lists and combines corresponding elements
into sub-lists. For example, zipping [1,2,3] and ['a','b','c']
produces [[1,'a'], [2,'b'], [3,'c']]. Supports both truncating
to shortest list and padding to longest list with a fill value.
"""
class Input(BlockSchemaInput):
lists: List[List[Any]] = SchemaField(
description="A list of lists to zip together. Corresponding elements will be grouped.",
placeholder="e.g., [[1, 2, 3], ['a', 'b', 'c']]",
)
pad_to_longest: bool = SchemaField(
description="If True, pad shorter lists with fill_value to match the longest list. If False, truncate to shortest.",
default=False,
advanced=True,
)
fill_value: Any = SchemaField(
description="Value to use for padding when pad_to_longest is True.",
default=None,
advanced=True,
)
class Output(BlockSchemaOutput):
zipped_list: List[List[Any]] = SchemaField(
description="The zipped list of grouped elements."
)
length: int = SchemaField(
description="The number of groups in the zipped result."
)
error: str = SchemaField(description="Error message if zipping failed.")
def __init__(self):
super().__init__(
id="0d0e684f-5cb9-4c4b-b8d1-47a0860e0c07",
description="Zips multiple lists together into a list of grouped elements. Supports padding to longest or truncating to shortest.",
categories={BlockCategory.BASIC},
input_schema=ZipListsBlock.Input,
output_schema=ZipListsBlock.Output,
test_input=[
{"lists": [[1, 2, 3], ["a", "b", "c"]]},
{"lists": [[1, 2, 3], ["a", "b"]]},
{
"lists": [[1, 2], ["a", "b", "c"]],
"pad_to_longest": True,
"fill_value": 0,
},
{"lists": []},
],
test_output=[
("zipped_list", [[1, "a"], [2, "b"], [3, "c"]]),
("length", 3),
("zipped_list", [[1, "a"], [2, "b"]]),
("length", 2),
("zipped_list", [[1, "a"], [2, "b"], [0, "c"]]),
("length", 3),
("zipped_list", []),
("length", 0),
],
)
def _validate_inputs(self, lists: List[Any]) -> str | None:
return _validate_all_lists(lists)
def _zip_truncate(self, lists: List[List[Any]]) -> List[List[Any]]:
"""Zip lists, truncating to shortest."""
filtered = [lst for lst in lists if lst is not None]
if not filtered:
return []
return [list(group) for group in zip(*filtered)]
def _zip_pad(self, lists: List[List[Any]], fill_value: Any) -> List[List[Any]]:
"""Zip lists, padding shorter ones with fill_value."""
if not lists:
return []
lists = [lst for lst in lists if lst is not None]
if not lists:
return []
max_len = max(len(lst) for lst in lists)
result: List[List[Any]] = []
for i in range(max_len):
group: List[Any] = []
for lst in lists:
if i < len(lst):
group.append(lst[i])
else:
group.append(fill_value)
result.append(group)
return result
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
validation_error = self._validate_inputs(input_data.lists)
if validation_error is not None:
yield "error", validation_error
return
if not input_data.lists:
yield "zipped_list", []
yield "length", 0
return
if input_data.pad_to_longest:
result = self._zip_pad(input_data.lists, input_data.fill_value)
else:
result = self._zip_truncate(input_data.lists)
yield "zipped_list", result
yield "length", len(result)
class ListDifferenceBlock(Block):
"""
Computes the difference between two lists (elements in the first
list that are not in the second list).
This is useful for finding items that exist in one dataset but
not in another, such as finding new items, missing items, or
items that need to be processed.
"""
class Input(BlockSchemaInput):
list_a: List[Any] = SchemaField(
description="The primary list to check elements from.",
placeholder="e.g., [1, 2, 3, 4, 5]",
)
list_b: List[Any] = SchemaField(
description="The list to subtract. Elements found here will be removed from list_a.",
placeholder="e.g., [3, 4, 5, 6]",
)
symmetric: bool = SchemaField(
description="If True, compute symmetric difference (elements in either list but not both).",
default=False,
advanced=True,
)
class Output(BlockSchemaOutput):
difference: List[Any] = SchemaField(
description="Elements from list_a not found in list_b (or symmetric difference if enabled)."
)
length: int = SchemaField(
description="The number of elements in the difference result."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="05309873-9d61-447e-96b5-b804e2511829",
description="Computes the difference between two lists. Returns elements in the first list not found in the second, or symmetric difference.",
categories={BlockCategory.BASIC},
input_schema=ListDifferenceBlock.Input,
output_schema=ListDifferenceBlock.Output,
test_input=[
{"list_a": [1, 2, 3, 4, 5], "list_b": [3, 4, 5, 6, 7]},
{
"list_a": [1, 2, 3, 4, 5],
"list_b": [3, 4, 5, 6, 7],
"symmetric": True,
},
{"list_a": ["a", "b", "c"], "list_b": ["b"]},
{"list_a": [], "list_b": [1, 2, 3]},
],
test_output=[
("difference", [1, 2]),
("length", 2),
("difference", [1, 2, 6, 7]),
("length", 4),
("difference", ["a", "c"]),
("length", 2),
("difference", []),
("length", 0),
],
)
def _compute_difference(self, list_a: List[Any], list_b: List[Any]) -> List[Any]:
"""Compute elements in list_a not in list_b."""
b_hashes = {_make_hashable(item) for item in list_b}
return [item for item in list_a if _make_hashable(item) not in b_hashes]
def _compute_symmetric_difference(
self, list_a: List[Any], list_b: List[Any]
) -> List[Any]:
"""Compute elements in either list but not both."""
a_hashes = {_make_hashable(item) for item in list_a}
b_hashes = {_make_hashable(item) for item in list_b}
only_in_a = [item for item in list_a if _make_hashable(item) not in b_hashes]
only_in_b = [item for item in list_b if _make_hashable(item) not in a_hashes]
return only_in_a + only_in_b
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
if input_data.symmetric:
result = self._compute_symmetric_difference(
input_data.list_a, input_data.list_b
)
else:
result = self._compute_difference(input_data.list_a, input_data.list_b)
yield "difference", result
yield "length", len(result)
class ListIntersectionBlock(Block):
"""
Computes the intersection of two lists (elements present in both lists).
This is useful for finding common items between two datasets,
such as shared tags, mutual connections, or overlapping categories.
"""
class Input(BlockSchemaInput):
list_a: List[Any] = SchemaField(
description="The first list to intersect.",
placeholder="e.g., [1, 2, 3, 4, 5]",
)
list_b: List[Any] = SchemaField(
description="The second list to intersect.",
placeholder="e.g., [3, 4, 5, 6, 7]",
)
class Output(BlockSchemaOutput):
intersection: List[Any] = SchemaField(
description="Elements present in both list_a and list_b."
)
length: int = SchemaField(
description="The number of elements in the intersection."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="b6eb08b6-dbe3-411b-b9b4-2508cb311a1f",
description="Computes the intersection of two lists, returning only elements present in both.",
categories={BlockCategory.BASIC},
input_schema=ListIntersectionBlock.Input,
output_schema=ListIntersectionBlock.Output,
test_input=[
{"list_a": [1, 2, 3, 4, 5], "list_b": [3, 4, 5, 6, 7]},
{"list_a": ["a", "b", "c"], "list_b": ["c", "d", "e"]},
{"list_a": [1, 2], "list_b": [3, 4]},
{"list_a": [], "list_b": [1, 2, 3]},
],
test_output=[
("intersection", [3, 4, 5]),
("length", 3),
("intersection", ["c"]),
("length", 1),
("intersection", []),
("length", 0),
("intersection", []),
("length", 0),
],
)
def _compute_intersection(self, list_a: List[Any], list_b: List[Any]) -> List[Any]:
"""Compute elements present in both lists, preserving order from list_a."""
b_hashes = {_make_hashable(item) for item in list_b}
seen: set = set()
result: List[Any] = []
for item in list_a:
h = _make_hashable(item)
if h in b_hashes and h not in seen:
result.append(item)
seen.add(h)
return result
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
result = self._compute_intersection(input_data.list_a, input_data.list_b)
yield "intersection", result
yield "length", len(result)

View File

@@ -106,8 +106,6 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
GPT41_MINI = "gpt-4.1-mini-2025-04-14"
GPT4O_MINI = "gpt-4o-mini"
GPT4O = "gpt-4o"
GPT4_TURBO = "gpt-4-turbo"
GPT3_5_TURBO = "gpt-3.5-turbo"
# Anthropic models
CLAUDE_4_1_OPUS = "claude-opus-4-1-20250805"
CLAUDE_4_OPUS = "claude-opus-4-20250514"
@@ -255,12 +253,6 @@ MODEL_METADATA = {
LlmModel.GPT4O: ModelMetadata(
"openai", 128000, 16384, "GPT-4o", "OpenAI", "OpenAI", 2
), # gpt-4o-2024-08-06
LlmModel.GPT4_TURBO: ModelMetadata(
"openai", 128000, 4096, "GPT-4 Turbo", "OpenAI", "OpenAI", 3
), # gpt-4-turbo-2024-04-09
LlmModel.GPT3_5_TURBO: ModelMetadata(
"openai", 16385, 4096, "GPT-3.5 Turbo", "OpenAI", "OpenAI", 1
), # gpt-3.5-turbo-0125
# https://docs.anthropic.com/en/docs/about-claude/models
LlmModel.CLAUDE_4_1_OPUS: ModelMetadata(
"anthropic", 200000, 32000, "Claude Opus 4.1", "Anthropic", "Anthropic", 3

View File

@@ -75,8 +75,6 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.GPT41_MINI: 1,
LlmModel.GPT4O_MINI: 1,
LlmModel.GPT4O: 3,
LlmModel.GPT4_TURBO: 10,
LlmModel.GPT3_5_TURBO: 1,
LlmModel.CLAUDE_4_1_OPUS: 21,
LlmModel.CLAUDE_4_OPUS: 21,
LlmModel.CLAUDE_4_SONNET: 5,

View File

@@ -79,7 +79,7 @@ async def test_block_credit_usage(server: SpinTestServer):
node_exec_id="test_node_exec",
block_id=AITextGeneratorBlock().id,
inputs={
"model": "gpt-4-turbo",
"model": "gpt-4o",
"credentials": {
"id": openai_credentials.id,
"provider": openai_credentials.provider,
@@ -100,7 +100,7 @@ async def test_block_credit_usage(server: SpinTestServer):
graph_exec_id="test_graph_exec",
node_exec_id="test_node_exec",
block_id=AITextGeneratorBlock().id,
inputs={"model": "gpt-4-turbo", "api_key": "owned_api_key"},
inputs={"model": "gpt-4o", "api_key": "owned_api_key"},
execution_context=ExecutionContext(user_timezone="UTC"),
),
)

View File

@@ -867,67 +867,9 @@ class GraphModel(Graph, GraphMeta):
return node_errors
@staticmethod
def prune_invalid_links(graph: BaseGraph) -> int:
"""
Remove invalid/orphan links from the graph.
This removes links that:
- Reference non-existent source or sink nodes
- Reference invalid block IDs
Note: Pin name validation is handled separately in _validate_graph_structure.
Returns the number of links pruned.
"""
node_map = {v.id: v for v in graph.nodes}
original_count = len(graph.links)
valid_links = []
for link in graph.links:
source_node = node_map.get(link.source_id)
sink_node = node_map.get(link.sink_id)
# Skip if either node doesn't exist
if not source_node or not sink_node:
logger.warning(
f"Pruning orphan link: source={link.source_id}, sink={link.sink_id} "
f"- node(s) not found"
)
continue
# Skip if source block doesn't exist
source_block = get_block(source_node.block_id)
if not source_block:
logger.warning(
f"Pruning link with invalid source block: {source_node.block_id}"
)
continue
# Skip if sink block doesn't exist
sink_block = get_block(sink_node.block_id)
if not sink_block:
logger.warning(
f"Pruning link with invalid sink block: {sink_node.block_id}"
)
continue
valid_links.append(link)
graph.links = valid_links
pruned_count = original_count - len(valid_links)
if pruned_count > 0:
logger.info(f"Pruned {pruned_count} invalid link(s) from graph {graph.id}")
return pruned_count
@staticmethod
def _validate_graph_structure(graph: BaseGraph):
"""Validate graph structure (links, connections, etc.)"""
# First, prune invalid links to clean up orphan edges
GraphModel.prune_invalid_links(graph)
node_map = {v.id: v for v in graph.nodes}
def is_static_output_block(nid: str) -> bool:

View File

@@ -0,0 +1,42 @@
-- Migrate deprecated OpenAI GPT-4-turbo and GPT-3.5-turbo models
-- This updates all AgentNode blocks that use deprecated models
-- OpenAI is retiring these models:
-- - gpt-4-turbo: March 26, 2026 -> migrate to gpt-4o
-- - gpt-3.5-turbo: September 28, 2026 -> migrate to gpt-4o-mini
-- Update gpt-4-turbo to gpt-4o (staying in same capability tier)
UPDATE "AgentNode"
SET "constantInput" = JSONB_SET(
"constantInput"::jsonb,
'{model}',
'"gpt-4o"'::jsonb
)
WHERE "constantInput"::jsonb->>'model' = 'gpt-4-turbo';
-- Update gpt-3.5-turbo to gpt-4o-mini (appropriate replacement for lightweight model)
UPDATE "AgentNode"
SET "constantInput" = JSONB_SET(
"constantInput"::jsonb,
'{model}',
'"gpt-4o-mini"'::jsonb
)
WHERE "constantInput"::jsonb->>'model' = 'gpt-3.5-turbo';
-- Update AgentPreset input overrides (stored in AgentNodeExecutionInputOutput)
UPDATE "AgentNodeExecutionInputOutput"
SET "data" = JSONB_SET(
"data"::jsonb,
'{model}',
'"gpt-4o"'::jsonb
)
WHERE "agentPresetId" IS NOT NULL
AND "data"::jsonb->>'model' = 'gpt-4-turbo';
UPDATE "AgentNodeExecutionInputOutput"
SET "data" = JSONB_SET(
"data"::jsonb,
'{model}',
'"gpt-4o-mini"'::jsonb
)
WHERE "agentPresetId" IS NOT NULL
AND "data"::jsonb->>'model' = 'gpt-3.5-turbo';

File diff suppressed because it is too large

View File

@@ -4,7 +4,7 @@ import {
} from "@/app/api/__generated__/endpoints/graphs/graphs";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { parseAsInteger, parseAsString, useQueryStates } from "nuqs";
import { GraphExecutionMeta } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/use-agent-runs";
import { GraphExecutionMeta } from "@/app/api/__generated__/models/graphExecutionMeta";
import { useGraphStore } from "@/app/(platform)/build/stores/graphStore";
import { useShallow } from "zustand/react/shallow";
import { useEffect, useState } from "react";

View File

@@ -133,23 +133,22 @@ export const useFlow = () => {
}
}, [availableGraphs, setAvailableSubGraphs]);
// adding nodes and links together to avoid race condition
// Links depend on nodes existing, so we must add nodes first
// adding nodes
useEffect(() => {
if (customNodes.length > 0) {
// Clear both stores to prevent stale data from previous graphs
useNodeStore.getState().setNodes([]);
useNodeStore.getState().clearResolutionState();
useEdgeStore.getState().setEdges([]);
addNodes(customNodes);
// Only add links after nodes are in the store
if (graph?.links) {
addLinks(graph.links);
}
}
}, [customNodes, graph?.links, addNodes, addLinks]);
}, [customNodes, addNodes]);
// adding links
useEffect(() => {
if (graph?.links) {
useEdgeStore.getState().setEdges([]);
addLinks(graph.links);
}
}, [graph?.links, addLinks]);
useEffect(() => {
if (customNodes.length > 0 && graph?.links) {

View File

@@ -1,6 +1,6 @@
import { useCallback } from "react";
import { AgentRunDraftView } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view";
import { AgentRunDraftView } from "@/app/(platform)/build/components/legacy-builder/agent-run-draft-view";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import type {
CredentialsMetaInput,

View File

@@ -18,7 +18,7 @@ import {
import { useToast } from "@/components/molecules/Toast/use-toast";
import { useQueryClient } from "@tanstack/react-query";
import { getGetV2ListMySubmissionsQueryKey } from "@/app/api/__generated__/endpoints/store/store";
import { CronExpressionDialog } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/cron-scheduler-dialog";
import { CronExpressionDialog } from "@/components/contextual/CronScheduler/cron-scheduler-dialog";
import { humanizeCronExpression } from "@/lib/cron-expression-utils";
import { CalendarClockIcon } from "lucide-react";

View File

@@ -20,7 +20,7 @@ import {
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
import { RunAgentInputs } from "@/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentInputs/RunAgentInputs";
import { ScheduleTaskDialog } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/cron-scheduler-dialog";
import { ScheduleTaskDialog } from "@/components/contextual/CronScheduler/cron-scheduler-dialog";
import ActionButtonGroup from "@/components/__legacy__/action-button-group";
import type { ButtonAction } from "@/components/__legacy__/types";
import {
@@ -53,7 +53,10 @@ import { ClockIcon, CopyIcon, InfoIcon } from "@phosphor-icons/react";
import { CalendarClockIcon, Trash2Icon } from "lucide-react";
import { analytics } from "@/services/analytics";
import { AgentStatus, AgentStatusChip } from "./agent-status-chip";
import {
AgentStatus,
AgentStatusChip,
} from "@/app/(platform)/build/components/legacy-builder/agent-status-chip";
export function AgentRunDraftView({
graph,

View File

@@ -13,7 +13,6 @@ import { Graph } from "@/app/api/__generated__/models/graph";
import { useNodeStore } from "../stores/nodeStore";
import { useEdgeStore } from "../stores/edgeStore";
import { graphsEquivalent } from "../components/NewControlPanel/NewSaveControl/helpers";
import { linkToCustomEdge } from "../components/helper";
import { useGraphStore } from "../stores/graphStore";
import { useShallow } from "zustand/react/shallow";
import {
@@ -22,18 +21,6 @@ import {
getTempFlowId,
} from "@/services/builder-draft/draft-service";
/**
* Sync the edge store with the authoritative backend state.
* This ensures the frontend matches what the backend accepted after save.
*/
function syncEdgesWithBackend(links: GraphModel["links"]) {
if (links !== undefined) {
// Replace all edges with the authoritative backend state
const newEdges = links.map(linkToCustomEdge);
useEdgeStore.getState().setEdges(newEdges);
}
}
export type SaveGraphOptions = {
showToast?: boolean;
onSuccess?: (graph: GraphModel) => void;
@@ -77,9 +64,6 @@ export const useSaveGraph = ({
flowVersion: data.version,
});
// Sync edge store with authoritative backend state
syncEdgesWithBackend(data.links);
const tempFlowId = getTempFlowId();
if (tempFlowId) {
await draftService.deleteDraft(tempFlowId);
@@ -117,9 +101,6 @@ export const useSaveGraph = ({
flowVersion: data.version,
});
// Sync edge store with authoritative backend state
syncEdgesWithBackend(data.links);
// Clear the draft for this flow after successful save
if (data.id) {
await draftService.deleteDraft(data.id);

View File

@@ -120,64 +120,12 @@ export const useEdgeStore = create<EdgeStore>((set, get) => ({
isOutputConnected: (nodeId, handle) =>
get().edges.some((e) => e.source === nodeId && e.sourceHandle === handle),
getBackendLinks: () => {
// Filter out edges referencing non-existent nodes before converting to links
const nodeIds = new Set(useNodeStore.getState().nodes.map((n) => n.id));
const validEdges = get().edges.filter((edge) => {
const isValid = nodeIds.has(edge.source) && nodeIds.has(edge.target);
if (!isValid) {
console.warn(
`[EdgeStore] Filtering out invalid edge during save: source=${edge.source}, target=${edge.target}`,
);
}
return isValid;
});
return validEdges.map(customEdgeToLink);
},
getBackendLinks: () => get().edges.map(customEdgeToLink),
addLinks: (links) => {
// Get current node IDs to validate links
const nodeIds = new Set(useNodeStore.getState().nodes.map((n) => n.id));
// Convert and filter links in one pass, avoiding individual addEdge calls
// which would push to history for each edge (causing history pollution)
const newEdges: CustomEdge[] = [];
const existingEdges = get().edges;
for (const link of links) {
// Skip invalid links (orphan edges referencing non-existent nodes)
if (!nodeIds.has(link.source_id) || !nodeIds.has(link.sink_id)) {
console.warn(
`[EdgeStore] Skipping invalid link: source=${link.source_id}, sink=${link.sink_id} - node(s) not found`,
);
continue;
}
const edge = linkToCustomEdge(link);
// Skip if edge already exists
const exists = existingEdges.some(
(e) =>
e.source === edge.source &&
e.target === edge.target &&
e.sourceHandle === edge.sourceHandle &&
e.targetHandle === edge.targetHandle,
);
if (!exists) {
newEdges.push(edge);
}
}
if (newEdges.length > 0) {
// Bulk add all edges at once, pushing to history only once
const prevState = {
nodes: useNodeStore.getState().nodes,
edges: existingEdges,
};
set((state) => ({ edges: [...state.edges, ...newEdges] }));
useHistoryStore.getState().pushState(prevState);
}
links.forEach((link) => {
get().addEdge(linkToCustomEdge(link));
});
},
getAllHandleIdsOfANode: (nodeId) =>

View File

@@ -4,11 +4,11 @@ import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import {
BookOpenIcon,
CheckFatIcon,
PencilSimpleIcon,
WarningDiamondIcon,
} from "@phosphor-icons/react";
import type { ToolUIPart } from "ai";
import Image from "next/image";
import NextLink from "next/link";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
@@ -24,6 +24,7 @@ import {
ClarificationQuestionsCard,
ClarifyingQuestion,
} from "./components/ClarificationQuestionsCard";
import sparklesImg from "./components/MiniGame/assets/sparkles.png";
import { MiniGame } from "./components/MiniGame/MiniGame";
import {
AccordionIcon,
@@ -83,7 +84,8 @@ function getAccordionMeta(output: CreateAgentToolOutput) {
) {
return {
icon,
title: "Creating agent, this may take a few minutes. Sit back and relax.",
title:
"Creating agent, this may take a few minutes. Play while you wait.",
expanded: true,
};
}
@@ -167,16 +169,20 @@ export function CreateAgentTool({ part }: Props) {
{isAgentSavedOutput(output) && (
<div className="rounded-xl border border-border/60 bg-card p-4 shadow-sm">
<div className="flex items-baseline gap-2">
<CheckFatIcon
size={18}
weight="regular"
className="relative top-1 text-green-500"
<Image
src={sparklesImg}
alt="sparkles"
width={24}
height={24}
className="relative top-1"
/>
<Text
variant="body-medium"
className="text-blacks mb-2 text-[16px]"
className="mb-2 text-[16px] text-black"
>
{output.message}
Agent{" "}
<span className="text-violet-600">{output.agent_name}</span>{" "}
has been saved to your library!
</Text>
</div>
<div className="mt-3 flex flex-wrap gap-4">

View File

@@ -2,20 +2,65 @@
import { useMiniGame } from "./useMiniGame";
function Key({ children }: { children: React.ReactNode }) {
return <strong>[{children}]</strong>;
}
export function MiniGame() {
const { canvasRef } = useMiniGame();
const { canvasRef, activeMode, showOverlay, score, highScore, onContinue } =
useMiniGame();
const isRunActive =
activeMode === "run" || activeMode === "idle" || activeMode === "over";
let overlayText: string | undefined;
let buttonLabel = "Continue";
if (activeMode === "idle") {
buttonLabel = "Start";
} else if (activeMode === "boss-intro") {
overlayText = "Face the bandit!";
} else if (activeMode === "boss-defeated") {
overlayText = "Great job, keep on going";
} else if (activeMode === "over") {
overlayText = `Score: ${score} / Record: ${highScore}`;
buttonLabel = "Retry";
}
return (
<div
className="w-full overflow-hidden rounded-md bg-background text-foreground"
style={{ border: "1px solid #d17fff" }}
>
<canvas
ref={canvasRef}
tabIndex={0}
className="block w-full outline-none"
style={{ imageRendering: "pixelated" }}
/>
<div className="flex flex-col gap-2">
<p className="text-sm font-medium text-purple-500">
{isRunActive ? (
<>
Run mode: <Key>Space</Key> to jump
</>
) : (
<>
Duel mode: <Key>←→</Key> to move · <Key>Z</Key> to attack ·{" "}
<Key>X</Key> to block · <Key>Space</Key> to jump
</>
)}
</p>
<div className="relative w-full overflow-hidden rounded-md border border-accent bg-background text-foreground">
<canvas
ref={canvasRef}
tabIndex={0}
className="block w-full outline-none"
/>
{showOverlay && (
<div className="absolute inset-0 flex flex-col items-center justify-center gap-3 bg-black/40">
{overlayText && (
<p className="text-lg font-bold text-white">{overlayText}</p>
)}
<button
type="button"
onClick={onContinue}
className="rounded-md bg-white px-4 py-2 text-sm font-semibold text-zinc-800 shadow-md transition-colors hover:bg-zinc-100"
>
{buttonLabel}
</button>
</div>
)}
</div>
</div>
);
}
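
For reference, a minimal sketch of the contract MiniGame assumes from useMiniGame, inferred purely from the destructuring and mode strings above (the actual hook may expose more):

import type { RefObject } from "react";

type MiniGameMode = "idle" | "run" | "boss-intro" | "boss-defeated" | "over";

interface UseMiniGameResult {
  canvasRef: RefObject<HTMLCanvasElement>; // canvas the game renders into
  activeMode: MiniGameMode; // drives the key hints and overlay copy
  showOverlay: boolean; // true for the idle, boss, and game-over screens
  score: number;
  highScore: number;
  onContinue: () => void; // Start / Continue / Retry handler
}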

Binary files not shown (9 new image assets). Sizes: 4.9 KiB, 8.0 KiB, 7.3 KiB, 9.6 KiB, 9.5 KiB, 8.0 KiB, 16 KiB, 14 KiB, 10 KiB.

View File

@@ -136,7 +136,7 @@ export function getAnimationText(part: {
if (isOperationPendingOutput(output)) return "Agent creation in progress";
if (isOperationInProgressOutput(output))
return "Agent creation already in progress";
if (isAgentSavedOutput(output)) return `Saved "${output.agent_name}"`;
if (isAgentSavedOutput(output)) return `Saved ${output.agent_name}`;
if (isAgentPreviewOutput(output)) return `Preview "${output.agent_name}"`;
if (isClarificationNeededOutput(output)) return "Needs clarification";
return "Error creating agent";

View File

@@ -5,7 +5,6 @@ import type { ToolUIPart } from "ai";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader";
import { ProgressBar } from "../../components/ProgressBar/ProgressBar";
import {
ContentCardDescription,
ContentCodeBlock,
@@ -15,7 +14,7 @@ import {
ContentMessage,
} from "../../components/ToolAccordion/AccordionContent";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useAsymptoticProgress } from "../../hooks/useAsymptoticProgress";
import { MiniGame } from "../CreateAgent/components/MiniGame/MiniGame";
import {
ClarificationQuestionsCard,
ClarifyingQuestion,
@@ -54,6 +53,7 @@ function getAccordionMeta(output: EditAgentToolOutput): {
title: string;
titleClassName?: string;
description?: string;
expanded?: boolean;
} {
const icon = <AccordionIcon />;
@@ -80,7 +80,11 @@ function getAccordionMeta(output: EditAgentToolOutput): {
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output)
) {
return { icon: <OrbitLoader size={32} />, title: "Editing agent" };
return {
icon: <OrbitLoader size={32} />,
title: "Editing agent, this may take a few minutes. Play while you wait.",
expanded: true,
};
}
return {
icon: (
@@ -105,7 +109,6 @@ export function EditAgentTool({ part }: Props) {
(isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output));
const progress = useAsymptoticProgress(isOperating);
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
@@ -149,9 +152,9 @@ export function EditAgentTool({ part }: Props) {
<ToolAccordion {...getAccordionMeta(output)}>
{isOperating && (
<ContentGrid>
<ProgressBar value={progress} className="max-w-[280px]" />
<MiniGame />
<ContentHint>
This could take a few minutes, grab a coffee
This could take a few minutes. Play while you wait!
</ContentHint>
</ContentGrid>
)}

View File

@@ -2,8 +2,14 @@
import type { ToolUIPart } from "ai";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { ContentMessage } from "../../components/ToolAccordion/AccordionContent";
import {
ContentGrid,
ContentHint,
ContentMessage,
} from "../../components/ToolAccordion/AccordionContent";
import { MiniGame } from "../CreateAgent/components/MiniGame/MiniGame";
import {
getAccordionMeta,
getAnimationText,
@@ -60,6 +66,21 @@ export function RunAgentTool({ part }: Props) {
/>
</div>
{isStreaming && !output && (
<ToolAccordion
icon={<OrbitLoader size={32} />}
title="Running agent, this may take a few minutes. Play while you wait."
expanded={true}
>
<ContentGrid>
<MiniGame />
<ContentHint>
This could take a few minutes. Play while you wait!
</ContentHint>
</ContentGrid>
</ToolAccordion>
)}
{hasExpandableContent && output && (
<ToolAccordion {...getAccordionMeta(output)}>
{isRunAgentExecutionStartedOutput(output) && (

View File

@@ -1,631 +0,0 @@
"use client";
import { useParams, useRouter } from "next/navigation";
import { useQueryState } from "nuqs";
import React, {
useCallback,
useEffect,
useMemo,
useRef,
useState,
} from "react";
import {
Graph,
GraphExecution,
GraphExecutionID,
GraphExecutionMeta,
GraphID,
LibraryAgent,
LibraryAgentID,
LibraryAgentPreset,
LibraryAgentPresetID,
Schedule,
ScheduleID,
} from "@/lib/autogpt-server-api";
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
import { exportAsJSONFile } from "@/lib/utils";
import DeleteConfirmDialog from "@/components/__legacy__/delete-confirm-dialog";
import type { ButtonAction } from "@/components/__legacy__/types";
import { Button } from "@/components/__legacy__/ui/button";
import {
Dialog,
DialogContent,
DialogDescription,
DialogFooter,
DialogHeader,
DialogTitle,
} from "@/components/__legacy__/ui/dialog";
import LoadingBox, { LoadingSpinner } from "@/components/__legacy__/ui/loading";
import {
useToast,
useToastOnFail,
} from "@/components/molecules/Toast/use-toast";
import { AgentRunDetailsView } from "./components/agent-run-details-view";
import { AgentRunDraftView } from "./components/agent-run-draft-view";
import { CreatePresetDialog } from "./components/create-preset-dialog";
import { useAgentRunsInfinite } from "./use-agent-runs";
import { AgentRunsSelectorList } from "./components/agent-runs-selector-list";
import { AgentScheduleDetailsView } from "./components/agent-schedule-details-view";
export function OldAgentLibraryView() {
const { id: agentID }: { id: LibraryAgentID } = useParams();
const [executionId, setExecutionId] = useQueryState("executionId");
const toastOnFail = useToastOnFail();
const { toast } = useToast();
const router = useRouter();
const api = useBackendAPI();
// ============================ STATE =============================
const [graph, setGraph] = useState<Graph | null>(null); // Graph version corresponding to LibraryAgent
const [agent, setAgent] = useState<LibraryAgent | null>(null);
const agentRunsQuery = useAgentRunsInfinite(graph?.id); // only runs once graph.id is known
const agentRuns = agentRunsQuery.agentRuns;
const [agentPresets, setAgentPresets] = useState<LibraryAgentPreset[]>([]);
const [schedules, setSchedules] = useState<Schedule[]>([]);
const [selectedView, selectView] = useState<
| { type: "run"; id?: GraphExecutionID }
| { type: "preset"; id: LibraryAgentPresetID }
| { type: "schedule"; id: ScheduleID }
>({ type: "run" });
const [selectedRun, setSelectedRun] = useState<
GraphExecution | GraphExecutionMeta | null
>(null);
const selectedSchedule =
selectedView.type == "schedule"
? schedules.find((s) => s.id == selectedView.id)
: null;
const [isFirstLoad, setIsFirstLoad] = useState<boolean>(true);
const [agentDeleteDialogOpen, setAgentDeleteDialogOpen] =
useState<boolean>(false);
const [confirmingDeleteAgentRun, setConfirmingDeleteAgentRun] =
useState<GraphExecutionMeta | null>(null);
const [confirmingDeleteAgentPreset, setConfirmingDeleteAgentPreset] =
useState<LibraryAgentPresetID | null>(null);
const [copyAgentDialogOpen, setCopyAgentDialogOpen] = useState(false);
const [creatingPresetFromExecutionID, setCreatingPresetFromExecutionID] =
useState<GraphExecutionID | null>(null);
// Set page title with agent name
useEffect(() => {
if (agent) {
document.title = `${agent.name} - Library - AutoGPT Platform`;
}
}, [agent]);
const openRunDraftView = useCallback(() => {
selectView({ type: "run" });
}, []);
const selectRun = useCallback((id: GraphExecutionID) => {
selectView({ type: "run", id });
}, []);
const selectPreset = useCallback((id: LibraryAgentPresetID) => {
selectView({ type: "preset", id });
}, []);
const selectSchedule = useCallback((id: ScheduleID) => {
selectView({ type: "schedule", id });
}, []);
const graphVersions = useRef<Record<number, Graph>>({});
const loadingGraphVersions = useRef<Record<number, Promise<Graph>>>({});
const getGraphVersion = useCallback(
async (graphID: GraphID, version: number) => {
if (version in graphVersions.current)
return graphVersions.current[version];
if (version in loadingGraphVersions.current)
return loadingGraphVersions.current[version];
const pendingGraph = api.getGraph(graphID, version).then((graph) => {
graphVersions.current[version] = graph;
return graph;
});
// Cache promise as well to avoid duplicate requests
loadingGraphVersions.current[version] = pendingGraph;
return pendingGraph;
},
[api, graphVersions, loadingGraphVersions],
);
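// Both caches are keyed by version number only (not graphID); that is safe
// here because this view only ever loads versions of a single agent's graph.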
const lastRefresh = useRef<number>(0);
const refreshPageData = useCallback(() => {
if (Date.now() - lastRefresh.current < 2e3) return; // 2 second debounce
lastRefresh.current = Date.now();
api.getLibraryAgent(agentID).then((agent) => {
setAgent(agent);
getGraphVersion(agent.graph_id, agent.graph_version).then(
(_graph) =>
(graph && graph.version == _graph.version) || setGraph(_graph),
);
Promise.all([
agentRunsQuery.refetchRuns(),
api.listLibraryAgentPresets({
graph_id: agent.graph_id,
page_size: 100,
}),
]).then(([runsQueryResult, presets]) => {
setAgentPresets(presets.presets);
const newestAgentRunsResponse = runsQueryResult.data?.pages[0];
if (!newestAgentRunsResponse || newestAgentRunsResponse.status != 200)
return;
const newestAgentRuns = newestAgentRunsResponse.data.executions;
// Preload the corresponding graph versions for the latest 10 runs
new Set(
newestAgentRuns.slice(0, 10).map((run) => run.graph_version),
).forEach((version) => getGraphVersion(agent.graph_id, version));
});
});
}, [api, agentID, getGraphVersion, graph]);
// On first load: select the latest run
useEffect(() => {
// Only for first load or first execution
if (selectedView.id || !isFirstLoad) return;
if (agentRuns.length == 0 && agentPresets.length == 0) return;
setIsFirstLoad(false);
if (agentRuns.length > 0) {
// select latest run
const latestRun = agentRuns.reduce((latest, current) => {
if (!latest.started_at && !current.started_at) return latest;
if (!latest.started_at) return current;
if (!current.started_at) return latest;
return latest.started_at > current.started_at ? latest : current;
}, agentRuns[0]);
selectRun(latestRun.id as GraphExecutionID);
} else {
// select top preset
const latestPreset = agentPresets.toSorted(
(a, b) => b.updated_at.getTime() - a.updated_at.getTime(),
)[0];
selectPreset(latestPreset.id);
}
}, [
isFirstLoad,
selectedView.id,
agentRuns,
agentPresets,
selectRun,
selectPreset,
]);
useEffect(() => {
if (executionId) {
selectRun(executionId as GraphExecutionID);
setExecutionId(null);
}
}, [executionId, selectRun, setExecutionId]);
// Initial load
useEffect(() => {
refreshPageData();
// Show a toast when the WebSocket connection disconnects
let connectionToast: ReturnType<typeof toast> | null = null;
const cancelDisconnectHandler = api.onWebSocketDisconnect(() => {
connectionToast ??= toast({
title: "Connection to server was lost",
variant: "destructive",
description: (
<div className="flex items-center">
Trying to reconnect...
<LoadingSpinner className="ml-1.5 size-3.5" />
</div>
),
duration: Infinity,
dismissable: true,
});
});
const cancelConnectHandler = api.onWebSocketConnect(() => {
if (connectionToast)
connectionToast.update({
id: connectionToast.id,
title: "✅ Connection re-established",
variant: "default",
description: (
<div className="flex items-center">
Refreshing data...
<LoadingSpinner className="ml-1.5 size-3.5" />
</div>
),
duration: 2000,
dismissable: true,
});
connectionToast = null;
});
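// The `??=` guard makes repeated disconnect events reuse a single toast;
// on reconnect that toast is updated in place and the ref cleared, so a
// later disconnect can raise a fresh one.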
return () => {
cancelDisconnectHandler();
cancelConnectHandler();
};
}, []);
// Subscribe to WebSocket updates for agent runs
useEffect(() => {
if (!agent?.graph_id) return;
return api.onWebSocketConnect(() => {
refreshPageData(); // Sync up on (re)connect
// Subscribe to all executions for this agent
api.subscribeToGraphExecutions(agent.graph_id);
});
}, [api, agent?.graph_id, refreshPageData]);
// Handle execution updates
useEffect(() => {
const detachExecUpdateHandler = api.onWebSocketMessage(
"graph_execution_event",
(data) => {
if (data.graph_id != agent?.graph_id) return;
agentRunsQuery.upsertAgentRun(data);
if (data.id === selectedView.id) {
// Update currently viewed run
setSelectedRun(data);
}
},
);
return () => {
detachExecUpdateHandler();
};
}, [api, agent?.graph_id, selectedView.id]);
// Pre-load selectedRun based on selectedView
useEffect(() => {
if (selectedView.type != "run" || !selectedView.id) return;
const newSelectedRun = agentRuns.find((run) => run.id == selectedView.id);
if (selectedView.id !== selectedRun?.id) {
// Pull partial data from "cache" while waiting for the rest to load
setSelectedRun((newSelectedRun as GraphExecutionMeta) ?? null);
}
}, [api, selectedView, agentRuns, selectedRun?.id]);
// Load selectedRun based on selectedView; refresh on agent refresh
useEffect(() => {
if (selectedView.type != "run" || !selectedView.id || !agent) return;
api
.getGraphExecutionInfo(agent.graph_id, selectedView.id)
.then(async (run) => {
// Ensure corresponding graph version is available before rendering I/O
await getGraphVersion(run.graph_id, run.graph_version);
setSelectedRun(run);
});
}, [api, selectedView, agent, getGraphVersion]);
const fetchSchedules = useCallback(async () => {
if (!agent) return;
setSchedules(await api.listGraphExecutionSchedules(agent.graph_id));
}, [api, agent?.graph_id]);
useEffect(() => {
fetchSchedules();
}, [fetchSchedules]);
// =========================== ACTIONS ============================
const deleteRun = useCallback(
async (run: GraphExecutionMeta) => {
if (run.status == "RUNNING" || run.status == "QUEUED") {
await api.stopGraphExecution(run.graph_id, run.id);
}
await api.deleteGraphExecution(run.id);
setConfirmingDeleteAgentRun(null);
if (selectedView.type == "run" && selectedView.id == run.id) {
openRunDraftView();
}
agentRunsQuery.removeAgentRun(run.id);
},
[api, selectedView, openRunDraftView],
);
const deletePreset = useCallback(
async (presetID: LibraryAgentPresetID) => {
await api.deleteLibraryAgentPreset(presetID);
setConfirmingDeleteAgentPreset(null);
if (selectedView.type == "preset" && selectedView.id == presetID) {
openRunDraftView();
}
setAgentPresets((presets) => presets.filter((p) => p.id !== presetID));
},
[api, selectedView, openRunDraftView],
);
const deleteSchedule = useCallback(
async (scheduleID: ScheduleID) => {
const removedSchedule =
await api.deleteGraphExecutionSchedule(scheduleID);
setSchedules((schedules) => {
const newSchedules = schedules.filter(
(s) => s.id !== removedSchedule.id,
);
if (
selectedView.type == "schedule" &&
selectedView.id == removedSchedule.id
) {
if (newSchedules.length > 0) {
// Select next schedule if available
selectSchedule(newSchedules[0].id);
} else {
// Reset to draft view if current schedule was deleted
openRunDraftView();
}
}
return newSchedules;
});
openRunDraftView();
},
[schedules, api],
);
const handleCreatePresetFromRun = useCallback(
async (name: string, description: string) => {
if (!creatingPresetFromExecutionID) return;
await api
.createLibraryAgentPreset({
name,
description,
graph_execution_id: creatingPresetFromExecutionID,
})
.then((preset) => {
setAgentPresets((prev) => [...prev, preset]);
selectPreset(preset.id);
setCreatingPresetFromExecutionID(null);
})
.catch(toastOnFail("create a preset"));
},
[api, creatingPresetFromExecutionID, selectPreset, toast],
);
const downloadGraph = useCallback(
async () =>
agent &&
// Export sanitized graph from backend
api
.getGraph(agent.graph_id, agent.graph_version, true)
.then((graph) =>
exportAsJSONFile(graph, `${graph.name}_v${graph.version}.json`),
),
[api, agent],
);
const copyAgent = useCallback(async () => {
setCopyAgentDialogOpen(false);
api
.forkLibraryAgent(agentID)
.then((newAgent) => {
router.push(`/library/agents/${newAgent.id}`);
})
.catch((error) => {
console.error("Error copying agent:", error);
toast({
title: "Error copying agent",
description: `An error occurred while copying the agent: ${error.message}`,
variant: "destructive",
});
});
}, [agentID, api, router, toast]);
const agentActions: ButtonAction[] = useMemo(
() => [
{
label: "Customize agent",
href: `/build?flowID=${agent?.graph_id}&flowVersion=${agent?.graph_version}`,
disabled: !agent?.can_access_graph,
},
{ label: "Export agent to file", callback: downloadGraph },
...(!agent?.can_access_graph
? [
{
label: "Edit a copy",
callback: () => setCopyAgentDialogOpen(true),
},
]
: []),
{
label: "Delete agent",
callback: () => setAgentDeleteDialogOpen(true),
},
],
[agent, downloadGraph],
);
const runGraph =
graphVersions.current[selectedRun?.graph_version ?? 0] ?? graph;
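// Prefer the exact graph version of the selected run once it is cached;
// fall back to the agent's current graph version in the meantime.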
const onCreateSchedule = useCallback(
(schedule: Schedule) => {
setSchedules((prev) => [...prev, schedule]);
selectSchedule(schedule.id);
},
[selectView],
);
const onCreatePreset = useCallback(
(preset: LibraryAgentPreset) => {
setAgentPresets((prev) => [...prev, preset]);
selectPreset(preset.id);
},
[selectPreset],
);
const onUpdatePreset = useCallback(
(updated: LibraryAgentPreset) => {
setAgentPresets((prev) =>
prev.map((p) => (p.id === updated.id ? updated : p)),
);
selectPreset(updated.id);
},
[selectPreset],
);
if (!agent || !graph) {
return <LoadingBox className="h-[90vh]" />;
}
return (
<div className="container justify-stretch p-0 pt-16 lg:flex">
{/* Sidebar w/ list of runs */}
{/* TODO: render this below header in sm and md layouts */}
<AgentRunsSelectorList
className="agpt-div w-full border-b pb-2 lg:w-auto lg:border-b-0 lg:border-r lg:pb-0"
agent={agent}
agentRunsQuery={agentRunsQuery}
agentPresets={agentPresets}
schedules={schedules}
selectedView={selectedView}
onSelectRun={selectRun}
onSelectPreset={selectPreset}
onSelectSchedule={selectSchedule}
onSelectDraftNewRun={openRunDraftView}
doDeleteRun={setConfirmingDeleteAgentRun}
doDeletePreset={setConfirmingDeleteAgentPreset}
doDeleteSchedule={deleteSchedule}
doCreatePresetFromRun={setCreatingPresetFromExecutionID}
/>
<div className="flex-1">
{/* Header */}
<div className="agpt-div w-full border-b">
<h1
data-testid="agent-title"
className="font-poppins text-3xl font-medium"
>
{
agent.name /* TODO: use dynamic/custom run title - https://github.com/Significant-Gravitas/AutoGPT/issues/9184 */
}
</h1>
</div>
{/* Run / Schedule views */}
{(selectedView.type == "run" && selectedView.id ? (
selectedRun && runGraph ? (
<AgentRunDetailsView
agent={agent}
graph={runGraph}
run={selectedRun}
agentActions={agentActions}
onRun={selectRun}
doDeleteRun={() => setConfirmingDeleteAgentRun(selectedRun)}
doCreatePresetFromRun={() =>
setCreatingPresetFromExecutionID(selectedRun.id)
}
/>
) : null
) : selectedView.type == "run" ? (
/* Draft new runs / Create new presets */
<AgentRunDraftView
graph={graph}
onRun={selectRun}
onCreateSchedule={onCreateSchedule}
onCreatePreset={onCreatePreset}
agentActions={agentActions}
recommendedScheduleCron={agent?.recommended_schedule_cron || null}
/>
) : selectedView.type == "preset" ? (
/* Edit & update presets */
<AgentRunDraftView
graph={graph}
agentPreset={
agentPresets.find((preset) => preset.id == selectedView.id)!
}
onRun={selectRun}
recommendedScheduleCron={agent?.recommended_schedule_cron || null}
onCreateSchedule={onCreateSchedule}
onUpdatePreset={onUpdatePreset}
doDeletePreset={setConfirmingDeleteAgentPreset}
agentActions={agentActions}
/>
) : selectedView.type == "schedule" ? (
selectedSchedule &&
graph && (
<AgentScheduleDetailsView
graph={graph}
schedule={selectedSchedule}
// agent={agent}
agentActions={agentActions}
onForcedRun={selectRun}
doDeleteSchedule={deleteSchedule}
/>
)
) : null) || <LoadingBox className="h-[70vh]" />}
<DeleteConfirmDialog
entityType="agent"
open={agentDeleteDialogOpen}
onOpenChange={setAgentDeleteDialogOpen}
onDoDelete={() =>
agent &&
api.deleteLibraryAgent(agent.id).then(() => router.push("/library"))
}
/>
<DeleteConfirmDialog
entityType="agent run"
open={!!confirmingDeleteAgentRun}
onOpenChange={(open) => !open && setConfirmingDeleteAgentRun(null)}
onDoDelete={() =>
confirmingDeleteAgentRun && deleteRun(confirmingDeleteAgentRun)
}
/>
<DeleteConfirmDialog
entityType={agent.has_external_trigger ? "trigger" : "agent preset"}
open={!!confirmingDeleteAgentPreset}
onOpenChange={(open) => !open && setConfirmingDeleteAgentPreset(null)}
onDoDelete={() =>
confirmingDeleteAgentPreset &&
deletePreset(confirmingDeleteAgentPreset)
}
/>
{/* Copy agent confirmation dialog */}
<Dialog
onOpenChange={setCopyAgentDialogOpen}
open={copyAgentDialogOpen}
>
<DialogContent>
<DialogHeader>
<DialogTitle>You&apos;re making an editable copy</DialogTitle>
<DialogDescription className="pt-2">
The original Marketplace agent stays the same and cannot be
edited. We&apos;ll save a new version of this agent to your
Library. From there, you can customize it however you&apos;d
like by clicking &quot;Customize agent&quot;; this will open
the builder where you can see and modify the inner workings.
</DialogDescription>
</DialogHeader>
<DialogFooter className="justify-end">
<Button
type="button"
variant="outline"
onClick={() => setCopyAgentDialogOpen(false)}
>
Cancel
</Button>
<Button type="button" onClick={copyAgent}>
Continue
</Button>
</DialogFooter>
</DialogContent>
</Dialog>
<CreatePresetDialog
open={!!creatingPresetFromExecutionID}
onOpenChange={() => setCreatingPresetFromExecutionID(null)}
onConfirm={handleCreatePresetFromRun}
/>
</div>
</div>
);
}

View File

@@ -1,445 +0,0 @@
"use client";
import { format, formatDistanceToNow, formatDistanceStrict } from "date-fns";
import React, { useCallback, useMemo, useEffect } from "react";
import {
Graph,
GraphExecution,
GraphExecutionID,
GraphExecutionMeta,
LibraryAgent,
} from "@/lib/autogpt-server-api";
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
import ActionButtonGroup from "@/components/__legacy__/action-button-group";
import type { ButtonAction } from "@/components/__legacy__/types";
import {
Card,
CardContent,
CardHeader,
CardTitle,
} from "@/components/__legacy__/ui/card";
import {
IconRefresh,
IconSquare,
IconCircleAlert,
} from "@/components/__legacy__/ui/icons";
import { Input } from "@/components/__legacy__/ui/input";
import LoadingBox from "@/components/__legacy__/ui/loading";
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { useToastOnFail } from "@/components/molecules/Toast/use-toast";
import { AgentRunStatus, agentRunStatusMap } from "./agent-run-status-chip";
import useCredits from "@/hooks/useCredits";
import { AgentRunOutputView } from "./agent-run-output-view";
import { analytics } from "@/services/analytics";
import { PendingReviewsList } from "@/components/organisms/PendingReviewsList/PendingReviewsList";
import { usePendingReviewsForExecution } from "@/hooks/usePendingReviews";
export function AgentRunDetailsView({
agent,
graph,
run,
agentActions,
onRun,
doDeleteRun,
doCreatePresetFromRun,
}: {
agent: LibraryAgent;
graph: Graph;
run: GraphExecution | GraphExecutionMeta;
agentActions: ButtonAction[];
onRun: (runID: GraphExecutionID) => void;
doDeleteRun: () => void;
doCreatePresetFromRun: () => void;
}): React.ReactNode {
const api = useBackendAPI();
const { formatCredits } = useCredits();
const runStatus: AgentRunStatus = useMemo(
() => agentRunStatusMap[run.status],
[run],
);
const {
pendingReviews,
isLoading: reviewsLoading,
refetch: refetchReviews,
} = usePendingReviewsForExecution(run.id);
const toastOnFail = useToastOnFail();
// Refetch pending reviews when execution status changes to REVIEW
useEffect(() => {
if (runStatus === "review" && run.id) {
refetchReviews();
}
}, [runStatus, run.id, refetchReviews]);
const infoStats: { label: string; value: React.ReactNode }[] = useMemo(() => {
if (!run) return [];
return [
{
label: "Status",
value: runStatus.charAt(0).toUpperCase() + runStatus.slice(1),
},
{
label: "Started",
value: run.started_at
? `${formatDistanceToNow(run.started_at, { addSuffix: true })}, ${format(run.started_at, "HH:mm")}`
: "—",
},
...(run.stats
? [
{
label: "Duration",
value: formatDistanceStrict(0, run.stats.duration * 1000),
},
{ label: "Steps", value: run.stats.node_exec_count },
{ label: "Cost", value: formatCredits(run.stats.cost) },
]
: []),
];
}, [run, runStatus, formatCredits]);
const agentRunInputs:
| Record<
string,
{
title?: string;
/* type: BlockIOSubType; */
value: string | number | undefined;
}
>
| undefined = useMemo(() => {
if (!run.inputs) return undefined;
// TODO: show (link to) preset - https://github.com/Significant-Gravitas/AutoGPT/issues/9168
// Add type info from agent input schema
return Object.fromEntries(
Object.entries(run.inputs).map(([k, v]) => [
k,
{
title: graph.input_schema.properties[k]?.title,
// type: graph.input_schema.properties[k].type, // TODO: implement typed graph inputs
value: typeof v == "object" ? JSON.stringify(v, undefined, 2) : v,
},
]),
);
}, [graph, run]);
const runAgain = useCallback(() => {
if (
!run.inputs ||
!(graph.credentials_input_schema?.required ?? []).every(
(k) => k in (run.credential_inputs ?? {}),
)
)
return;
if (run.preset_id) {
return api
.executeLibraryAgentPreset(
run.preset_id,
run.inputs!,
run.credential_inputs!,
)
.then(({ id }) => {
analytics.sendDatafastEvent("run_agent", {
name: graph.name,
id: graph.id,
});
onRun(id);
})
.catch(toastOnFail("execute agent preset"));
}
return api
.executeGraph(
graph.id,
graph.version,
run.inputs!,
run.credential_inputs!,
"library",
)
.then(({ id }) => {
analytics.sendDatafastEvent("run_agent", {
name: graph.name,
id: graph.id,
});
onRun(id);
})
.catch(toastOnFail("execute agent"));
}, [api, graph, run, onRun, toastOnFail]);
const stopRun = useCallback(
() => api.stopGraphExecution(graph.id, run.id),
[api, graph.id, run.id],
);
const agentRunOutputs:
| Record<
string,
{
title?: string;
/* type: BlockIOSubType; */
values: Array<React.ReactNode>;
}
>
| null
| undefined = useMemo(() => {
if (!("outputs" in run)) return undefined;
if (!["running", "success", "failed", "stopped"].includes(runStatus))
return null;
// Add type info from agent input schema
return Object.fromEntries(
Object.entries(run.outputs).map(([k, vv]) => [
k,
{
title: graph.output_schema.properties[k].title,
/* type: agent.output_schema.properties[k].type */
values: vv.map((v) =>
typeof v == "object" ? JSON.stringify(v, undefined, 2) : v,
),
},
]),
);
}, [graph, run, runStatus]);
const runActions: ButtonAction[] = useMemo(
() => [
...(["running", "queued"].includes(runStatus)
? ([
{
label: (
<>
<IconSquare className="mr-2 size-4" />
Stop run
</>
),
variant: "secondary",
callback: stopRun,
},
] satisfies ButtonAction[])
: []),
...(["success", "failed", "stopped"].includes(runStatus) &&
!graph.has_external_trigger &&
(graph.credentials_input_schema?.required ?? []).every(
(k) => k in (run.credential_inputs ?? {}),
)
? [
{
label: (
<>
<IconRefresh className="mr-2 size-4" />
Run again
</>
),
callback: runAgain,
dataTestId: "run-again-button",
},
]
: []),
...(agent.can_access_graph
? [
{
label: "Open run in builder",
href: `/build?flowID=${run.graph_id}&flowVersion=${run.graph_version}&flowExecutionID=${run.id}`,
},
]
: []),
{ label: "Create preset from run", callback: doCreatePresetFromRun },
{ label: "Delete run", variant: "secondary", callback: doDeleteRun },
],
[
runStatus,
runAgain,
stopRun,
doDeleteRun,
doCreatePresetFromRun,
graph.has_external_trigger,
graph.credentials_input_schema?.required,
agent.can_access_graph,
run.graph_id,
run.graph_version,
run.id,
],
);
return (
<div className="agpt-div flex gap-6">
<div className="flex flex-1 flex-col gap-4">
<Card className="agpt-box">
<CardHeader>
<CardTitle className="font-poppins text-lg">Info</CardTitle>
</CardHeader>
<CardContent>
<div className="flex justify-stretch gap-4">
{infoStats.map(({ label, value }) => (
<div key={label} className="flex-1">
<p className="text-sm font-medium text-black">{label}</p>
<p className="text-sm text-neutral-600">{value}</p>
</div>
))}
</div>
{run.status === "FAILED" && (
<div className="mt-4 rounded-md border border-red-200 bg-red-50 p-3 dark:border-red-800 dark:bg-red-900/20">
<p className="text-sm text-red-800 dark:text-red-200">
<strong>Error:</strong>{" "}
{run.stats?.error ||
"The execution failed due to an internal error. You can re-run the agent to retry."}
</p>
</div>
)}
</CardContent>
</Card>
{/* Smart Agent Execution Summary */}
{run.stats?.activity_status && (
<Card className="agpt-box">
<CardHeader>
<CardTitle className="flex items-center gap-2 font-poppins text-lg">
Task Summary
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<IconCircleAlert className="size-4 cursor-help text-neutral-500 hover:text-neutral-700" />
</TooltipTrigger>
<TooltipContent>
<p className="max-w-xs">
This AI-generated summary describes how the agent
handled your task. It&apos;s an experimental feature and may
occasionally be inaccurate.
</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>
</CardTitle>
</CardHeader>
<CardContent className="space-y-4">
<p className="text-sm leading-relaxed text-neutral-700">
{run.stats.activity_status}
</p>
{/* Correctness Score */}
{typeof run.stats.correctness_score === "number" && (
<div className="flex items-center gap-3 rounded-lg bg-neutral-50 p-3">
<div className="flex items-center gap-2">
<span className="text-sm font-medium text-neutral-600">
Success Estimate:
</span>
<div className="flex items-center gap-2">
<div className="relative h-2 w-16 overflow-hidden rounded-full bg-neutral-200">
<div
className={`h-full transition-all ${
run.stats.correctness_score >= 0.8
? "bg-green-500"
: run.stats.correctness_score >= 0.6
? "bg-yellow-500"
: run.stats.correctness_score >= 0.4
? "bg-orange-500"
: "bg-red-500"
}`}
style={{
width: `${Math.round(run.stats.correctness_score * 100)}%`,
}}
/>
</div>
<span className="text-sm font-medium">
{Math.round(run.stats.correctness_score * 100)}%
</span>
</div>
</div>
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<IconCircleAlert className="size-4 cursor-help text-neutral-400 hover:text-neutral-600" />
</TooltipTrigger>
<TooltipContent>
<p className="max-w-xs">
AI-generated estimate of how well this execution
achieved its intended purpose. This score indicates
{run.stats.correctness_score >= 0.8
? " the agent was highly successful."
: run.stats.correctness_score >= 0.6
? " the agent was mostly successful with minor issues."
: run.stats.correctness_score >= 0.4
? " the agent was partially successful with some gaps."
: " the agent had limited success with significant issues."}
</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>
</div>
)}
</CardContent>
</Card>
)}
{agentRunOutputs !== null && (
<AgentRunOutputView agentRunOutputs={agentRunOutputs} />
)}
{/* Pending Reviews Section */}
{runStatus === "review" && (
<Card className="agpt-box">
<CardHeader>
<CardTitle className="font-poppins text-lg">
Pending Reviews ({pendingReviews.length})
</CardTitle>
</CardHeader>
<CardContent>
{reviewsLoading ? (
<LoadingBox spinnerSize={12} className="h-24" />
) : pendingReviews.length > 0 ? (
<PendingReviewsList
reviews={pendingReviews}
onReviewComplete={refetchReviews}
emptyMessage="No pending reviews for this execution"
/>
) : (
<div className="py-4 text-neutral-600">
No pending reviews for this execution
</div>
)}
</CardContent>
</Card>
)}
<Card className="agpt-box">
<CardHeader>
<CardTitle className="font-poppins text-lg">Input</CardTitle>
</CardHeader>
<CardContent className="flex flex-col gap-4">
{agentRunInputs !== undefined ? (
Object.entries(agentRunInputs).map(([key, { title, value }]) => (
<div key={key} className="flex flex-col gap-1.5">
<label className="text-sm font-medium">{title || key}</label>
<Input value={value} className="rounded-full" disabled />
</div>
))
) : (
<LoadingBox spinnerSize={12} className="h-24" />
)}
</CardContent>
</Card>
</div>
{/* Run / Agent Actions */}
<aside className="w-48 xl:w-56">
<div className="flex flex-col gap-8">
<ActionButtonGroup title="Run actions" actions={runActions} />
<ActionButtonGroup title="Agent actions" actions={agentActions} />
</div>
</aside>
</div>
);
}

View File

@@ -1,178 +0,0 @@
"use client";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import React, { useMemo } from "react";
import {
Card,
CardContent,
CardHeader,
CardTitle,
} from "@/components/__legacy__/ui/card";
import LoadingBox from "@/components/__legacy__/ui/loading";
import type { OutputMetadata } from "../../../../../../../../components/contextual/OutputRenderers";
import {
globalRegistry,
OutputActions,
OutputItem,
} from "../../../../../../../../components/contextual/OutputRenderers";
export function AgentRunOutputView({
agentRunOutputs,
}: {
agentRunOutputs:
| Record<
string,
{
title?: string;
/* type: BlockIOSubType; */
values: Array<React.ReactNode>;
}
>
| undefined;
}) {
const enableEnhancedOutputHandling = useGetFlag(
Flag.ENABLE_ENHANCED_OUTPUT_HANDLING,
);
// Prepare items for the renderer system
const outputItems = useMemo(() => {
if (!agentRunOutputs) return [];
const items: Array<{
key: string;
label: string;
value: unknown;
metadata?: OutputMetadata;
renderer: any;
}> = [];
Object.entries(agentRunOutputs).forEach(([key, { title, values }]) => {
values.forEach((value, index) => {
// Enhanced metadata extraction
const metadata: OutputMetadata = {};
// Type guard to safely access properties
if (
typeof value === "object" &&
value !== null &&
!React.isValidElement(value)
) {
const objValue = value as any;
if (objValue.type) metadata.type = objValue.type;
if (objValue.mimeType) metadata.mimeType = objValue.mimeType;
if (objValue.filename) metadata.filename = objValue.filename;
}
const renderer = globalRegistry.getRenderer(value, metadata);
if (renderer) {
items.push({
key: `${key}-${index}`,
label: index === 0 ? title || key : "",
value,
metadata,
renderer,
});
} else {
const textRenderer = globalRegistry
.getAllRenderers()
.find((r) => r.name === "TextRenderer");
if (textRenderer) {
items.push({
key: `${key}-${index}`,
label: index === 0 ? title || key : "",
value: JSON.stringify(value, null, 2),
metadata,
renderer: textRenderer,
});
}
}
});
});
return items;
}, [agentRunOutputs]);
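// Values with no matching renderer fall back to TextRenderer with a
// JSON-stringified payload, so every output still renders as text.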
return (
<>
{enableEnhancedOutputHandling ? (
<Card className="agpt-box" style={{ maxWidth: "950px" }}>
<CardHeader>
<div className="flex items-center justify-between">
<CardTitle className="font-poppins text-lg">Output</CardTitle>
{outputItems.length > 0 && (
<OutputActions
items={outputItems.map((item) => ({
value: item.value,
metadata: item.metadata,
renderer: item.renderer,
}))}
/>
)}
</div>
</CardHeader>
<CardContent
className="flex flex-col gap-4"
style={{ maxWidth: "660px" }}
>
{agentRunOutputs !== undefined ? (
outputItems.length > 0 ? (
outputItems.map((item) => (
<OutputItem
key={item.key}
value={item.value}
metadata={item.metadata}
renderer={item.renderer}
label={item.label}
/>
))
) : (
<p className="text-sm text-muted-foreground">
No outputs to display
</p>
)
) : (
<LoadingBox spinnerSize={12} className="h-24" />
)}
</CardContent>
</Card>
) : (
<Card className="agpt-box" style={{ maxWidth: "950px" }}>
<CardHeader>
<CardTitle className="font-poppins text-lg">Output</CardTitle>
</CardHeader>
<CardContent
className="flex flex-col gap-4"
style={{ maxWidth: "660px" }}
>
{agentRunOutputs !== undefined ? (
Object.entries(agentRunOutputs).map(
([key, { title, values }]) => (
<div key={key} className="flex flex-col gap-1.5">
<label className="text-sm font-medium">
{title || key}
</label>
{values.map((value, i) => (
<p
className="resize-none overflow-x-auto whitespace-pre-wrap break-words border-none text-sm text-neutral-700 disabled:cursor-not-allowed"
key={i}
>
{value}
</p>
))}
{/* TODO: pretty type-dependent rendering */}
</div>
),
)
) : (
<LoadingBox spinnerSize={12} className="h-24" />
)}
</CardContent>
</Card>
)}
</>
);
}

View File

@@ -1,68 +0,0 @@
import React from "react";
import { Badge } from "@/components/__legacy__/ui/badge";
import { GraphExecutionMeta } from "@/lib/autogpt-server-api/types";
export type AgentRunStatus =
| "success"
| "failed"
| "queued"
| "running"
| "stopped"
| "scheduled"
| "draft"
| "review";
export const agentRunStatusMap: Record<
GraphExecutionMeta["status"],
AgentRunStatus
> = {
INCOMPLETE: "draft",
COMPLETED: "success",
FAILED: "failed",
QUEUED: "queued",
RUNNING: "running",
TERMINATED: "stopped",
REVIEW: "review",
};
const statusData: Record<
AgentRunStatus,
{ label: string; variant: keyof typeof statusStyles }
> = {
success: { label: "Success", variant: "success" },
running: { label: "Running", variant: "info" },
failed: { label: "Failed", variant: "destructive" },
queued: { label: "Queued", variant: "warning" },
draft: { label: "Draft", variant: "secondary" },
stopped: { label: "Stopped", variant: "secondary" },
scheduled: { label: "Scheduled", variant: "secondary" },
review: { label: "In Review", variant: "warning" },
};
const statusStyles = {
success:
"bg-green-100 text-green-800 hover:bg-green-100 hover:text-green-800",
destructive: "bg-red-100 text-red-800 hover:bg-red-100 hover:text-red-800",
warning:
"bg-yellow-100 text-yellow-800 hover:bg-yellow-100 hover:text-yellow-800",
info: "bg-blue-100 text-blue-800 hover:bg-blue-100 hover:text-blue-800",
secondary:
"bg-slate-100 text-slate-800 hover:bg-slate-100 hover:text-slate-800",
};
export function AgentRunStatusChip({
status,
}: {
status: AgentRunStatus;
}): React.ReactElement {
return (
<Badge
variant="secondary"
className={`text-xs font-medium ${statusStyles[statusData[status]?.variant]} rounded-[45px] px-[9px] py-[3px]`}
>
{statusData[status]?.label}
</Badge>
);
}

View File

@@ -1,130 +0,0 @@
import React from "react";
import { formatDistanceToNow, isPast } from "date-fns";
import { cn } from "@/lib/utils";
import { Link2Icon, Link2OffIcon, MoreVertical } from "lucide-react";
import { Card, CardContent } from "@/components/__legacy__/ui/card";
import { Button } from "@/components/__legacy__/ui/button";
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuTrigger,
} from "@/components/__legacy__/ui/dropdown-menu";
import { AgentStatus, AgentStatusChip } from "./agent-status-chip";
import { AgentRunStatus, AgentRunStatusChip } from "./agent-run-status-chip";
import { PushPinSimpleIcon } from "@phosphor-icons/react";
export type AgentRunSummaryProps = (
| {
type: "run";
status: AgentRunStatus;
}
| {
type: "preset";
status?: undefined;
}
| {
type: "preset.triggered";
status: AgentStatus;
}
| {
type: "schedule";
status: "scheduled";
}
) & {
title: string;
timestamp?: number | Date;
selected?: boolean;
onClick?: () => void;
// onRename: () => void;
onDelete: () => void;
onPinAsPreset?: () => void;
className?: string;
};
export function AgentRunSummaryCard({
type,
status,
title,
timestamp,
selected = false,
onClick,
// onRename,
onDelete,
onPinAsPreset,
className,
}: AgentRunSummaryProps): React.ReactElement {
return (
<Card
className={cn(
"agpt-rounded-card cursor-pointer border-zinc-300",
selected ? "agpt-card-selected" : "",
className,
)}
onClick={onClick}
>
<CardContent className="relative p-2.5 lg:p-4">
{(type == "run" || type == "schedule") && (
<AgentRunStatusChip status={status} />
)}
{type == "preset" && (
<div className="flex items-center text-sm font-medium text-neutral-700">
<PushPinSimpleIcon className="mr-1 size-4 text-foreground" /> Preset
</div>
)}
{type == "preset.triggered" && (
<div className="flex items-center justify-between">
<AgentStatusChip status={status} />
<div className="flex items-center text-sm font-medium text-neutral-700">
{status == "inactive" ? (
<Link2OffIcon className="mr-1 size-4 text-foreground" />
) : (
<Link2Icon className="mr-1 size-4 text-foreground" />
)}{" "}
Trigger
</div>
</div>
)}
<div className="mt-5 flex items-center justify-between">
<h3 className="truncate pr-2 text-base font-medium text-neutral-900">
{title}
</h3>
<DropdownMenu>
<DropdownMenuTrigger asChild>
<Button variant="ghost" className="h-5 w-5 p-0">
<MoreVertical className="h-5 w-5" />
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent>
{onPinAsPreset && (
<DropdownMenuItem onClick={onPinAsPreset}>
Pin as a preset
</DropdownMenuItem>
)}
{/* <DropdownMenuItem onClick={onRename}>Rename</DropdownMenuItem> */}
<DropdownMenuItem onClick={onDelete}>Delete</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
</div>
{timestamp && (
<p
className="mt-1 text-sm font-normal text-neutral-500"
title={new Date(timestamp).toString()}
>
{isPast(timestamp) ? "Ran" : "Runs in"}{" "}
{formatDistanceToNow(timestamp, { addSuffix: true })}
</p>
)}
</CardContent>
</Card>
);
}

View File

@@ -1,237 +0,0 @@
"use client";
import { Plus } from "lucide-react";
import React, { useEffect, useState } from "react";
import {
GraphExecutionID,
GraphExecutionMeta,
LibraryAgent,
LibraryAgentPreset,
LibraryAgentPresetID,
Schedule,
ScheduleID,
} from "@/lib/autogpt-server-api";
import { cn } from "@/lib/utils";
import { Badge } from "@/components/__legacy__/ui/badge";
import { Button } from "@/components/atoms/Button/Button";
import LoadingBox, { LoadingSpinner } from "@/components/__legacy__/ui/loading";
import { Separator } from "@/components/__legacy__/ui/separator";
import { ScrollArea } from "@/components/__legacy__/ui/scroll-area";
import { InfiniteScroll } from "@/components/contextual/InfiniteScroll/InfiniteScroll";
import { AgentRunsQuery } from "../use-agent-runs";
import { agentRunStatusMap } from "./agent-run-status-chip";
import { AgentRunSummaryCard } from "./agent-run-summary-card";
interface AgentRunsSelectorListProps {
agent: LibraryAgent;
agentRunsQuery: AgentRunsQuery;
agentPresets: LibraryAgentPreset[];
schedules: Schedule[];
selectedView: { type: "run" | "preset" | "schedule"; id?: string };
allowDraftNewRun?: boolean;
onSelectRun: (id: GraphExecutionID) => void;
onSelectPreset: (preset: LibraryAgentPresetID) => void;
onSelectSchedule: (id: ScheduleID) => void;
onSelectDraftNewRun: () => void;
doDeleteRun: (id: GraphExecutionMeta) => void;
doDeletePreset: (id: LibraryAgentPresetID) => void;
doDeleteSchedule: (id: ScheduleID) => void;
doCreatePresetFromRun?: (id: GraphExecutionID) => void;
className?: string;
}
export function AgentRunsSelectorList({
agent,
agentRunsQuery: {
agentRuns,
agentRunCount,
agentRunsLoading,
hasMoreRuns,
fetchMoreRuns,
isFetchingMoreRuns,
},
agentPresets,
schedules,
selectedView,
allowDraftNewRun = true,
onSelectRun,
onSelectPreset,
onSelectSchedule,
onSelectDraftNewRun,
doDeleteRun,
doDeletePreset,
doDeleteSchedule,
doCreatePresetFromRun,
className,
}: AgentRunsSelectorListProps): React.ReactElement {
const [activeListTab, setActiveListTab] = useState<"runs" | "scheduled">(
"runs",
);
useEffect(() => {
if (selectedView.type === "schedule") {
setActiveListTab("scheduled");
} else {
setActiveListTab("runs");
}
}, [selectedView]);
const listItemClasses = "h-28 w-72 lg:w-full lg:h-32";
return (
<aside className={cn("flex flex-col gap-4", className)}>
{allowDraftNewRun ? (
<Button
className={"mb-4 hidden lg:flex"}
onClick={onSelectDraftNewRun}
leftIcon={<Plus className="h-6 w-6" />}
>
New {agent.has_external_trigger ? "trigger" : "run"}
</Button>
) : null}
<div className="flex gap-2">
<Badge
variant={activeListTab === "runs" ? "secondary" : "outline"}
className="cursor-pointer gap-2 rounded-full text-base"
onClick={() => setActiveListTab("runs")}
>
<span>Runs</span>
<span className="text-neutral-600">
{agentRunCount ?? <LoadingSpinner className="size-4" />}
</span>
</Badge>
<Badge
variant={activeListTab === "scheduled" ? "secondary" : "outline"}
className="cursor-pointer gap-2 rounded-full text-base"
onClick={() => setActiveListTab("scheduled")}
>
<span>Scheduled</span>
<span className="text-neutral-600">{schedules.length}</span>
</Badge>
</div>
{/* Runs / Schedules list */}
{agentRunsLoading && activeListTab === "runs" ? (
<LoadingBox className="h-28 w-full lg:h-[calc(100vh-300px)] lg:w-72 xl:w-80" />
) : (
<ScrollArea
className="w-full lg:h-[calc(100vh-300px)] lg:w-72 xl:w-80"
orientation={window.innerWidth >= 1024 ? "vertical" : "horizontal"}
>
<InfiniteScroll
direction={window.innerWidth >= 1024 ? "vertical" : "horizontal"}
hasNextPage={hasMoreRuns}
fetchNextPage={fetchMoreRuns}
isFetchingNextPage={isFetchingMoreRuns}
>
<div className="flex items-center gap-2 lg:flex-col">
{/* New Run button - only in small layouts */}
{allowDraftNewRun && (
<Button
size="large"
className={
"flex h-12 w-40 items-center gap-2 py-6 lg:hidden " +
(selectedView.type == "run" && !selectedView.id
? "agpt-card-selected text-accent"
: "")
}
onClick={onSelectDraftNewRun}
leftIcon={<Plus className="h-6 w-6" />}
>
New {agent.has_external_trigger ? "trigger" : "run"}
</Button>
)}
{activeListTab === "runs" ? (
<>
{agentPresets
.filter((preset) => preset.webhook) // Triggers
.toSorted(
(a, b) => b.updated_at.getTime() - a.updated_at.getTime(),
)
.map((preset) => (
<AgentRunSummaryCard
className={cn(listItemClasses, "lg:h-auto")}
key={preset.id}
type="preset.triggered"
status={preset.is_active ? "active" : "inactive"}
title={preset.name}
// timestamp={preset.last_run_time} // TODO: implement this
selected={selectedView.id === preset.id}
onClick={() => onSelectPreset(preset.id)}
onDelete={() => doDeletePreset(preset.id)}
/>
))}
{agentPresets
.filter((preset) => !preset.webhook) // Presets
.toSorted(
(a, b) => b.updated_at.getTime() - a.updated_at.getTime(),
)
.map((preset) => (
<AgentRunSummaryCard
className={cn(listItemClasses, "lg:h-auto")}
key={preset.id}
type="preset"
title={preset.name}
// timestamp={preset.last_run_time} // TODO: implement this
selected={selectedView.id === preset.id}
onClick={() => onSelectPreset(preset.id)}
onDelete={() => doDeletePreset(preset.id)}
/>
))}
{agentPresets.length > 0 && <Separator className="my-1" />}
{agentRuns
.toSorted((a, b) => {
const aTime = a.started_at?.getTime() ?? 0;
const bTime = b.started_at?.getTime() ?? 0;
return bTime - aTime;
})
.map((run) => (
<AgentRunSummaryCard
className={listItemClasses}
key={run.id}
type="run"
status={agentRunStatusMap[run.status]}
title={
(run.preset_id
? agentPresets.find((p) => p.id == run.preset_id)
?.name
: null) ?? agent.name
}
timestamp={run.started_at ?? undefined}
selected={selectedView.id === run.id}
onClick={() => onSelectRun(run.id)}
onDelete={() => doDeleteRun(run as GraphExecutionMeta)}
onPinAsPreset={
doCreatePresetFromRun
? () => doCreatePresetFromRun(run.id)
: undefined
}
/>
))}
</>
) : (
schedules.map((schedule) => (
<AgentRunSummaryCard
className={listItemClasses}
key={schedule.id}
type="schedule"
status="scheduled" // TODO: implement active/inactive status for schedules
title={schedule.name}
timestamp={schedule.next_run_time}
selected={selectedView.id === schedule.id}
onClick={() => onSelectSchedule(schedule.id)}
onDelete={() => doDeleteSchedule(schedule.id)}
/>
))
)}
</div>
</InfiniteScroll>
</ScrollArea>
)}
</aside>
);
}

View File

@@ -1,180 +0,0 @@
"use client";
import React, { useCallback, useMemo } from "react";
import {
Graph,
GraphExecutionID,
Schedule,
ScheduleID,
} from "@/lib/autogpt-server-api";
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
import ActionButtonGroup from "@/components/__legacy__/action-button-group";
import type { ButtonAction } from "@/components/__legacy__/types";
import {
Card,
CardContent,
CardHeader,
CardTitle,
} from "@/components/__legacy__/ui/card";
import { IconCross } from "@/components/__legacy__/ui/icons";
import { Input } from "@/components/__legacy__/ui/input";
import LoadingBox from "@/components/__legacy__/ui/loading";
import { useToastOnFail } from "@/components/molecules/Toast/use-toast";
import { humanizeCronExpression } from "@/lib/cron-expression-utils";
import { formatScheduleTime } from "@/lib/timezone-utils";
import { useUserTimezone } from "@/lib/hooks/useUserTimezone";
import { PlayIcon } from "lucide-react";
import { AgentRunStatus } from "./agent-run-status-chip";
export function AgentScheduleDetailsView({
graph,
schedule,
agentActions,
onForcedRun,
doDeleteSchedule,
}: {
graph: Graph;
schedule: Schedule;
agentActions: ButtonAction[];
onForcedRun: (runID: GraphExecutionID) => void;
doDeleteSchedule: (scheduleID: ScheduleID) => void;
}): React.ReactNode {
const api = useBackendAPI();
const selectedRunStatus: AgentRunStatus = "scheduled";
const toastOnFail = useToastOnFail();
// Get user's timezone for displaying schedule times
const userTimezone = useUserTimezone();
const infoStats: { label: string; value: React.ReactNode }[] = useMemo(() => {
return [
{
label: "Status",
value:
selectedRunStatus.charAt(0).toUpperCase() +
selectedRunStatus.slice(1),
},
{
label: "Schedule",
value: humanizeCronExpression(schedule.cron),
},
{
label: "Next run",
value: formatScheduleTime(schedule.next_run_time, userTimezone),
},
];
}, [schedule, selectedRunStatus, userTimezone]);
const agentRunInputs: Record<
string,
{ title?: string; /* type: BlockIOSubType; */ value: any }
> = useMemo(() => {
// TODO: show (link to) preset - https://github.com/Significant-Gravitas/AutoGPT/issues/9168
// Add type info from agent input schema
return Object.fromEntries(
Object.entries(schedule.input_data).map(([k, v]) => [
k,
{
title: graph.input_schema.properties[k].title,
/* TODO: type: agent.input_schema.properties[k].type */
value: v,
},
]),
);
}, [graph, schedule]);
const runNow = useCallback(
() =>
api
.executeGraph(
graph.id,
graph.version,
schedule.input_data,
schedule.input_credentials,
"library",
)
.then((run) => onForcedRun(run.id))
.catch(toastOnFail("execute agent")),
[api, graph, schedule, onForcedRun, toastOnFail],
);
const runActions: ButtonAction[] = useMemo(
() => [
{
label: (
<>
<PlayIcon className="mr-2 size-4" />
Run now
</>
),
callback: runNow,
},
{
label: (
<>
<IconCross className="mr-2 size-4 px-0.5" />
Delete schedule
</>
),
callback: () => doDeleteSchedule(schedule.id),
variant: "destructive",
},
],
[runNow],
);
return (
<div className="agpt-div flex gap-6">
<div className="flex flex-1 flex-col gap-4">
<Card className="agpt-box">
<CardHeader>
<CardTitle className="font-poppins text-lg">Info</CardTitle>
</CardHeader>
<CardContent>
<div className="flex justify-stretch gap-4">
{infoStats.map(({ label, value }) => (
<div key={label} className="flex-1">
<p className="text-sm font-medium text-black">{label}</p>
<p className="text-sm text-neutral-600">{value}</p>
</div>
))}
</div>
</CardContent>
</Card>
<Card className="agpt-box">
<CardHeader>
<CardTitle className="font-poppins text-lg">Input</CardTitle>
</CardHeader>
<CardContent className="flex flex-col gap-4">
{agentRunInputs !== undefined ? (
Object.entries(agentRunInputs).map(([key, { title, value }]) => (
<div key={key} className="flex flex-col gap-1.5">
<label className="text-sm font-medium">{title || key}</label>
<Input value={value} className="rounded-full" disabled />
</div>
))
) : (
<LoadingBox spinnerSize={12} className="h-24" />
)}
</CardContent>
</Card>
</div>
{/* Run / Agent Actions */}
<aside className="w-48 xl:w-56">
<div className="flex flex-col gap-8">
<ActionButtonGroup title="Run actions" actions={runActions} />
<ActionButtonGroup title="Agent actions" actions={agentActions} />
</div>
</aside>
</div>
);
}

View File

@@ -1,100 +0,0 @@
"use client";
import React, { useState } from "react";
import { Button } from "@/components/__legacy__/ui/button";
import {
Dialog,
DialogContent,
DialogDescription,
DialogFooter,
DialogHeader,
DialogTitle,
} from "@/components/__legacy__/ui/dialog";
import { Input } from "@/components/__legacy__/ui/input";
import { Textarea } from "@/components/__legacy__/ui/textarea";
interface CreatePresetDialogProps {
open: boolean;
onOpenChange: (open: boolean) => void;
onConfirm: (name: string, description: string) => Promise<void> | void;
}
export function CreatePresetDialog({
open,
onOpenChange,
onConfirm,
}: CreatePresetDialogProps) {
const [name, setName] = useState("");
const [description, setDescription] = useState("");
const handleSubmit = async () => {
if (name.trim()) {
await onConfirm(name.trim(), description.trim());
setName("");
setDescription("");
onOpenChange(false);
}
};
const handleCancel = () => {
setName("");
setDescription("");
onOpenChange(false);
};
const handleKeyDown = (e: React.KeyboardEvent) => {
if (e.key === "Enter" && (e.metaKey || e.ctrlKey)) {
e.preventDefault();
handleSubmit();
}
};
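// Cmd/Ctrl+Enter submits from either field; a plain Enter in the textarea
// still inserts a newline as usual.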
return (
<Dialog open={open} onOpenChange={onOpenChange}>
<DialogContent className="sm:max-w-[425px]">
<DialogHeader>
<DialogTitle>Create Preset</DialogTitle>
<DialogDescription>
Give your preset a name and description to help identify it later.
</DialogDescription>
</DialogHeader>
<div className="grid gap-4 py-4">
<div className="grid gap-2">
<label htmlFor="preset-name" className="text-sm font-medium">
Name *
</label>
<Input
id="preset-name"
placeholder="Enter preset name"
value={name}
onChange={(e) => setName(e.target.value)}
onKeyDown={handleKeyDown}
autoFocus
/>
</div>
<div className="grid gap-2">
<label htmlFor="preset-description" className="text-sm font-medium">
Description
</label>
<Textarea
id="preset-description"
placeholder="Optional description"
value={description}
onChange={(e) => setDescription(e.target.value)}
onKeyDown={handleKeyDown}
rows={3}
/>
</div>
</div>
<DialogFooter>
<Button variant="outline" onClick={handleCancel}>
Cancel
</Button>
<Button onClick={handleSubmit} disabled={!name.trim()}>
Create Preset
</Button>
</DialogFooter>
</DialogContent>
</Dialog>
);
}

@@ -1,210 +0,0 @@
import {
GraphExecutionMeta as LegacyGraphExecutionMeta,
GraphID,
GraphExecutionID,
} from "@/lib/autogpt-server-api";
import { getQueryClient } from "@/lib/react-query/queryClient";
import {
getPaginatedTotalCount,
getPaginationNextPageNumber,
unpaginate,
} from "@/app/api/helpers";
import {
getV1ListGraphExecutionsResponse,
getV1ListGraphExecutionsResponse200,
useGetV1ListGraphExecutionsInfinite,
} from "@/app/api/__generated__/endpoints/graphs/graphs";
import { GraphExecutionsPaginated } from "@/app/api/__generated__/models/graphExecutionsPaginated";
import { GraphExecutionMeta as RawGraphExecutionMeta } from "@/app/api/__generated__/models/graphExecutionMeta";
export type GraphExecutionMeta = Omit<
RawGraphExecutionMeta,
"id" | "user_id" | "graph_id" | "preset_id" | "stats"
> &
Pick<
LegacyGraphExecutionMeta,
"id" | "user_id" | "graph_id" | "preset_id" | "stats"
>;
/** Hook to fetch runs for a specific graph, with support for infinite scroll.
*
* @param graphID - The ID of the graph to fetch agent runs for. This parameter is
* optional in the sense that the hook doesn't run unless it is passed.
* This way, it can be used in components where the graph ID is not
* immediately available.
*/
export const useAgentRunsInfinite = (graphID?: GraphID) => {
const queryClient = getQueryClient();
const {
data: queryResults,
refetch: refetchRuns,
isPending: agentRunsLoading,
isRefetching: agentRunsReloading,
hasNextPage: hasMoreRuns,
fetchNextPage: fetchMoreRuns,
isFetchingNextPage: isFetchingMoreRuns,
queryKey,
} = useGetV1ListGraphExecutionsInfinite(
graphID!,
{ page: 1, page_size: 20 },
{
query: {
getNextPageParam: getPaginationNextPageNumber,
// Prevent query from running if graphID is not available (yet)
...(!graphID
? {
enabled: false,
queryFn: () =>
// Fake empty response if graphID is not available (yet)
Promise.resolve({
status: 200,
data: {
executions: [],
pagination: {
current_page: 1,
page_size: 20,
total_items: 0,
total_pages: 0,
},
},
headers: new Headers(),
} satisfies getV1ListGraphExecutionsResponse),
}
: {}),
},
},
queryClient,
);
const agentRuns = queryResults ? unpaginate(queryResults, "executions") : [];
const agentRunCount = getPaginatedTotalCount(queryResults);
const upsertAgentRun = (newAgentRun: GraphExecutionMeta) => {
queryClient.setQueryData(
queryKey,
(currentQueryData: typeof queryResults) => {
if (!currentQueryData?.pages || agentRunCount === undefined)
return currentQueryData;
const exists = currentQueryData.pages.some((page) => {
if (page.status !== 200) return false;
const response = page.data;
return response.executions.some((run) => run.id === newAgentRun.id);
});
if (exists) {
// If the run already exists, we update it
return {
...currentQueryData,
pages: currentQueryData.pages.map((page) => {
if (page.status !== 200) return page;
const response = page.data;
const executions = response.executions;
const index = executions.findIndex(
(run) => run.id === newAgentRun.id,
);
if (index === -1) return page;
const newExecutions = [...executions];
newExecutions[index] = newAgentRun;
return {
...page,
data: {
...response,
executions: newExecutions,
},
} satisfies getV1ListGraphExecutionsResponse;
}),
};
}
// If the run does not exist, we add it to the first page
const page = currentQueryData
.pages[0] as getV1ListGraphExecutionsResponse200 & {
headers: Headers;
};
const updatedExecutions = [newAgentRun, ...page.data.executions];
const updatedPage = {
...page,
data: {
...page.data,
executions: updatedExecutions,
},
} satisfies getV1ListGraphExecutionsResponse;
const updatedPages = [updatedPage, ...currentQueryData.pages.slice(1)];
return {
...currentQueryData,
pages: updatedPages.map(
// Increment the total runs count in the pagination info of all pages
(page) =>
page.status === 200
? {
...page,
data: {
...page.data,
pagination: {
...page.data.pagination,
total_items: agentRunCount + 1,
},
},
}
: page,
),
};
},
);
};
const removeAgentRun = (runID: GraphExecutionID) => {
queryClient.setQueryData(
[queryKey, { page: 1, page_size: 20 }],
(currentQueryData: typeof queryResults) => {
if (!currentQueryData?.pages) return currentQueryData;
let found = false;
return {
...currentQueryData,
pages: currentQueryData.pages.map((page) => {
const response = page.data as GraphExecutionsPaginated;
const filteredExecutions = response.executions.filter(
(run) => run.id !== runID,
);
if (filteredExecutions.length < response.executions.length) {
found = true;
}
return {
...page,
data: {
...response,
executions: filteredExecutions,
pagination: {
...response.pagination,
total_items:
response.pagination.total_items - (found ? 1 : 0),
},
},
};
}),
};
},
);
};
return {
agentRuns: agentRuns as GraphExecutionMeta[],
refetchRuns,
agentRunCount,
agentRunsLoading: agentRunsLoading || agentRunsReloading,
hasMoreRuns,
fetchMoreRuns,
isFetchingMoreRuns,
upsertAgentRun,
removeAgentRun,
};
};
export type AgentRunsQuery = ReturnType<typeof useAgentRunsInfinite>;

@@ -1,7 +0,0 @@
"use client";
import { OldAgentLibraryView } from "../../agents/[id]/components/OldAgentLibraryView/OldAgentLibraryView";
export default function OldAgentLibraryPage() {
return <OldAgentLibraryView />;
}

@@ -2,7 +2,7 @@ import { useEffect, useState } from "react";
import { Input } from "@/components/__legacy__/ui/input";
import { Button } from "@/components/__legacy__/ui/button";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { CronScheduler } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/cron-scheduler";
import { CronScheduler } from "@/components/contextual/CronScheduler/cron-scheduler";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import { getTimezoneDisplayName } from "@/lib/timezone-utils";
import { useUserTimezone } from "@/lib/hooks/useUserTimezone";

@@ -1,6 +1,6 @@
"use client";
import { CronExpressionDialog } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/cron-scheduler-dialog";
import { CronExpressionDialog } from "@/components/contextual/CronScheduler/cron-scheduler-dialog";
import { Form, FormField } from "@/components/__legacy__/ui/form";
import { Button } from "@/components/atoms/Button/Button";
import { Input } from "@/components/atoms/Input/Input";

@@ -80,7 +80,7 @@ export default function WrapIfAdditionalTemplate(
uiSchema={uiSchema}
/>
{!isHandleConnected && (
<div className="flex flex-1 items-center gap-2">
<div className="nodrag flex flex-1 items-center gap-2">
<Input
label={""}
hideLabel={true}

@@ -7,7 +7,6 @@ import { useFlags } from "launchdarkly-react-client-sdk";
export enum Flag {
BETA_BLOCKS = "beta-blocks",
NEW_BLOCK_MENU = "new-block-menu",
-NEW_AGENT_RUNS = "new-agent-runs",
GRAPH_SEARCH = "graph-search",
ENABLE_ENHANCED_OUTPUT_HANDLING = "enable-enhanced-output-handling",
SHARE_EXECUTION_RESULTS = "share-execution-results",
@@ -22,7 +21,6 @@ const isPwMockEnabled = process.env.NEXT_PUBLIC_PW_TEST === "true";
const defaultFlags = {
[Flag.BETA_BLOCKS]: [],
[Flag.NEW_BLOCK_MENU]: false,
-[Flag.NEW_AGENT_RUNS]: false,
[Flag.GRAPH_SEARCH]: false,
[Flag.ENABLE_ENHANCED_OUTPUT_HANDLING]: false,
[Flag.SHARE_EXECUTION_RESULTS]: false,

@@ -0,0 +1,4 @@
declare module "*.png" {
const content: import("next/image").StaticImageData;
export default content;
}

@@ -56,12 +56,16 @@ Below is a comprehensive list of all available blocks, categorized by their prim
| [File Store](block-integrations/basic.md#file-store) | Downloads and stores a file from a URL, data URI, or local path |
| [Find In Dictionary](block-integrations/basic.md#find-in-dictionary) | A block that looks up a value in a dictionary, list, or object by key or index and returns the corresponding value |
| [Find In List](block-integrations/basic.md#find-in-list) | Finds the index of the value in the list |
+| [Flatten List](block-integrations/basic.md#flatten-list) | Flattens a nested list structure into a single flat list |
| [Get All Memories](block-integrations/basic.md#get-all-memories) | Retrieve all memories from Mem0 with optional conversation filtering |
| [Get Latest Memory](block-integrations/basic.md#get-latest-memory) | Retrieve the latest memory from Mem0 with optional key filtering |
| [Get List Item](block-integrations/basic.md#get-list-item) | Returns the element at the given index |
| [Get Store Agent Details](block-integrations/system/store_operations.md#get-store-agent-details) | Get detailed information about an agent from the store |
| [Get Weather Information](block-integrations/basic.md#get-weather-information) | Retrieves weather information for a specified location using OpenWeatherMap API |
| [Human In The Loop](block-integrations/basic.md#human-in-the-loop) | Pause execution for human review |
+| [Interleave Lists](block-integrations/basic.md#interleave-lists) | Interleaves elements from multiple lists in round-robin fashion, alternating between sources |
+| [List Difference](block-integrations/basic.md#list-difference) | Computes the difference between two lists |
+| [List Intersection](block-integrations/basic.md#list-intersection) | Computes the intersection of two lists, returning only elements present in both |
| [List Is Empty](block-integrations/basic.md#list-is-empty) | Checks if a list is empty |
| [List Library Agents](block-integrations/system/library_operations.md#list-library-agents) | List all agents in your personal library |
| [Note](block-integrations/basic.md#note) | A visual annotation block that displays a sticky note in the workflow editor for documentation and organization purposes |
@@ -84,6 +88,7 @@ Below is a comprehensive list of all available blocks, categorized by their prim
| [Store Value](block-integrations/basic.md#store-value) | A basic block that stores and forwards a value throughout workflows, allowing it to be reused without changes across multiple blocks |
| [Universal Type Converter](block-integrations/basic.md#universal-type-converter) | This block is used to convert a value to a universal type |
| [XML Parser](block-integrations/basic.md#xml-parser) | Parses XML using gravitasml to tokenize and converts it to a dict |
+| [Zip Lists](block-integrations/basic.md#zip-lists) | Zips multiple lists together into a list of grouped elements |
## Data Processing

@@ -637,7 +637,7 @@ This enables extensibility by allowing custom blocks to be added without modifyi
## Concatenate Lists
### What it is
-Concatenates multiple lists into a single list. All elements from all input lists are combined in order.
+Concatenates multiple lists into a single list. All elements from all input lists are combined in order. Supports optional deduplication and None removal.
### How it works
<!-- MANUAL: how_it_works -->
@@ -651,6 +651,8 @@ The block includes validation to ensure each item is actually a list. If a non-l
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| lists | A list of lists to concatenate together. All lists will be combined in order into a single list. | List[List[Any]] | Yes |
+| deduplicate | If True, remove duplicate elements from the concatenated result while preserving order. | bool | No |
+| remove_none | If True, remove None values from the concatenated result. | bool | No |
### Outputs
@@ -658,6 +660,7 @@ The block includes validation to ensure each item is actually a list. If a non-l
|--------|-------------|------|
| error | Error message if concatenation failed due to invalid input types. | str |
| concatenated_list | The concatenated list containing all elements from all input lists in order. | List[Any] |
+| length | The total number of elements in the concatenated list. | int |
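To pin down the semantics of the new options, here is a minimal Python sketch of the described behavior. It is illustrative only, not the block's actual implementation; the name `concatenate_lists` is hypothetical, and deduplication assumes hashable elements.

```python
def concatenate_lists(lists, deduplicate=False, remove_none=False):
    """Combine all input lists in order; optionally drop None values
    and duplicate elements (first occurrence wins)."""
    combined = [item for sub in lists for item in sub]
    if remove_none:
        combined = [item for item in combined if item is not None]
    if deduplicate:
        seen = set()  # assumes hashable elements
        combined = [x for x in combined if not (x in seen or seen.add(x))]
    return combined, len(combined)  # concatenated_list, length

assert concatenate_lists([[1, 2], [2, None]], True, True) == ([1, 2], 2)
```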
### Possible use case
<!-- MANUAL: use_case -->
@@ -820,6 +823,45 @@ This enables conditional logic based on list membership and helps locate items f
---
## Flatten List
### What it is
Flattens a nested list structure into a single flat list. Supports configurable maximum flattening depth.
### How it works
<!-- MANUAL: how_it_works -->
This block recursively traverses a nested list and extracts all leaf elements into a single flat list. You can control how deep the flattening goes with the max_depth parameter: set it to -1 to flatten completely, or to a positive integer to flatten only that many levels.
The block also reports the original nesting depth of the input, which is useful for understanding the structure of data coming from sources with varying levels of nesting.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| nested_list | A potentially nested list to flatten into a single-level list. | List[Any] | Yes |
| max_depth | Maximum depth to flatten. -1 means flatten completely. 1 means flatten only one level. | int | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if flattening failed. | str |
| flattened_list | The flattened list with all nested elements extracted. | List[Any] |
| length | The number of elements in the flattened list. | int |
| original_depth | The maximum nesting depth of the original input list. | int |
### Possible use case
<!-- MANUAL: use_case -->
**Normalizing API Responses**: Flatten nested JSON arrays from different API endpoints into a uniform single-level list for consistent processing.
**Aggregating Nested Results**: Combine results from recursive file searches or nested category trees into a flat list of items for display or export.
**Data Pipeline Cleanup**: Simplify deeply nested data structures from multiple transformation steps into a clean flat list before final output.
<!-- END MANUAL -->
---
## Get All Memories
### What it is
@@ -1012,6 +1054,120 @@ This enables human oversight at critical points in automated workflows, ensuring
---
## Interleave Lists
### What it is
Interleaves elements from multiple lists in round-robin fashion, alternating between sources.
### How it works
<!-- MANUAL: how_it_works -->
This block takes elements from each input list in round-robin order, picking one element from each list in turn. For example, given `[[1, 2, 3], ['a', 'b', 'c']]`, it produces `[1, 'a', 2, 'b', 3, 'c']`.
When lists have different lengths, shorter lists stop contributing once exhausted, and remaining elements from longer lists continue to be added in order.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| lists | A list of lists to interleave. Elements will be taken in round-robin order. | List[List[Any]] | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if interleaving failed. | str |
| interleaved_list | The interleaved list with elements alternating from each input list. | List[Any] |
| length | The total number of elements in the interleaved list. | int |
### Possible use case
<!-- MANUAL: use_case -->
**Balanced Content Mixing**: Alternate between content from different sources (e.g., mixing promotional and organic posts) for a balanced feed.
**Round-Robin Scheduling**: Distribute tasks evenly across workers or queues by interleaving items from separate task lists.
**Multi-Language Output**: Weave together translated text segments with their original counterparts for side-by-side comparison.
<!-- END MANUAL -->
---
## List Difference
### What it is
Computes the difference between two lists. Returns elements in the first list not found in the second, or symmetric difference.
### How it works
<!-- MANUAL: how_it_works -->
This block compares two lists and returns elements from list_a that do not appear in list_b. It uses hash-based lookup for efficient comparison. When symmetric mode is enabled, it returns elements that are in either list but not in both.
The order of elements from list_a is preserved in the output, and elements from list_b are appended when using symmetric difference.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| list_a | The primary list to check elements from. | List[Any] | Yes |
| list_b | The list to subtract. Elements found here will be removed from list_a. | List[Any] | Yes |
| symmetric | If True, compute symmetric difference (elements in either list but not both). | bool | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed. | str |
| difference | Elements from list_a not found in list_b (or symmetric difference if enabled). | List[Any] |
| length | The number of elements in the difference result. | int |
### Possible use case
<!-- MANUAL: use_case -->
**Change Detection**: Compare a current list of records against a previous snapshot to find newly added or removed items.
**Exclusion Filtering**: Remove items from a list that appear in a blocklist or already-processed list to avoid duplicates.
**Data Sync**: Identify which items exist in one system but not another to determine what needs to be synced.
<!-- END MANUAL -->
---
## List Intersection
### What it is
Computes the intersection of two lists, returning only elements present in both.
### How it works
<!-- MANUAL: how_it_works -->
This block finds elements that appear in both input lists by hashing elements from list_b for efficient lookup, then checking each element of list_a against that set. The output preserves the order from list_a and removes duplicates.
This is useful for finding common items between two datasets without needing to manually iterate or compare.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| list_a | The first list to intersect. | List[Any] | Yes |
| list_b | The second list to intersect. | List[Any] | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the operation failed. | str |
| intersection | Elements present in both list_a and list_b. | List[Any] |
| length | The number of elements in the intersection. | int |
### Possible use case
<!-- MANUAL: use_case -->
**Finding Common Tags**: Identify shared tags or categories between two items for recommendation or grouping purposes.
**Mutual Connections**: Find users or contacts that appear in both of two different lists, such as shared friends or overlapping team members.
**Feature Comparison**: Determine which features or capabilities are supported by both of two systems or products.
<!-- END MANUAL -->
---
## List Is Empty
### What it is
@@ -1452,3 +1608,42 @@ This makes XML data accessible using standard dictionary operations, allowing yo
<!-- END MANUAL -->
---
## Zip Lists
### What it is
Zips multiple lists together into a list of grouped elements. Supports padding to longest or truncating to shortest.
### How it works
<!-- MANUAL: how_it_works -->
This block pairs up corresponding elements from multiple input lists into sub-lists. For example, zipping `[[1, 2, 3], ['a', 'b', 'c']]` produces `[[1, 'a'], [2, 'b'], [3, 'c']]`.
By default, the result is truncated to the length of the shortest input list. Enable pad_to_longest to instead pad shorter lists with a fill_value so no elements from longer lists are lost.
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| lists | A list of lists to zip together. Corresponding elements will be grouped. | List[List[Any]] | Yes |
| pad_to_longest | If True, pad shorter lists with fill_value to match the longest list. If False, truncate to shortest. | bool | No |
| fill_value | Value to use for padding when pad_to_longest is True. | Fill Value | No |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if zipping failed. | str |
| zipped_list | The zipped list of grouped elements. | List[List[Any]] |
| length | The number of groups in the zipped result. | int |
### Possible use case
<!-- MANUAL: use_case -->
**Creating Key-Value Pairs**: Combine a list of field names with a list of values to build structured records or dictionaries.
**Parallel Data Alignment**: Pair up corresponding items from separate data sources (e.g., names and email addresses) for processing together.
**Table Row Construction**: Group column data into rows by zipping each column's values together for CSV export or display.
<!-- END MANUAL -->
---

@@ -65,7 +65,7 @@ The result routes data to yes_output or no_output, enabling intelligent branchin
| condition | A plaintext English description of the condition to evaluate | str | Yes |
| yes_value | (Optional) Value to output if the condition is true. If not provided, input_value will be used. | Yes Value | No |
| no_value | (Optional) Value to output if the condition is false. If not provided, input_value will be used. | No Value | No |
| model | The language model to use for evaluating the condition. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for evaluating the condition. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
### Outputs
@@ -103,7 +103,7 @@ The block sends the entire conversation history to the chosen LLM, including sys
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | No |
| messages | List of messages in the conversation. | List[Any] | Yes |
| model | The language model to use for the conversation. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for the conversation. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
| ollama_host | Ollama host for local models | str | No |
@@ -257,7 +257,7 @@ The block formulates a prompt based on the given focus or source data, sends it
|-------|-------------|------|----------|
| focus | The focus of the list to generate. | str | No |
| source_data | The data to generate the list from. | str | No |
| model | The language model to use for generating the list. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for generating the list. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| max_retries | Maximum number of retries for generating a valid list. | int | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
@@ -424,7 +424,7 @@ The block sends the input prompt to a chosen LLM, along with any system prompts
| prompt | The prompt to send to the language model. | str | Yes |
| expected_format | Expected format of the response. If provided, the response will be validated against this format. The keys should be the expected fields in the response, and the values should be the description of the field. | Dict[str, str] | Yes |
| list_result | Whether the response should be a list of objects in the expected format. | bool | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, Any]] | No |
@@ -464,7 +464,7 @@ The block sends the input prompt to a chosen LLM, processes the response, and re
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. You can use any of the {keys} from Prompt Values to fill in the prompt with values from the prompt values dictionary by putting them in curly braces. | str | Yes |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| retry | Number of times to retry the LLM call if the response does not match the expected format. | int | No |
| prompt_values | Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}. | Dict[str, str] | No |
@@ -501,7 +501,7 @@ The block splits the input text into smaller chunks, sends each chunk to an LLM
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| text | The text to summarize. | str | Yes |
| model | The language model to use for summarizing the text. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for summarizing the text. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| focus | The topic to focus on in the summary | str | No |
| style | The style of the summary to generate. | "concise" \| "detailed" \| "bullet points" \| "numbered list" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
@@ -763,7 +763,7 @@ Configure agent_mode_max_iterations to control loop behavior: 0 for single decis
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | Yes |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| multiple_tool_calls | Whether to allow multiple tool calls in a single response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, Any]] | No |