Compare commits

...

4 Commits

Author SHA1 Message Date
Bentlybro
4b3611ca43 add migration and docs update for deprecated models
- Add migration to update existing graphs using gpt-4-turbo -> gpt-4o
- Add migration to update existing graphs using gpt-3.5-turbo -> gpt-4o-mini
- Update llm.md docs to remove deprecated models from model lists
2026-02-16 12:03:59 +00:00
Bentlybro
cd6271b787 chore(backend): remove deprecated OpenAI GPT-4-turbo and GPT-3.5-turbo models
Remove deprecated OpenAI models that are being sunset in 2026:
- GPT-4-turbo (shutdown: March 26, 2026)
- GPT-3.5-turbo (shutdown: September 28, 2026)

Users should migrate to the newer GPT-4.1, GPT-4o, or GPT-5 model families,
which are already available in the platform.

Reference: https://platform.openai.com/docs/deprecations
2026-02-16 11:58:11 +00:00
Ubbe
223df9d3da feat(frontend): improve create/edit copilot UX (#12117)
## Changes 🏗️

Make the UX nicer when running long tasks in Copilot, like creating an
agent, editing it or running a task.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and play the game!


<details><summary><h3>Greptile Summary</h3></summary>

This PR replaces the static progress bar and idle wait screens with an
interactive mini-game across the Create, Edit, and Run Agent copilot
tools. The existing mini-game (a simple runner with projectile-dodge
boss encounters) is significantly overhauled into a two-mode game: a
runner mode with animated tree obstacles and a duel mode featuring a
melee boss fight with attack, guard, and movement mechanics.
Sprite-based rendering replaces the previous shape-drawing approach.

- **Create/Edit/Run Agent UX**: All three tool views now show the
mini-game with contextual overlays during long-running operations,
replacing the progress bar in EditAgent and adding the game to RunAgent
- **Game mechanics overhaul**: Boss encounters changed from
projectile-dodging to melee duel with attack (Z), block (X), movement
(arrows), and jump (Space) controls
- **Sprite rendering**: Added 9 sprite sheet assets for characters,
trees, and boss animations with fallback to shape rendering if images
fail to load
- **UI overlays**: Added React-managed overlay states for idle,
boss-intro, boss-defeated, and game-over screens with continue/retry
buttons
- **Minor issues found**: Unused `isRunActive` variable in
`MiniGame.tsx`, unreachable "leaving" boss phase in `useMiniGame.ts`,
and a missing `expanded` property in `getAccordionMeta` return type
annotation in `EditAgent.tsx`
- **Unused asset**: `archer-shoot.png` is included in the PR but never
imported or referenced in any code
</details>
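
The sprite fallback described above follows a common canvas pattern: draw a frame from the sprite sheet once the image has loaded, and fall back to primitive shapes otherwise. A minimal illustrative sketch of that pattern, not the PR's actual implementation (`loadSprite` and `drawActor` are hypothetical names):

```typescript
// Illustrative sprite-or-shape fallback; hypothetical helpers, not the PR's code.
type Sprite = HTMLImageElement & { failed?: boolean };

function loadSprite(src: string): Sprite {
  const img = new Image() as Sprite;
  img.onerror = () => {
    img.failed = true; // remember the failure so rendering can fall back
  };
  img.src = src;
  return img;
}

function drawActor(
  ctx: CanvasRenderingContext2D,
  sprite: Sprite,
  frame: number,
  frameW: number,
  frameH: number,
  x: number,
  y: number,
) {
  if (sprite.complete && !sprite.failed) {
    // Slice the current frame out of a horizontal sprite sheet.
    ctx.drawImage(sprite, frame * frameW, 0, frameW, frameH, x, y, frameW, frameH);
  } else {
    // Fallback: the previous shape-drawing approach.
    ctx.fillRect(x, y, frameW, frameH);
  }
}
```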


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- This PR is safe to merge — it only affects the copilot mini-game UX
with no backend or data model changes.
- The changes are entirely frontend/cosmetic, scoped to the copilot
tools' waiting UX. The mini-game logic is self-contained in a
canvas-based hook and doesn't affect any application state, API calls,
or routing. The issues found are minor (unused variable, dead code, type
annotation gap, unused asset) and don't impact runtime behavior.
- `useMiniGame.ts` has the most complex logic changes (boss AI, death
animations, sprite rendering) and contains unreachable dead code in the
"leaving" phase handler. `EditAgent.tsx` has a return type annotation
that doesn't include `expanded`.
</details>


<details><summary><h3>Flowchart</h3></summary>

```mermaid
flowchart TD
    A[Game Idle] -->|"Start button"| B[Run Mode]
    B -->|"Jump over trees"| C{Score >= Threshold?}
    C -->|No| B
    C -->|"Yes, obstacles clear"| D[Boss Intro Overlay]
    D -->|"Continue button"| E[Duel Mode]
    E -->|"Attack Z / Guard X / Move ←→"| F{Boss HP <= 0?}
    F -->|No| G{Player hit & not guarding?}
    G -->|No| E
    G -->|Yes| H[Player Death Animation]
    H --> I[Game Over Overlay]
    I -->|"Retry button"| B
    F -->|Yes| J[Boss Death Animation]
    J --> K[Boss Defeated Overlay]
    K -->|"Continue button"| L[Reset Boss & Resume Run]
    L --> B
```
</details>


<sub>Last reviewed commit: ad80e24</sub>

2026-02-16 10:53:08 +00:00
Ubbe
187ab04745 refactor(frontend): remove OldAgentLibraryView and NEW_AGENT_RUNS flag (#12088)
## Summary
- Removes the deprecated `OldAgentLibraryView` directory (13 files,
~2200 lines deleted)
- Removes the `NEW_AGENT_RUNS` feature flag from the `Flag` enum and
defaults
- Removes the legacy agent library page at `library/legacy/[id]`
- Moves shared `CronScheduler` components to
`src/components/contextual/CronScheduler/`
- Moves `agent-run-draft-view` and `agent-status-chip` to
`legacy-builder/` (co-located with their only consumer)
- Updates all import paths in consuming files (`AgentInfoStep`,
`SaveControl`, `RunnerInputUI`, `useRunGraph`)

## Test plan
- [x] `pnpm format` passes
- [x] `pnpm types` passes (no TypeScript errors)
- [x] No remaining references to `OldAgentLibraryView`,
`NEW_AGENT_RUNS`, or `new-agent-runs` in the codebase
- [x] Verify `RunnerInputUI` dialog still works in the legacy builder
- [x] Verify `AgentInfoStep` cron scheduling works in the publish modal
- [x] Verify `SaveControl` cron scheduling works in the legacy builder

🤖 Generated with [Claude Code](https://claude.com/claude-code)


<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

This PR removes deprecated code from the legacy agent library view
system and consolidates the codebase to use the new agent runs
implementation exclusively. The refactor successfully removes ~2200
lines of code across 13 deleted files while properly relocating shared
components.

**Key changes:**
- Removed the entire `OldAgentLibraryView` directory and its 13
component files
- Removed the `NEW_AGENT_RUNS` feature flag from the `Flag` enum and
defaults
- Deleted the legacy agent library page route at `library/legacy/[id]`
- Moved `CronScheduler` components to
`src/components/contextual/CronScheduler/` for shared use across the
application
- Moved `agent-run-draft-view` and `agent-status-chip` to
`legacy-builder/` directory, co-locating them with their only consumer
- Updated `useRunGraph.ts` to import `GraphExecutionMeta` from the
generated API models instead of the deleted custom type definition
- Updated all import paths in consuming components (`AgentInfoStep`,
`SaveControl`, `RunnerInputUI`)

**Technical notes:**
- The new import path for `GraphExecutionMeta`
(`@/app/api/__generated__/models/graphExecutionMeta`) will be generated
when running `pnpm generate:api` from the OpenAPI spec
- All references to the old code have been cleanly removed from the
codebase
- The refactor maintains proper separation of concerns by moving shared
components to contextual locations
</details>


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- This PR is safe to merge with minimal risk, pending manual
verification of the UI components mentioned in the test plan
- The refactor is well-structured and all code changes are correct. The
score of 4 (rather than 5) reflects that the PR author has marked three
manual testing items as incomplete in the test plan: verifying
`RunnerInputUI` dialog, `AgentInfoStep` cron scheduling, and
`SaveControl` cron scheduling. While the code changes are sound, these
UI components should be manually tested before merging to ensure the
moved components work correctly in their new locations.
- No files require special attention. The author should complete the
manual testing checklist items for `RunnerInputUI`, `AgentInfoStep`, and
`SaveControl` as noted in the test plan.
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant FE as Frontend Build
    participant API as Backend API
    participant Gen as Generated Types

    Note over Dev,Gen: Refactor: Remove OldAgentLibraryView & NEW_AGENT_RUNS flag

    Dev->>FE: Delete OldAgentLibraryView (13 files, ~2200 lines)
    Dev->>FE: Remove NEW_AGENT_RUNS from Flag enum
    Dev->>FE: Delete library/legacy/[id]/page.tsx
    
    Dev->>FE: Move CronScheduler → src/components/contextual/
    Dev->>FE: Move agent-run-draft-view → legacy-builder/
    Dev->>FE: Move agent-status-chip → legacy-builder/
    
    Dev->>FE: Update RunnerInputUI import path
    Dev->>FE: Update SaveControl import path
    Dev->>FE: Update AgentInfoStep import path
    
    Dev->>FE: Update useRunGraph.ts
    FE->>Gen: Import GraphExecutionMeta from generated models
    Note over Gen: Type available after pnpm generate:api
    
    Gen-->>API: Uses OpenAPI spec schema
    API-->>FE: Type-safe GraphExecutionMeta model
```
</details>



Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 18:29:59 +08:00
42 changed files with 853 additions and 2494 deletions

View File

@@ -106,8 +106,6 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
GPT41_MINI = "gpt-4.1-mini-2025-04-14"
GPT4O_MINI = "gpt-4o-mini"
GPT4O = "gpt-4o"
GPT4_TURBO = "gpt-4-turbo"
GPT3_5_TURBO = "gpt-3.5-turbo"
# Anthropic models
CLAUDE_4_1_OPUS = "claude-opus-4-1-20250805"
CLAUDE_4_OPUS = "claude-opus-4-20250514"
@@ -255,12 +253,6 @@ MODEL_METADATA = {
LlmModel.GPT4O: ModelMetadata(
"openai", 128000, 16384, "GPT-4o", "OpenAI", "OpenAI", 2
), # gpt-4o-2024-08-06
LlmModel.GPT4_TURBO: ModelMetadata(
"openai", 128000, 4096, "GPT-4 Turbo", "OpenAI", "OpenAI", 3
), # gpt-4-turbo-2024-04-09
LlmModel.GPT3_5_TURBO: ModelMetadata(
"openai", 16385, 4096, "GPT-3.5 Turbo", "OpenAI", "OpenAI", 1
), # gpt-3.5-turbo-0125
# https://docs.anthropic.com/en/docs/about-claude/models
LlmModel.CLAUDE_4_1_OPUS: ModelMetadata(
"anthropic", 200000, 32000, "Claude Opus 4.1", "Anthropic", "Anthropic", 3

View File

@@ -75,8 +75,6 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.GPT41_MINI: 1,
LlmModel.GPT4O_MINI: 1,
LlmModel.GPT4O: 3,
LlmModel.GPT4_TURBO: 10,
LlmModel.GPT3_5_TURBO: 1,
LlmModel.CLAUDE_4_1_OPUS: 21,
LlmModel.CLAUDE_4_OPUS: 21,
LlmModel.CLAUDE_4_SONNET: 5,

View File

@@ -79,7 +79,7 @@ async def test_block_credit_usage(server: SpinTestServer):
node_exec_id="test_node_exec",
block_id=AITextGeneratorBlock().id,
inputs={
"model": "gpt-4-turbo",
"model": "gpt-4o",
"credentials": {
"id": openai_credentials.id,
"provider": openai_credentials.provider,
@@ -100,7 +100,7 @@ async def test_block_credit_usage(server: SpinTestServer):
graph_exec_id="test_graph_exec",
node_exec_id="test_node_exec",
block_id=AITextGeneratorBlock().id,
inputs={"model": "gpt-4-turbo", "api_key": "owned_api_key"},
inputs={"model": "gpt-4o", "api_key": "owned_api_key"},
execution_context=ExecutionContext(user_timezone="UTC"),
),
)

View File

@@ -0,0 +1,42 @@
-- Migrate deprecated OpenAI GPT-4-turbo and GPT-3.5-turbo models
-- This updates all AgentNode blocks that use deprecated models
-- OpenAI is retiring these models:
-- - gpt-4-turbo: March 26, 2026 -> migrate to gpt-4o
-- - gpt-3.5-turbo: September 28, 2026 -> migrate to gpt-4o-mini
-- Update gpt-4-turbo to gpt-4o (staying in same capability tier)
UPDATE "AgentNode"
SET "constantInput" = JSONB_SET(
"constantInput"::jsonb,
'{model}',
'"gpt-4o"'::jsonb
)
WHERE "constantInput"::jsonb->>'model' = 'gpt-4-turbo';
-- Update gpt-3.5-turbo to gpt-4o-mini (appropriate replacement for lightweight model)
UPDATE "AgentNode"
SET "constantInput" = JSONB_SET(
"constantInput"::jsonb,
'{model}',
'"gpt-4o-mini"'::jsonb
)
WHERE "constantInput"::jsonb->>'model' = 'gpt-3.5-turbo';
-- Update AgentPreset input overrides (stored in AgentNodeExecutionInputOutput)
UPDATE "AgentNodeExecutionInputOutput"
SET "data" = JSONB_SET(
"data"::jsonb,
'{model}',
'"gpt-4o"'::jsonb
)
WHERE "agentPresetId" IS NOT NULL
AND "data"::jsonb->>'model' = 'gpt-4-turbo';
UPDATE "AgentNodeExecutionInputOutput"
SET "data" = JSONB_SET(
"data"::jsonb,
'{model}',
'"gpt-4o-mini"'::jsonb
)
WHERE "agentPresetId" IS NOT NULL
AND "data"::jsonb->>'model' = 'gpt-3.5-turbo';

View File

@@ -4,7 +4,7 @@ import {
} from "@/app/api/__generated__/endpoints/graphs/graphs";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { parseAsInteger, parseAsString, useQueryStates } from "nuqs";
import { GraphExecutionMeta } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/use-agent-runs";
import { GraphExecutionMeta } from "@/app/api/__generated__/models/graphExecutionMeta";
import { useGraphStore } from "@/app/(platform)/build/stores/graphStore";
import { useShallow } from "zustand/react/shallow";
import { useEffect, useState } from "react";

View File

@@ -1,6 +1,6 @@
import { useCallback } from "react";
import { AgentRunDraftView } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view";
import { AgentRunDraftView } from "@/app/(platform)/build/components/legacy-builder/agent-run-draft-view";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import type {
CredentialsMetaInput,

View File

@@ -18,7 +18,7 @@ import {
import { useToast } from "@/components/molecules/Toast/use-toast";
import { useQueryClient } from "@tanstack/react-query";
import { getGetV2ListMySubmissionsQueryKey } from "@/app/api/__generated__/endpoints/store/store";
import { CronExpressionDialog } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/cron-scheduler-dialog";
import { CronExpressionDialog } from "@/components/contextual/CronScheduler/cron-scheduler-dialog";
import { humanizeCronExpression } from "@/lib/cron-expression-utils";
import { CalendarClockIcon } from "lucide-react";

View File

@@ -20,7 +20,7 @@ import {
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
import { RunAgentInputs } from "@/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentInputs/RunAgentInputs";
import { ScheduleTaskDialog } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/cron-scheduler-dialog";
import { ScheduleTaskDialog } from "@/components/contextual/CronScheduler/cron-scheduler-dialog";
import ActionButtonGroup from "@/components/__legacy__/action-button-group";
import type { ButtonAction } from "@/components/__legacy__/types";
import {
@@ -53,7 +53,10 @@ import { ClockIcon, CopyIcon, InfoIcon } from "@phosphor-icons/react";
import { CalendarClockIcon, Trash2Icon } from "lucide-react";
import { analytics } from "@/services/analytics";
import { AgentStatus, AgentStatusChip } from "./agent-status-chip";
import {
AgentStatus,
AgentStatusChip,
} from "@/app/(platform)/build/components/legacy-builder/agent-status-chip";
export function AgentRunDraftView({
graph,

View File

@@ -4,11 +4,11 @@ import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import {
BookOpenIcon,
CheckFatIcon,
PencilSimpleIcon,
WarningDiamondIcon,
} from "@phosphor-icons/react";
import type { ToolUIPart } from "ai";
import Image from "next/image";
import NextLink from "next/link";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
@@ -24,6 +24,7 @@ import {
ClarificationQuestionsCard,
ClarifyingQuestion,
} from "./components/ClarificationQuestionsCard";
import sparklesImg from "./components/MiniGame/assets/sparkles.png";
import { MiniGame } from "./components/MiniGame/MiniGame";
import {
AccordionIcon,
@@ -83,7 +84,8 @@ function getAccordionMeta(output: CreateAgentToolOutput) {
) {
return {
icon,
title: "Creating agent, this may take a few minutes. Sit back and relax.",
title:
"Creating agent, this may take a few minutes. Play while you wait.",
expanded: true,
};
}
@@ -167,16 +169,20 @@ export function CreateAgentTool({ part }: Props) {
{isAgentSavedOutput(output) && (
<div className="rounded-xl border border-border/60 bg-card p-4 shadow-sm">
<div className="flex items-baseline gap-2">
<CheckFatIcon
size={18}
weight="regular"
className="relative top-1 text-green-500"
<Image
src={sparklesImg}
alt="sparkles"
width={24}
height={24}
className="relative top-1"
/>
<Text
variant="body-medium"
className="text-blacks mb-2 text-[16px]"
className="mb-2 text-[16px] text-black"
>
{output.message}
Agent{" "}
<span className="text-violet-600">{output.agent_name}</span>{" "}
has been saved to your library!
</Text>
</div>
<div className="mt-3 flex flex-wrap gap-4">

View File

@@ -2,20 +2,65 @@
import { useMiniGame } from "./useMiniGame";
function Key({ children }: { children: React.ReactNode }) {
return <strong>[{children}]</strong>;
}
export function MiniGame() {
const { canvasRef } = useMiniGame();
const { canvasRef, activeMode, showOverlay, score, highScore, onContinue } =
useMiniGame();
const isRunActive =
activeMode === "run" || activeMode === "idle" || activeMode === "over";
let overlayText: string | undefined;
let buttonLabel = "Continue";
if (activeMode === "idle") {
buttonLabel = "Start";
} else if (activeMode === "boss-intro") {
overlayText = "Face the bandit!";
} else if (activeMode === "boss-defeated") {
overlayText = "Great job, keep on going";
} else if (activeMode === "over") {
overlayText = `Score: ${score} / Record: ${highScore}`;
buttonLabel = "Retry";
}
return (
<div
className="w-full overflow-hidden rounded-md bg-background text-foreground"
style={{ border: "1px solid #d17fff" }}
>
<canvas
ref={canvasRef}
tabIndex={0}
className="block w-full outline-none"
style={{ imageRendering: "pixelated" }}
/>
<div className="flex flex-col gap-2">
<p className="text-sm font-medium text-purple-500">
{isRunActive ? (
<>
Run mode: <Key>Space</Key> to jump
</>
) : (
<>
Duel mode: <Key>←→</Key> to move · <Key>Z</Key> to attack ·{" "}
<Key>X</Key> to block · <Key>Space</Key> to jump
</>
)}
</p>
<div className="relative w-full overflow-hidden rounded-md border border-accent bg-background text-foreground">
<canvas
ref={canvasRef}
tabIndex={0}
className="block w-full outline-none"
/>
{showOverlay && (
<div className="absolute inset-0 flex flex-col items-center justify-center gap-3 bg-black/40">
{overlayText && (
<p className="text-lg font-bold text-white">{overlayText}</p>
)}
<button
type="button"
onClick={onContinue}
className="rounded-md bg-white px-4 py-2 text-sm font-semibold text-zinc-800 shadow-md transition-colors hover:bg-zinc-100"
>
{buttonLabel}
</button>
</div>
)}
</div>
</div>
);
}
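
The hook contract implied by this component can be sketched from its usage. An inferred TypeScript signature, assuming `useMiniGame.ts` exposes exactly what `MiniGame.tsx` destructures; the `"duel"` member is implied by the non-run branch of the key hints:

```typescript
// Inferred from MiniGame.tsx usage above; not copied from useMiniGame.ts.
import type { RefObject } from "react";

type MiniGameMode =
  | "idle"
  | "run"
  | "duel" // implied: any mode outside run/idle/over shows duel-mode key hints
  | "boss-intro"
  | "boss-defeated"
  | "over";

interface UseMiniGameResult {
  canvasRef: RefObject<HTMLCanvasElement>; // canvas the game loop renders into
  activeMode: MiniGameMode; // drives overlay text and key-hint copy
  showOverlay: boolean; // true on idle, boss-intro, boss-defeated, and over screens
  score: number;
  highScore: number;
  onContinue: () => void; // Start / Continue / Retry button handler
}
```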

Nine binary sprite-sheet assets added (previews not shown): 4.9 KiB, 8.0 KiB, 7.3 KiB, 9.6 KiB, 9.5 KiB, 8.0 KiB, 16 KiB, 14 KiB, and 10 KiB.

View File

@@ -136,7 +136,7 @@ export function getAnimationText(part: {
if (isOperationPendingOutput(output)) return "Agent creation in progress";
if (isOperationInProgressOutput(output))
return "Agent creation already in progress";
if (isAgentSavedOutput(output)) return `Saved "${output.agent_name}"`;
if (isAgentSavedOutput(output)) return `Saved ${output.agent_name}`;
if (isAgentPreviewOutput(output)) return `Preview "${output.agent_name}"`;
if (isClarificationNeededOutput(output)) return "Needs clarification";
return "Error creating agent";

View File

@@ -5,7 +5,6 @@ import type { ToolUIPart } from "ai";
import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader";
import { ProgressBar } from "../../components/ProgressBar/ProgressBar";
import {
ContentCardDescription,
ContentCodeBlock,
@@ -15,7 +14,7 @@ import {
ContentMessage,
} from "../../components/ToolAccordion/AccordionContent";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { useAsymptoticProgress } from "../../hooks/useAsymptoticProgress";
import { MiniGame } from "../CreateAgent/components/MiniGame/MiniGame";
import {
ClarificationQuestionsCard,
ClarifyingQuestion,
@@ -54,6 +53,7 @@ function getAccordionMeta(output: EditAgentToolOutput): {
title: string;
titleClassName?: string;
description?: string;
expanded?: boolean;
} {
const icon = <AccordionIcon />;
@@ -80,7 +80,11 @@ function getAccordionMeta(output: EditAgentToolOutput): {
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output)
) {
return { icon: <OrbitLoader size={32} />, title: "Editing agent" };
return {
icon: <OrbitLoader size={32} />,
title: "Editing agent, this may take a few minutes. Play while you wait.",
expanded: true,
};
}
return {
icon: (
@@ -105,7 +109,6 @@ export function EditAgentTool({ part }: Props) {
(isOperationStartedOutput(output) ||
isOperationPendingOutput(output) ||
isOperationInProgressOutput(output));
const progress = useAsymptoticProgress(isOperating);
const hasExpandableContent =
part.state === "output-available" &&
!!output &&
@@ -149,9 +152,9 @@ export function EditAgentTool({ part }: Props) {
<ToolAccordion {...getAccordionMeta(output)}>
{isOperating && (
<ContentGrid>
<ProgressBar value={progress} className="max-w-[280px]" />
<MiniGame />
<ContentHint>
This could take a few minutes, grab a coffee
This could take a few minutes play while you wait!
</ContentHint>
</ContentGrid>
)}

View File

@@ -2,8 +2,14 @@
import type { ToolUIPart } from "ai";
import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation";
import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader";
import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion";
import { ContentMessage } from "../../components/ToolAccordion/AccordionContent";
import {
ContentGrid,
ContentHint,
ContentMessage,
} from "../../components/ToolAccordion/AccordionContent";
import { MiniGame } from "../CreateAgent/components/MiniGame/MiniGame";
import {
getAccordionMeta,
getAnimationText,
@@ -60,6 +66,21 @@ export function RunAgentTool({ part }: Props) {
/>
</div>
{isStreaming && !output && (
<ToolAccordion
icon={<OrbitLoader size={32} />}
title="Running agent, this may take a few minutes. Play while you wait."
expanded={true}
>
<ContentGrid>
<MiniGame />
<ContentHint>
This could take a few minutes play while you wait!
</ContentHint>
</ContentGrid>
</ToolAccordion>
)}
{hasExpandableContent && output && (
<ToolAccordion {...getAccordionMeta(output)}>
{isRunAgentExecutionStartedOutput(output) && (

View File

@@ -1,631 +0,0 @@
"use client";
import { useParams, useRouter } from "next/navigation";
import { useQueryState } from "nuqs";
import React, {
useCallback,
useEffect,
useMemo,
useRef,
useState,
} from "react";
import {
Graph,
GraphExecution,
GraphExecutionID,
GraphExecutionMeta,
GraphID,
LibraryAgent,
LibraryAgentID,
LibraryAgentPreset,
LibraryAgentPresetID,
Schedule,
ScheduleID,
} from "@/lib/autogpt-server-api";
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
import { exportAsJSONFile } from "@/lib/utils";
import DeleteConfirmDialog from "@/components/__legacy__/delete-confirm-dialog";
import type { ButtonAction } from "@/components/__legacy__/types";
import { Button } from "@/components/__legacy__/ui/button";
import {
Dialog,
DialogContent,
DialogDescription,
DialogFooter,
DialogHeader,
DialogTitle,
} from "@/components/__legacy__/ui/dialog";
import LoadingBox, { LoadingSpinner } from "@/components/__legacy__/ui/loading";
import {
useToast,
useToastOnFail,
} from "@/components/molecules/Toast/use-toast";
import { AgentRunDetailsView } from "./components/agent-run-details-view";
import { AgentRunDraftView } from "./components/agent-run-draft-view";
import { CreatePresetDialog } from "./components/create-preset-dialog";
import { useAgentRunsInfinite } from "./use-agent-runs";
import { AgentRunsSelectorList } from "./components/agent-runs-selector-list";
import { AgentScheduleDetailsView } from "./components/agent-schedule-details-view";
export function OldAgentLibraryView() {
const { id: agentID }: { id: LibraryAgentID } = useParams();
const [executionId, setExecutionId] = useQueryState("executionId");
const toastOnFail = useToastOnFail();
const { toast } = useToast();
const router = useRouter();
const api = useBackendAPI();
// ============================ STATE =============================
const [graph, setGraph] = useState<Graph | null>(null); // Graph version corresponding to LibraryAgent
const [agent, setAgent] = useState<LibraryAgent | null>(null);
const agentRunsQuery = useAgentRunsInfinite(graph?.id); // only runs once graph.id is known
const agentRuns = agentRunsQuery.agentRuns;
const [agentPresets, setAgentPresets] = useState<LibraryAgentPreset[]>([]);
const [schedules, setSchedules] = useState<Schedule[]>([]);
const [selectedView, selectView] = useState<
| { type: "run"; id?: GraphExecutionID }
| { type: "preset"; id: LibraryAgentPresetID }
| { type: "schedule"; id: ScheduleID }
>({ type: "run" });
const [selectedRun, setSelectedRun] = useState<
GraphExecution | GraphExecutionMeta | null
>(null);
const selectedSchedule =
selectedView.type == "schedule"
? schedules.find((s) => s.id == selectedView.id)
: null;
const [isFirstLoad, setIsFirstLoad] = useState<boolean>(true);
const [agentDeleteDialogOpen, setAgentDeleteDialogOpen] =
useState<boolean>(false);
const [confirmingDeleteAgentRun, setConfirmingDeleteAgentRun] =
useState<GraphExecutionMeta | null>(null);
const [confirmingDeleteAgentPreset, setConfirmingDeleteAgentPreset] =
useState<LibraryAgentPresetID | null>(null);
const [copyAgentDialogOpen, setCopyAgentDialogOpen] = useState(false);
const [creatingPresetFromExecutionID, setCreatingPresetFromExecutionID] =
useState<GraphExecutionID | null>(null);
// Set page title with agent name
useEffect(() => {
if (agent) {
document.title = `${agent.name} - Library - AutoGPT Platform`;
}
}, [agent]);
const openRunDraftView = useCallback(() => {
selectView({ type: "run" });
}, []);
const selectRun = useCallback((id: GraphExecutionID) => {
selectView({ type: "run", id });
}, []);
const selectPreset = useCallback((id: LibraryAgentPresetID) => {
selectView({ type: "preset", id });
}, []);
const selectSchedule = useCallback((id: ScheduleID) => {
selectView({ type: "schedule", id });
}, []);
const graphVersions = useRef<Record<number, Graph>>({});
const loadingGraphVersions = useRef<Record<number, Promise<Graph>>>({});
const getGraphVersion = useCallback(
async (graphID: GraphID, version: number) => {
if (version in graphVersions.current)
return graphVersions.current[version];
if (version in loadingGraphVersions.current)
return loadingGraphVersions.current[version];
const pendingGraph = api.getGraph(graphID, version).then((graph) => {
graphVersions.current[version] = graph;
return graph;
});
// Cache promise as well to avoid duplicate requests
loadingGraphVersions.current[version] = pendingGraph;
return pendingGraph;
},
[api, graphVersions, loadingGraphVersions],
);
const lastRefresh = useRef<number>(0);
const refreshPageData = useCallback(() => {
if (Date.now() - lastRefresh.current < 2e3) return; // 2 second debounce
lastRefresh.current = Date.now();
api.getLibraryAgent(agentID).then((agent) => {
setAgent(agent);
getGraphVersion(agent.graph_id, agent.graph_version).then(
(_graph) =>
(graph && graph.version == _graph.version) || setGraph(_graph),
);
Promise.all([
agentRunsQuery.refetchRuns(),
api.listLibraryAgentPresets({
graph_id: agent.graph_id,
page_size: 100,
}),
]).then(([runsQueryResult, presets]) => {
setAgentPresets(presets.presets);
const newestAgentRunsResponse = runsQueryResult.data?.pages[0];
if (!newestAgentRunsResponse || newestAgentRunsResponse.status != 200)
return;
const newestAgentRuns = newestAgentRunsResponse.data.executions;
// Preload the corresponding graph versions for the latest 10 runs
new Set(
newestAgentRuns.slice(0, 10).map((run) => run.graph_version),
).forEach((version) => getGraphVersion(agent.graph_id, version));
});
});
}, [api, agentID, getGraphVersion, graph]);
// On first load: select the latest run
useEffect(() => {
// Only for first load or first execution
if (selectedView.id || !isFirstLoad) return;
if (agentRuns.length == 0 && agentPresets.length == 0) return;
setIsFirstLoad(false);
if (agentRuns.length > 0) {
// select latest run
const latestRun = agentRuns.reduce((latest, current) => {
if (!latest.started_at && !current.started_at) return latest;
if (!latest.started_at) return current;
if (!current.started_at) return latest;
return latest.started_at > current.started_at ? latest : current;
}, agentRuns[0]);
selectRun(latestRun.id as GraphExecutionID);
} else {
// select top preset
const latestPreset = agentPresets.toSorted(
(a, b) => b.updated_at.getTime() - a.updated_at.getTime(),
)[0];
selectPreset(latestPreset.id);
}
}, [
isFirstLoad,
selectedView.id,
agentRuns,
agentPresets,
selectRun,
selectPreset,
]);
useEffect(() => {
if (executionId) {
selectRun(executionId as GraphExecutionID);
setExecutionId(null);
}
}, [executionId, selectRun, setExecutionId]);
// Initial load
useEffect(() => {
refreshPageData();
// Show a toast when the WebSocket connection disconnects
let connectionToast: ReturnType<typeof toast> | null = null;
const cancelDisconnectHandler = api.onWebSocketDisconnect(() => {
connectionToast ??= toast({
title: "Connection to server was lost",
variant: "destructive",
description: (
<div className="flex items-center">
Trying to reconnect...
<LoadingSpinner className="ml-1.5 size-3.5" />
</div>
),
duration: Infinity,
dismissable: true,
});
});
const cancelConnectHandler = api.onWebSocketConnect(() => {
if (connectionToast)
connectionToast.update({
id: connectionToast.id,
title: "✅ Connection re-established",
variant: "default",
description: (
<div className="flex items-center">
Refreshing data...
<LoadingSpinner className="ml-1.5 size-3.5" />
</div>
),
duration: 2000,
dismissable: true,
});
connectionToast = null;
});
return () => {
cancelDisconnectHandler();
cancelConnectHandler();
};
}, []);
// Subscribe to WebSocket updates for agent runs
useEffect(() => {
if (!agent?.graph_id) return;
return api.onWebSocketConnect(() => {
refreshPageData(); // Sync up on (re)connect
// Subscribe to all executions for this agent
api.subscribeToGraphExecutions(agent.graph_id);
});
}, [api, agent?.graph_id, refreshPageData]);
// Handle execution updates
useEffect(() => {
const detachExecUpdateHandler = api.onWebSocketMessage(
"graph_execution_event",
(data) => {
if (data.graph_id != agent?.graph_id) return;
agentRunsQuery.upsertAgentRun(data);
if (data.id === selectedView.id) {
// Update currently viewed run
setSelectedRun(data);
}
},
);
return () => {
detachExecUpdateHandler();
};
}, [api, agent?.graph_id, selectedView.id]);
// Pre-load selectedRun based on selectedView
useEffect(() => {
if (selectedView.type != "run" || !selectedView.id) return;
const newSelectedRun = agentRuns.find((run) => run.id == selectedView.id);
if (selectedView.id !== selectedRun?.id) {
// Pull partial data from "cache" while waiting for the rest to load
setSelectedRun((newSelectedRun as GraphExecutionMeta) ?? null);
}
}, [api, selectedView, agentRuns, selectedRun?.id]);
// Load selectedRun based on selectedView; refresh on agent refresh
useEffect(() => {
if (selectedView.type != "run" || !selectedView.id || !agent) return;
api
.getGraphExecutionInfo(agent.graph_id, selectedView.id)
.then(async (run) => {
// Ensure corresponding graph version is available before rendering I/O
await getGraphVersion(run.graph_id, run.graph_version);
setSelectedRun(run);
});
}, [api, selectedView, agent, getGraphVersion]);
const fetchSchedules = useCallback(async () => {
if (!agent) return;
setSchedules(await api.listGraphExecutionSchedules(agent.graph_id));
}, [api, agent?.graph_id]);
useEffect(() => {
fetchSchedules();
}, [fetchSchedules]);
// =========================== ACTIONS ============================
const deleteRun = useCallback(
async (run: GraphExecutionMeta) => {
if (run.status == "RUNNING" || run.status == "QUEUED") {
await api.stopGraphExecution(run.graph_id, run.id);
}
await api.deleteGraphExecution(run.id);
setConfirmingDeleteAgentRun(null);
if (selectedView.type == "run" && selectedView.id == run.id) {
openRunDraftView();
}
agentRunsQuery.removeAgentRun(run.id);
},
[api, selectedView, openRunDraftView],
);
const deletePreset = useCallback(
async (presetID: LibraryAgentPresetID) => {
await api.deleteLibraryAgentPreset(presetID);
setConfirmingDeleteAgentPreset(null);
if (selectedView.type == "preset" && selectedView.id == presetID) {
openRunDraftView();
}
setAgentPresets((presets) => presets.filter((p) => p.id !== presetID));
},
[api, selectedView, openRunDraftView],
);
const deleteSchedule = useCallback(
async (scheduleID: ScheduleID) => {
const removedSchedule =
await api.deleteGraphExecutionSchedule(scheduleID);
setSchedules((schedules) => {
const newSchedules = schedules.filter(
(s) => s.id !== removedSchedule.id,
);
if (
selectedView.type == "schedule" &&
selectedView.id == removedSchedule.id
) {
if (newSchedules.length > 0) {
// Select next schedule if available
selectSchedule(newSchedules[0].id);
} else {
// Reset to draft view if current schedule was deleted
openRunDraftView();
}
}
return newSchedules;
});
openRunDraftView();
},
[schedules, api],
);
const handleCreatePresetFromRun = useCallback(
async (name: string, description: string) => {
if (!creatingPresetFromExecutionID) return;
await api
.createLibraryAgentPreset({
name,
description,
graph_execution_id: creatingPresetFromExecutionID,
})
.then((preset) => {
setAgentPresets((prev) => [...prev, preset]);
selectPreset(preset.id);
setCreatingPresetFromExecutionID(null);
})
.catch(toastOnFail("create a preset"));
},
[api, creatingPresetFromExecutionID, selectPreset, toast],
);
const downloadGraph = useCallback(
async () =>
agent &&
// Export sanitized graph from backend
api
.getGraph(agent.graph_id, agent.graph_version, true)
.then((graph) =>
exportAsJSONFile(graph, `${graph.name}_v${graph.version}.json`),
),
[api, agent],
);
const copyAgent = useCallback(async () => {
setCopyAgentDialogOpen(false);
api
.forkLibraryAgent(agentID)
.then((newAgent) => {
router.push(`/library/agents/${newAgent.id}`);
})
.catch((error) => {
console.error("Error copying agent:", error);
toast({
title: "Error copying agent",
description: `An error occurred while copying the agent: ${error.message}`,
variant: "destructive",
});
});
}, [agentID, api, router, toast]);
const agentActions: ButtonAction[] = useMemo(
() => [
{
label: "Customize agent",
href: `/build?flowID=${agent?.graph_id}&flowVersion=${agent?.graph_version}`,
disabled: !agent?.can_access_graph,
},
{ label: "Export agent to file", callback: downloadGraph },
...(!agent?.can_access_graph
? [
{
label: "Edit a copy",
callback: () => setCopyAgentDialogOpen(true),
},
]
: []),
{
label: "Delete agent",
callback: () => setAgentDeleteDialogOpen(true),
},
],
[agent, downloadGraph],
);
const runGraph =
graphVersions.current[selectedRun?.graph_version ?? 0] ?? graph;
const onCreateSchedule = useCallback(
(schedule: Schedule) => {
setSchedules((prev) => [...prev, schedule]);
selectSchedule(schedule.id);
},
[selectView],
);
const onCreatePreset = useCallback(
(preset: LibraryAgentPreset) => {
setAgentPresets((prev) => [...prev, preset]);
selectPreset(preset.id);
},
[selectPreset],
);
const onUpdatePreset = useCallback(
(updated: LibraryAgentPreset) => {
setAgentPresets((prev) =>
prev.map((p) => (p.id === updated.id ? updated : p)),
);
selectPreset(updated.id);
},
[selectPreset],
);
if (!agent || !graph) {
return <LoadingBox className="h-[90vh]" />;
}
return (
<div className="container justify-stretch p-0 pt-16 lg:flex">
{/* Sidebar w/ list of runs */}
{/* TODO: render this below header in sm and md layouts */}
<AgentRunsSelectorList
className="agpt-div w-full border-b pb-2 lg:w-auto lg:border-b-0 lg:border-r lg:pb-0"
agent={agent}
agentRunsQuery={agentRunsQuery}
agentPresets={agentPresets}
schedules={schedules}
selectedView={selectedView}
onSelectRun={selectRun}
onSelectPreset={selectPreset}
onSelectSchedule={selectSchedule}
onSelectDraftNewRun={openRunDraftView}
doDeleteRun={setConfirmingDeleteAgentRun}
doDeletePreset={setConfirmingDeleteAgentPreset}
doDeleteSchedule={deleteSchedule}
doCreatePresetFromRun={setCreatingPresetFromExecutionID}
/>
<div className="flex-1">
{/* Header */}
<div className="agpt-div w-full border-b">
<h1
data-testid="agent-title"
className="font-poppins text-3xl font-medium"
>
{
agent.name /* TODO: use dynamic/custom run title - https://github.com/Significant-Gravitas/AutoGPT/issues/9184 */
}
</h1>
</div>
{/* Run / Schedule views */}
{(selectedView.type == "run" && selectedView.id ? (
selectedRun && runGraph ? (
<AgentRunDetailsView
agent={agent}
graph={runGraph}
run={selectedRun}
agentActions={agentActions}
onRun={selectRun}
doDeleteRun={() => setConfirmingDeleteAgentRun(selectedRun)}
doCreatePresetFromRun={() =>
setCreatingPresetFromExecutionID(selectedRun.id)
}
/>
) : null
) : selectedView.type == "run" ? (
/* Draft new runs / Create new presets */
<AgentRunDraftView
graph={graph}
onRun={selectRun}
onCreateSchedule={onCreateSchedule}
onCreatePreset={onCreatePreset}
agentActions={agentActions}
recommendedScheduleCron={agent?.recommended_schedule_cron || null}
/>
) : selectedView.type == "preset" ? (
/* Edit & update presets */
<AgentRunDraftView
graph={graph}
agentPreset={
agentPresets.find((preset) => preset.id == selectedView.id)!
}
onRun={selectRun}
recommendedScheduleCron={agent?.recommended_schedule_cron || null}
onCreateSchedule={onCreateSchedule}
onUpdatePreset={onUpdatePreset}
doDeletePreset={setConfirmingDeleteAgentPreset}
agentActions={agentActions}
/>
) : selectedView.type == "schedule" ? (
selectedSchedule &&
graph && (
<AgentScheduleDetailsView
graph={graph}
schedule={selectedSchedule}
// agent={agent}
agentActions={agentActions}
onForcedRun={selectRun}
doDeleteSchedule={deleteSchedule}
/>
)
) : null) || <LoadingBox className="h-[70vh]" />}
<DeleteConfirmDialog
entityType="agent"
open={agentDeleteDialogOpen}
onOpenChange={setAgentDeleteDialogOpen}
onDoDelete={() =>
agent &&
api.deleteLibraryAgent(agent.id).then(() => router.push("/library"))
}
/>
<DeleteConfirmDialog
entityType="agent run"
open={!!confirmingDeleteAgentRun}
onOpenChange={(open) => !open && setConfirmingDeleteAgentRun(null)}
onDoDelete={() =>
confirmingDeleteAgentRun && deleteRun(confirmingDeleteAgentRun)
}
/>
<DeleteConfirmDialog
entityType={agent.has_external_trigger ? "trigger" : "agent preset"}
open={!!confirmingDeleteAgentPreset}
onOpenChange={(open) => !open && setConfirmingDeleteAgentPreset(null)}
onDoDelete={() =>
confirmingDeleteAgentPreset &&
deletePreset(confirmingDeleteAgentPreset)
}
/>
{/* Copy agent confirmation dialog */}
<Dialog
onOpenChange={setCopyAgentDialogOpen}
open={copyAgentDialogOpen}
>
<DialogContent>
<DialogHeader>
<DialogTitle>You&apos;re making an editable copy</DialogTitle>
<DialogDescription className="pt-2">
The original Marketplace agent stays the same and cannot be
edited. We&apos;ll save a new version of this agent to your
Library. From there, you can customize it however you&apos;d
like by clicking &quot;Customize agent&quot; this will open
the builder where you can see and modify the inner workings.
</DialogDescription>
</DialogHeader>
<DialogFooter className="justify-end">
<Button
type="button"
variant="outline"
onClick={() => setCopyAgentDialogOpen(false)}
>
Cancel
</Button>
<Button type="button" onClick={copyAgent}>
Continue
</Button>
</DialogFooter>
</DialogContent>
</Dialog>
<CreatePresetDialog
open={!!creatingPresetFromExecutionID}
onOpenChange={() => setCreatingPresetFromExecutionID(null)}
onConfirm={handleCreatePresetFromRun}
/>
</div>
</div>
);
}

View File

@@ -1,445 +0,0 @@
"use client";
import { format, formatDistanceToNow, formatDistanceStrict } from "date-fns";
import React, { useCallback, useMemo, useEffect } from "react";
import {
Graph,
GraphExecution,
GraphExecutionID,
GraphExecutionMeta,
LibraryAgent,
} from "@/lib/autogpt-server-api";
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
import ActionButtonGroup from "@/components/__legacy__/action-button-group";
import type { ButtonAction } from "@/components/__legacy__/types";
import {
Card,
CardContent,
CardHeader,
CardTitle,
} from "@/components/__legacy__/ui/card";
import {
IconRefresh,
IconSquare,
IconCircleAlert,
} from "@/components/__legacy__/ui/icons";
import { Input } from "@/components/__legacy__/ui/input";
import LoadingBox from "@/components/__legacy__/ui/loading";
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { useToastOnFail } from "@/components/molecules/Toast/use-toast";
import { AgentRunStatus, agentRunStatusMap } from "./agent-run-status-chip";
import useCredits from "@/hooks/useCredits";
import { AgentRunOutputView } from "./agent-run-output-view";
import { analytics } from "@/services/analytics";
import { PendingReviewsList } from "@/components/organisms/PendingReviewsList/PendingReviewsList";
import { usePendingReviewsForExecution } from "@/hooks/usePendingReviews";
export function AgentRunDetailsView({
agent,
graph,
run,
agentActions,
onRun,
doDeleteRun,
doCreatePresetFromRun,
}: {
agent: LibraryAgent;
graph: Graph;
run: GraphExecution | GraphExecutionMeta;
agentActions: ButtonAction[];
onRun: (runID: GraphExecutionID) => void;
doDeleteRun: () => void;
doCreatePresetFromRun: () => void;
}): React.ReactNode {
const api = useBackendAPI();
const { formatCredits } = useCredits();
const runStatus: AgentRunStatus = useMemo(
() => agentRunStatusMap[run.status],
[run],
);
const {
pendingReviews,
isLoading: reviewsLoading,
refetch: refetchReviews,
} = usePendingReviewsForExecution(run.id);
const toastOnFail = useToastOnFail();
// Refetch pending reviews when execution status changes to REVIEW
useEffect(() => {
if (runStatus === "review" && run.id) {
refetchReviews();
}
}, [runStatus, run.id, refetchReviews]);
const infoStats: { label: string; value: React.ReactNode }[] = useMemo(() => {
if (!run) return [];
return [
{
label: "Status",
value: runStatus.charAt(0).toUpperCase() + runStatus.slice(1),
},
{
label: "Started",
value: run.started_at
? `${formatDistanceToNow(run.started_at, { addSuffix: true })}, ${format(run.started_at, "HH:mm")}`
: "—",
},
...(run.stats
? [
{
label: "Duration",
value: formatDistanceStrict(0, run.stats.duration * 1000),
},
{ label: "Steps", value: run.stats.node_exec_count },
{ label: "Cost", value: formatCredits(run.stats.cost) },
]
: []),
];
}, [run, runStatus, formatCredits]);
const agentRunInputs:
| Record<
string,
{
title?: string;
/* type: BlockIOSubType; */
value: string | number | undefined;
}
>
| undefined = useMemo(() => {
if (!run.inputs) return undefined;
// TODO: show (link to) preset - https://github.com/Significant-Gravitas/AutoGPT/issues/9168
// Add type info from agent input schema
return Object.fromEntries(
Object.entries(run.inputs).map(([k, v]) => [
k,
{
title: graph.input_schema.properties[k]?.title,
// type: graph.input_schema.properties[k].type, // TODO: implement typed graph inputs
value: typeof v == "object" ? JSON.stringify(v, undefined, 2) : v,
},
]),
);
}, [graph, run]);
const runAgain = useCallback(() => {
if (
!run.inputs ||
!(graph.credentials_input_schema?.required ?? []).every(
(k) => k in (run.credential_inputs ?? {}),
)
)
return;
if (run.preset_id) {
return api
.executeLibraryAgentPreset(
run.preset_id,
run.inputs!,
run.credential_inputs!,
)
.then(({ id }) => {
analytics.sendDatafastEvent("run_agent", {
name: graph.name,
id: graph.id,
});
onRun(id);
})
.catch(toastOnFail("execute agent preset"));
}
return api
.executeGraph(
graph.id,
graph.version,
run.inputs!,
run.credential_inputs!,
"library",
)
.then(({ id }) => {
analytics.sendDatafastEvent("run_agent", {
name: graph.name,
id: graph.id,
});
onRun(id);
})
.catch(toastOnFail("execute agent"));
}, [api, graph, run, onRun, toastOnFail]);
const stopRun = useCallback(
() => api.stopGraphExecution(graph.id, run.id),
[api, graph.id, run.id],
);
const agentRunOutputs:
| Record<
string,
{
title?: string;
/* type: BlockIOSubType; */
values: Array<React.ReactNode>;
}
>
| null
| undefined = useMemo(() => {
if (!("outputs" in run)) return undefined;
if (!["running", "success", "failed", "stopped"].includes(runStatus))
return null;
// Add type info from agent input schema
return Object.fromEntries(
Object.entries(run.outputs).map(([k, vv]) => [
k,
{
title: graph.output_schema.properties[k].title,
/* type: agent.output_schema.properties[k].type */
values: vv.map((v) =>
typeof v == "object" ? JSON.stringify(v, undefined, 2) : v,
),
},
]),
);
}, [graph, run, runStatus]);
const runActions: ButtonAction[] = useMemo(
() => [
...(["running", "queued"].includes(runStatus)
? ([
{
label: (
<>
<IconSquare className="mr-2 size-4" />
Stop run
</>
),
variant: "secondary",
callback: stopRun,
},
] satisfies ButtonAction[])
: []),
...(["success", "failed", "stopped"].includes(runStatus) &&
!graph.has_external_trigger &&
(graph.credentials_input_schema?.required ?? []).every(
(k) => k in (run.credential_inputs ?? {}),
)
? [
{
label: (
<>
<IconRefresh className="mr-2 size-4" />
Run again
</>
),
callback: runAgain,
dataTestId: "run-again-button",
},
]
: []),
...(agent.can_access_graph
? [
{
label: "Open run in builder",
href: `/build?flowID=${run.graph_id}&flowVersion=${run.graph_version}&flowExecutionID=${run.id}`,
},
]
: []),
{ label: "Create preset from run", callback: doCreatePresetFromRun },
{ label: "Delete run", variant: "secondary", callback: doDeleteRun },
],
[
runStatus,
runAgain,
stopRun,
doDeleteRun,
doCreatePresetFromRun,
graph.has_external_trigger,
graph.credentials_input_schema?.required,
agent.can_access_graph,
run.graph_id,
run.graph_version,
run.id,
],
);
return (
<div className="agpt-div flex gap-6">
<div className="flex flex-1 flex-col gap-4">
<Card className="agpt-box">
<CardHeader>
<CardTitle className="font-poppins text-lg">Info</CardTitle>
</CardHeader>
<CardContent>
<div className="flex justify-stretch gap-4">
{infoStats.map(({ label, value }) => (
<div key={label} className="flex-1">
<p className="text-sm font-medium text-black">{label}</p>
<p className="text-sm text-neutral-600">{value}</p>
</div>
))}
</div>
{run.status === "FAILED" && (
<div className="mt-4 rounded-md border border-red-200 bg-red-50 p-3 dark:border-red-800 dark:bg-red-900/20">
<p className="text-sm text-red-800 dark:text-red-200">
<strong>Error:</strong>{" "}
{run.stats?.error ||
"The execution failed due to an internal error. You can re-run the agent to retry."}
</p>
</div>
)}
</CardContent>
</Card>
{/* Smart Agent Execution Summary */}
{run.stats?.activity_status && (
<Card className="agpt-box">
<CardHeader>
<CardTitle className="flex items-center gap-2 font-poppins text-lg">
Task Summary
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<IconCircleAlert className="size-4 cursor-help text-neutral-500 hover:text-neutral-700" />
</TooltipTrigger>
<TooltipContent>
<p className="max-w-xs">
This AI-generated summary describes how the agent
handled your task. Its an experimental feature and may
occasionally be inaccurate.
</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>
</CardTitle>
</CardHeader>
<CardContent className="space-y-4">
<p className="text-sm leading-relaxed text-neutral-700">
{run.stats.activity_status}
</p>
{/* Correctness Score */}
{typeof run.stats.correctness_score === "number" && (
<div className="flex items-center gap-3 rounded-lg bg-neutral-50 p-3">
<div className="flex items-center gap-2">
<span className="text-sm font-medium text-neutral-600">
Success Estimate:
</span>
<div className="flex items-center gap-2">
<div className="relative h-2 w-16 overflow-hidden rounded-full bg-neutral-200">
<div
className={`h-full transition-all ${
run.stats.correctness_score >= 0.8
? "bg-green-500"
: run.stats.correctness_score >= 0.6
? "bg-yellow-500"
: run.stats.correctness_score >= 0.4
? "bg-orange-500"
: "bg-red-500"
}`}
style={{
width: `${Math.round(run.stats.correctness_score * 100)}%`,
}}
/>
</div>
<span className="text-sm font-medium">
{Math.round(run.stats.correctness_score * 100)}%
</span>
</div>
</div>
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<IconCircleAlert className="size-4 cursor-help text-neutral-400 hover:text-neutral-600" />
</TooltipTrigger>
<TooltipContent>
<p className="max-w-xs">
AI-generated estimate of how well this execution
achieved its intended purpose. This score indicates
{run.stats.correctness_score >= 0.8
? " the agent was highly successful."
: run.stats.correctness_score >= 0.6
? " the agent was mostly successful with minor issues."
: run.stats.correctness_score >= 0.4
? " the agent was partially successful with some gaps."
: " the agent had limited success with significant issues."}
</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>
</div>
)}
</CardContent>
</Card>
)}
{agentRunOutputs !== null && (
<AgentRunOutputView agentRunOutputs={agentRunOutputs} />
)}
{/* Pending Reviews Section */}
{runStatus === "review" && (
<Card className="agpt-box">
<CardHeader>
<CardTitle className="font-poppins text-lg">
Pending Reviews ({pendingReviews.length})
</CardTitle>
</CardHeader>
<CardContent>
{reviewsLoading ? (
<LoadingBox spinnerSize={12} className="h-24" />
) : pendingReviews.length > 0 ? (
<PendingReviewsList
reviews={pendingReviews}
onReviewComplete={refetchReviews}
emptyMessage="No pending reviews for this execution"
/>
) : (
<div className="py-4 text-neutral-600">
No pending reviews for this execution
</div>
)}
</CardContent>
</Card>
)}
<Card className="agpt-box">
<CardHeader>
<CardTitle className="font-poppins text-lg">Input</CardTitle>
</CardHeader>
<CardContent className="flex flex-col gap-4">
{agentRunInputs !== undefined ? (
Object.entries(agentRunInputs).map(([key, { title, value }]) => (
<div key={key} className="flex flex-col gap-1.5">
<label className="text-sm font-medium">{title || key}</label>
<Input value={value} className="rounded-full" disabled />
</div>
))
) : (
<LoadingBox spinnerSize={12} className="h-24" />
)}
</CardContent>
</Card>
</div>
{/* Run / Agent Actions */}
<aside className="w-48 xl:w-56">
<div className="flex flex-col gap-8">
<ActionButtonGroup title="Run actions" actions={runActions} />
<ActionButtonGroup title="Agent actions" actions={agentActions} />
</div>
</aside>
</div>
);
}

View File

@@ -1,178 +0,0 @@
"use client";
import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag";
import React, { useMemo } from "react";
import {
Card,
CardContent,
CardHeader,
CardTitle,
} from "@/components/__legacy__/ui/card";
import LoadingBox from "@/components/__legacy__/ui/loading";
import type { OutputMetadata } from "../../../../../../../../components/contextual/OutputRenderers";
import {
globalRegistry,
OutputActions,
OutputItem,
} from "../../../../../../../../components/contextual/OutputRenderers";
export function AgentRunOutputView({
agentRunOutputs,
}: {
agentRunOutputs:
| Record<
string,
{
title?: string;
/* type: BlockIOSubType; */
values: Array<React.ReactNode>;
}
>
| undefined;
}) {
const enableEnhancedOutputHandling = useGetFlag(
Flag.ENABLE_ENHANCED_OUTPUT_HANDLING,
);
// Prepare items for the renderer system
const outputItems = useMemo(() => {
if (!agentRunOutputs) return [];
const items: Array<{
key: string;
label: string;
value: unknown;
metadata?: OutputMetadata;
renderer: any;
}> = [];
Object.entries(agentRunOutputs).forEach(([key, { title, values }]) => {
values.forEach((value, index) => {
// Enhanced metadata extraction
const metadata: OutputMetadata = {};
// Type guard to safely access properties
if (
typeof value === "object" &&
value !== null &&
!React.isValidElement(value)
) {
const objValue = value as any;
if (objValue.type) metadata.type = objValue.type;
if (objValue.mimeType) metadata.mimeType = objValue.mimeType;
if (objValue.filename) metadata.filename = objValue.filename;
}
const renderer = globalRegistry.getRenderer(value, metadata);
if (renderer) {
items.push({
key: `${key}-${index}`,
label: index === 0 ? title || key : "",
value,
metadata,
renderer,
});
} else {
const textRenderer = globalRegistry
.getAllRenderers()
.find((r) => r.name === "TextRenderer");
if (textRenderer) {
items.push({
key: `${key}-${index}`,
label: index === 0 ? title || key : "",
value: JSON.stringify(value, null, 2),
metadata,
renderer: textRenderer,
});
}
}
});
});
return items;
}, [agentRunOutputs]);
return (
<>
{enableEnhancedOutputHandling ? (
<Card className="agpt-box" style={{ maxWidth: "950px" }}>
<CardHeader>
<div className="flex items-center justify-between">
<CardTitle className="font-poppins text-lg">Output</CardTitle>
{outputItems.length > 0 && (
<OutputActions
items={outputItems.map((item) => ({
value: item.value,
metadata: item.metadata,
renderer: item.renderer,
}))}
/>
)}
</div>
</CardHeader>
<CardContent
className="flex flex-col gap-4"
style={{ maxWidth: "660px" }}
>
{agentRunOutputs !== undefined ? (
outputItems.length > 0 ? (
outputItems.map((item) => (
<OutputItem
key={item.key}
value={item.value}
metadata={item.metadata}
renderer={item.renderer}
label={item.label}
/>
))
) : (
<p className="text-sm text-muted-foreground">
No outputs to display
</p>
)
) : (
<LoadingBox spinnerSize={12} className="h-24" />
)}
</CardContent>
</Card>
) : (
<Card className="agpt-box" style={{ maxWidth: "950px" }}>
<CardHeader>
<CardTitle className="font-poppins text-lg">Output</CardTitle>
</CardHeader>
<CardContent
className="flex flex-col gap-4"
style={{ maxWidth: "660px" }}
>
{agentRunOutputs !== undefined ? (
Object.entries(agentRunOutputs).map(
([key, { title, values }]) => (
<div key={key} className="flex flex-col gap-1.5">
<label className="text-sm font-medium">
{title || key}
</label>
{values.map((value, i) => (
<p
className="resize-none overflow-x-auto whitespace-pre-wrap break-words border-none text-sm text-neutral-700 disabled:cursor-not-allowed"
key={i}
>
{value}
</p>
))}
{/* TODO: pretty type-dependent rendering */}
</div>
),
)
) : (
<LoadingBox spinnerSize={12} className="h-24" />
)}
</CardContent>
</Card>
)}
</>
);
}

View File

@@ -1,68 +0,0 @@
import React from "react";
import { Badge } from "@/components/__legacy__/ui/badge";
import { GraphExecutionMeta } from "@/lib/autogpt-server-api/types";
export type AgentRunStatus =
| "success"
| "failed"
| "queued"
| "running"
| "stopped"
| "scheduled"
| "draft"
| "review";
export const agentRunStatusMap: Record<
GraphExecutionMeta["status"],
AgentRunStatus
> = {
INCOMPLETE: "draft",
COMPLETED: "success",
FAILED: "failed",
QUEUED: "queued",
RUNNING: "running",
TERMINATED: "stopped",
REVIEW: "review",
};
const statusData: Record<
AgentRunStatus,
{ label: string; variant: keyof typeof statusStyles }
> = {
success: { label: "Success", variant: "success" },
running: { label: "Running", variant: "info" },
failed: { label: "Failed", variant: "destructive" },
queued: { label: "Queued", variant: "warning" },
draft: { label: "Draft", variant: "secondary" },
stopped: { label: "Stopped", variant: "secondary" },
scheduled: { label: "Scheduled", variant: "secondary" },
review: { label: "In Review", variant: "warning" },
};
const statusStyles = {
success:
"bg-green-100 text-green-800 hover:bg-green-100 hover:text-green-800",
destructive: "bg-red-100 text-red-800 hover:bg-red-100 hover:text-red-800",
warning:
"bg-yellow-100 text-yellow-800 hover:bg-yellow-100 hover:text-yellow-800",
info: "bg-blue-100 text-blue-800 hover:bg-blue-100 hover:text-blue-800",
secondary:
"bg-slate-100 text-slate-800 hover:bg-slate-100 hover:text-slate-800",
};
export function AgentRunStatusChip({
status,
}: {
status: AgentRunStatus;
}): React.ReactElement {
return (
<Badge
variant="secondary"
className={`text-xs font-medium ${statusStyles[statusData[status]?.variant]} rounded-[45px] px-[9px] py-[3px]`}
>
{statusData[status]?.label}
</Badge>
);
}

View File

@@ -1,130 +0,0 @@
import React from "react";
import { formatDistanceToNow, isPast } from "date-fns";
import { cn } from "@/lib/utils";
import { Link2Icon, Link2OffIcon, MoreVertical } from "lucide-react";
import { Card, CardContent } from "@/components/__legacy__/ui/card";
import { Button } from "@/components/__legacy__/ui/button";
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuTrigger,
} from "@/components/__legacy__/ui/dropdown-menu";
import { AgentStatus, AgentStatusChip } from "./agent-status-chip";
import { AgentRunStatus, AgentRunStatusChip } from "./agent-run-status-chip";
import { PushPinSimpleIcon } from "@phosphor-icons/react";
export type AgentRunSummaryProps = (
| {
type: "run";
status: AgentRunStatus;
}
| {
type: "preset";
status?: undefined;
}
| {
type: "preset.triggered";
status: AgentStatus;
}
| {
type: "schedule";
status: "scheduled";
}
) & {
title: string;
timestamp?: number | Date;
selected?: boolean;
onClick?: () => void;
// onRename: () => void;
onDelete: () => void;
onPinAsPreset?: () => void;
className?: string;
};
export function AgentRunSummaryCard({
type,
status,
title,
timestamp,
selected = false,
onClick,
// onRename,
onDelete,
onPinAsPreset,
className,
}: AgentRunSummaryProps): React.ReactElement {
return (
<Card
className={cn(
"agpt-rounded-card cursor-pointer border-zinc-300",
selected ? "agpt-card-selected" : "",
className,
)}
onClick={onClick}
>
<CardContent className="relative p-2.5 lg:p-4">
{(type == "run" || type == "schedule") && (
<AgentRunStatusChip status={status} />
)}
{type == "preset" && (
<div className="flex items-center text-sm font-medium text-neutral-700">
<PushPinSimpleIcon className="mr-1 size-4 text-foreground" /> Preset
</div>
)}
{type == "preset.triggered" && (
<div className="flex items-center justify-between">
<AgentStatusChip status={status} />
<div className="flex items-center text-sm font-medium text-neutral-700">
{status == "inactive" ? (
<Link2OffIcon className="mr-1 size-4 text-foreground" />
) : (
<Link2Icon className="mr-1 size-4 text-foreground" />
)}{" "}
Trigger
</div>
</div>
)}
<div className="mt-5 flex items-center justify-between">
<h3 className="truncate pr-2 text-base font-medium text-neutral-900">
{title}
</h3>
<DropdownMenu>
<DropdownMenuTrigger asChild>
<Button variant="ghost" className="h-5 w-5 p-0">
<MoreVertical className="h-5 w-5" />
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent>
{onPinAsPreset && (
<DropdownMenuItem onClick={onPinAsPreset}>
Pin as a preset
</DropdownMenuItem>
)}
{/* <DropdownMenuItem onClick={onRename}>Rename</DropdownMenuItem> */}
<DropdownMenuItem onClick={onDelete}>Delete</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
</div>
{timestamp && (
<p
className="mt-1 text-sm font-normal text-neutral-500"
title={new Date(timestamp).toString()}
>
{isPast(timestamp) ? "Ran" : "Runs in"}{" "}
{formatDistanceToNow(timestamp, { addSuffix: true })}
</p>
)}
</CardContent>
</Card>
);
}
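
`AgentRunSummaryProps` is a discriminated union on `type`, so TypeScript ties the allowed `status` values to each card variant. A sketch of the three main shapes (the surrounding `CardShapes` component is hypothetical):

```tsx
import React from "react";
import { AgentRunSummaryCard } from "./agent-run-summary-card";

// Hypothetical list: each `type` admits a different `status` type.
function CardShapes({ onDelete }: { onDelete: () => void }) {
  return (
    <>
      {/* "run" requires an AgentRunStatus */}
      <AgentRunSummaryCard type="run" status="running" title="Run" onDelete={onDelete} />
      {/* "preset" must omit status entirely */}
      <AgentRunSummaryCard type="preset" title="Preset" onDelete={onDelete} />
      {/* "schedule" accepts only "scheduled" */}
      <AgentRunSummaryCard type="schedule" status="scheduled" title="Nightly" onDelete={onDelete} />
    </>
  );
}
```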

View File

@@ -1,237 +0,0 @@
"use client";
import { Plus } from "lucide-react";
import React, { useEffect, useState } from "react";
import {
GraphExecutionID,
GraphExecutionMeta,
LibraryAgent,
LibraryAgentPreset,
LibraryAgentPresetID,
Schedule,
ScheduleID,
} from "@/lib/autogpt-server-api";
import { cn } from "@/lib/utils";
import { Badge } from "@/components/__legacy__/ui/badge";
import { Button } from "@/components/atoms/Button/Button";
import LoadingBox, { LoadingSpinner } from "@/components/__legacy__/ui/loading";
import { Separator } from "@/components/__legacy__/ui/separator";
import { ScrollArea } from "@/components/__legacy__/ui/scroll-area";
import { InfiniteScroll } from "@/components/contextual/InfiniteScroll/InfiniteScroll";
import { AgentRunsQuery } from "../use-agent-runs";
import { agentRunStatusMap } from "./agent-run-status-chip";
import { AgentRunSummaryCard } from "./agent-run-summary-card";
interface AgentRunsSelectorListProps {
agent: LibraryAgent;
agentRunsQuery: AgentRunsQuery;
agentPresets: LibraryAgentPreset[];
schedules: Schedule[];
selectedView: { type: "run" | "preset" | "schedule"; id?: string };
allowDraftNewRun?: boolean;
onSelectRun: (id: GraphExecutionID) => void;
onSelectPreset: (preset: LibraryAgentPresetID) => void;
onSelectSchedule: (id: ScheduleID) => void;
onSelectDraftNewRun: () => void;
doDeleteRun: (id: GraphExecutionMeta) => void;
doDeletePreset: (id: LibraryAgentPresetID) => void;
doDeleteSchedule: (id: ScheduleID) => void;
doCreatePresetFromRun?: (id: GraphExecutionID) => void;
className?: string;
}
export function AgentRunsSelectorList({
agent,
agentRunsQuery: {
agentRuns,
agentRunCount,
agentRunsLoading,
hasMoreRuns,
fetchMoreRuns,
isFetchingMoreRuns,
},
agentPresets,
schedules,
selectedView,
allowDraftNewRun = true,
onSelectRun,
onSelectPreset,
onSelectSchedule,
onSelectDraftNewRun,
doDeleteRun,
doDeletePreset,
doDeleteSchedule,
doCreatePresetFromRun,
className,
}: AgentRunsSelectorListProps): React.ReactElement {
const [activeListTab, setActiveListTab] = useState<"runs" | "scheduled">(
"runs",
);
useEffect(() => {
if (selectedView.type === "schedule") {
setActiveListTab("scheduled");
} else {
setActiveListTab("runs");
}
}, [selectedView]);
const listItemClasses = "h-28 w-72 lg:w-full lg:h-32";
return (
<aside className={cn("flex flex-col gap-4", className)}>
{allowDraftNewRun ? (
<Button
className={"mb-4 hidden lg:flex"}
onClick={onSelectDraftNewRun}
leftIcon={<Plus className="h-6 w-6" />}
>
New {agent.has_external_trigger ? "trigger" : "run"}
</Button>
) : null}
<div className="flex gap-2">
<Badge
variant={activeListTab === "runs" ? "secondary" : "outline"}
className="cursor-pointer gap-2 rounded-full text-base"
onClick={() => setActiveListTab("runs")}
>
<span>Runs</span>
<span className="text-neutral-600">
{agentRunCount ?? <LoadingSpinner className="size-4" />}
</span>
</Badge>
<Badge
variant={activeListTab === "scheduled" ? "secondary" : "outline"}
className="cursor-pointer gap-2 rounded-full text-base"
onClick={() => setActiveListTab("scheduled")}
>
<span>Scheduled</span>
<span className="text-neutral-600">{schedules.length}</span>
</Badge>
</div>
{/* Runs / Schedules list */}
{agentRunsLoading && activeListTab === "runs" ? (
<LoadingBox className="h-28 w-full lg:h-[calc(100vh-300px)] lg:w-72 xl:w-80" />
) : (
<ScrollArea
className="w-full lg:h-[calc(100vh-300px)] lg:w-72 xl:w-80"
orientation={window.innerWidth >= 1024 ? "vertical" : "horizontal"}
>
<InfiniteScroll
direction={window.innerWidth >= 1024 ? "vertical" : "horizontal"}
hasNextPage={hasMoreRuns}
fetchNextPage={fetchMoreRuns}
isFetchingNextPage={isFetchingMoreRuns}
>
<div className="flex items-center gap-2 lg:flex-col">
{/* New Run button - only in small layouts */}
{allowDraftNewRun && (
<Button
size="large"
className={
"flex h-12 w-40 items-center gap-2 py-6 lg:hidden " +
(selectedView.type == "run" && !selectedView.id
? "agpt-card-selected text-accent"
: "")
}
onClick={onSelectDraftNewRun}
leftIcon={<Plus className="h-6 w-6" />}
>
New {agent.has_external_trigger ? "trigger" : "run"}
</Button>
)}
{activeListTab === "runs" ? (
<>
{agentPresets
.filter((preset) => preset.webhook) // Triggers
.toSorted(
(a, b) => b.updated_at.getTime() - a.updated_at.getTime(),
)
.map((preset) => (
<AgentRunSummaryCard
className={cn(listItemClasses, "lg:h-auto")}
key={preset.id}
type="preset.triggered"
status={preset.is_active ? "active" : "inactive"}
title={preset.name}
// timestamp={preset.last_run_time} // TODO: implement this
selected={selectedView.id === preset.id}
onClick={() => onSelectPreset(preset.id)}
onDelete={() => doDeletePreset(preset.id)}
/>
))}
{agentPresets
.filter((preset) => !preset.webhook) // Presets
.toSorted(
(a, b) => b.updated_at.getTime() - a.updated_at.getTime(),
)
.map((preset) => (
<AgentRunSummaryCard
className={cn(listItemClasses, "lg:h-auto")}
key={preset.id}
type="preset"
title={preset.name}
// timestamp={preset.last_run_time} // TODO: implement this
selected={selectedView.id === preset.id}
onClick={() => onSelectPreset(preset.id)}
onDelete={() => doDeletePreset(preset.id)}
/>
))}
{agentPresets.length > 0 && <Separator className="my-1" />}
{agentRuns
.toSorted((a, b) => {
const aTime = a.started_at?.getTime() ?? 0;
const bTime = b.started_at?.getTime() ?? 0;
return bTime - aTime;
})
.map((run) => (
<AgentRunSummaryCard
className={listItemClasses}
key={run.id}
type="run"
status={agentRunStatusMap[run.status]}
title={
(run.preset_id
? agentPresets.find((p) => p.id == run.preset_id)
?.name
: null) ?? agent.name
}
timestamp={run.started_at ?? undefined}
selected={selectedView.id === run.id}
onClick={() => onSelectRun(run.id)}
onDelete={() => doDeleteRun(run as GraphExecutionMeta)}
onPinAsPreset={
doCreatePresetFromRun
? () => doCreatePresetFromRun(run.id)
: undefined
}
/>
))}
</>
) : (
schedules.map((schedule) => (
<AgentRunSummaryCard
className={listItemClasses}
key={schedule.id}
type="schedule"
status="scheduled" // TODO: implement active/inactive status for schedules
title={schedule.name}
timestamp={schedule.next_run_time}
selected={selectedView.id === schedule.id}
onClick={() => onSelectSchedule(schedule.id)}
onDelete={() => doDeleteSchedule(schedule.id)}
/>
))
)}
</div>
</InfiniteScroll>
</ScrollArea>
)}
</aside>
);
}
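
The runs tab renders presets in two passes with the same recency ordering: presets with a `webhook` first (as triggers), then plain presets. The same rule, distilled into a standalone helper for clarity (the `partitionPresets` name is hypothetical):

```ts
import type { LibraryAgentPreset } from "@/lib/autogpt-server-api";

// Hypothetical helper mirroring the list's partition + sort rule.
export function partitionPresets(presets: LibraryAgentPreset[]) {
  const newestFirst = (a: LibraryAgentPreset, b: LibraryAgentPreset) =>
    b.updated_at.getTime() - a.updated_at.getTime();
  return {
    triggers: presets.filter((p) => p.webhook).toSorted(newestFirst),
    plain: presets.filter((p) => !p.webhook).toSorted(newestFirst),
  };
}
```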

View File

@@ -1,180 +0,0 @@
"use client";
import React, { useCallback, useMemo } from "react";
import {
Graph,
GraphExecutionID,
Schedule,
ScheduleID,
} from "@/lib/autogpt-server-api";
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
import ActionButtonGroup from "@/components/__legacy__/action-button-group";
import type { ButtonAction } from "@/components/__legacy__/types";
import {
Card,
CardContent,
CardHeader,
CardTitle,
} from "@/components/__legacy__/ui/card";
import { IconCross } from "@/components/__legacy__/ui/icons";
import { Input } from "@/components/__legacy__/ui/input";
import LoadingBox from "@/components/__legacy__/ui/loading";
import { useToastOnFail } from "@/components/molecules/Toast/use-toast";
import { humanizeCronExpression } from "@/lib/cron-expression-utils";
import { formatScheduleTime } from "@/lib/timezone-utils";
import { useUserTimezone } from "@/lib/hooks/useUserTimezone";
import { PlayIcon } from "lucide-react";
import { AgentRunStatus } from "./agent-run-status-chip";
export function AgentScheduleDetailsView({
graph,
schedule,
agentActions,
onForcedRun,
doDeleteSchedule,
}: {
graph: Graph;
schedule: Schedule;
agentActions: ButtonAction[];
onForcedRun: (runID: GraphExecutionID) => void;
doDeleteSchedule: (scheduleID: ScheduleID) => void;
}): React.ReactNode {
const api = useBackendAPI();
const selectedRunStatus: AgentRunStatus = "scheduled";
const toastOnFail = useToastOnFail();
// Get user's timezone for displaying schedule times
const userTimezone = useUserTimezone();
const infoStats: { label: string; value: React.ReactNode }[] = useMemo(() => {
return [
{
label: "Status",
value:
selectedRunStatus.charAt(0).toUpperCase() +
selectedRunStatus.slice(1),
},
{
label: "Schedule",
value: humanizeCronExpression(schedule.cron),
},
{
label: "Next run",
value: formatScheduleTime(schedule.next_run_time, userTimezone),
},
];
}, [schedule, selectedRunStatus, userTimezone]);
const agentRunInputs: Record<
string,
{ title?: string; /* type: BlockIOSubType; */ value: any }
> = useMemo(() => {
// TODO: show (link to) preset - https://github.com/Significant-Gravitas/AutoGPT/issues/9168
// Add type info from agent input schema
return Object.fromEntries(
Object.entries(schedule.input_data).map(([k, v]) => [
k,
{
title: graph.input_schema.properties[k]?.title,
/* TODO: type: agent.input_schema.properties[k].type */
value: v,
},
]),
);
}, [graph, schedule]);
const runNow = useCallback(
() =>
api
.executeGraph(
graph.id,
graph.version,
schedule.input_data,
schedule.input_credentials,
"library",
)
.then((run) => onForcedRun(run.id))
.catch(toastOnFail("execute agent")),
[api, graph, schedule, onForcedRun, toastOnFail],
);
const runActions: ButtonAction[] = useMemo(
() => [
{
label: (
<>
<PlayIcon className="mr-2 size-4" />
Run now
</>
),
callback: runNow,
},
{
label: (
<>
<IconCross className="mr-2 size-4 px-0.5" />
Delete schedule
</>
),
callback: () => doDeleteSchedule(schedule.id),
variant: "destructive",
},
],
[runNow, doDeleteSchedule, schedule.id],
);
return (
<div className="agpt-div flex gap-6">
<div className="flex flex-1 flex-col gap-4">
<Card className="agpt-box">
<CardHeader>
<CardTitle className="font-poppins text-lg">Info</CardTitle>
</CardHeader>
<CardContent>
<div className="flex justify-stretch gap-4">
{infoStats.map(({ label, value }) => (
<div key={label} className="flex-1">
<p className="text-sm font-medium text-black">{label}</p>
<p className="text-sm text-neutral-600">{value}</p>
</div>
))}
</div>
</CardContent>
</Card>
<Card className="agpt-box">
<CardHeader>
<CardTitle className="font-poppins text-lg">Input</CardTitle>
</CardHeader>
<CardContent className="flex flex-col gap-4">
{agentRunInputs !== undefined ? (
Object.entries(agentRunInputs).map(([key, { title, value }]) => (
<div key={key} className="flex flex-col gap-1.5">
<label className="text-sm font-medium">{title || key}</label>
<Input value={value} className="rounded-full" disabled />
</div>
))
) : (
<LoadingBox spinnerSize={12} className="h-24" />
)}
</CardContent>
</Card>
</div>
{/* Run / Agent Actions */}
<aside className="w-48 xl:w-56">
<div className="flex flex-col gap-8">
<ActionButtonGroup title="Run actions" actions={runActions} />
<ActionButtonGroup title="Agent actions" actions={agentActions} />
</div>
</aside>
</div>
);
}

View File

@@ -1,100 +0,0 @@
"use client";
import React, { useState } from "react";
import { Button } from "@/components/__legacy__/ui/button";
import {
Dialog,
DialogContent,
DialogDescription,
DialogFooter,
DialogHeader,
DialogTitle,
} from "@/components/__legacy__/ui/dialog";
import { Input } from "@/components/__legacy__/ui/input";
import { Textarea } from "@/components/__legacy__/ui/textarea";
interface CreatePresetDialogProps {
open: boolean;
onOpenChange: (open: boolean) => void;
onConfirm: (name: string, description: string) => Promise<void> | void;
}
export function CreatePresetDialog({
open,
onOpenChange,
onConfirm,
}: CreatePresetDialogProps) {
const [name, setName] = useState("");
const [description, setDescription] = useState("");
const handleSubmit = async () => {
if (name.trim()) {
await onConfirm(name.trim(), description.trim());
setName("");
setDescription("");
onOpenChange(false);
}
};
const handleCancel = () => {
setName("");
setDescription("");
onOpenChange(false);
};
const handleKeyDown = (e: React.KeyboardEvent) => {
if (e.key === "Enter" && (e.metaKey || e.ctrlKey)) {
e.preventDefault();
handleSubmit();
}
};
return (
<Dialog open={open} onOpenChange={onOpenChange}>
<DialogContent className="sm:max-w-[425px]">
<DialogHeader>
<DialogTitle>Create Preset</DialogTitle>
<DialogDescription>
Give your preset a name and description to help identify it later.
</DialogDescription>
</DialogHeader>
<div className="grid gap-4 py-4">
<div className="grid gap-2">
<label htmlFor="preset-name" className="text-sm font-medium">
Name *
</label>
<Input
id="preset-name"
placeholder="Enter preset name"
value={name}
onChange={(e) => setName(e.target.value)}
onKeyDown={handleKeyDown}
autoFocus
/>
</div>
<div className="grid gap-2">
<label htmlFor="preset-description" className="text-sm font-medium">
Description
</label>
<Textarea
id="preset-description"
placeholder="Optional description"
value={description}
onChange={(e) => setDescription(e.target.value)}
onKeyDown={handleKeyDown}
rows={3}
/>
</div>
</div>
<DialogFooter>
<Button variant="outline" onClick={handleCancel}>
Cancel
</Button>
<Button onClick={handleSubmit} disabled={!name.trim()}>
Create Preset
</Button>
</DialogFooter>
</DialogContent>
</Dialog>
);
}
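
The dialog owns and resets its own field state on confirm and cancel, so a parent only needs to track the `open` flag; `onConfirm` may be async and is awaited before the dialog closes, and Cmd/Ctrl+Enter submits from either field. A hypothetical parent wiring:

```tsx
import React, { useState } from "react";
import { CreatePresetDialog } from "./create-preset-dialog"; // hypothetical path

function PinAsPresetButton({
  onCreate,
}: {
  onCreate: (name: string, description: string) => Promise<void>;
}) {
  const [open, setOpen] = useState(false);
  return (
    <>
      <button onClick={() => setOpen(true)}>Pin as a preset</button>
      <CreatePresetDialog open={open} onOpenChange={setOpen} onConfirm={onCreate} />
    </>
  );
}
```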

View File

@@ -1,210 +0,0 @@
import {
GraphExecutionMeta as LegacyGraphExecutionMeta,
GraphID,
GraphExecutionID,
} from "@/lib/autogpt-server-api";
import { getQueryClient } from "@/lib/react-query/queryClient";
import {
getPaginatedTotalCount,
getPaginationNextPageNumber,
unpaginate,
} from "@/app/api/helpers";
import {
getV1ListGraphExecutionsResponse,
getV1ListGraphExecutionsResponse200,
useGetV1ListGraphExecutionsInfinite,
} from "@/app/api/__generated__/endpoints/graphs/graphs";
import { GraphExecutionsPaginated } from "@/app/api/__generated__/models/graphExecutionsPaginated";
import { GraphExecutionMeta as RawGraphExecutionMeta } from "@/app/api/__generated__/models/graphExecutionMeta";
export type GraphExecutionMeta = Omit<
RawGraphExecutionMeta,
"id" | "user_id" | "graph_id" | "preset_id" | "stats"
> &
Pick<
LegacyGraphExecutionMeta,
"id" | "user_id" | "graph_id" | "preset_id" | "stats"
>;
/** Hook to fetch runs for a specific graph, with support for infinite scroll.
*
* @param graphID - The ID of the graph to fetch agent runs for. This parameter is
* optional in the sense that the hook doesn't run unless it is passed.
* This way, it can be used in components where the graph ID is not
* immediately available.
*/
export const useAgentRunsInfinite = (graphID?: GraphID) => {
const queryClient = getQueryClient();
const {
data: queryResults,
refetch: refetchRuns,
isPending: agentRunsLoading,
isRefetching: agentRunsReloading,
hasNextPage: hasMoreRuns,
fetchNextPage: fetchMoreRuns,
isFetchingNextPage: isFetchingMoreRuns,
queryKey,
} = useGetV1ListGraphExecutionsInfinite(
graphID!,
{ page: 1, page_size: 20 },
{
query: {
getNextPageParam: getPaginationNextPageNumber,
// Prevent query from running if graphID is not available (yet)
...(!graphID
? {
enabled: false,
queryFn: () =>
// Fake empty response if graphID is not available (yet)
Promise.resolve({
status: 200,
data: {
executions: [],
pagination: {
current_page: 1,
page_size: 20,
total_items: 0,
total_pages: 0,
},
},
headers: new Headers(),
} satisfies getV1ListGraphExecutionsResponse),
}
: {}),
},
},
queryClient,
);
const agentRuns = queryResults ? unpaginate(queryResults, "executions") : [];
const agentRunCount = getPaginatedTotalCount(queryResults);
const upsertAgentRun = (newAgentRun: GraphExecutionMeta) => {
queryClient.setQueryData(
queryKey,
(currentQueryData: typeof queryResults) => {
if (!currentQueryData?.pages || agentRunCount === undefined)
return currentQueryData;
const exists = currentQueryData.pages.some((page) => {
if (page.status !== 200) return false;
const response = page.data;
return response.executions.some((run) => run.id === newAgentRun.id);
});
if (exists) {
// If the run already exists, we update it
return {
...currentQueryData,
pages: currentQueryData.pages.map((page) => {
if (page.status !== 200) return page;
const response = page.data;
const executions = response.executions;
const index = executions.findIndex(
(run) => run.id === newAgentRun.id,
);
if (index === -1) return page;
const newExecutions = [...executions];
newExecutions[index] = newAgentRun;
return {
...page,
data: {
...response,
executions: newExecutions,
},
} satisfies getV1ListGraphExecutionsResponse;
}),
};
}
// If the run does not exist, we add it to the first page
const page = currentQueryData
.pages[0] as getV1ListGraphExecutionsResponse200 & {
headers: Headers;
};
const updatedExecutions = [newAgentRun, ...page.data.executions];
const updatedPage = {
...page,
data: {
...page.data,
executions: updatedExecutions,
},
} satisfies getV1ListGraphExecutionsResponse;
const updatedPages = [updatedPage, ...currentQueryData.pages.slice(1)];
return {
...currentQueryData,
pages: updatedPages.map(
// Increment the total runs count in the pagination info of all pages
(page) =>
page.status === 200
? {
...page,
data: {
...page.data,
pagination: {
...page.data.pagination,
total_items: agentRunCount + 1,
},
},
}
: page,
),
};
},
);
};
const removeAgentRun = (runID: GraphExecutionID) => {
queryClient.setQueryData(
// Use the list query's own key (as upsertAgentRun does); wrapping it in
// another array would target a different cache entry and silently no-op.
queryKey,
(currentQueryData: typeof queryResults) => {
if (!currentQueryData?.pages) return currentQueryData;
let found = false;
return {
...currentQueryData,
pages: currentQueryData.pages.map((page) => {
const response = page.data as GraphExecutionsPaginated;
const filteredExecutions = response.executions.filter(
(run) => run.id !== runID,
);
if (filteredExecutions.length < response.executions.length) {
found = true;
}
return {
...page,
data: {
...response,
executions: filteredExecutions,
pagination: {
...response.pagination,
total_items:
response.pagination.total_items - (found ? 1 : 0),
},
},
};
}),
};
},
);
};
return {
agentRuns: agentRuns as GraphExecutionMeta[],
refetchRuns,
agentRunCount,
agentRunsLoading: agentRunsLoading || agentRunsReloading,
hasMoreRuns,
fetchMoreRuns,
isFetchingMoreRuns,
upsertAgentRun,
removeAgentRun,
};
};
export type AgentRunsQuery = ReturnType<typeof useAgentRunsInfinite>;
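
Per the doc comment above, the hook tolerates a missing `graphID` by swapping in a disabled query with a fake empty page, so consumers can render before the ID resolves. A hypothetical consumer:

```tsx
import React from "react";
import type { GraphID } from "@/lib/autogpt-server-api";
import { useAgentRunsInfinite } from "./use-agent-runs"; // hypothetical path

function RunsCounter({ graphID }: { graphID?: GraphID }) {
  const { agentRunCount, agentRunsLoading, hasMoreRuns, fetchMoreRuns } =
    useAgentRunsInfinite(graphID); // safe even while graphID is undefined

  if (agentRunsLoading) return <p>Loading…</p>;
  return (
    <div>
      <p>{agentRunCount ?? 0} runs</p>
      {hasMoreRuns && <button onClick={() => fetchMoreRuns()}>Load more</button>}
    </div>
  );
}
```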

View File

@@ -1,7 +0,0 @@
"use client";
import { OldAgentLibraryView } from "../../agents/[id]/components/OldAgentLibraryView/OldAgentLibraryView";
export default function OldAgentLibraryPage() {
return <OldAgentLibraryView />;
}

View File

@@ -2,7 +2,7 @@ import { useEffect, useState } from "react";
import { Input } from "@/components/__legacy__/ui/input";
import { Button } from "@/components/__legacy__/ui/button";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { CronScheduler } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/cron-scheduler";
import { CronScheduler } from "@/components/contextual/CronScheduler/cron-scheduler";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import { getTimezoneDisplayName } from "@/lib/timezone-utils";
import { useUserTimezone } from "@/lib/hooks/useUserTimezone";

View File

@@ -1,6 +1,6 @@
"use client";
import { CronExpressionDialog } from "@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/cron-scheduler-dialog";
import { CronExpressionDialog } from "@/components/contextual/CronScheduler/cron-scheduler-dialog";
import { Form, FormField } from "@/components/__legacy__/ui/form";
import { Button } from "@/components/atoms/Button/Button";
import { Input } from "@/components/atoms/Input/Input";

View File

@@ -7,7 +7,6 @@ import { useFlags } from "launchdarkly-react-client-sdk";
export enum Flag {
BETA_BLOCKS = "beta-blocks",
NEW_BLOCK_MENU = "new-block-menu",
-NEW_AGENT_RUNS = "new-agent-runs",
GRAPH_SEARCH = "graph-search",
ENABLE_ENHANCED_OUTPUT_HANDLING = "enable-enhanced-output-handling",
SHARE_EXECUTION_RESULTS = "share-execution-results",
@@ -22,7 +21,6 @@ const isPwMockEnabled = process.env.NEXT_PUBLIC_PW_TEST === "true";
const defaultFlags = {
[Flag.BETA_BLOCKS]: [],
[Flag.NEW_BLOCK_MENU]: false,
-[Flag.NEW_AGENT_RUNS]: false,
[Flag.GRAPH_SEARCH]: false,
[Flag.ENABLE_ENHANCED_OUTPUT_HANDLING]: false,
[Flag.SHARE_EXECUTION_RESULTS]: false,
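
Removing the enum member (not just its default value) means any surviving read of `Flag.NEW_AGENT_RUNS` now fails to compile, which is how stale flag checks get flushed out. A sketch of a typed read inside this file, assuming the LaunchDarkly client is configured to keep kebab-case keys (`useCamelCaseFlagKeys: false`); the `useGetFlag` helper is hypothetical:

```ts
import { useFlags } from "launchdarkly-react-client-sdk";

// Hypothetical typed accessor over the raw LD flags object.
function useGetFlag<T>(flag: Flag, fallback: T): T {
  const flags = useFlags();
  return (flags[flag] as T | undefined) ?? fallback;
}

// A leftover call like this no longer compiles once the member is deleted:
// const newRunsEnabled = useGetFlag(Flag.NEW_AGENT_RUNS, false);
```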

View File

@@ -0,0 +1,4 @@
declare module "*.png" {
const content: import("next/image").StaticImageData;
export default content;
}
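
This ambient declaration types every `*.png` import as Next.js `StaticImageData` instead of an untyped module, so image imports carry a `src` URL plus intrinsic `width`/`height`. A small browser-only sketch (the asset path and frame count are hypothetical):

```ts
import sprite from "@/assets/mini-game/runner.png"; // hypothetical asset

// StaticImageData exposes the URL and intrinsic dimensions at build time.
const image = new Image();
image.src = sprite.src; // URL string, e.g. for canvas drawing
const frameWidth = sprite.width / 8; // e.g. an 8-frame sprite sheet
```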

View File

@@ -65,7 +65,7 @@ The result routes data to yes_output or no_output, enabling intelligent branchin
| condition | A plaintext English description of the condition to evaluate | str | Yes |
| yes_value | (Optional) Value to output if the condition is true. If not provided, input_value will be used. | Yes Value | No |
| no_value | (Optional) Value to output if the condition is false. If not provided, input_value will be used. | No Value | No |
-| model | The language model to use for evaluating the condition. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
+| model | The language model to use for evaluating the condition. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
### Outputs
@@ -103,7 +103,7 @@ The block sends the entire conversation history to the chosen LLM, including sys
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | No |
| messages | List of messages in the conversation. | List[Any] | Yes |
-| model | The language model to use for the conversation. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
+| model | The language model to use for the conversation. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
| ollama_host | Ollama host for local models | str | No |
@@ -257,7 +257,7 @@ The block formulates a prompt based on the given focus or source data, sends it
|-------|-------------|------|----------|
| focus | The focus of the list to generate. | str | No |
| source_data | The data to generate the list from. | str | No |
-| model | The language model to use for generating the list. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
+| model | The language model to use for generating the list. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| max_retries | Maximum number of retries for generating a valid list. | int | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
@@ -424,7 +424,7 @@ The block sends the input prompt to a chosen LLM, along with any system prompts
| prompt | The prompt to send to the language model. | str | Yes |
| expected_format | Expected format of the response. If provided, the response will be validated against this format. The keys should be the expected fields in the response, and the values should be the description of the field. | Dict[str, str] | Yes |
| list_result | Whether the response should be a list of objects in the expected format. | bool | No |
-| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
+| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| force_json_output | Whether to force the LLM to produce a JSON-only response. This can increase the block's reliability, but may also reduce the quality of the response because it prohibits the LLM from reasoning before providing its JSON response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, Any]] | No |
@@ -464,7 +464,7 @@ The block sends the input prompt to a chosen LLM, processes the response, and re
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. You can use any of the {keys} from Prompt Values to fill in the prompt with values from the prompt values dictionary by putting them in curly braces. | str | Yes |
-| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
+| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| retry | Number of times to retry the LLM call if the response does not match the expected format. | int | No |
| prompt_values | Values used to fill in the prompt. The values can be used in the prompt by putting them in a double curly braces, e.g. {{variable_name}}. | Dict[str, str] | No |
@@ -501,7 +501,7 @@ The block splits the input text into smaller chunks, sends each chunk to an LLM
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| text | The text to summarize. | str | Yes |
-| model | The language model to use for summarizing the text. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
+| model | The language model to use for summarizing the text. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| focus | The topic to focus on in the summary | str | No |
| style | The style of the summary to generate. | "concise" \| "detailed" \| "bullet points" \| "numbered list" | No |
| max_tokens | The maximum number of tokens to generate in the chat completion. | int | No |
@@ -763,7 +763,7 @@ Configure agent_mode_max_iterations to control loop behavior: 0 for single decis
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| prompt | The prompt to send to the language model. | str | Yes |
-| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "gpt-4-turbo" \| "gpt-3.5-turbo" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
+| model | The language model to use for answering the prompt. | "o3-mini" \| "o3-2025-04-16" \| "o1" \| "o1-mini" \| "gpt-5.2-2025-12-11" \| "gpt-5.1-2025-11-13" \| "gpt-5-2025-08-07" \| "gpt-5-mini-2025-08-07" \| "gpt-5-nano-2025-08-07" \| "gpt-5-chat-latest" \| "gpt-4.1-2025-04-14" \| "gpt-4.1-mini-2025-04-14" \| "gpt-4o-mini" \| "gpt-4o" \| "claude-opus-4-1-20250805" \| "claude-opus-4-20250514" \| "claude-sonnet-4-20250514" \| "claude-opus-4-5-20251101" \| "claude-sonnet-4-5-20250929" \| "claude-haiku-4-5-20251001" \| "claude-opus-4-6" \| "claude-3-haiku-20240307" \| "Qwen/Qwen2.5-72B-Instruct-Turbo" \| "nvidia/llama-3.1-nemotron-70b-instruct" \| "meta-llama/Llama-3.3-70B-Instruct-Turbo" \| "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" \| "meta-llama/Llama-3.2-3B-Instruct-Turbo" \| "llama-3.3-70b-versatile" \| "llama-3.1-8b-instant" \| "llama3.3" \| "llama3.2" \| "llama3" \| "llama3.1:405b" \| "dolphin-mistral:latest" \| "openai/gpt-oss-120b" \| "openai/gpt-oss-20b" \| "google/gemini-2.5-pro-preview-03-25" \| "google/gemini-3-pro-preview" \| "google/gemini-2.5-flash" \| "google/gemini-2.0-flash-001" \| "google/gemini-2.5-flash-lite-preview-06-17" \| "google/gemini-2.0-flash-lite-001" \| "mistralai/mistral-nemo" \| "cohere/command-r-08-2024" \| "cohere/command-r-plus-08-2024" \| "deepseek/deepseek-chat" \| "deepseek/deepseek-r1-0528" \| "perplexity/sonar" \| "perplexity/sonar-pro" \| "perplexity/sonar-deep-research" \| "nousresearch/hermes-3-llama-3.1-405b" \| "nousresearch/hermes-3-llama-3.1-70b" \| "amazon/nova-lite-v1" \| "amazon/nova-micro-v1" \| "amazon/nova-pro-v1" \| "microsoft/wizardlm-2-8x22b" \| "gryphe/mythomax-l2-13b" \| "meta-llama/llama-4-scout" \| "meta-llama/llama-4-maverick" \| "x-ai/grok-4" \| "x-ai/grok-4-fast" \| "x-ai/grok-4.1-fast" \| "x-ai/grok-code-fast-1" \| "moonshotai/kimi-k2" \| "qwen/qwen3-235b-a22b-thinking-2507" \| "qwen/qwen3-coder" \| "Llama-4-Scout-17B-16E-Instruct-FP8" \| "Llama-4-Maverick-17B-128E-Instruct-FP8" \| "Llama-3.3-8B-Instruct" \| "Llama-3.3-70B-Instruct" \| "v0-1.5-md" \| "v0-1.5-lg" \| "v0-1.0-md" | No |
| multiple_tool_calls | Whether to allow multiple tool calls in a single response. | bool | No |
| sys_prompt | The system prompt to provide additional context to the model. | str | No |
| conversation_history | The conversation history to provide context for the prompt. | List[Dict[str, Any]] | No |