Compare commits

65 Commits

Author SHA1 Message Date
Nicholas Tindle
326554d89a style(classic): update black to 24.10.0 and reformat
Update black version to match pre-commit hook (24.10.0) and reformat
all files with the new version.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 10:51:54 -06:00
Nicholas Tindle
5e22a1888a chore: add classic benchmark reports and workspaces to gitignore
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 10:42:55 -06:00
Nicholas Tindle
a4d7b0142f fix(classic): resolve all pyright type errors
- Add missing strategies (lats, multi_agent_debate) to PromptStrategyName
- Fix method override signatures for reasoning_effort parameter
- Fix Pydantic Field() overload issues with helper function
- Fix BeautifulSoup Tag type narrowing in web_fetch.py
- Fix Optional member access in playwright_browser.py and rewoo.py
- Convert hasattr patterns to getattr for proper type narrowing
- Add proper type casts for Literal types
- Fix file storage path type conversions
- Exclude legacy challenges/ from pyright checking

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 10:41:53 -06:00
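
The hasattr→getattr conversion above, in miniature; the class and attribute names here are invented for illustration. A hasattr() guard gives a type checker nothing to narrow on, while binding the value and checking it against None does:

```python
class Thoughts:
    speak: str | None = None


def announce(thoughts: Thoughts) -> None:
    # Before: if hasattr(thoughts, "speak"): print(thoughts.speak)
    speak = getattr(thoughts, "speak", None)
    if speak is not None:
        print(speak)  # the None check narrows the bound value
```
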
Nicholas Tindle
7d6375f59c style(classic): fix flake8 line length issue
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 01:25:00 -06:00
Nicholas Tindle
aeec0ce509 chore: add test.db to gitignore 2026-01-20 01:24:22 -06:00
Nicholas Tindle
b32bfcaac5 chore: remove test.db from tracking 2026-01-20 01:24:00 -06:00
Nicholas Tindle
5373a6eb6e style(classic): fix code formatting with black
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 01:23:51 -06:00
Nicholas Tindle
98cde46ccb style(classic): fix import sorting with isort
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 01:23:33 -06:00
Nicholas Tindle
bd10da10d9 ci: update pre-commit hooks for consolidated classic Poetry project
- Consolidate classic poetry-install hooks into single hook using classic/
- Update isort hook to work with consolidated project structure
- Simplify flake8 hooks to use single classic/.flake8 config
- Consolidate pyright hooks into single hook for classic/
- Add direct_benchmark to hook coverage

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 01:21:50 -06:00
Nicholas Tindle
60fdee1345 fix(classic): resolve linting and formatting issues for CI compliance
- Update .flake8 config to exclude workspace directories and ignore E203
- Fix import sorting (isort) across multiple files
- Fix code formatting (black) across multiple files
- Remove unused imports and fix line length issues (flake8)
- Fix f-strings without placeholders and unused variables

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 01:16:38 -06:00
Nicholas Tindle
6f2783468c feat(classic): add sub-agent architecture and LATS/multi-agent debate strategies
Add comprehensive sub-agent spawning infrastructure that enables prompt
strategies to coordinate multiple agents for advanced reasoning patterns.

New files:
- forge/agent/execution_context.py: ExecutionContext, ResourceBudget,
  SubAgentHandle, and AgentFactory protocol for sub-agent lifecycle
- agent_factory/default_factory.py: DefaultAgentFactory implementation
- prompt_strategies/lats.py: Language Agent Tree Search using MCTS
  with sub-agents for action expansion and evaluation
- prompt_strategies/multi_agent_debate.py: Multi-agent debate with
  proposal, critique, and consensus phases

Key changes:
- BaseMultiStepPromptStrategy gains spawn_sub_agent(), run_sub_agent(),
  spawn_and_run(), and run_parallel() methods
- Agent class accepts optional ExecutionContext and injects it into strategies
- Sub-agents enabled by default (enable_sub_agents=True)
- Resource limits: max_depth=5, max_sub_agents=25, max_cycles=25

All 7 strategies now available in benchmark:
one_shot, rewoo, plan_execute, reflexion, tree_of_thoughts, lats, multi_agent_debate

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 01:01:28 -06:00
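
A rough sketch of the budgeted fan-out these helpers enable; the method name run_parallel comes from the commit message, but the signature and budget bookkeeping below are assumptions:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class ResourceBudget:
    """Assumed shape of the budget that guards sub-agent fan-out."""
    max_sub_agents: int = 25
    spawned: int = 0

    def charge_spawn(self) -> None:
        if self.spawned >= self.max_sub_agents:
            raise RuntimeError("sub-agent budget exhausted")
        self.spawned += 1


async def run_parallel(budget: ResourceBudget, tasks: list[str]) -> list[str]:
    """Run several stand-in sub-agents concurrently under one budget."""
    async def run_one(task: str) -> str:
        budget.charge_spawn()
        await asyncio.sleep(0)  # placeholder for a full sub-agent run
        return f"result for {task!r}"

    return list(await asyncio.gather(*(run_one(t) for t in tasks)))


print(asyncio.run(run_parallel(ResourceBudget(), ["propose", "critique"])))
```
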
Nicholas Tindle
c1031b286d ci(classic): update CI workflows for consolidated Poetry project
Update all classic CI workflows to use the single consolidated
pyproject.toml at classic/ instead of individual project directories.

Changes:
- classic-autogpt-ci.yml: Run from classic/, update cache key and test paths
- classic-forge-ci.yml: Run from classic/, update cache key and test paths
- classic-benchmark-ci.yml: Run from classic/, use direct-benchmark command
- classic-python-checks.yml: Simplify to single job (no matrix needed)
- classic-autogpts-ci.yml: Update to use direct-benchmark for smoke tests

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 00:53:50 -06:00
Nicholas Tindle
b849eafb7f feat(direct_benchmark): enable shell command execution with safety denylist
Enable agents to execute shell commands during benchmarks by setting
execute_local_commands=True and using denylist mode to block dangerous
commands (rm, sudo, chmod, kill, etc.) while allowing safe operations.

Also adds ExecutePython challenge to test code execution capability.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 00:52:06 -06:00
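
A toy version of the denylist gate; the real flag (execute_local_commands) and matching rules live in forge, so treat the parsing below as an assumption:

```python
import shlex

DENYLIST = {"rm", "sudo", "chmod", "kill"}


def is_command_allowed(command_line: str) -> bool:
    """Reject a shell command whose executable is on the denylist."""
    try:
        argv = shlex.split(command_line)
    except ValueError:
        return False  # unparseable input is rejected outright
    return bool(argv) and argv[0] not in DENYLIST


assert is_command_allowed("ls -la")
assert not is_command_allowed("sudo rm -rf /")
```
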
Nicholas Tindle
572c3f5e0d refactor(classic): consolidate Poetry projects into single pyproject.toml
Merge forge/, original_autogpt/, and direct_benchmark/ into a single Poetry
project to eliminate cross-project path dependency issues.

Changes:
- Create classic/pyproject.toml with merged dependencies from all three projects
- Remove individual pyproject.toml and poetry.lock files from subdirectories
- Update all CLAUDE.md files to reflect commands run from classic/ root
- Update all README.md files with new installation and usage instructions

All packages are now included via the packages directive:
- forge/forge (core agent framework)
- original_autogpt/autogpt (AutoGPT agent)
- direct_benchmark/direct_benchmark (benchmark harness)

CLI entry points preserved: autogpt, serve, direct-benchmark

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 00:49:56 -06:00
Nicholas Tindle
89003a585d feat(direct_benchmark): show "would have passed" for timed-out challenges
When a challenge times out but the agent's solution would have passed
evaluation, this is now clearly indicated:

- Completion blocks show "TIMEOUT (would have passed)" in yellow
- Recent completions panel shows hourglass icon + "would pass" suffix
- Summary table has new "Would Pass" column
- Final summary shows "+N would pass" count
- Success rate includes "would pass" challenges

The evaluator still runs on timed-out challenges to calculate the score,
but success remains False. This gives visibility into near-misses that
just needed more time.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 00:30:00 -06:00
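
The reporting rule this describes, reduced to a sketch with illustrative field names: the evaluator score is still computed for timed-out runs, but success stays False and a separate flag drives the display.

```python
from dataclasses import dataclass


@dataclass
class ChallengeResult:
    timed_out: bool
    eval_passed: bool

    @property
    def success(self) -> bool:
        return self.eval_passed and not self.timed_out

    @property
    def would_pass(self) -> bool:
        return self.eval_passed and self.timed_out


r = ChallengeResult(timed_out=True, eval_passed=True)
assert not r.success and r.would_pass  # shown as "TIMEOUT (would have passed)"
```
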
Nicholas Tindle
0e65785228 fix(direct_benchmark): don't mark timed-out challenges as passed
Previously, the evaluator would run on all results including timed-out
challenges. If the agent happened to write a working solution before
timing out, evaluation would pass and override success=True, resulting
in contradictory output showing both PASS and "timed out".

Now we skip evaluation for timed-out challenges - they cannot pass.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 00:25:41 -06:00
Nicholas Tindle
f07dff1cdd fix(direct_benchmark): add pytest dependency for challenge evaluation
The TicTacToe and other challenges use pytest-based test files for
evaluation. Without pytest installed in the benchmark virtualenv,
these evaluations were silently failing.

Root cause: test.py imports pytest but the package wasn't a dependency,
causing ModuleNotFoundError during evaluation subprocess.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 00:21:12 -06:00
Nicholas Tindle
00e02a4696 feat(direct_benchmark): add run ID to completion blocks
Include config:challenge:attempt and timestamp in completion block
header for easier debugging and log correlation.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 00:14:23 -06:00
Nicholas Tindle
634bff8277 refactor(forge): replace Selenium with Playwright for web browsing
- Remove selenium.py and test_selenium.py
- Add playwright_browser.py with WebPlaywrightComponent
- Update web component exports to use Playwright
- Update dependencies in pyproject.toml/poetry.lock
- Minor agent and reflexion strategy improvements
- Update CLAUDE.md documentation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 23:57:17 -06:00
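
For orientation, the bare async Playwright flow that WebPlaywrightComponent presumably wraps looks roughly like this; the component itself adds configuration, selectors, and error handling:

```python
import asyncio

from playwright.async_api import async_playwright


async def read_page(url: str) -> str:
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        await page.goto(url)
        text = await page.inner_text("body")
        await browser.close()
        return text


print(asyncio.run(read_page("https://example.com")))
```
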
Nicholas Tindle
d591f36c7b fix(direct_benchmark): track cost from LLM provider
Previously cost was hardcoded to 0.0. Now extracts cumulative cost
from MultiProvider.get_incurred_cost() after each step execution.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 23:37:12 -06:00
Nicholas Tindle
a347bed0b1 feat(direct_benchmark): add incremental resume and selective reset
Benchmarks now automatically save progress and resume from where they
left off. State is persisted to .benchmark_state.json in reports dir.

Features:
- Auto-resume: runs skip already-completed challenges
- --fresh: clear all state and start over
- --retry-failures: re-run only failed challenges
- --reset-strategy/model/challenge: selective resets
- `state show/clear/reset` subcommands for state management
- Config mismatch detection with auto-reset

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 23:32:27 -06:00
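
A minimal sketch of the resume mechanics, assuming a simple schema for .benchmark_state.json (the real file may store more than completed run keys):

```python
import json
from pathlib import Path

STATE_FILE = Path("reports/.benchmark_state.json")


def load_completed() -> set[str]:
    if STATE_FILE.exists():
        return set(json.loads(STATE_FILE.read_text()).get("completed", []))
    return set()


def mark_completed(run_key: str) -> None:
    completed = load_completed()
    completed.add(run_key)
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps({"completed": sorted(completed)}))


# On a resumed run, any config:challenge key already recorded is skipped.
```
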
Nicholas Tindle
4eeb6ee2b0 feat(direct_benchmark): add CI mode for non-interactive environments
Add --ci flag that disables Rich Live display while preserving
completion blocks. Auto-detects CI environment via CI env var or
non-TTY stdout. Prints progress every 10 completions for visibility.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 23:21:10 -06:00
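
The detection rule as stated, written out as a standalone sketch:

```python
import os
import sys


def ci_mode_enabled(ci_flag: bool) -> bool:
    """--ci wins; otherwise fall back to the CI env var or a non-TTY stdout."""
    return ci_flag or bool(os.environ.get("CI")) or not sys.stdout.isatty()
```
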
Nicholas Tindle
7db962b9f9 feat(direct_benchmark): dynamic column layout up to 10 wide
- Calculate max columns based on terminal width (up to 10)
- Reduced panel width from 35 to 30 chars to fit more
- Wider terminals can now show more parallel runs side-by-side

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 23:15:16 -06:00
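
The column calculation is straightforward; a sketch using the 30-character panel width from the commit:

```python
import shutil

PANEL_WIDTH = 30
MAX_COLUMNS = 10


def max_columns() -> int:
    width = shutil.get_terminal_size(fallback=(80, 24)).columns
    return max(1, min(MAX_COLUMNS, width // PANEL_WIDTH))
```
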
Nicholas Tindle
9108b21541 fix(direct_benchmark): parallel execution and always show completion blocks
Fixes:
- Use run_key (config:challenge) instead of just config_name for tracking
  active runs - allows multiple challenges from same config to run in parallel
- Add asyncio.sleep(0) yields to let multiple tasks acquire semaphore
  and start before any proceed with work
- Always print completion blocks (not just failures) for visibility

This should properly show 8/8 active runs when running with --parallel 8.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 23:13:56 -06:00
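
The concurrency pattern behind the first two fixes, sketched in isolation; the harness's real run loop carries far more state:

```python
import asyncio


async def run_all(runs: list[tuple[str, str]], parallel: int = 8) -> None:
    sem = asyncio.Semaphore(parallel)
    active: set[str] = set()

    async def run_one(config_name: str, challenge: str) -> None:
        run_key = f"{config_name}:{challenge}"  # keyed per challenge, so two
        async with sem:                         # runs from one config coexist
            active.add(run_key)
            await asyncio.sleep(0)  # yield so sibling tasks can start too
            # ... execute the challenge here ...
            active.discard(run_key)

    await asyncio.gather(*(run_one(c, ch) for c, ch in runs))
```
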
Nicholas Tindle
ffe9325296 feat(direct_benchmark): multi-panel UI with copy-paste completion blocks
UI improvements:
- Multi-column layout: each active config gets its own panel showing
  challenge name and step history (last 6 steps with status)
- Copy-paste completion blocks: when a challenge finishes (especially
  failures), prints a detailed block with all steps for easy debugging
- Configurable logging: suppresses noisy LLM provider warnings unless
  --debug flag is set
- Pass debug flag through harness to UI

Example active runs panel:
┌─ one_shot/claude ─┬─ rewoo/claude ────┐
│ ReadFile          │ WriteFile         │
│   ✓ #1 read_file  │   ✓ #1 think      │
│   ✓ #2 write_file │   ✓ #2 plan       │
│   ● step 3: ...   │   ● step 3: ...   │
└───────────────────┴───────────────────┘

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 23:10:34 -06:00
Nicholas Tindle
0a616d9267 feat(direct_benchmark): add step-level logging with colored prefixes
- Add step callback to AgentRunner for real-time step logging
- BenchmarkUI now shows:
  - Active runs with current step info
  - Recent steps panel with colored config prefixes
  - Proper Live display refresh (implements __rich_console__)
- Each config gets a distinct color for easy identification
- Verbose mode prints step logs immediately with config prefix
- Fix Live display not updating (pass UI object, not rendered content)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 23:02:20 -06:00
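
The Live-display fix in the last bullet, reduced to its essence: Rich must hold the object itself so each refresh re-renders it. A toy renderable, not the project's actual UI class:

```python
import time

from rich.live import Live


class StepView:
    """Rich calls __rich_console__ on every refresh."""

    def __init__(self) -> None:
        self.step = 0

    def __rich_console__(self, console, options):
        yield f"running step {self.step}"


view = StepView()
with Live(view, refresh_per_second=4):  # pass the object, not a rendered string
    for i in range(3):
        view.step = i
        time.sleep(0.5)
```
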
Nicholas Tindle
ab95077e5b refactor(forge): remove VCR cassettes, use real API calls with skip for forks
- Remove vcrpy and pytest-recording dependencies
- Remove tests/vcr/ directory and vcr_cassettes submodule
- Remove .gitmodules (only had cassette submodule)
- Simplify CI workflow - no more cassette checkout/push/PAT_REVIEW
- Tests requiring API keys now skip if not set (fork PRs)
- Update CLAUDE.md files to remove cassette references
- Fix broken agbenchmark path in pyproject.toml

Security improvement: removes need for PAT with cross-repo write access.
Fork PRs will have API-dependent tests skipped (GitHub protects secrets).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 22:51:57 -06:00
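
The skip-if-unset behaviour implies a pytest pattern along these lines (the marker name is invented here):

```python
import os

import pytest

requires_openai = pytest.mark.skipif(
    not os.environ.get("OPENAI_API_KEY"),
    reason="OPENAI_API_KEY not set (e.g. on fork PRs); skipping live API test",
)


@requires_openai
def test_chat_completion() -> None:
    ...  # exercises the real API
```
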
Nicholas Tindle
e477150979 Merge branch 'dev' into make-old-work 2026-01-19 22:30:46 -06:00
Nicholas Tindle
804430e243 refactor(classic): migrate from agbenchmark to direct_benchmark harness
- Remove old benchmark/ folder with agbenchmark framework
- Move challenges to direct_benchmark/challenges/
- Move analysis tools (analyze_reports.py, analyze_failures.py) to direct_benchmark/
- Move challenges_already_beaten.json to direct_benchmark/
- Update CI workflow to use direct_benchmark
- Update CLAUDE.md files with new benchmarking instructions
- Add benchmarking section to original_autogpt/CLAUDE.md

The direct_benchmark harness directly instantiates agents without HTTP
server overhead, enabling parallel execution with asyncio semaphore.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 22:29:51 -06:00
Nicholas Tindle
acb320d32d feat(classic): add noninteractive mode env var and benchmark config logging
- Add NONINTERACTIVE_MODE env var support to AppConfig for disabling
  user interaction during automated runs
- Benchmark harness now sets NONINTERACTIVE_MODE=True when starting agents
- Add agent configuration logging at server startup (model, strategy, etc.)
- Harness logs env vars being passed to agent for verification
- Add --agent-output flag to show full agent server output for debugging

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 19:40:24 -06:00
Nicholas Tindle
32f68d5999 feat(classic): add failure analysis tool and improve benchmark output
Benchmark improvements:
- Add analyze_failures.py for pattern detection and failure analysis
- Add informative step output: tool name, args, result status, cost
- Add --all and --matrix flags for comprehensive model/strategy testing
- Add --analyze-only and --no-analyze flags for flexible analysis control
- Auto-run failure analysis after benchmarks with markdown export
- Fix directory creation bug in ReportManager (add parents=True)

Prompt strategy enhancements:
- Implement full plan_execute, reflexion, rewoo, tree_of_thoughts strategies
- Add PROMPT_STRATEGY env var support for strategy selection
- Add extended thinking support for Anthropic models
- Add reasoning effort support for OpenAI o-series models

LLM provider improvements:
- Add thinking_budget_tokens config for Anthropic extended thinking
- Add reasoning_effort config for OpenAI reasoning models
- Improve error feedback for LLM self-correction

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 18:58:41 -06:00
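
The thinking_budget_tokens config presumably maps onto the Anthropic SDK roughly like this; the model id is illustrative, and max_tokens must exceed the thinking budget:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model id
    max_tokens=4096,                   # must be larger than budget_tokens
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": "Plan the next step."}],
)
print(response.content)
```
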
Nicholas Tindle
49f56b4e8d feat(classic): enhance strategy benchmark harness with model comparison and bug fixes
- Add model comparison support to test harness (claude, openai, gpt5, opus presets)
- Add --models, --smart-llm, --fast-llm, --list-models CLI args
- Add real-time logging with timestamps and progress indicators
- Fix success parsing bug: read results[0].success instead of non-existent metrics.success
- Fix agbenchmark TestResult validation: use exception typename when value is empty
- Fix WebArena challenge validation: use strings instead of integers in instantiation_dict
- Fix Agent type annotations: create AnyActionProposal union for all prompt strategies
- Add pytest integration tests for the strategy benchmark harness

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 18:07:14 -06:00
Swifty
bc75d70e7d refactor(backend): Improve Langfuse tracing with v3 SDK patterns and @observe decorators (#11803)

This PR improves the Langfuse tracing implementation in the chat feature
by adopting the v3 SDK patterns, resulting in cleaner code and better
observability.

### Changes 🏗️

- **Simplified Langfuse client usage**: Replace manual client
initialization with `langfuse.get_client()` global singleton
- **Use v3 context managers**: Switch to
`start_as_current_observation()` and `propagate_attributes()` for
automatic trace propagation
- **Auto-instrument OpenAI calls**: Use `langfuse.openai` wrapper for
automatic LLM call tracing instead of manual generation tracking
- **Add `@observe` decorators**: All chat tools now have
`@observe(as_type="tool")` decorators for automatic tool execution
tracing:
  - `add_understanding`
  - `view_agent_output` (renamed from `agent_output`)
  - `create_agent`
  - `edit_agent`
  - `find_agent`
  - `find_block`
  - `find_library_agent`
  - `get_doc_page`
  - `run_agent`
  - `run_block`
  - `search_docs`
- **Remove manual trace lifecycle**: Eliminated the verbose `finally`
block that manually ended traces/generations
- **Rename tool**: `agent_output` → `view_agent_output` for clarity

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified chat feature works with Langfuse tracing enabled
  - [x] Confirmed traces appear correctly in Langfuse dashboard with tool spans
  - [x] Tested tool execution flows show up as nested observations

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - uses existing Langfuse environment
variables.
2026-01-19 20:56:51 +00:00
Nicholas Tindle
bead811e73 docs(classic): add workspace, settings, and permissions documentation
Document the layered configuration system including:
- Workspace structure (.autogpt/ directory layout)
- Settings location (environment variables, workspace YAML, agent YAML)
- Permission system (check order, pattern syntax, approval scopes)
- Default security behavior

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 12:17:10 -06:00
Nicholas Tindle
013f728ebf feat(forge): improve tool call error feedback for LLM self-correction
When tool calls fail validation, the error messages now include:
- What arguments were actually provided
- The expected parameter schema with types and required/optional indicators

This helps LLMs understand and fix their mistakes when retrying,
rather than just being told a parameter is missing.

Example improved error:
  Invalid function call for write_file: 'contents' is a required property
  You provided: {"filename": 'story.txt'}
  Expected parameters: {"filename": string (required), "contents": string (required)}

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 11:49:17 -06:00
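
Assembling that error text is simple string work; a sketch (the real formatting lives in forge's tool-call validation, and the schema type here is invented):

```python
def format_tool_error(tool: str, error: str, provided: dict,
                      schema: dict[str, tuple[str, bool]]) -> str:
    params = ", ".join(
        f'"{name}": {typ} ({"required" if required else "optional"})'
        for name, (typ, required) in schema.items()
    )
    return (
        f"Invalid function call for {tool}: {error}\n"
        f"You provided: {provided}\n"
        f"Expected parameters: {{{params}}}"
    )


print(format_tool_error(
    "write_file", "'contents' is a required property",
    {"filename": "story.txt"},
    {"filename": ("string", True), "contents": ("string", True)},
))
```
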
Nicholas Tindle
cda9572acd feat(forge): add lightweight web fetch component
Add WebFetchComponent for fast HTTP-based page fetching without browser
overhead. Uses trafilatura for intelligent content extraction.

Commands:
- fetch_webpage: Extract main content as text/markdown/xml
  - Removes navigation, ads, boilerplate automatically
  - Extracts page metadata (title, description, author, date)
  - Extracts and lists page links
  - Much faster than Selenium-based read_webpage

- fetch_raw_html: Get raw HTML for structure inspection
  - Optional truncation for large pages

Features:
- Trafilatura-powered content extraction (best-in-class accuracy)
- Automatic link extraction with relative URL resolution
- Page metadata extraction (OG tags, meta tags)
- Configurable timeout, max content length, max links
- Proper error handling for timeouts and HTTP errors
- 19 comprehensive tests

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 01:04:22 -06:00
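
Underneath, the fast path is roughly plain trafilatura, assuming a recent release with markdown output; the component adds timeouts, link limits, and error handling on top:

```python
import trafilatura

html = trafilatura.fetch_url("https://example.com")
if html is not None:
    text = trafilatura.extract(html, output_format="markdown")
    meta = trafilatura.extract_metadata(html)
    if meta:
        print(meta.title)
    if text:
        print(text[:200])
```
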
Nicholas Tindle
c1a1767034 feat(docs): Add block documentation auto-generation system (#11707)
- Add generate_block_docs.py script that introspects block code to
generate markdown
- Support manual content preservation via <!-- MANUAL: --> markers
- Add migrate_block_docs.py to preserve existing manual content from git
HEAD
- Add CI workflow (docs-block-sync.yml) to fail if docs drift from code
- Add Claude PR review workflow (docs-claude-review.yml) for doc changes
- Add manual LLM enhancement workflow (docs-enhance.yml)
- Add GitBook configuration (.gitbook.yaml, SUMMARY.md)
- Fix non-deterministic category ordering (categories is a set)
- Add comprehensive test suite (32 tests)
- Generate docs for 444 blocks with 66 preserved manual sections

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Extensively test code generation for the docs pages




> [!NOTE]
> Introduces an automated documentation pipeline for blocks and
integrates it into CI.
> 
> - Adds `scripts/generate_block_docs.py` (+ tests) to introspect blocks
and generate `docs/integrations/**`, preserving `<!-- MANUAL: -->`
sections
> - New CI workflows: **docs-block-sync** (fails if docs drift),
**docs-claude-review** (AI review for block/docs PRs), and
**docs-enhance** (optional LLM improvements)
> - Updates existing Claude workflows to use `CLAUDE_CODE_OAUTH_TOKEN`
instead of `ANTHROPIC_API_KEY`
> - Improves numerous block descriptions/typos and links across backend
blocks to standardize docs output
> - Commits initial generated docs including
`docs/integrations/README.md` and many provider/category pages

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 07:03:19 +00:00
Nicholas Tindle
e0784f8f6b refactor(forge): simplify deeply nested error handling in Anthropic provider
- Extract _get_tool_error_message helper method
- Replace 20+ levels of nesting with simple for loop
- Improve readability of tool_result construction
- Update benchmark poetry.lock

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 00:15:33 -06:00
Nicholas Tindle
3040f39136 feat(forge): modernize web search with tiered provider system
Replace basic DuckDuckGo-only search with a modern tiered system:

1. Tavily (primary) - AI-optimized results with content extraction
   - AI-generated answer summaries
   - Relevance scoring
   - Full page content extraction via search_and_extract command

2. Serper (secondary) - Fast, cheap Google SERP results
   - $0.30-1.00 per 1K queries
   - Real Google results without scraping

3. DDGS multi-engine (fallback) - Free, no API key required
   - Automatic fallback chain: DuckDuckGo → Bing → Brave → Google → etc.
   - 8 search backends supported

Key changes:
- Upgrade duckduckgo-search to ddgs v9.10 (renamed successor package)
- Add Tavily and Serper API integrations
- Implement automatic provider selection and fallback chain
- Add search_and_extract command for research with content extraction
- Add TAVILY_API_KEY and SERPER_API_KEY to env templates
- Update benchmark httpx constraint for ddgs compatibility
- 23 comprehensive tests for all providers and fallback scenarios

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 00:06:42 -06:00
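
A condensed sketch of the fallback chain; the Serper request shape follows their public HTTP API, the Tavily client comes from tavily-python, and the result normalization forge actually does is glossed over here:

```python
import os

import requests
from ddgs import DDGS


def search(query: str) -> list[dict]:
    if os.environ.get("TAVILY_API_KEY"):
        from tavily import TavilyClient
        client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
        return client.search(query)["results"]
    if os.environ.get("SERPER_API_KEY"):
        resp = requests.post(
            "https://google.serper.dev/search",
            headers={"X-API-KEY": os.environ["SERPER_API_KEY"]},
            json={"q": query},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("organic", [])
    return list(DDGS().text(query, max_results=10))  # free, no key needed
```
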
Nicholas Tindle
515504c604 fix(classic): resolve pyright type errors in original_autogpt
- Change Agent class to use ActionProposal instead of OneShotAgentActionProposal
  to support multiple prompt strategy types
- Widen display_thoughts parameter type from AssistantThoughts to ModelWithSummary
- Fix speak attribute access in agent_protocol_server with hasattr check
- Add type: ignore comments for intentional thoughts field overrides in strategies
- Remove unused OneShotAgentActionProposal import

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 23:53:23 -06:00
Nicholas Tindle
18edeaeaf4 fix(classic): fix linting and formatting errors across codebase
- Fix 32+ flake8 E501 (line too long) errors by shortening descriptions
- Remove unused import in todo.py
- Fix test_todo.py argument order (config= keyword)
- Add type annotations to fix pyright errors where straightforward
- Add noqa comments for flake8 false positives in __init__.py
- Remove unused nonlocal declarations in main.py
- Run black and isort to fix formatting
- Update CLAUDE.md with improved linting commands

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 23:37:28 -06:00
Nicholas Tindle
44182aff9c feat(classic): add strategy benchmark test harness for CI
- Add test_prompt_strategies.py harness to compare prompt strategies
- Add pytest wrapper (test_strategy_benchmark.py) for CI integration
- Fix serve command (remove invalid --port flag, use AP_SERVER_PORT env)
- Fix test category (interface -> general)
- Add aiohttp-retry dependency for agbenchmark
- Add pytest markers: slow, integration, requires_agent

Usage:
  poetry run python agbenchmark_config/test_prompt_strategies.py --quick
  poetry run pytest tests/integration/test_strategy_benchmark.py -v

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 23:36:19 -06:00
Nicholas Tindle
864c5a7846 fix(classic): approve+feedback now executes command then sends feedback
Previously, when a user selected "Once" or "Always" with feedback (via Tab),
the command was NOT executed because UserFeedbackProvided was raised before
checking the approval scope. This fix changes the architecture from
exception-based to return-value-based.

Changes:
- Add PermissionCheckResult class with allowed, scope, and feedback fields
- Change check_command() to return PermissionCheckResult instead of bool
- Update prompt_fn signature to return (ApprovalScope, feedback) tuple
- Add pending_user_feedback mechanism to EpisodicActionHistory
- Update execute() to handle feedback after successful command execution
- Feedback message explicitly states "Command executed successfully"
- Add on_auto_approve callback for displaying auto-approved commands
- Add comprehensive tests for approval/denial with feedback scenarios

Behavior:
- Once + feedback → Execute command, then send feedback to agent
- Always + feedback → Execute command, save permission, send feedback
- Deny + feedback → Don't execute, send feedback to agent

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 22:32:43 -06:00
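
The shape of the return-value-based flow, sketched with invented minimal types; the point is that execution happens before feedback is relayed, so "Once + feedback" no longer swallows the command:

```python
from dataclasses import dataclass
from enum import Enum, auto


class ApprovalScope(Enum):
    ONCE = auto()
    ALWAYS = auto()


@dataclass
class PermissionCheckResult:
    allowed: bool
    scope: ApprovalScope | None = None
    feedback: str | None = None


def handle(result: PermissionCheckResult) -> None:
    if result.allowed:
        print("executing command")        # run first instead of raising
        if result.scope is ApprovalScope.ALWAYS:
            print("saving permission")    # persist for future steps
    if result.feedback:
        print(f"relaying feedback: {result.feedback}")  # after execution
```
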
Nicholas Tindle
699fffb1a8 feat(classic): add Rich interactive selector for command approval
Adds a custom Rich-based interactive selector for the command approval
workflow. Features include:
- Arrow key navigation for selecting approval options
- Tab to add context to any selection (e.g., "Once + also check file x")
- Dedicated inline feedback option with shadow placeholder text
- Quick select with number keys 1-5
- Works within existing asyncio event loop (no prompt_toolkit dependency)

Also adds UIProvider abstraction pattern for future UI implementations.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 21:49:43 -06:00
Nicholas Tindle
f0641c2d26 fix(classic): auto-advance plan steps in Plan-Execute strategy
The strategy was stuck in a loop because it tracked plan steps but never
advanced them - the record_step_success() method existed but was never
called by the agent's execution loop.

Fix by using a _pending_step_advance flag to track when an action has
been proposed. On the next parse_response_content() call, advance the
previous step before processing the new response. This keeps step
tracking self-contained in the strategy without requiring agent changes.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 21:14:16 -06:00
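
The deferred-advance trick in isolation; the attribute names follow the commit, everything else is a stand-in:

```python
class PlanExecuteSketch:
    def __init__(self) -> None:
        self.current_step = 0
        self._pending_step_advance = False

    def parse_response_content(self, response: str) -> str:
        if self._pending_step_advance:
            self.current_step += 1          # advance the *previous* step first
            self._pending_step_advance = False
        self._pending_step_advance = True   # an action is about to be proposed
        return response


s = PlanExecuteSketch()
s.parse_response_content("step one")
s.parse_response_content("step two")
assert s.current_step == 1  # the first step advanced on the second call
```
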
Nicholas Tindle
94b6f74c95 feat(classic): add multiple prompt strategies for agent reasoning
Implement four new prompt strategies based on research papers:

- ReWOO: Reasoning Without Observation (5x token efficiency)
- Plan-and-Execute: Separate planning from execution phases
- Reflexion: Verbal reinforcement learning with episodic memory
- Tree of Thoughts: Deliberate problem solving with tree search

Each strategy extends a new BaseMultiStepPromptStrategy base class
with shared utilities. Strategies are selectable via PROMPT_STRATEGY
environment variable or config.prompt_strategy setting.

Fix JSONSchema generation issue where Optional/Union types created
anyOf schemas without direct type field - resolved by storing
plan/phase state in strategy instances rather than ActionProposal.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 20:33:10 -06:00
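
Selection likely reduces to a registry lookup of this shape; the mapping and default below are assumptions:

```python
import os

# Stand-ins for the real strategy classes registered under these names.
STRATEGIES: dict[str, type] = {
    "one_shot": object,
    "rewoo": object,
    "plan_execute": object,
    "reflexion": object,
    "tree_of_thoughts": object,
}

name = os.environ.get("PROMPT_STRATEGY", "one_shot")
strategy_cls = STRATEGIES[name]
```
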
Nicholas Tindle
46aabab3ea feat(classic): upgrade to Python 3.12+ with CI testing on 3.12, 3.13, 3.14
- Update Python version constraint from ^3.10 to ^3.12 in all pyproject.toml
- Update classifiers to reflect Python 3.12, 3.13, 3.14 support
- Update dependencies for Python 3.13+ compatibility:
  - chromadb: ^0.4.10 -> ^1.4.0
  - numpy: >=1.26.0,<2.0.0 -> >=2.0.0
  - watchdog: 4.0.0 -> ^6.0.0
  - spacy: ^3.0.0 -> ^3.8.0 (numpy 2.x compatibility)
  - en-core-web-sm model: 3.7.1 -> 3.8.0
  - httpx (benchmark): ^0.24.0 -> ^0.27.0
- Update tool configuration:
  - Black target-version: py310 -> py312
  - Pyright pythonVersion: 3.10 -> 3.12
- Update Dockerfiles to use Python 3.12
- Update CI workflows to test on Python 3.12, 3.13, and 3.14
- Regenerate all poetry.lock files

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 20:25:11 -06:00
Nicholas Tindle
0a65df5102 fix(classic): always use native tool calling, fix N/A command loop
- Remove openai_functions config option - native tool calling is now always enabled
- Remove use_functions_api from BaseAgentConfiguration and prompt strategy
- Add use_prefill config to disable prefill for Anthropic (prefill + tools incompatible)
- Update anthropic dependency to ^0.45.0 for tools API support
- Simplify prompt strategy to always expect tool_calls from LLM response

This fixes the N/A command loop bug where models would output "N/A" as a
command name when function calling was disabled. With native tool calling
always enabled, models are forced to pick from valid tools only.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 19:54:40 -06:00
Nicholas Tindle
6fbd208fe3 chore: ignore .claude/settings.local.json in all directories
Update gitignore to use glob pattern for settings.local.json files
in any .claude directory. Also untrack the existing file.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 18:54:42 -06:00
Nicholas Tindle
8fc174ca87 refactor(classic): simplify log format by removing timestamps
Remove asctime from log formats since terminal output already has
timestamps from the logging infrastructure. Makes logs cleaner
and easier to read.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 18:52:47 -06:00
Nicholas Tindle
cacc89790f feat(classic): improve AutoGPT configuration and setup
Environment loading:
- Search for .env in multiple locations (cwd, ~/.autogpt, ~/.config/autogpt)
- Allows running autogpt from any directory
- Document search order in .env.template

Setup simplification:
- Remove interactive AI settings revision (was broken/unused)
- Simplify to just printing current settings
- Clean up unused imports

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 18:52:38 -06:00
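
The search order, sketched with python-dotenv; the exact file names under the home-directory locations are assumptions:

```python
from pathlib import Path

from dotenv import load_dotenv

SEARCH_PATHS = [
    Path.cwd() / ".env",
    Path.home() / ".autogpt" / ".env",
    Path.home() / ".config" / "autogpt" / ".env",
]

for candidate in SEARCH_PATHS:
    if candidate.is_file():
        load_dotenv(candidate)
        break  # first match wins
```
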
Nicholas Tindle
b9113bee02 feat(classic): enhance existing components with new capabilities
CodeExecutorComponent:
- Add timeout and env_vars parameters to execution commands
- Add execute_shell_popen for streaming output
- Improve error handling with CodeTimeoutError

FileManagerComponent:
- Add file_info, file_search, file_copy, file_move commands
- Add directory_create, directory_list_tree commands
- Better path validation and error messages

GitOperationsComponent:
- Add git_log, git_show, git_branch commands
- Add git_stash, git_stash_pop, git_stash_list commands
- Add git_cherry_pick, git_revert, git_reset commands
- Add git_remote, git_fetch, git_pull, git_push commands

UserInteractionComponent:
- Add ask_multiple_choice for structured options
- Add notify_user for non-blocking notifications
- Add confirm_action for yes/no confirmations

WebSearchComponent:
- Minor error handling improvements

WebSeleniumComponent:
- Add get_page_content, execute_javascript commands
- Add take_element_screenshot command
- Add wait_for_element, scroll_page commands
- Improve element interaction reliability

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 18:52:27 -06:00
Nicholas Tindle
3f65da03e7 feat(classic): add new exception types for enhanced error handling
Add specialized exception classes for better error reporting:
- CodeTimeoutError: For code execution timeouts
- HTTPError: For HTTP request failures with status code/URL
- DataProcessingError: For JSON/CSV processing errors

Each exception includes helpful hints for users.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 18:52:10 -06:00
Nicholas Tindle
9e96d11b2d feat(classic): add utility components for agent capabilities
Add 6 new utility components to expand agent functionality:

- ArchiveHandlerComponent: ZIP/TAR archive operations (create, extract, list)
- ClipboardComponent: In-memory clipboard for copy/paste operations
- DataProcessorComponent: CSV/JSON data manipulation and analysis
- HTTPClientComponent: HTTP requests (GET, POST, PUT, DELETE)
- MathUtilsComponent: Mathematical calculations and statistics
- TextUtilsComponent: Text processing (regex, diff, encoding, hashing)

All components follow the forge component pattern with:
- CommandProvider for exposing commands
- DirectiveProvider for resources/best practices
- Comprehensive parameter validation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 18:50:52 -06:00
Nicholas Tindle
4c264b7ae9 feat(classic): add TodoComponent with LLM-powered decomposition
Add a task management component modeled after Claude Code's TodoWrite:
- TodoItem with recursive sub_items for hierarchical task structure
- todo_write: atomic list replacement with sub-items support
- todo_read: retrieve current todos with nested structure
- todo_clear: clear all todos
- todo_decompose: use smart LLM to break down tasks into sub-steps

Features:
- Hierarchical task tracking with independent status per sub-item
- MessageProvider shows todos in LLM context with proper indentation
- DirectiveProvider adds best practices for task management
- Graceful fallback when LLM provider not configured

Integrates with:
- original_autogpt Agent (full LLM decomposition support)
- ForgeAgent (basic task tracking, no decomposition)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 18:49:48 -06:00
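
A plausible pydantic shape for the hierarchical todos; the status values are assumptions, and the recursive sub_items field is the interesting part:

```python
from typing import Literal

from pydantic import BaseModel


class TodoItem(BaseModel):
    content: str
    status: Literal["pending", "in_progress", "completed"] = "pending"
    sub_items: list["TodoItem"] = []


todo = TodoItem(
    content="ship feature",
    sub_items=[TodoItem(content="write tests"), TodoItem(content="update docs")],
)
```
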
Nicholas Tindle
0adbc0bd05 fix(classic): update CI for removed frontend and helper scripts
Remove references to deleted files (./run, cli.py, setup.py, frontend/)
from CI workflows. Replace ./run agent start with direct poetry commands
to start agent servers in background.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 17:41:11 -06:00
Nicholas Tindle
8f3291bc92 feat(classic): add workspace permissions system for agent commands
Add a layered permission system that controls agent command execution:

- Create autogpt.yaml in .autogpt/ folder with default allow/deny rules
- File operations in workspace allowed by default
- Sensitive files (.env, .key, .pem) blocked by default
- Dangerous shell commands (sudo, rm -rf) blocked by default
- Interactive prompts for unknown commands (y=agent, Y=workspace, n=deny)
- Agent-specific permissions stored in .autogpt/agents/{id}/permissions.yaml

Files added:
- forge/forge/config/workspace_settings.py - Pydantic models for settings
- forge/forge/permissions.py - CommandPermissionManager with pattern matching

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 17:39:33 -06:00
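
The layered check order, sketched with plain fnmatch patterns (the real pattern syntax and rule storage may differ): agent-specific rules are consulted before workspace rules, and an unmatched command falls through to the interactive prompt.

```python
from fnmatch import fnmatch


def check(command: str, agent_rules: dict[str, bool],
          workspace_rules: dict[str, bool]) -> bool | None:
    for rules in (agent_rules, workspace_rules):
        for pattern, allowed in rules.items():
            if fnmatch(command, pattern):
                return allowed
    return None  # unknown: ask the user (y = agent, Y = workspace, n = deny)


assert check("read_file notes.txt", {}, {"read_file *": True}) is True
assert check("shell sudo rm -rf /", {}, {"shell sudo*": False}) is False
```
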
Nicholas Tindle
7a20de880d chore: add .autogpt/ to gitignore
The .autogpt/ directory is where AutoGPT stores agent data when running
from any directory. This should not be committed to version control.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 17:02:47 -06:00
Nicholas Tindle
ef8a6d2528 feat(classic): make AutoGPT installable and runnable from any directory
Add --workspace option to CLI that defaults to current working directory,
allowing users to run `autogpt` from any folder. Agent data is now stored
in `.autogpt/` subdirectory of the workspace instead of a hardcoded path.

Changes:
- Add -w/--workspace CLI option to run and serve commands
- Remove dependency on forge package location for PROJECT_ROOT
- Update config to use workspace instead of project_root
- Store agent data in .autogpt/ within workspace directory
- Update pyproject.toml files with proper PyPI metadata
- Fix outdated tests to match current implementation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 17:00:36 -06:00
Nicholas Tindle
fd66be2aaa chore(classic): remove unneeded files and add CLAUDE.md docs
- Remove deprecated Flutter frontend (replaced by autogpt_platform)
- Remove shell scripts (run, setup, autogpt.sh, etc.)
- Remove tutorials (outdated)
- Remove CLI-USAGE.md and FORGE-QUICKSTART.md
- Add CLAUDE.md files for Claude Code guidance

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 16:17:35 -06:00
Nicholas Tindle
ae2cc97dc4 feat(classic): add modern Anthropic models and fix deprecated API
- Add Claude 3.5 v2, Claude 4 Sonnet, Claude 4 Opus, and Claude 4.5 Opus models
- Add rolling aliases (CLAUDE_SONNET, CLAUDE_OPUS, CLAUDE_HAIKU)
- Fix deprecated beta.tools.messages.create API call to use standard messages.create
- Update anthropic SDK from ^0.25.1 to >=0.40,<1.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 16:15:16 -06:00
Nicholas Tindle
1b56ff13d9 test 2026-01-18 15:32:10 -06:00
Zamil Majdy
f31c160043 feat(platform): add endedAt field and fix execution analytics timestamps (#11759)
## Summary

This PR adds proper execution end time tracking and fixes timestamp
handling throughout the execution analytics system.

### Key Changes

1. **Added `endedAt` field to database schema** - Executions now have a
dedicated field for tracking when they finish
2. **Fixed timestamp nullable handling** - `started_at` and `ended_at`
are now properly nullable in types
3. **Fixed chart aggregation** - Reduced threshold from ≥3 to ≥1
executions per day
4. **Improved timestamp display** - Moved timestamps to expandable
details section in analytics table
5. **Fixed nullable timestamp bugs** - Updated all frontend code to
handle null timestamps correctly

## Problem Statement

### Issue 1: Missing Execution End Times
Previously, executions used `updatedAt` (last DB update) as a proxy for
"end time". This broke when adding correctness scores retroactively -
the end time would change to whenever the score was added, not when the
execution actually finished.

### Issue 2: Chart Shows Only One Data Point
The accuracy trends chart showed only one data point despite having
executions across multiple days. Root cause: aggregation required ≥3
executions per day.

### Issue 3: Incorrect Type Definitions
Manually maintained types defined `started_at` and `ended_at` as
non-nullable `Date`, contradicting reality where QUEUED executions
haven't started yet.

## Solution

### Database Schema (`schema.prisma`)
```prisma
model AgentGraphExecution {
  // ...
  startedAt DateTime?
  endedAt   DateTime?  // NEW FIELD
  // ...
}
```

### Execution Lifecycle
- **QUEUED**: `startedAt = null`, `endedAt = null` (not started)
- **RUNNING**: `startedAt = set`, `endedAt = null` (in progress)  
- **COMPLETED/FAILED/TERMINATED**: `startedAt = set`, `endedAt = set`
(finished)
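
A sketch of the transition rule the lifecycle implies, using the backend's snake_case field names and simplified to the two timestamps:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

TERMINAL = {"COMPLETED", "FAILED", "TERMINATED"}


@dataclass
class Execution:
    started_at: Optional[datetime] = None
    ended_at: Optional[datetime] = None


def on_status_change(execution: Execution, new_status: str) -> None:
    now = datetime.now(timezone.utc)
    if new_status == "RUNNING" and execution.started_at is None:
        execution.started_at = now
    if new_status in TERMINAL and execution.ended_at is None:
        execution.ended_at = now  # frozen; never overwritten later
```
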

### Migration Strategy
```sql
-- Add endedAt column
ALTER TABLE "AgentGraphExecution" ADD COLUMN "endedAt" TIMESTAMP(3);

-- Backfill ONLY terminal executions (prevents marking RUNNING executions as ended)
UPDATE "AgentGraphExecution"
SET "endedAt" = "updatedAt"
WHERE "endedAt" IS NULL
  AND "executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED');
```

## Changes by Component

### Backend

**`schema.prisma`**
- Added `endedAt` field to `AgentGraphExecution`

**`execution.py`**
- Made `started_at` and `ended_at` optional with Field descriptions
- Updated `from_db()` to use `endedAt` instead of `updatedAt`
- `update_graph_execution_stats()` sets `endedAt` when status becomes
terminal

**`execution_analytics_routes.py`**
- Removed `created_at`/`updated_at` from `ExecutionAnalyticsResult` (DB
metadata, not execution data)
- Kept only `started_at`/`ended_at` (actual execution runtime)
- Made settings global (avoid recreation)
- Moved OpenAI key validation to `_process_batch` (only check when LLM
actually runs)

**`analytics.py`**
- Fixed aggregation: `COUNT(*) >= 1` (was 3) - include all days with ≥1
execution
- Uses `createdAt` for chart grouping (when execution was queued)

**`late_execution_monitor.py`**
- Handle optional `started_at` with fallback to `datetime.min` for
sorting
- Display "Not started" when `started_at` is null

### Frontend

**Type Definitions**
- Fixed manually maintained `types.ts`: `started_at: Date | null` (was
non-nullable)
- Generated types were already correct

**Analytics Components**
- `AnalyticsResultsTable.tsx`: Show only `started_at`/`ended_at` in
2-column expandable grid
- `ExecutionAnalyticsForm.tsx`: Added filter explanation UI

**Monitoring Components** - Fixed null handling bugs:
- `OldAgentLibraryView.tsx`: Handle null in reduce function
- `agent-runs-selector-list.tsx`: Safe sorting with `?.getTime() ?? 0`
- `AgentFlowList.tsx`: Filter/sort with null checks
- `FlowRunsStatus.tsx`: Filter null timestamps
- `FlowRunsTimeline.tsx`: Filter executions with null timestamps before
rendering
- `monitoring/page.tsx`: Safe sorting
- `ActivityItem.tsx`: Fallback to "recently" for null timestamps

## Benefits

- **Accurate End Times**: `endedAt` is frozen when execution finishes, not updated later
- **Type Safety**: Nullable types match reality, exposing real bugs
- **Better UX**: Chart shows all days with data (not just days with ≥3 executions)
- **Bug Fixes**: 7+ frontend components now handle null timestamps correctly
- **Documentation**: Field descriptions explain when timestamps are null

## Testing

### Backend
```bash
cd autogpt_platform/backend
poetry run format  # All checks passed
poetry run lint    # All checks passed
```

### Frontend  
```bash
cd autogpt_platform/frontend
pnpm format        # All checks passed
pnpm lint          # All checks passed
pnpm types         # All type errors fixed
```

### Test Data Generation
Created script to generate 35 test executions across 7 days with
correctness scores:
```bash
poetry run python scripts/generate_test_analytics_data.py
```

## Migration Notes

⚠️ **Important**: The migration only backfills `endedAt` for executions
with terminal status (COMPLETED, FAILED, TERMINATED). Active executions
(QUEUED, RUNNING) correctly keep `endedAt = null`.

## Breaking Changes

None - this is backward compatible:
- `endedAt` is nullable, existing code that doesn't use it is unaffected
- Frontend already used generated types which were correct
- Migration safely backfills historical data


> [!NOTE]
> Introduces explicit execution end-time tracking and normalizes
timestamp handling across backend and frontend.
> 
> - Adds `endedAt` to `AgentGraphExecution` (schema + migration);
backfills terminal executions; sets `endedAt` on terminal status updates
> - Makes `GraphExecutionMeta.started_at/ended_at` optional; updates
`from_db()` to use DB `endedAt`; exposes timestamps in
`ExecutionAnalyticsResult`
> - Moves OpenAI key validation into batch processing; instantiates
`Settings` once
> - Accuracy trends: reduce daily aggregation threshold to `>= 1`;
optional historical series
> - Monitoring/analytics UI: results table shows/export
`started_at`/`ended_at`; adds chart filter explainer
> - Frontend null-safety: update types (`Date | null`) and fix
sorting/filtering/rendering for nullable timestamps across monitoring
and library views
> - Late execution monitor: safe sorting/display when `started_at` is
null
> - OpenAPI specs updated for new/nullable fields

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-01-16 21:44:24 +00:00
Nicholas Tindle
06550a87eb feat(backend): add missed default credentials (#11760)
### Changes 🏗️

**Fixed missing default credentials and provider name mismatch in the
credentials store:**

1. **Provider name correction** (`credentials_store.py:97-103`)
   - Changed `provider="unreal"` → `provider="unreal_speech"` to match the existing `unreal_speech_api_key` setting and block usage
   - Updated title from "Use Credits for Unreal" → "Use Credits for Unreal Speech" for clarity

2. **Added missing OpenWeatherMap credentials** (`credentials_store.py:219-226`)
   - New `openweathermap_credentials` definition with `APIKeyCredentials`
   - Uses existing `settings.secrets.openweathermap_api_key` setting that was previously defined but had no credential object
   - Added to `DEFAULT_CREDENTIALS` list

3. **Fixed credentials not exposed in `get_all_creds()`** (`credentials_store.py:343-354`)
   - Added `llama_api_credentials` conditional append (was defined but not returned to users)
   - Added `v0_credentials` conditional append (was defined but not returned to users)
   - Added `openweathermap_credentials` conditional append

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified provider name `unreal_speech` matches block usage in `text_to_speech_block.py`
  - [x] Confirmed `openweathermap_api_key` setting exists in secrets
  - [x] Confirmed `llama_api_key` and `v0_api_key` settings exist in secrets


> [!NOTE]
> Aligns backend credential definitions and exposes missing system
creds; updates frontend to hide new built-ins.
> 
> - Backend `credentials_store.py`:
>   - Corrects `provider` to `unreal_speech` and updates title
> - Adds `openweathermap_credentials`; includes in `DEFAULT_CREDENTIALS`
and `get_all_creds()` when key present
> - Ensures `llama_api_credentials` and `v0_credentials` are returned by
`get_all_creds()`
> - Frontend `integrations/page.tsx`:
> - Extends `hiddenCredentials` with IDs for `v0`, `webshare_proxy`, and
`openweathermap`

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-16 21:18:12 +00:00
Nicholas Tindle
ea521eed26 wip: add support for new openai models (non-working) 2025-12-26 10:02:17 -06:00
2460 changed files with 55036 additions and 820866 deletions

View File

@@ -6,11 +6,15 @@ on:
paths:
- '.github/workflows/classic-autogpt-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/direct_benchmark/**'
- 'classic/forge/**'
pull_request:
branches: [ master, dev, release-* ]
paths:
- '.github/workflows/classic-autogpt-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/direct_benchmark/**'
- 'classic/forge/**'
concurrency:
group: ${{ format('classic-autogpt-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
@@ -19,47 +23,22 @@ concurrency:
defaults:
run:
shell: bash
working-directory: classic/original_autogpt
working-directory: classic
jobs:
test:
permissions:
contents: read
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}
runs-on: ubuntu-latest
steps:
# Quite slow on macOS (2~4 minutes to set up Docker)
# - name: Set up Docker (macOS)
# if: runner.os == 'macOS'
# uses: crazy-max/ghaction-setup-docker@v3
- name: Start MinIO service (Linux)
if: runner.os == 'Linux'
- name: Start MinIO service
working-directory: '.'
run: |
docker pull minio/minio:edge-cicd
docker run -d -p 9000:9000 minio/minio:edge-cicd
- name: Start MinIO service (macOS)
if: runner.os == 'macOS'
working-directory: ${{ runner.temp }}
run: |
brew install minio/stable/minio
mkdir data
minio server ./data &
# No MinIO on Windows:
# - Windows doesn't support running Linux Docker containers
# - It doesn't seem possible to start background processes on Windows. They are
# killed after the step returns.
# See: https://github.com/actions/runner/issues/598#issuecomment-2011890429
- name: Checkout repository
uses: actions/checkout@v4
with:
@@ -71,41 +50,23 @@ jobs:
git config --global user.name "Auto-GPT-Bot"
git config --global user.email "github-bot@agpt.co"
- name: Set up Python ${{ matrix.python-version }}
- name: Set up Python 3.12
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
python-version: "3.12"
- id: get_date
name: Get date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Set up Python dependency cache
# On Windows, unpacking cached dependencies takes longer than just installing them
if: runner.os != 'Windows'
uses: actions/cache@v4
with:
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('classic/original_autogpt/poetry.lock') }}
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('classic/poetry.lock') }}
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
run: |
curl -sSL https://install.python-poetry.org | python3 -
if [ "${{ runner.os }}" = "macOS" ]; then
PATH="$HOME/.local/bin:$PATH"
echo "$HOME/.local/bin" >> $GITHUB_PATH
fi
- name: Install Poetry (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -
$env:PATH += ";$env:APPDATA\Python\Scripts"
echo "$env:APPDATA\Python\Scripts" >> $env:GITHUB_PATH
- name: Install Poetry
run: curl -sSL https://install.python-poetry.org | python3 -
- name: Install Python dependencies
run: poetry install
@@ -116,12 +77,12 @@ jobs:
--cov=autogpt --cov-branch --cov-report term-missing --cov-report xml \
--numprocesses=logical --durations=10 \
--junitxml=junit.xml -o junit_family=legacy \
tests/unit tests/integration
original_autogpt/tests/unit original_autogpt/tests/integration
env:
CI: true
PLAIN_OUTPUT: True
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
S3_ENDPOINT_URL: ${{ runner.os != 'Windows' && 'http://127.0.0.1:9000' || '' }}
S3_ENDPOINT_URL: http://127.0.0.1:9000
AWS_ACCESS_KEY_ID: minioadmin
AWS_SECRET_ACCESS_KEY: minioadmin
@@ -135,11 +96,11 @@ jobs:
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
flags: autogpt-agent,${{ runner.os }}
flags: autogpt-agent
- name: Upload logs to artifact
if: always()
uses: actions/upload-artifact@v4
with:
name: test-logs
path: classic/original_autogpt/logs/
path: classic/logs/

View File

@@ -11,9 +11,6 @@ on:
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- 'classic/run'
- 'classic/cli.py'
- 'classic/setup.py'
- '!**/*.md'
pull_request:
branches: [ master, dev, release-* ]
@@ -22,9 +19,6 @@ on:
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- 'classic/run'
- 'classic/cli.py'
- 'classic/setup.py'
- '!**/*.md'
defaults:
@@ -35,13 +29,9 @@ defaults:
jobs:
serve-agent-protocol:
runs-on: ubuntu-latest
strategy:
matrix:
agent-name: [ original_autogpt ]
fail-fast: false
timeout-minutes: 20
env:
min-python-version: '3.10'
min-python-version: '3.12'
steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -55,22 +45,22 @@ jobs:
python-version: ${{ env.min-python-version }}
- name: Install Poetry
working-directory: ./classic/${{ matrix.agent-name }}/
run: |
curl -sSL https://install.python-poetry.org | python -
- name: Run regression tests
- name: Install dependencies
run: poetry install
- name: Run smoke tests with direct-benchmark
run: |
./run agent start ${{ matrix.agent-name }}
cd ${{ matrix.agent-name }}
poetry run agbenchmark --mock --test=BasicRetrieval --test=Battleship --test=WebArenaTask_0
poetry run agbenchmark --test=WriteFile
poetry run direct-benchmark run \
--strategies one_shot \
--models claude \
--tests ReadFile,WriteFile \
--json
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AGENT_NAME: ${{ matrix.agent-name }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
HELICONE_CACHE_ENABLED: false
HELICONE_PROPERTY_AGENT: ${{ matrix.agent-name }}
REPORTS_FOLDER: ${{ format('../../reports/{0}', matrix.agent-name) }}
TELEMETRY_ENVIRONMENT: autogpt-ci
TELEMETRY_OPT_IN: ${{ github.ref_name == 'master' }}
NONINTERACTIVE_MODE: "true"
CI: true

View File

@@ -1,17 +1,21 @@
name: Classic - AGBenchmark CI
name: Classic - Direct Benchmark CI
on:
push:
branches: [ master, dev, ci-test* ]
paths:
- 'classic/benchmark/**'
- '!classic/benchmark/reports/**'
- 'classic/direct_benchmark/**'
- 'classic/benchmark/agbenchmark/challenges/**'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- .github/workflows/classic-benchmark-ci.yml
pull_request:
branches: [ master, dev, release-* ]
paths:
- 'classic/benchmark/**'
- '!classic/benchmark/reports/**'
- 'classic/direct_benchmark/**'
- 'classic/benchmark/agbenchmark/challenges/**'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- .github/workflows/classic-benchmark-ci.yml
concurrency:
@@ -23,23 +27,16 @@ defaults:
shell: bash
env:
min-python-version: '3.10'
min-python-version: '3.12'
jobs:
test:
permissions:
contents: read
benchmark-tests:
runs-on: ubuntu-latest
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}
defaults:
run:
shell: bash
working-directory: classic/benchmark
working-directory: classic
steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -47,71 +44,84 @@ jobs:
fetch-depth: 0
submodules: true
- name: Set up Python ${{ matrix.python-version }}
- name: Set up Python ${{ env.min-python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
python-version: ${{ env.min-python-version }}
- name: Set up Python dependency cache
# On Windows, unpacking cached dependencies takes longer than just installing them
if: runner.os != 'Windows'
uses: actions/cache@v4
with:
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('classic/benchmark/poetry.lock') }}
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('classic/poetry.lock') }}
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
- name: Install Poetry
run: |
curl -sSL https://install.python-poetry.org | python3 -
if [ "${{ runner.os }}" = "macOS" ]; then
PATH="$HOME/.local/bin:$PATH"
echo "$HOME/.local/bin" >> $GITHUB_PATH
fi
- name: Install Poetry (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -
$env:PATH += ";$env:APPDATA\Python\Scripts"
echo "$env:APPDATA\Python\Scripts" >> $env:GITHUB_PATH
- name: Install Python dependencies
- name: Install dependencies
run: poetry install
- name: Run pytest with coverage
- name: Run basic benchmark tests
run: |
poetry run pytest -vv \
--cov=agbenchmark --cov-branch --cov-report term-missing --cov-report xml \
--durations=10 \
--junitxml=junit.xml -o junit_family=legacy \
tests
echo "Testing ReadFile challenge with one_shot strategy..."
poetry run direct-benchmark run \
--strategies one_shot \
--models claude \
--tests ReadFile \
--json
echo "Testing WriteFile challenge..."
poetry run direct-benchmark run \
--strategies one_shot \
--models claude \
--tests WriteFile \
--json
env:
CI: true
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
NONINTERACTIVE_MODE: "true"
- name: Upload test results to Codecov
if: ${{ !cancelled() }} # Run even if tests fail
uses: codecov/test-results-action@v1
with:
token: ${{ secrets.CODECOV_TOKEN }}
- name: Test category filtering
run: |
echo "Testing coding category..."
poetry run direct-benchmark run \
--strategies one_shot \
--models claude \
--categories coding \
--tests ReadFile,WriteFile \
--json
env:
CI: true
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
NONINTERACTIVE_MODE: "true"
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
flags: agbenchmark,${{ runner.os }}
- name: Test multiple strategies
run: |
echo "Testing multiple strategies..."
poetry run direct-benchmark run \
--strategies one_shot,plan_execute \
--models claude \
--tests ReadFile \
--parallel 2 \
--json
env:
CI: true
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
NONINTERACTIVE_MODE: "true"
self-test-with-agent:
# Run regression tests on previously beaten (maintain) challenges
regression-tests:
runs-on: ubuntu-latest
strategy:
matrix:
agent-name: [forge]
fail-fast: false
timeout-minutes: 20
timeout-minutes: 45
if: github.ref == 'refs/heads/master' || github.ref == 'refs/heads/dev'
defaults:
run:
shell: bash
working-directory: classic
steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -126,51 +136,22 @@ jobs:
- name: Install Poetry
run: |
curl -sSL https://install.python-poetry.org | python -
curl -sSL https://install.python-poetry.org | python3 -
- name: Install dependencies
run: poetry install
- name: Run regression tests
working-directory: classic
run: |
./run agent start ${{ matrix.agent-name }}
cd ${{ matrix.agent-name }}
set +e # Ignore non-zero exit codes and continue execution
echo "Running the following command: poetry run agbenchmark --maintain --mock"
poetry run agbenchmark --maintain --mock
EXIT_CODE=$?
set -e # Stop ignoring non-zero exit codes
# Check if the exit code was 5, and if so, exit with 0 instead
if [ $EXIT_CODE -eq 5 ]; then
echo "regression_tests.json is empty."
fi
echo "Running the following command: poetry run agbenchmark --mock"
poetry run agbenchmark --mock
echo "Running the following command: poetry run agbenchmark --mock --category=data"
poetry run agbenchmark --mock --category=data
echo "Running the following command: poetry run agbenchmark --mock --category=coding"
poetry run agbenchmark --mock --category=coding
# echo "Running the following command: poetry run agbenchmark --test=WriteFile"
# poetry run agbenchmark --test=WriteFile
cd ../benchmark
poetry install
echo "Adding the BUILD_SKILL_TREE environment variable. This will attempt to add new elements in the skill tree. If new elements are added, the CI fails because they should have been pushed"
export BUILD_SKILL_TREE=true
# poetry run agbenchmark --mock
# CHANGED=$(git diff --name-only | grep -E '(agbenchmark/challenges)|(../classic/frontend/assets)') || echo "No diffs"
# if [ ! -z "$CHANGED" ]; then
# echo "There are unstaged changes please run agbenchmark and commit those changes since they are needed."
# echo "$CHANGED"
# exit 1
# else
# echo "No unstaged changes."
# fi
echo "Running regression tests (previously beaten challenges)..."
poetry run direct-benchmark run \
--strategies one_shot \
--models claude \
--maintain \
--parallel 4 \
--json
env:
CI: true
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
TELEMETRY_ENVIRONMENT: autogpt-benchmark-ci
TELEMETRY_OPT_IN: ${{ github.ref_name == 'master' }}
NONINTERACTIVE_MODE: "true"

View File

@@ -6,13 +6,11 @@ on:
paths:
- '.github/workflows/classic-forge-ci.yml'
- 'classic/forge/**'
- '!classic/forge/tests/vcr_cassettes'
pull_request:
branches: [ master, dev, release-* ]
paths:
- '.github/workflows/classic-forge-ci.yml'
- 'classic/forge/**'
- '!classic/forge/tests/vcr_cassettes'
concurrency:
group: ${{ format('forge-ci-{0}', github.head_ref && format('{0}-{1}', github.event_name, github.event.pull_request.number) || github.sha) }}
@@ -21,115 +19,38 @@ concurrency:
defaults:
run:
shell: bash
working-directory: classic/forge
working-directory: classic
jobs:
test:
permissions:
contents: read
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
python-version: ["3.10"]
platform-os: [ubuntu, macos, macos-arm64, windows]
runs-on: ${{ matrix.platform-os != 'macos-arm64' && format('{0}-latest', matrix.platform-os) || 'macos-14' }}
runs-on: ubuntu-latest
steps:
# Quite slow on macOS (2~4 minutes to set up Docker)
# - name: Set up Docker (macOS)
# if: runner.os == 'macOS'
# uses: crazy-max/ghaction-setup-docker@v3
- name: Start MinIO service (Linux)
if: runner.os == 'Linux'
- name: Start MinIO service
working-directory: '.'
run: |
docker pull minio/minio:edge-cicd
docker run -d -p 9000:9000 minio/minio:edge-cicd
- name: Start MinIO service (macOS)
if: runner.os == 'macOS'
working-directory: ${{ runner.temp }}
run: |
brew install minio/stable/minio
mkdir data
minio server ./data &
# No MinIO on Windows:
# - Windows doesn't support running Linux Docker containers
# - It doesn't seem possible to start background processes on Windows. They are
# killed after the step returns.
# See: https://github.com/actions/runner/issues/598#issuecomment-2011890429
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: true
- name: Checkout cassettes
if: ${{ startsWith(github.event_name, 'pull_request') }}
env:
PR_BASE: ${{ github.event.pull_request.base.ref }}
PR_BRANCH: ${{ github.event.pull_request.head.ref }}
PR_AUTHOR: ${{ github.event.pull_request.user.login }}
run: |
cassette_branch="${PR_AUTHOR}-${PR_BRANCH}"
cassette_base_branch="${PR_BASE}"
cd tests/vcr_cassettes
if ! git ls-remote --exit-code --heads origin $cassette_base_branch ; then
cassette_base_branch="master"
fi
if git ls-remote --exit-code --heads origin $cassette_branch ; then
git fetch origin $cassette_branch
git fetch origin $cassette_base_branch
git checkout $cassette_branch
# Pick non-conflicting cassette updates from the base branch
git merge --no-commit --strategy-option=ours origin/$cassette_base_branch
echo "Using cassettes from mirror branch '$cassette_branch'," \
"synced to upstream branch '$cassette_base_branch'."
else
git checkout -b $cassette_branch
echo "Branch '$cassette_branch' does not exist in cassette submodule." \
"Using cassettes from '$cassette_base_branch'."
fi
- name: Set up Python ${{ matrix.python-version }}
- name: Set up Python 3.12
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
python-version: "3.12"
- name: Set up Python dependency cache
# On Windows, unpacking cached dependencies takes longer than just installing them
if: runner.os != 'Windows'
uses: actions/cache@v4
with:
path: ${{ runner.os == 'macOS' && '~/Library/Caches/pypoetry' || '~/.cache/pypoetry' }}
key: poetry-${{ runner.os }}-${{ hashFiles('classic/forge/poetry.lock') }}
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('classic/poetry.lock') }}
- name: Install Poetry (Unix)
if: runner.os != 'Windows'
run: |
curl -sSL https://install.python-poetry.org | python3 -
if [ "${{ runner.os }}" = "macOS" ]; then
PATH="$HOME/.local/bin:$PATH"
echo "$HOME/.local/bin" >> $GITHUB_PATH
fi
- name: Install Poetry (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -
$env:PATH += ";$env:APPDATA\Python\Scripts"
echo "$env:APPDATA\Python\Scripts" >> $env:GITHUB_PATH
- name: Install Poetry
run: curl -sSL https://install.python-poetry.org | python3 -
- name: Install Python dependencies
run: poetry install
@@ -140,12 +61,15 @@ jobs:
--cov=forge --cov-branch --cov-report term-missing --cov-report xml \
--durations=10 \
--junitxml=junit.xml -o junit_family=legacy \
forge
forge/forge forge/tests
env:
CI: true
PLAIN_OUTPUT: True
# API keys - tests that need these will skip if not available
# Secrets are not available to fork PRs (GitHub security feature)
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
S3_ENDPOINT_URL: ${{ runner.os != 'Windows' && 'http://127.0.0.1:9000' || '' }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
S3_ENDPOINT_URL: http://127.0.0.1:9000
AWS_ACCESS_KEY_ID: minioadmin
AWS_SECRET_ACCESS_KEY: minioadmin
@@ -159,85 +83,11 @@ jobs:
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
flags: forge,${{ runner.os }}
- id: setup_git_auth
name: Set up git token authentication
# Cassettes may be pushed even when tests fail
if: success() || failure()
run: |
config_key="http.${{ github.server_url }}/.extraheader"
if [ "${{ runner.os }}" = 'macOS' ]; then
base64_pat=$(echo -n "pat:${{ secrets.PAT_REVIEW }}" | base64)
else
base64_pat=$(echo -n "pat:${{ secrets.PAT_REVIEW }}" | base64 -w0)
fi
git config "$config_key" \
"Authorization: Basic $base64_pat"
cd tests/vcr_cassettes
git config "$config_key" \
"Authorization: Basic $base64_pat"
echo "config_key=$config_key" >> $GITHUB_OUTPUT
- id: push_cassettes
name: Push updated cassettes
# For pull requests, push updated cassettes even when tests fail
if: github.event_name == 'push' || (! github.event.pull_request.head.repo.fork && (success() || failure()))
env:
PR_BRANCH: ${{ github.event.pull_request.head.ref }}
PR_AUTHOR: ${{ github.event.pull_request.user.login }}
run: |
if [ "${{ startsWith(github.event_name, 'pull_request') }}" = "true" ]; then
is_pull_request=true
cassette_branch="${PR_AUTHOR}-${PR_BRANCH}"
else
cassette_branch="${{ github.ref_name }}"
fi
cd tests/vcr_cassettes
# Commit & push changes to cassettes if any
if ! git diff --quiet; then
git add .
git commit -m "Auto-update cassettes"
git push origin HEAD:$cassette_branch
if [ ! $is_pull_request ]; then
cd ../..
git add tests/vcr_cassettes
git commit -m "Update cassette submodule"
git push origin HEAD:$cassette_branch
fi
echo "updated=true" >> $GITHUB_OUTPUT
else
echo "updated=false" >> $GITHUB_OUTPUT
echo "No cassette changes to commit"
fi
- name: Post Set up git token auth
if: steps.setup_git_auth.outcome == 'success'
run: |
git config --unset-all '${{ steps.setup_git_auth.outputs.config_key }}'
git submodule foreach git config --unset-all '${{ steps.setup_git_auth.outputs.config_key }}'
- name: Apply "behaviour change" label and comment on PR
if: ${{ startsWith(github.event_name, 'pull_request') }}
run: |
PR_NUMBER="${{ github.event.pull_request.number }}"
TOKEN="${{ secrets.PAT_REVIEW }}"
REPO="${{ github.repository }}"
if [[ "${{ steps.push_cassettes.outputs.updated }}" == "true" ]]; then
echo "Adding label and comment..."
echo $TOKEN | gh auth login --with-token
gh issue edit $PR_NUMBER --add-label "behaviour change"
gh issue comment $PR_NUMBER --body "You changed AutoGPT's behaviour on ${{ runner.os }}. The cassettes have been updated and will be merged to the submodule when this Pull Request gets merged."
fi
flags: forge
- name: Upload logs to artifact
if: always()
uses: actions/upload-artifact@v4
with:
name: test-logs
path: classic/forge/logs/
path: classic/logs/

View File

@@ -1,60 +0,0 @@
name: Classic - Frontend CI/CD
on:
push:
branches:
- master
- dev
- 'ci-test*' # This will match any branch that starts with "ci-test"
paths:
- 'classic/frontend/**'
- '.github/workflows/classic-frontend-ci.yml'
pull_request:
paths:
- 'classic/frontend/**'
- '.github/workflows/classic-frontend-ci.yml'
jobs:
build:
permissions:
contents: write
pull-requests: write
runs-on: ubuntu-latest
env:
BUILD_BRANCH: ${{ format('classic-frontend-build/{0}', github.ref_name) }}
steps:
- name: Checkout Repo
uses: actions/checkout@v4
- name: Setup Flutter
uses: subosito/flutter-action@v2
with:
flutter-version: '3.13.2'
- name: Build Flutter to Web
run: |
cd classic/frontend
flutter build web --base-href /app/
# - name: Commit and Push to ${{ env.BUILD_BRANCH }}
# if: github.event_name == 'push'
# run: |
# git config --local user.email "action@github.com"
# git config --local user.name "GitHub Action"
# git add classic/frontend/build/web
# git checkout -B ${{ env.BUILD_BRANCH }}
# git commit -m "Update frontend build to ${GITHUB_SHA:0:7}" -a
# git push -f origin ${{ env.BUILD_BRANCH }}
- name: Create PR ${{ env.BUILD_BRANCH }} -> ${{ github.ref_name }}
if: github.event_name == 'push'
uses: peter-evans/create-pull-request@v7
with:
add-paths: classic/frontend/build/web
base: ${{ github.ref_name }}
branch: ${{ env.BUILD_BRANCH }}
delete-branch: true
title: "Update frontend build in `${{ github.ref_name }}`"
body: "This PR updates the frontend build based on commit ${{ github.sha }}."
commit-message: "Update frontend build based on commit ${{ github.sha }}"

View File

@@ -7,7 +7,9 @@ on:
- '.github/workflows/classic-python-checks-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- 'classic/direct_benchmark/**'
- 'classic/pyproject.toml'
- 'classic/poetry.lock'
- '**.py'
- '!classic/forge/tests/vcr_cassettes'
pull_request:
@@ -16,7 +18,9 @@ on:
- '.github/workflows/classic-python-checks-ci.yml'
- 'classic/original_autogpt/**'
- 'classic/forge/**'
- 'classic/benchmark/**'
- 'classic/direct_benchmark/**'
- 'classic/pyproject.toml'
- 'classic/poetry.lock'
- '**.py'
- '!classic/forge/tests/vcr_cassettes'
@@ -27,44 +31,13 @@ concurrency:
defaults:
run:
shell: bash
working-directory: classic
jobs:
get-changed-parts:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- id: changes-in
name: Determine affected subprojects
uses: dorny/paths-filter@v3
with:
filters: |
original_autogpt:
- classic/original_autogpt/autogpt/**
- classic/original_autogpt/tests/**
- classic/original_autogpt/poetry.lock
forge:
- classic/forge/forge/**
- classic/forge/tests/**
- classic/forge/poetry.lock
benchmark:
- classic/benchmark/agbenchmark/**
- classic/benchmark/tests/**
- classic/benchmark/poetry.lock
outputs:
changed-parts: ${{ steps.changes-in.outputs.changes }}
lint:
needs: get-changed-parts
runs-on: ubuntu-latest
env:
min-python-version: "3.10"
strategy:
matrix:
sub-package: ${{ fromJson(needs.get-changed-parts.outputs.changed-parts) }}
fail-fast: false
min-python-version: "3.12"
steps:
- name: Checkout repository
@@ -81,42 +54,31 @@ jobs:
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: ${{ runner.os }}-poetry-${{ hashFiles(format('{0}/poetry.lock', matrix.sub-package)) }}
key: ${{ runner.os }}-poetry-${{ hashFiles('classic/poetry.lock') }}
- name: Install Poetry
run: curl -sSL https://install.python-poetry.org | python3 -
# Install dependencies
- name: Install Python dependencies
run: poetry -C classic/${{ matrix.sub-package }} install
run: poetry install
# Lint
- name: Lint (isort)
run: poetry run isort --check .
working-directory: classic/${{ matrix.sub-package }}
- name: Lint (Black)
if: success() || failure()
run: poetry run black --check .
working-directory: classic/${{ matrix.sub-package }}
- name: Lint (Flake8)
if: success() || failure()
run: poetry run flake8 .
working-directory: classic/${{ matrix.sub-package }}
types:
needs: get-changed-parts
runs-on: ubuntu-latest
env:
min-python-version: "3.10"
strategy:
matrix:
sub-package: ${{ fromJson(needs.get-changed-parts.outputs.changed-parts) }}
fail-fast: false
min-python-version: "3.12"
steps:
- name: Checkout repository
@@ -133,19 +95,16 @@ jobs:
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: ${{ runner.os }}-poetry-${{ hashFiles(format('{0}/poetry.lock', matrix.sub-package)) }}
key: ${{ runner.os }}-poetry-${{ hashFiles('classic/poetry.lock') }}
- name: Install Poetry
run: curl -sSL https://install.python-poetry.org | python3 -
# Install dependencies
- name: Install Python dependencies
run: poetry -C classic/${{ matrix.sub-package }} install
run: poetry install
# Typecheck
- name: Typecheck
if: success() || failure()
run: poetry run pyright
working-directory: classic/${{ matrix.sub-package }}

View File

@@ -93,5 +93,5 @@ jobs:
Error logs:
${{ toJSON(fromJSON(steps.failure_details.outputs.result).errorLogs) }}
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: "--allowedTools 'Edit,MultiEdit,Write,Read,Glob,Grep,LS,Bash(git:*),Bash(bun:*),Bash(npm:*),Bash(npx:*),Bash(gh:*)'"

View File

@@ -7,7 +7,7 @@
# - Provide actionable recommendations for the development team
#
# Triggered on: Dependabot PRs (opened, synchronize)
# Requirements: ANTHROPIC_API_KEY secret must be configured
# Requirements: CLAUDE_CODE_OAUTH_TOKEN secret must be configured
name: Claude Dependabot PR Review
@@ -308,7 +308,7 @@ jobs:
id: claude_review
uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*)"
prompt: |

View File

@@ -323,7 +323,7 @@ jobs:
id: claude
uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*), Bash(gh pr edit:*)"
--model opus

.github/workflows/docs-block-sync.yml
View File

@@ -0,0 +1,78 @@
name: Block Documentation Sync Check
on:
push:
branches: [master, dev]
paths:
- "autogpt_platform/backend/backend/blocks/**"
- "docs/integrations/**"
- "autogpt_platform/backend/scripts/generate_block_docs.py"
- ".github/workflows/docs-block-sync.yml"
pull_request:
branches: [master, dev]
paths:
- "autogpt_platform/backend/backend/blocks/**"
- "docs/integrations/**"
- "autogpt_platform/backend/scripts/generate_block_docs.py"
- ".github/workflows/docs-block-sync.yml"
jobs:
check-docs-sync:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
restore-keys: |
poetry-${{ runner.os }}-
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install dependencies
working-directory: autogpt_platform/backend
run: |
poetry install --only main
poetry run prisma generate
- name: Check block documentation is in sync
working-directory: autogpt_platform/backend
run: |
echo "Checking if block documentation is in sync with code..."
poetry run python scripts/generate_block_docs.py --check
- name: Show diff if out of sync
if: failure()
working-directory: autogpt_platform/backend
run: |
echo "::error::Block documentation is out of sync with code!"
echo ""
echo "To fix this, run the following command locally:"
echo " cd autogpt_platform/backend && poetry run python scripts/generate_block_docs.py"
echo ""
echo "Then commit the updated documentation files."
echo ""
echo "Regenerating docs to show diff..."
poetry run python scripts/generate_block_docs.py
echo ""
echo "Changes detected:"
git diff ../../docs/integrations/ || true
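The `--check` flag's internals aren't shown in this diff; as a rough sketch of how such a mode is conventionally implemented (regenerate in memory, compare against what's on disk, exit non-zero on drift) — where `regenerated` and the helper name are illustrative, not the actual script's API:

```python
# Hypothetical sketch of a --check mode like generate_block_docs.py's.
# The caller is assumed to supply (path, freshly generated markdown) pairs.
import sys
from pathlib import Path
from typing import Iterable


def check_in_sync(regenerated: Iterable[tuple[Path, str]]) -> int:
    """Return 1 if any doc file on disk differs from its regenerated content."""
    drifted = [
        path
        for path, fresh in regenerated
        if (path.read_text() if path.exists() else "") != fresh
    ]
    if drifted:
        print("Docs out of sync with code:", file=sys.stderr)
        for path in drifted:
            print(f"  {path}", file=sys.stderr)
        return 1
    return 0
```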

View File

@@ -0,0 +1,95 @@
name: Claude Block Docs Review
on:
pull_request:
types: [opened, synchronize]
paths:
- "docs/integrations/**"
- "autogpt_platform/backend/backend/blocks/**"
jobs:
claude-review:
# Only run for PRs from members/collaborators
if: |
github.event.pull_request.author_association == 'OWNER' ||
github.event.pull_request.author_association == 'MEMBER' ||
github.event.pull_request.author_association == 'COLLABORATOR'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
id-token: write
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
restore-keys: |
poetry-${{ runner.os }}-
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install dependencies
working-directory: autogpt_platform/backend
run: |
poetry install --only main
poetry run prisma generate
- name: Run Claude Code Review
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowedTools "Read,Glob,Grep,Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*)"
prompt: |
You are reviewing a PR that modifies block documentation or block code for AutoGPT.
## Your Task
Review the changes in this PR and provide constructive feedback. Focus on:
1. **Documentation Accuracy**: For any block code changes, verify that:
- Input/output tables in docs match the actual block schemas
- Description text accurately reflects what the block does
- Any new blocks have corresponding documentation
2. **Manual Content Quality**: Check manual sections (marked with `<!-- MANUAL: -->` markers):
- "How it works" sections should have clear technical explanations
- "Possible use case" sections should have practical, real-world examples
- Content should be helpful for users trying to understand the blocks
3. **Template Compliance**: Ensure docs follow the standard template:
- What it is (brief intro)
- What it does (description)
- How it works (technical explanation)
- Inputs table
- Outputs table
- Possible use case
4. **Cross-references**: Check that links and anchors are correct
## Review Process
1. First, get the PR diff to see what changed: `gh pr diff ${{ github.event.pull_request.number }}`
2. Read any modified block files to understand the implementation
3. Read corresponding documentation files to verify accuracy
4. Provide your feedback as a PR comment
Be constructive and specific. If everything looks good, say so!
If there are issues, explain what's wrong and suggest how to fix it.
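Both workflows above pin the Poetry installer to the version recorded in `backend/poetry.lock` via `get_package_version_from_lockfile.py`. The script itself is not part of this diff; a plausible stand-in, assuming it reads the lockfile's standard TOML `[[package]]` entries, would be:

```python
# Hypothetical equivalent of get_package_version_from_lockfile.py (the real
# script is not shown in this diff). Reads a package's pinned version from
# poetry.lock's [[package]] entries.
import sys
import tomllib  # Python 3.11+
from pathlib import Path


def package_version(lockfile: Path, name: str) -> str:
    data = tomllib.loads(lockfile.read_text())
    for pkg in data.get("package", []):
        if pkg["name"] == name:
            return pkg["version"]
    sys.exit(f"{name!r} not found in {lockfile}")


if __name__ == "__main__":
    # e.g. python get_package_version_from_lockfile.py poetry
    print(package_version(Path("poetry.lock"), sys.argv[1]))
```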

.github/workflows/docs-enhance.yml
View File

@@ -0,0 +1,194 @@
name: Enhance Block Documentation
on:
workflow_dispatch:
inputs:
block_pattern:
description: 'Block file pattern to enhance (e.g., "google/*.md" or "*" for all blocks)'
required: true
default: '*'
type: string
dry_run:
description: 'Dry run mode - show proposed changes without committing'
type: boolean
default: true
max_blocks:
description: 'Maximum number of blocks to process (0 for unlimited)'
type: number
default: 10
jobs:
enhance-docs:
runs-on: ubuntu-latest
timeout-minutes: 45
permissions:
contents: write
pull-requests: write
id-token: write
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
restore-keys: |
poetry-${{ runner.os }}-
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install dependencies
working-directory: autogpt_platform/backend
run: |
poetry install --only main
poetry run prisma generate
- name: Run Claude Enhancement
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowedTools "Read,Edit,Glob,Grep,Write,Bash(git:*),Bash(gh:*),Bash(find:*),Bash(ls:*)"
prompt: |
You are enhancing block documentation for AutoGPT. Your task is to improve the MANUAL sections
of block documentation files by reading the actual block implementations and writing helpful content.
## Configuration
- Block pattern: ${{ inputs.block_pattern }}
- Dry run: ${{ inputs.dry_run }}
- Max blocks to process: ${{ inputs.max_blocks }}
## Your Task
1. **Find Documentation Files**
Find block documentation files matching the pattern in `docs/integrations/`
Pattern: ${{ inputs.block_pattern }}
Use: `find docs/integrations -name "*.md" -type f`
2. **For Each Documentation File** (up to ${{ inputs.max_blocks }} files):
a. Read the documentation file
b. Identify which block(s) it documents (look for the block class name)
c. Find and read the corresponding block implementation in `autogpt_platform/backend/backend/blocks/`
d. Improve the MANUAL sections:
**"How it works" section** (within `<!-- MANUAL: how_it_works -->` markers):
- Explain the technical flow of the block
- Describe what APIs or services it connects to
- Note any important configuration or prerequisites
- Keep it concise but informative (2-4 paragraphs)
**"Possible use case" section** (within `<!-- MANUAL: use_case -->` markers):
- Provide 2-3 practical, real-world examples
- Make them specific and actionable
- Show how this block could be used in an automation workflow
3. **Important Rules**
- ONLY modify content within `<!-- MANUAL: -->` and `<!-- END MANUAL -->` markers
- Do NOT modify auto-generated sections (inputs/outputs tables, descriptions)
- Keep content accurate based on the actual block implementation
- Write for users who may not be technical experts
4. **Output**
${{ inputs.dry_run == true && 'DRY RUN MODE: Show proposed changes for each file but do NOT actually edit the files. Describe what you would change.' || 'LIVE MODE: Actually edit the files to improve the documentation.' }}
## Example Improvements
**Before (How it works):**
```
_Add technical explanation here._
```
**After (How it works):**
```
This block connects to the GitHub API to retrieve issue information. When executed,
it authenticates using your GitHub credentials and fetches issue details including
title, body, labels, and assignees.
The block requires a valid GitHub OAuth connection with repository access permissions.
It supports both public and private repositories you have access to.
```
**Before (Possible use case):**
```
_Add practical use case examples here._
```
**After (Possible use case):**
```
**Customer Support Automation**: Monitor a GitHub repository for new issues with
the "bug" label, then automatically create a ticket in your support system and
notify the on-call engineer via Slack.
**Release Notes Generation**: When a new release is published, gather all closed
issues since the last release and generate a summary for your changelog.
```
Begin by finding and listing the documentation files to process.
- name: Create PR with enhanced documentation
if: ${{ inputs.dry_run == false }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Check if there are changes
if git diff --quiet docs/integrations/; then
echo "No changes to commit"
exit 0
fi
# Configure git
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
# Create branch and commit
BRANCH_NAME="docs/enhance-blocks-$(date +%Y%m%d-%H%M%S)"
git checkout -b "$BRANCH_NAME"
git add docs/integrations/
git commit -m "docs: enhance block documentation with LLM-generated content
Pattern: ${{ inputs.block_pattern }}
Max blocks: ${{ inputs.max_blocks }}
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
# Push and create PR
git push -u origin "$BRANCH_NAME"
gh pr create \
--title "docs: LLM-enhanced block documentation" \
--body "## Summary
This PR contains LLM-enhanced documentation for block files matching pattern: \`${{ inputs.block_pattern }}\`
The following manual sections were improved:
- **How it works**: Technical explanations based on block implementations
- **Possible use case**: Practical, real-world examples
## Review Checklist
- [ ] Content is accurate based on block implementations
- [ ] Examples are practical and helpful
- [ ] No auto-generated sections were modified
---
🤖 Generated with [Claude Code](https://claude.com/claude-code)" \
--base dev

.gitignore
View File

@@ -3,6 +3,7 @@
classic/original_autogpt/keys.py
classic/original_autogpt/*.json
auto_gpt_workspace/*
.autogpt/
*.mpeg
.env
# Root .env files
@@ -159,6 +160,10 @@ CURRENT_BULLETIN.md
# AgBenchmark
classic/benchmark/agbenchmark/reports/
classic/reports/
classic/direct_benchmark/reports/
classic/.benchmark_workspaces/
classic/direct_benchmark/.benchmark_workspaces/
# Nodejs
package-lock.json
@@ -177,5 +182,8 @@ autogpt_platform/backend/settings.py
*.ign.*
.test-contents
.claude/settings.local.json
**/.claude/settings.local.json
/autogpt_platform/backend/logs
# Test database
test.db

.gitmodules
View File

@@ -1,3 +0,0 @@
[submodule "classic/forge/tests/vcr_cassettes"]
path = classic/forge/tests/vcr_cassettes
url = https://github.com/Significant-Gravitas/Auto-GPT-test-cassettes

View File

@@ -43,29 +43,10 @@ repos:
pass_filenames: false
- id: poetry-install
name: Check & Install dependencies - Classic - AutoGPT
alias: poetry-install-classic-autogpt
entry: poetry -C classic/original_autogpt install
# include forge source (since it's a path dependency)
files: ^classic/(original_autogpt|forge)/poetry\.lock$
types: [file]
language: system
pass_filenames: false
- id: poetry-install
name: Check & Install dependencies - Classic - Forge
alias: poetry-install-classic-forge
entry: poetry -C classic/forge install
files: ^classic/forge/poetry\.lock$
types: [file]
language: system
pass_filenames: false
- id: poetry-install
name: Check & Install dependencies - Classic - Benchmark
alias: poetry-install-classic-benchmark
entry: poetry -C classic/benchmark install
files: ^classic/benchmark/poetry\.lock$
name: Check & Install dependencies - Classic
alias: poetry-install-classic
entry: poetry -C classic install
files: ^classic/poetry\.lock$
types: [file]
language: system
pass_filenames: false
@@ -116,26 +97,10 @@ repos:
language: system
- id: isort
name: Lint (isort) - Classic - AutoGPT
alias: isort-classic-autogpt
entry: poetry -P classic/original_autogpt run isort -p autogpt
files: ^classic/original_autogpt/
types: [file, python]
language: system
- id: isort
name: Lint (isort) - Classic - Forge
alias: isort-classic-forge
entry: poetry -P classic/forge run isort -p forge
files: ^classic/forge/
types: [file, python]
language: system
- id: isort
name: Lint (isort) - Classic - Benchmark
alias: isort-classic-benchmark
entry: poetry -P classic/benchmark run isort -p agbenchmark
files: ^classic/benchmark/
name: Lint (isort) - Classic
alias: isort-classic
entry: bash -c 'cd classic && poetry run isort $(echo "$@" | sed "s|classic/||g")' --
files: ^classic/(original_autogpt|forge|direct_benchmark)/
types: [file, python]
language: system
@@ -149,26 +114,13 @@ repos:
- repo: https://github.com/PyCQA/flake8
rev: 7.0.0
# To have flake8 load the config of the individual subprojects, we have to call
# them separately.
# Use consolidated flake8 config at classic/.flake8
hooks:
- id: flake8
name: Lint (Flake8) - Classic - AutoGPT
alias: flake8-classic-autogpt
files: ^classic/original_autogpt/(autogpt|scripts|tests)/
args: [--config=classic/original_autogpt/.flake8]
- id: flake8
name: Lint (Flake8) - Classic - Forge
alias: flake8-classic-forge
files: ^classic/forge/(forge|tests)/
args: [--config=classic/forge/.flake8]
- id: flake8
name: Lint (Flake8) - Classic - Benchmark
alias: flake8-classic-benchmark
files: ^classic/benchmark/(agbenchmark|tests)/((?!reports).)*[/.]
args: [--config=classic/benchmark/.flake8]
name: Lint (Flake8) - Classic
alias: flake8-classic
files: ^classic/(original_autogpt|forge|direct_benchmark)/
args: [--config=classic/.flake8]
- repo: local
hooks:
@@ -204,29 +156,10 @@ repos:
pass_filenames: false
- id: pyright
name: Typecheck - Classic - AutoGPT
alias: pyright-classic-autogpt
entry: poetry -C classic/original_autogpt run pyright
# include forge source (since it's a path dependency) but exclude *_test.py files:
files: ^(classic/original_autogpt/((autogpt|scripts|tests)/|poetry\.lock$)|classic/forge/(forge/.*(?<!_test)\.py|poetry\.lock)$)
types: [file]
language: system
pass_filenames: false
- id: pyright
name: Typecheck - Classic - Forge
alias: pyright-classic-forge
entry: poetry -C classic/forge run pyright
files: ^classic/forge/(forge/|poetry\.lock$)
types: [file]
language: system
pass_filenames: false
- id: pyright
name: Typecheck - Classic - Benchmark
alias: pyright-classic-benchmark
entry: poetry -C classic/benchmark run pyright
files: ^classic/benchmark/(agbenchmark/|tests/|poetry\.lock$)
name: Typecheck - Classic
alias: pyright-classic
entry: poetry -C classic run pyright
files: ^classic/(original_autogpt|forge|direct_benchmark)/.*\.py$|^classic/poetry\.lock$
types: [file]
language: system
pass_filenames: false
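The consolidated isort hook's `sed "s|classic/||g"` deserves a note: pre-commit passes repo-relative paths, but the hook changes into `classic/` before running, so the prefix has to be stripped. The equivalent transformation in Python:

```python
# What the isort hook's sed expression accomplishes: pre-commit hands the hook
# repo-relative paths, but isort runs with classic/ as the working directory,
# so each path is rewritten relative to classic/. (The sed variant strips
# every occurrence of "classic/", not just the prefix; a prefix strip is the
# intent.)
paths = ["classic/forge/a.py", "classic/direct_benchmark/b.py"]
rewritten = [p.removeprefix("classic/") for p in paths]
assert rewritten == ["forge/a.py", "direct_benchmark/b.py"]
```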

View File

@@ -28,6 +28,7 @@ from backend.executor.manager import get_db_async_client
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
settings = Settings()
class ExecutionAnalyticsRequest(BaseModel):
@@ -63,6 +64,8 @@ class ExecutionAnalyticsResult(BaseModel):
score: Optional[float]
status: str # "success", "failed", "skipped"
error_message: Optional[str] = None
started_at: Optional[datetime] = None
ended_at: Optional[datetime] = None
class ExecutionAnalyticsResponse(BaseModel):
@@ -224,11 +227,6 @@ async def generate_execution_analytics(
)
try:
# Validate model configuration
settings = Settings()
if not settings.secrets.openai_internal_api_key:
raise HTTPException(status_code=500, detail="OpenAI API key not configured")
# Get database client
db_client = get_db_async_client()
@@ -320,6 +318,8 @@ async def generate_execution_analytics(
),
status="skipped",
error_message=None, # Not an error - just already processed
started_at=execution.started_at,
ended_at=execution.ended_at,
)
)
@@ -349,6 +349,9 @@ async def _process_batch(
) -> list[ExecutionAnalyticsResult]:
"""Process a batch of executions concurrently."""
if not settings.secrets.openai_internal_api_key:
raise HTTPException(status_code=500, detail="OpenAI API key not configured")
async def process_single_execution(execution) -> ExecutionAnalyticsResult:
try:
# Generate activity status and score using the specified model
@@ -387,6 +390,8 @@ async def _process_batch(
score=None,
status="skipped",
error_message="Activity generation returned None",
started_at=execution.started_at,
ended_at=execution.ended_at,
)
# Update the execution stats
@@ -416,6 +421,8 @@ async def _process_batch(
summary_text=activity_response["activity_status"],
score=activity_response["correctness_score"],
status="success",
started_at=execution.started_at,
ended_at=execution.ended_at,
)
except Exception as e:
@@ -429,6 +436,8 @@ async def _process_batch(
score=None,
status="failed",
error_message=str(e),
started_at=execution.started_at,
ended_at=execution.ended_at,
)
# Process all executions in the batch concurrently

View File

@@ -4,14 +4,9 @@ from collections.abc import AsyncGenerator
from typing import Any
import orjson
from langfuse import Langfuse
from openai import (
APIConnectionError,
APIError,
APIStatusError,
AsyncOpenAI,
RateLimitError,
)
from langfuse import get_client, propagate_attributes
from langfuse.openai import openai # type: ignore
from openai import APIConnectionError, APIError, APIStatusError, RateLimitError
from openai.types.chat import ChatCompletionChunk, ChatCompletionToolParam
from backend.data.understanding import (
@@ -21,7 +16,6 @@ from backend.data.understanding import (
from backend.util.exceptions import NotFoundError
from backend.util.settings import Settings
from . import db as chat_db
from .config import ChatConfig
from .model import (
ChatMessage,
@@ -50,10 +44,10 @@ logger = logging.getLogger(__name__)
config = ChatConfig()
settings = Settings()
client = AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
client = openai.AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
# Langfuse client (lazy initialization)
_langfuse_client: Langfuse | None = None
langfuse = get_client()
class LangfuseNotConfiguredError(Exception):
@@ -69,65 +63,6 @@ def _is_langfuse_configured() -> bool:
)
def _get_langfuse_client() -> Langfuse:
"""Get or create the Langfuse client for prompt management and tracing."""
global _langfuse_client
if _langfuse_client is None:
if not _is_langfuse_configured():
raise LangfuseNotConfiguredError(
"Langfuse is not configured. The chat feature requires Langfuse for prompt management. "
"Please set the LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables."
)
_langfuse_client = Langfuse(
public_key=settings.secrets.langfuse_public_key,
secret_key=settings.secrets.langfuse_secret_key,
host=settings.secrets.langfuse_host or "https://cloud.langfuse.com",
)
return _langfuse_client
def _get_environment() -> str:
"""Get the current environment name for Langfuse tagging."""
return settings.config.app_env.value
def _get_langfuse_prompt() -> str:
"""Fetch the latest production prompt from Langfuse.
Returns:
The compiled prompt text from Langfuse.
Raises:
Exception: If Langfuse is unavailable or prompt fetch fails.
"""
try:
langfuse = _get_langfuse_client()
# cache_ttl_seconds=0 disables SDK caching to always get the latest prompt
prompt = langfuse.get_prompt(config.langfuse_prompt_name, cache_ttl_seconds=0)
compiled = prompt.compile()
logger.info(
f"Fetched prompt '{config.langfuse_prompt_name}' from Langfuse "
f"(version: {prompt.version})"
)
return compiled
except Exception as e:
logger.error(f"Failed to fetch prompt from Langfuse: {e}")
raise
async def _is_first_session(user_id: str) -> bool:
"""Check if this is the user's first chat session.
Returns True if the user has 1 or fewer sessions (meaning this is their first).
"""
try:
session_count = await chat_db.get_user_session_count(user_id)
return session_count <= 1
except Exception as e:
logger.warning(f"Failed to check session count for user {user_id}: {e}")
return False # Default to non-onboarding if we can't check
async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
"""Build the full system prompt including business understanding if available.
@@ -139,8 +74,6 @@ async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
Tuple of (compiled prompt string, Langfuse prompt object for tracing)
"""
langfuse = _get_langfuse_client()
# cache_ttl_seconds=0 disables SDK caching to always get the latest prompt
prompt = langfuse.get_prompt(config.langfuse_prompt_name, cache_ttl_seconds=0)
@@ -158,7 +91,7 @@ async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
context = "This is the first time you are meeting the user. Greet them and introduce them to the platform"
compiled = prompt.compile(users_information=context)
return compiled, prompt
return compiled, understanding
async def _generate_session_title(message: str) -> str | None:
@@ -217,6 +150,7 @@ async def assign_user_to_session(
async def stream_chat_completion(
session_id: str,
message: str | None = None,
tool_call_response: str | None = None,
is_user_message: bool = True,
user_id: str | None = None,
retry_count: int = 0,
@@ -256,11 +190,6 @@ async def stream_chat_completion(
yield StreamFinish()
return
# Langfuse observations will be created after session is loaded (need messages for input)
# Initialize to None so finally block can safely check and end them
trace = None
generation = None
# Only fetch from Redis if session not provided (initial call)
if session is None:
session = await get_chat_session(session_id, user_id)
@@ -336,297 +265,259 @@ async def stream_chat_completion(
asyncio.create_task(_update_title())
# Build system prompt with business understanding
system_prompt, langfuse_prompt = await _build_system_prompt(user_id)
# Build input messages including system prompt for complete Langfuse logging
trace_input_messages = [{"role": "system", "content": system_prompt}] + [
m.model_dump() for m in session.messages
]
system_prompt, understanding = await _build_system_prompt(user_id)
# Create Langfuse trace for this LLM call (each call gets its own trace, grouped by session_id)
# Using v3 SDK: start_observation creates a root span, update_trace sets trace-level attributes
try:
langfuse = _get_langfuse_client()
env = _get_environment()
trace = langfuse.start_observation(
name="chat_completion",
input={"messages": trace_input_messages},
metadata={
"environment": env,
"model": config.model,
"message_count": len(session.messages),
"prompt_name": langfuse_prompt.name if langfuse_prompt else None,
"prompt_version": langfuse_prompt.version if langfuse_prompt else None,
},
)
# Set trace-level attributes (session_id, user_id, tags)
trace.update_trace(
input = message
if not message and tool_call_response:
input = tool_call_response
langfuse = get_client()
with langfuse.start_as_current_observation(
as_type="span",
name="user-copilot-request",
input=input,
) as span:
with propagate_attributes(
session_id=session_id,
user_id=user_id,
tags=[env, "copilot"],
)
except Exception as e:
logger.warning(f"Failed to create Langfuse trace: {e}")
tags=["copilot"],
metadata={
"users_information": format_understanding_for_prompt(understanding)[
:200
] # Langfuse only accepts up to 200 chars
},
):
# Initialize variables that will be used in finally block (must be defined before try)
assistant_response = ChatMessage(
role="assistant",
content="",
)
accumulated_tool_calls: list[dict[str, Any]] = []
# Wrap main logic in try/finally to ensure Langfuse observations are always ended
try:
has_yielded_end = False
has_yielded_error = False
has_done_tool_call = False
has_received_text = False
text_streaming_ended = False
tool_response_messages: list[ChatMessage] = []
should_retry = False
# Generate unique IDs for AI SDK protocol
import uuid as uuid_module
message_id = str(uuid_module.uuid4())
text_block_id = str(uuid_module.uuid4())
# Yield message start
yield StreamStart(messageId=message_id)
# Create Langfuse generation for each LLM call, linked to the prompt
# Using v3 SDK: start_observation with as_type="generation"
generation = (
trace.start_observation(
as_type="generation",
name="llm_call",
model=config.model,
input={"messages": trace_input_messages},
prompt=langfuse_prompt,
# Initialize variables that will be used in finally block (must be defined before try)
assistant_response = ChatMessage(
role="assistant",
content="",
)
if trace
else None
)
accumulated_tool_calls: list[dict[str, Any]] = []
try:
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
system_prompt=system_prompt,
text_block_id=text_block_id,
):
# Wrap main logic in try/finally to ensure Langfuse observations are always ended
has_yielded_end = False
has_yielded_error = False
has_done_tool_call = False
has_received_text = False
text_streaming_ended = False
tool_response_messages: list[ChatMessage] = []
should_retry = False
if isinstance(chunk, StreamTextStart):
# Emit text-start before first text delta
if not has_received_text:
# Generate unique IDs for AI SDK protocol
import uuid as uuid_module
message_id = str(uuid_module.uuid4())
text_block_id = str(uuid_module.uuid4())
# Yield message start
yield StreamStart(messageId=message_id)
try:
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
system_prompt=system_prompt,
text_block_id=text_block_id,
):
if isinstance(chunk, StreamTextStart):
# Emit text-start before first text delta
if not has_received_text:
yield chunk
elif isinstance(chunk, StreamTextDelta):
delta = chunk.delta or ""
assert assistant_response.content is not None
assistant_response.content += delta
has_received_text = True
yield chunk
elif isinstance(chunk, StreamTextDelta):
delta = chunk.delta or ""
assert assistant_response.content is not None
assistant_response.content += delta
has_received_text = True
yield chunk
elif isinstance(chunk, StreamTextEnd):
# Emit text-end after text completes
if has_received_text and not text_streaming_ended:
text_streaming_ended = True
yield chunk
elif isinstance(chunk, StreamToolInputStart):
# Emit text-end before first tool call, but only if we've received text
if has_received_text and not text_streaming_ended:
yield StreamTextEnd(id=text_block_id)
text_streaming_ended = True
yield chunk
elif isinstance(chunk, StreamToolInputAvailable):
# Accumulate tool calls in OpenAI format
accumulated_tool_calls.append(
{
"id": chunk.toolCallId,
"type": "function",
"function": {
"name": chunk.toolName,
"arguments": orjson.dumps(chunk.input).decode("utf-8"),
},
}
)
elif isinstance(chunk, StreamToolOutputAvailable):
result_content = (
chunk.output
if isinstance(chunk.output, str)
else orjson.dumps(chunk.output).decode("utf-8")
)
tool_response_messages.append(
ChatMessage(
role="tool",
content=result_content,
tool_call_id=chunk.toolCallId,
)
)
has_done_tool_call = True
# Track if any tool execution failed
if not chunk.success:
logger.warning(
f"Tool {chunk.toolName} (ID: {chunk.toolCallId}) execution failed"
)
yield chunk
elif isinstance(chunk, StreamFinish):
if not has_done_tool_call:
# Emit text-end before finish if we received text but haven't closed it
elif isinstance(chunk, StreamTextEnd):
# Emit text-end after text completes
if has_received_text and not text_streaming_ended:
text_streaming_ended = True
if assistant_response.content:
logger.warning(
f"StreamTextEnd: Attempting to set output {assistant_response.content}"
)
span.update_trace(output=assistant_response.content)
span.update(output=assistant_response.content)
yield chunk
elif isinstance(chunk, StreamToolInputStart):
# Emit text-end before first tool call, but only if we've received text
if has_received_text and not text_streaming_ended:
yield StreamTextEnd(id=text_block_id)
text_streaming_ended = True
has_yielded_end = True
yield chunk
elif isinstance(chunk, StreamError):
has_yielded_error = True
elif isinstance(chunk, StreamUsage):
session.usage.append(
Usage(
prompt_tokens=chunk.promptTokens,
completion_tokens=chunk.completionTokens,
total_tokens=chunk.totalTokens,
elif isinstance(chunk, StreamToolInputAvailable):
# Accumulate tool calls in OpenAI format
accumulated_tool_calls.append(
{
"id": chunk.toolCallId,
"type": "function",
"function": {
"name": chunk.toolName,
"arguments": orjson.dumps(chunk.input).decode(
"utf-8"
),
},
}
)
elif isinstance(chunk, StreamToolOutputAvailable):
result_content = (
chunk.output
if isinstance(chunk.output, str)
else orjson.dumps(chunk.output).decode("utf-8")
)
tool_response_messages.append(
ChatMessage(
role="tool",
content=result_content,
tool_call_id=chunk.toolCallId,
)
)
has_done_tool_call = True
# Track if any tool execution failed
if not chunk.success:
logger.warning(
f"Tool {chunk.toolName} (ID: {chunk.toolCallId}) execution failed"
)
yield chunk
elif isinstance(chunk, StreamFinish):
if not has_done_tool_call:
# Emit text-end before finish if we received text but haven't closed it
if has_received_text and not text_streaming_ended:
yield StreamTextEnd(id=text_block_id)
text_streaming_ended = True
has_yielded_end = True
yield chunk
elif isinstance(chunk, StreamError):
has_yielded_error = True
elif isinstance(chunk, StreamUsage):
session.usage.append(
Usage(
prompt_tokens=chunk.promptTokens,
completion_tokens=chunk.completionTokens,
total_tokens=chunk.totalTokens,
)
)
else:
logger.error(
f"Unknown chunk type: {type(chunk)}", exc_info=True
)
if assistant_response.content:
langfuse.update_current_trace(output=assistant_response.content)
langfuse.update_current_span(output=assistant_response.content)
elif tool_response_messages:
langfuse.update_current_trace(output=str(tool_response_messages))
langfuse.update_current_span(output=str(tool_response_messages))
except Exception as e:
logger.error(f"Error during stream: {e!s}", exc_info=True)
# Check if this is a retryable error (JSON parsing, incomplete tool calls, etc.)
is_retryable = isinstance(
e, (orjson.JSONDecodeError, KeyError, TypeError)
)
if is_retryable and retry_count < config.max_retries:
logger.info(
f"Retryable error encountered. Attempt {retry_count + 1}/{config.max_retries}"
)
should_retry = True
else:
logger.error(f"Unknown chunk type: {type(chunk)}", exc_info=True)
except Exception as e:
logger.error(f"Error during stream: {e!s}", exc_info=True)
# Non-retryable error or max retries exceeded
# Save any partial progress before reporting error
messages_to_save: list[ChatMessage] = []
# Check if this is a retryable error (JSON parsing, incomplete tool calls, etc.)
is_retryable = isinstance(e, (orjson.JSONDecodeError, KeyError, TypeError))
# Add assistant message if it has content or tool calls
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
if is_retryable and retry_count < config.max_retries:
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
session.messages.extend(messages_to_save)
await upsert_chat_session(session)
if not has_yielded_error:
error_message = str(e)
if not is_retryable:
error_message = f"Non-retryable error: {error_message}"
elif retry_count >= config.max_retries:
error_message = f"Max retries ({config.max_retries}) exceeded: {error_message}"
error_response = StreamError(errorText=error_message)
yield error_response
if not has_yielded_end:
yield StreamFinish()
return
# Handle retry outside of exception handler to avoid nesting
if should_retry and retry_count < config.max_retries:
logger.info(
f"Retryable error encountered. Attempt {retry_count + 1}/{config.max_retries}"
f"Retrying stream_chat_completion for session {session_id}, attempt {retry_count + 1}"
)
should_retry = True
else:
# Non-retryable error or max retries exceeded
# Save any partial progress before reporting error
messages_to_save: list[ChatMessage] = []
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
retry_count=retry_count + 1,
session=session,
context=context,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
# Add assistant message if it has content or tool calls
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
session.messages.extend(messages_to_save)
await upsert_chat_session(session)
if not has_yielded_error:
error_message = str(e)
if not is_retryable:
error_message = f"Non-retryable error: {error_message}"
elif retry_count >= config.max_retries:
error_message = f"Max retries ({config.max_retries}) exceeded: {error_message}"
error_response = StreamError(errorText=error_message)
yield error_response
if not has_yielded_end:
yield StreamFinish()
return
# Handle retry outside of exception handler to avoid nesting
if should_retry and retry_count < config.max_retries:
# Normal completion path - save session and handle tool call continuation
logger.info(
f"Retrying stream_chat_completion for session {session_id}, attempt {retry_count + 1}"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
retry_count=retry_count + 1,
session=session,
context=context,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
# Normal completion path - save session and handle tool call continuation
logger.info(
f"Normal completion path: session={session.session_id}, "
f"current message_count={len(session.messages)}"
)
# Build the messages list in the correct order
messages_to_save: list[ChatMessage] = []
# Add assistant message with tool_calls if any
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
logger.info(
f"Added {len(accumulated_tool_calls)} tool calls to assistant message"
)
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
logger.info(
f"Saving assistant message with content_len={len(assistant_response.content or '')}, tool_calls={len(assistant_response.tool_calls or [])}"
f"Normal completion path: session={session.session_id}, "
f"current message_count={len(session.messages)}"
)
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
logger.info(
f"Saving {len(tool_response_messages)} tool response messages, "
f"total_to_save={len(messages_to_save)}"
)
# Build the messages list in the correct order
messages_to_save: list[ChatMessage] = []
session.messages.extend(messages_to_save)
logger.info(
f"Extended session messages, new message_count={len(session.messages)}"
)
await upsert_chat_session(session)
# If we did a tool call, stream the chat completion again to get the next response
if has_done_tool_call:
logger.info(
"Tool call executed, streaming chat completion again to get assistant response"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
session=session, # Pass session object to avoid Redis refetch
context=context,
):
yield chunk
finally:
# Always end Langfuse observations to prevent resource leaks
# Guard against None and catch errors to avoid masking original exceptions
if generation is not None:
try:
latest_usage = session.usage[-1] if session.usage else None
generation.update(
model=config.model,
output={
"content": assistant_response.content,
"tool_calls": accumulated_tool_calls or None,
},
usage_details=(
{
"input": latest_usage.prompt_tokens,
"output": latest_usage.completion_tokens,
"total": latest_usage.total_tokens,
}
if latest_usage
else None
),
# Add assistant message with tool_calls if any
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
logger.info(
f"Added {len(accumulated_tool_calls)} tool calls to assistant message"
)
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
logger.info(
f"Saving assistant message with content_len={len(assistant_response.content or '')}, tool_calls={len(assistant_response.tool_calls or [])}"
)
generation.end()
except Exception as e:
logger.warning(f"Failed to end Langfuse generation: {e}")
if trace is not None:
try:
if accumulated_tool_calls:
trace.update_trace(output={"tool_calls": accumulated_tool_calls})
else:
trace.update_trace(output={"response": assistant_response.content})
trace.end()
except Exception as e:
logger.warning(f"Failed to end Langfuse trace: {e}")
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
logger.info(
f"Saving {len(tool_response_messages)} tool response messages, "
f"total_to_save={len(messages_to_save)}"
)
session.messages.extend(messages_to_save)
logger.info(
f"Extended session messages, new message_count={len(session.messages)}"
)
await upsert_chat_session(session)
# If we did a tool call, stream the chat completion again to get the next response
if has_done_tool_call:
logger.info(
"Tool call executed, streaming chat completion again to get assistant response"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
session=session, # Pass session object to avoid Redis refetch
context=context,
tool_call_response=str(tool_response_messages),
):
yield chunk
# Retry configuration for OpenAI API calls
@@ -900,5 +791,4 @@ async def _yield_tool_call(
session=session,
)
logger.info(f"Yielding Tool execution response: {tool_execution_response}")
yield tool_execution_response

View File

@@ -30,7 +30,7 @@ TOOL_REGISTRY: dict[str, BaseTool] = {
"find_library_agent": FindLibraryAgentTool(),
"run_agent": RunAgentTool(),
"run_block": RunBlockTool(),
"agent_output": AgentOutputTool(),
"view_agent_output": AgentOutputTool(),
"search_docs": SearchDocsTool(),
"get_doc_page": GetDocPageTool(),
}
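For orientation, a minimal sketch of how a registry like this is consumed — the dispatcher below is an illustrative assumption, not code from this diff; only TOOL_REGISTRY and the _execute method name come from the changes shown:
async def dispatch_tool(name: str, user_id: str | None, session, **kwargs):
    # Hypothetical helper: resolve a tool by its registry key and invoke it.
    # "view_agent_output" now resolves where "agent_output" used to.
    tool = TOOL_REGISTRY.get(name)
    if tool is None:
        raise ValueError(f"Unknown tool: {name}")
    return await tool._execute(user_id=user_id, session=session, **kwargs)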

View File

@@ -3,6 +3,8 @@
import logging
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from backend.data.understanding import (
BusinessUnderstandingInput,
@@ -59,6 +61,7 @@ and automations for the user's specific needs."""
"""Requires authentication to store user-specific data."""
return True
@observe(as_type="tool", name="add_understanding")
async def _execute(
self,
user_id: str | None,
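The same decorator pattern repeats across every tool file below; condensed, it looks like this (a sketch assuming BaseTool and ChatSession are imported as in these modules, with Langfuse's observe decorator as imported above):
class ExampleTool(BaseTool):  # illustrative subclass, not part of this diff
    @observe(as_type="tool", name="example_tool")
    async def _execute(self, user_id: str | None, session: ChatSession, **kwargs):
        # The decorator wraps each call in a Langfuse "tool" span, so tool
        # executions appear in tracing without per-tool instrumentation.
        ...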

View File

@@ -218,7 +218,6 @@ async def save_agent_to_library(
library_agents = await library_db.create_library_agent(
graph=created_graph,
user_id=user_id,
sensitive_action_safe_mode=True,
create_library_agents_for_sub_graphs=False,
)

View File

@@ -5,6 +5,7 @@ import re
from datetime import datetime, timedelta, timezone
from typing import Any
from langfuse import observe
from pydantic import BaseModel, field_validator
from backend.api.features.chat.model import ChatSession
@@ -103,7 +104,7 @@ class AgentOutputTool(BaseTool):
@property
def name(self) -> str:
return "agent_output"
return "view_agent_output"
@property
def description(self) -> str:
@@ -328,6 +329,7 @@ class AgentOutputTool(BaseTool):
total_executions=len(available_executions) if available_executions else 1,
)
@observe(as_type="tool", name="view_agent_output")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,8 @@
import logging
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_generator import (
@@ -78,6 +80,7 @@ class CreateAgentTool(BaseTool):
"required": ["description"],
}
@observe(as_type="tool", name="create_agent")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,8 @@
import logging
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_generator import (
@@ -85,6 +87,7 @@ class EditAgentTool(BaseTool):
"required": ["agent_id", "changes"],
}
@observe(as_type="tool", name="edit_agent")
async def _execute(
self,
user_id: str | None,

View File

@@ -2,6 +2,8 @@
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_search import search_agents
@@ -35,6 +37,7 @@ class FindAgentTool(BaseTool):
"required": ["query"],
}
@observe(as_type="tool", name="find_agent")
async def _execute(
self, user_id: str | None, session: ChatSession, **kwargs
) -> ToolResponseBase:

View File

@@ -1,6 +1,7 @@
import logging
from typing import Any
from langfuse import observe
from prisma.enums import ContentType
from backend.api.features.chat.model import ChatSession
@@ -55,6 +56,7 @@ class FindBlockTool(BaseTool):
def requires_auth(self) -> bool:
return True
@observe(as_type="tool", name="find_block")
async def _execute(
self,
user_id: str | None,

View File

@@ -2,6 +2,8 @@
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_search import search_agents
@@ -41,6 +43,7 @@ class FindLibraryAgentTool(BaseTool):
def requires_auth(self) -> bool:
return True
@observe(as_type="tool", name="find_library_agent")
async def _execute(
self, user_id: str | None, session: ChatSession, **kwargs
) -> ToolResponseBase:

View File

@@ -4,6 +4,8 @@ import logging
from pathlib import Path
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.tools.base import BaseTool
from backend.api.features.chat.tools.models import (
@@ -71,6 +73,7 @@ class GetDocPageTool(BaseTool):
url_path = path.rsplit(".", 1)[0] if "." in path else path
return f"{DOCS_BASE_URL}/{url_path}"
@observe(as_type="tool", name="get_doc_page")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,7 @@
import logging
from typing import Any
from langfuse import observe
from pydantic import BaseModel, Field, field_validator
from backend.api.features.chat.config import ChatConfig
@@ -154,6 +155,7 @@ class RunAgentTool(BaseTool):
"""All operations require authentication."""
return True
@observe(as_type="tool", name="run_agent")
async def _execute(
self,
user_id: str | None,

View File

@@ -4,6 +4,8 @@ import logging
from collections import defaultdict
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from backend.data.block import get_block
from backend.data.execution import ExecutionContext
@@ -127,6 +129,7 @@ class RunBlockTool(BaseTool):
return matched_credentials, missing_credentials
@observe(as_type="tool", name="run_block")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,7 @@
import logging
from typing import Any
from langfuse import observe
from prisma.enums import ContentType
from backend.api.features.chat.model import ChatSession
@@ -87,6 +88,7 @@ class SearchDocsTool(BaseTool):
url_path = path.rsplit(".", 1)[0] if "." in path else path
return f"{DOCS_BASE_URL}/{url_path}"
@observe(as_type="tool", name="search_docs")
async def _execute(
self,
user_id: str | None,

View File

@@ -401,10 +401,27 @@ async def add_generated_agent_image(
)
def _initialize_graph_settings(graph: graph_db.GraphModel) -> GraphSettings:
"""
Initialize GraphSettings based on graph content.
Args:
graph: The graph to analyze
Returns:
GraphSettings with appropriate human_in_the_loop_safe_mode value
"""
if graph.has_human_in_the_loop:
# Graph has HITL blocks - set safe mode to True by default
return GraphSettings(human_in_the_loop_safe_mode=True)
else:
# Graph has no HITL blocks - keep None
return GraphSettings(human_in_the_loop_safe_mode=None)
async def create_library_agent(
graph: graph_db.GraphModel,
user_id: str,
sensitive_action_safe_mode: bool = False,
create_library_agents_for_sub_graphs: bool = True,
) -> list[library_model.LibraryAgent]:
"""
@@ -413,7 +430,6 @@ async def create_library_agent(
Args:
agent: The agent/Graph to add to the library.
user_id: The user to whom the agent will be added.
sensitive_action_safe_mode: Whether sensitive action blocks require review.
create_library_agents_for_sub_graphs: If True, creates LibraryAgent records for sub-graphs as well.
Returns:
@@ -449,10 +465,7 @@ async def create_library_agent(
}
},
settings=SafeJson(
GraphSettings.from_graph(
graph_entry,
sensitive_action_safe_mode=sensitive_action_safe_mode,
).model_dump()
_initialize_graph_settings(graph_entry).model_dump()
),
),
include=library_agent_include(
@@ -614,6 +627,33 @@ async def update_library_agent(
raise DatabaseError("Failed to update library agent") from e
async def update_library_agent_settings(
user_id: str,
agent_id: str,
settings: GraphSettings,
) -> library_model.LibraryAgent:
"""
Updates the settings for a specific LibraryAgent.
Args:
user_id: The owner of the LibraryAgent.
agent_id: The ID of the LibraryAgent to update.
settings: New GraphSettings to apply.
Returns:
The updated LibraryAgent.
Raises:
NotFoundError: If the specified LibraryAgent does not exist.
DatabaseError: If there's an error in the update operation.
"""
return await update_library_agent(
library_agent_id=agent_id,
user_id=user_id,
settings=settings,
)
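A hedged usage sketch of the new helper, mirroring the model_copy(update=...) pattern the route code below adopts (variable names assumed):
updated = await update_library_agent_settings(
    user_id=user_id,
    agent_id=library_agent.id,
    settings=library_agent.settings.model_copy(
        update={"human_in_the_loop_safe_mode": True}
    ),
)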
async def delete_library_agent(
library_agent_id: str, user_id: str, soft_delete: bool = True
) -> None:
@@ -798,7 +838,7 @@ async def add_store_agent_to_library(
"isCreatedByUser": False,
"useGraphIsActiveVersion": False,
"settings": SafeJson(
GraphSettings.from_graph(graph_model).model_dump()
_initialize_graph_settings(graph_model).model_dump()
),
},
include=library_agent_include(
@@ -1188,14 +1228,8 @@ async def fork_library_agent(
)
new_graph = await on_graph_activate(new_graph, user_id=user_id)
# Create a library agent for the new graph, preserving sensitive_action_safe_mode
return (
await create_library_agent(
new_graph,
user_id,
sensitive_action_safe_mode=original_agent.settings.sensitive_action_safe_mode,
)
)[0]
# Create a library agent for the new graph
return (await create_library_agent(new_graph, user_id))[0]
except prisma.errors.PrismaError as e:
logger.error(f"Database error cloning library agent: {e}")
raise DatabaseError("Failed to fork library agent") from e

View File

@@ -73,12 +73,6 @@ class LibraryAgent(pydantic.BaseModel):
has_external_trigger: bool = pydantic.Field(
description="Whether the agent has an external trigger (e.g. webhook) node"
)
has_human_in_the_loop: bool = pydantic.Field(
description="Whether the agent has human-in-the-loop blocks"
)
has_sensitive_action: bool = pydantic.Field(
description="Whether the agent has sensitive action blocks"
)
trigger_setup_info: Optional[GraphTriggerInfo] = None
# Indicates whether there's a new output (based on recent runs)
@@ -186,8 +180,6 @@ class LibraryAgent(pydantic.BaseModel):
graph.credentials_input_schema if sub_graphs is not None else None
),
has_external_trigger=graph.has_external_trigger,
has_human_in_the_loop=graph.has_human_in_the_loop,
has_sensitive_action=graph.has_sensitive_action,
trigger_setup_info=graph.trigger_setup_info,
new_output=new_output,
can_access_graph=can_access_graph,

View File

@@ -52,8 +52,6 @@ async def test_get_library_agents_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
@@ -77,8 +75,6 @@ async def test_get_library_agents_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
@@ -154,8 +150,6 @@ async def test_get_favorite_library_agents_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
@@ -224,8 +218,6 @@ def test_add_agent_to_library_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
new_output=False,
can_access_graph=True,

View File

@@ -761,8 +761,10 @@ async def create_new_graph(
graph.reassign_ids(user_id=user_id, reassign_graph_id=True)
graph.validate_graph(for_run=False)
# The return value of the create graph & library function is intentionally not used here,
# as the graph is already valid and no sub-graphs are returned.
await graph_db.create_graph(graph, user_id=user_id)
await library_db.create_library_agent(graph, user_id)
await library_db.create_library_agent(graph, user_id=user_id)
activated_graph = await on_graph_activate(graph, user_id=user_id)
if create_graph.source == "builder":
@@ -886,19 +888,21 @@ async def set_graph_active_version(
async def _update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_model.LibraryAgent:
# Keep the library agent up to date with the new active version
library = await library_db.update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
updated_settings = GraphSettings.from_graph(
graph=agent_graph,
hitl_safe_mode=library.settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=library.settings.sensitive_action_safe_mode,
)
if updated_settings != library.settings:
library = await library_db.update_library_agent(
library_agent_id=library.id,
# If the graph has HITL node, initialize the setting if it's not already set.
if (
agent_graph.has_human_in_the_loop
and library.settings.human_in_the_loop_safe_mode is None
):
await library_db.update_library_agent_settings(
user_id=user_id,
settings=updated_settings,
agent_id=library.id,
settings=library.settings.model_copy(
update={"human_in_the_loop_safe_mode": True}
),
)
return library
@@ -915,18 +919,21 @@ async def update_graph_settings(
user_id: Annotated[str, Security(get_user_id)],
) -> GraphSettings:
"""Update graph settings for the user's library agent."""
# Get the library agent for this graph
library_agent = await library_db.get_library_agent_by_graph_id(
graph_id=graph_id, user_id=user_id
)
if not library_agent:
raise HTTPException(404, f"Graph #{graph_id} not found in user's library")
updated_agent = await library_db.update_library_agent(
library_agent_id=library_agent.id,
# Update the library agent settings
updated_agent = await library_db.update_library_agent_settings(
user_id=user_id,
agent_id=library_agent.id,
settings=settings,
)
# Return the updated settings
return GraphSettings.model_validate(updated_agent.settings)

View File

@@ -174,7 +174,7 @@ class AIShortformVideoCreatorBlock(Block):
)
frame_rate: int = SchemaField(description="Frame rate of the video", default=60)
generation_preset: GenerationPreset = SchemaField(
description="Generation preset for visual style - only effects AI generated visuals",
description="Generation preset for visual style - only affects AI-generated visuals",
default=GenerationPreset.LEONARDO,
placeholder=GenerationPreset.LEONARDO,
)

View File

@@ -381,7 +381,7 @@ Each range you add needs to be a string, with the upper and lower numbers of the
organization_locations: Optional[list[str]] = SchemaField(
description="""The location of the company headquarters. You can search across cities, US states, and countries.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appearch in your search results, even if they match other parameters.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appear in your search results, even if they match other parameters.
To exclude companies based on location, use the organization_not_locations parameter.
""",

View File

@@ -34,7 +34,7 @@ Each range you add needs to be a string, with the upper and lower numbers of the
organization_locations: list[str] = SchemaField(
description="""The location of the company headquarters. You can search across cities, US states, and countries.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appearch in your search results, even if they match other parameters.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appear in your search results, even if they match other parameters.
To exclude companies based on location, use the organization_not_locations parameter.
""",

View File

@@ -81,7 +81,7 @@ class StoreValueBlock(Block):
def __init__(self):
super().__init__(
id="1ff065e9-88e8-4358-9d82-8dc91f622ba9",
description="This block forwards an input value as output, allowing reuse without change.",
description="A basic block that stores and forwards a value throughout workflows, allowing it to be reused without changes across multiple blocks.",
categories={BlockCategory.BASIC},
input_schema=StoreValueBlock.Input,
output_schema=StoreValueBlock.Output,
@@ -111,7 +111,7 @@ class PrintToConsoleBlock(Block):
def __init__(self):
super().__init__(
id="f3b1c1b2-4c4f-4f0d-8d2f-4c4f0d8d2f4c",
description="Print the given text to the console, this is used for a debugging purpose.",
description="A debugging block that outputs text to the console for monitoring and troubleshooting workflow execution.",
categories={BlockCategory.BASIC},
input_schema=PrintToConsoleBlock.Input,
output_schema=PrintToConsoleBlock.Output,
@@ -137,7 +137,7 @@ class NoteBlock(Block):
def __init__(self):
super().__init__(
id="cc10ff7b-7753-4ff2-9af6-9399b1a7eddc",
description="This block is used to display a sticky note with the given text.",
description="A visual annotation block that displays a sticky note in the workflow editor for documentation and organization purposes.",
categories={BlockCategory.BASIC},
input_schema=NoteBlock.Input,
output_schema=NoteBlock.Output,

View File

@@ -159,7 +159,7 @@ class FindInDictionaryBlock(Block):
def __init__(self):
super().__init__(
id="0e50422c-6dee-4145-83d6-3a5a392f65de",
description="Lookup the given key in the input dictionary/object/list and return the value.",
description="A block that looks up a value in a dictionary, list, or object by key or index and returns the corresponding value.",
input_schema=FindInDictionaryBlock.Input,
output_schema=FindInDictionaryBlock.Output,
test_input=[

View File

@@ -51,7 +51,7 @@ class GithubCommentBlock(Block):
def __init__(self):
super().__init__(
id="a8db4d8d-db1c-4a25-a1b0-416a8c33602b",
description="This block posts a comment on a specified GitHub issue or pull request.",
description="A block that posts comments on GitHub issues or pull requests using the GitHub API.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubCommentBlock.Input,
output_schema=GithubCommentBlock.Output,
@@ -151,7 +151,7 @@ class GithubUpdateCommentBlock(Block):
def __init__(self):
super().__init__(
id="b3f4d747-10e3-4e69-8c51-f2be1d99c9a7",
description="This block updates a comment on a specified GitHub issue or pull request.",
description="A block that updates an existing comment on a GitHub issue or pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubUpdateCommentBlock.Input,
output_schema=GithubUpdateCommentBlock.Output,
@@ -249,7 +249,7 @@ class GithubListCommentsBlock(Block):
def __init__(self):
super().__init__(
id="c4b5fb63-0005-4a11-b35a-0c2467bd6b59",
description="This block lists all comments for a specified GitHub issue or pull request.",
description="A block that retrieves all comments from a GitHub issue or pull request, including comment metadata and content.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListCommentsBlock.Input,
output_schema=GithubListCommentsBlock.Output,
@@ -363,7 +363,7 @@ class GithubMakeIssueBlock(Block):
def __init__(self):
super().__init__(
id="691dad47-f494-44c3-a1e8-05b7990f2dab",
description="This block creates a new issue on a specified GitHub repository.",
description="A block that creates new issues on GitHub repositories with a title and body content.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubMakeIssueBlock.Input,
output_schema=GithubMakeIssueBlock.Output,
@@ -433,7 +433,7 @@ class GithubReadIssueBlock(Block):
def __init__(self):
super().__init__(
id="6443c75d-032a-4772-9c08-230c707c8acc",
description="This block reads the body, title, and user of a specified GitHub issue.",
description="A block that retrieves information about a specific GitHub issue, including its title, body content, and creator.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubReadIssueBlock.Input,
output_schema=GithubReadIssueBlock.Output,
@@ -510,7 +510,7 @@ class GithubListIssuesBlock(Block):
def __init__(self):
super().__init__(
id="c215bfd7-0e57-4573-8f8c-f7d4963dcd74",
description="This block lists all issues for a specified GitHub repository.",
description="A block that retrieves a list of issues from a GitHub repository with their titles and URLs.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListIssuesBlock.Input,
output_schema=GithubListIssuesBlock.Output,
@@ -597,7 +597,7 @@ class GithubAddLabelBlock(Block):
def __init__(self):
super().__init__(
id="98bd6b77-9506-43d5-b669-6b9733c4b1f1",
description="This block adds a label to a specified GitHub issue or pull request.",
description="A block that adds a label to a GitHub issue or pull request for categorization and organization.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubAddLabelBlock.Input,
output_schema=GithubAddLabelBlock.Output,
@@ -657,7 +657,7 @@ class GithubRemoveLabelBlock(Block):
def __init__(self):
super().__init__(
id="78f050c5-3e3a-48c0-9e5b-ef1ceca5589c",
description="This block removes a label from a specified GitHub issue or pull request.",
description="A block that removes a label from a GitHub issue or pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubRemoveLabelBlock.Input,
output_schema=GithubRemoveLabelBlock.Output,
@@ -720,7 +720,7 @@ class GithubAssignIssueBlock(Block):
def __init__(self):
super().__init__(
id="90507c72-b0ff-413a-886a-23bbbd66f542",
description="This block assigns a user to a specified GitHub issue.",
description="A block that assigns a GitHub user to an issue for task ownership and tracking.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubAssignIssueBlock.Input,
output_schema=GithubAssignIssueBlock.Output,
@@ -786,7 +786,7 @@ class GithubUnassignIssueBlock(Block):
def __init__(self):
super().__init__(
id="d154002a-38f4-46c2-962d-2488f2b05ece",
description="This block unassigns a user from a specified GitHub issue.",
description="A block that removes a user's assignment from a GitHub issue.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubUnassignIssueBlock.Input,
output_schema=GithubUnassignIssueBlock.Output,

View File

@@ -353,7 +353,7 @@ class GmailReadBlock(GmailBase):
def __init__(self):
super().__init__(
id="25310c70-b89b-43ba-b25c-4dfa7e2a481c",
description="This block reads emails from Gmail.",
description="A block that retrieves and reads emails from a Gmail account based on search criteria, returning detailed message information including subject, sender, body, and attachments.",
categories={BlockCategory.COMMUNICATION},
disabled=not GOOGLE_OAUTH_IS_CONFIGURED,
input_schema=GmailReadBlock.Input,
@@ -743,7 +743,7 @@ class GmailListLabelsBlock(GmailBase):
def __init__(self):
super().__init__(
id="3e1c2c1c-c689-4520-b956-1f3bf4e02bb7",
description="This block lists all labels in Gmail.",
description="A block that retrieves all labels (categories) from a Gmail account for organizing and categorizing emails.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailListLabelsBlock.Input,
output_schema=GmailListLabelsBlock.Output,
@@ -807,7 +807,7 @@ class GmailAddLabelBlock(GmailBase):
def __init__(self):
super().__init__(
id="f884b2fb-04f4-4265-9658-14f433926ac9",
description="This block adds a label to a Gmail message.",
description="A block that adds a label to a specific email message in Gmail, creating the label if it doesn't exist.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailAddLabelBlock.Input,
output_schema=GmailAddLabelBlock.Output,
@@ -893,7 +893,7 @@ class GmailRemoveLabelBlock(GmailBase):
def __init__(self):
super().__init__(
id="0afc0526-aba1-4b2b-888e-a22b7c3f359d",
description="This block removes a label from a Gmail message.",
description="A block that removes a label from a specific email message in a Gmail account.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailRemoveLabelBlock.Input,
output_schema=GmailRemoveLabelBlock.Output,
@@ -961,7 +961,7 @@ class GmailGetThreadBlock(GmailBase):
def __init__(self):
super().__init__(
id="21a79166-9df7-4b5f-9f36-96f639d86112",
description="Get a full Gmail thread by ID",
description="A block that retrieves an entire Gmail thread (email conversation) by ID, returning all messages with decoded bodies for reading complete conversations.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailGetThreadBlock.Input,
output_schema=GmailGetThreadBlock.Output,

View File

@@ -282,7 +282,7 @@ class GoogleSheetsReadBlock(Block):
def __init__(self):
super().__init__(
id="5724e902-3635-47e9-a108-aaa0263a4988",
description="This block reads data from a Google Sheets spreadsheet.",
description="A block that reads data from a Google Sheets spreadsheet using A1 notation range selection.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsReadBlock.Input,
output_schema=GoogleSheetsReadBlock.Output,
@@ -409,7 +409,7 @@ class GoogleSheetsWriteBlock(Block):
def __init__(self):
super().__init__(
id="d9291e87-301d-47a8-91fe-907fb55460e5",
description="This block writes data to a Google Sheets spreadsheet.",
description="A block that writes data to a Google Sheets spreadsheet at a specified A1 notation range.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsWriteBlock.Input,
output_schema=GoogleSheetsWriteBlock.Output,

View File

@@ -84,7 +84,7 @@ class HITLReviewHelper:
Exception: If review creation or status update fails
"""
# Skip review if safe mode is disabled - return auto-approved result
if not execution_context.human_in_the_loop_safe_mode:
if not execution_context.safe_mode:
logger.info(
f"Block {block_name} skipping review for node {node_exec_id} - safe mode disabled"
)

View File

@@ -104,7 +104,7 @@ class HumanInTheLoopBlock(Block):
execution_context: ExecutionContext,
**_kwargs,
) -> BlockOutput:
if not execution_context.human_in_the_loop_safe_mode:
if not execution_context.safe_mode:
logger.info(
f"HITL block skipping review for node {node_exec_id} - safe mode disabled"
)

View File

@@ -76,7 +76,7 @@ class AgentInputBlock(Block):
super().__init__(
**{
"id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"description": "Base block for user inputs.",
"description": "A block that accepts and processes user input values within a workflow, supporting various input types and validation.",
"input_schema": AgentInputBlock.Input,
"output_schema": AgentInputBlock.Output,
"test_input": [
@@ -168,7 +168,7 @@ class AgentOutputBlock(Block):
def __init__(self):
super().__init__(
id="363ae599-353e-4804-937e-b2ee3cef3da4",
description="Stores the output of the graph for users to see.",
description="A block that records and formats workflow results for display to users, with optional Jinja2 template formatting support.",
input_schema=AgentOutputBlock.Input,
output_schema=AgentOutputBlock.Output,
test_input=[

View File

@@ -854,7 +854,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="ed55ac19-356e-4243-a6cb-bc599e9b716f",
description="Call a Large Language Model (LLM) to generate formatted object based on the given prompt.",
description="A block that generates structured JSON responses using a Large Language Model (LLM), with schema validation and format enforcement.",
categories={BlockCategory.AI},
input_schema=AIStructuredResponseGeneratorBlock.Input,
output_schema=AIStructuredResponseGeneratorBlock.Output,
@@ -1265,7 +1265,7 @@ class AITextGeneratorBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="1f292d4a-41a4-4977-9684-7c8d560b9f91",
description="Call a Large Language Model (LLM) to generate a string based on the given prompt.",
description="A block that produces text responses using a Large Language Model (LLM) based on customizable prompts and system instructions.",
categories={BlockCategory.AI},
input_schema=AITextGeneratorBlock.Input,
output_schema=AITextGeneratorBlock.Output,
@@ -1361,7 +1361,7 @@ class AITextSummarizerBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="a0a69be1-4528-491c-a85a-a4ab6873e3f0",
description="Utilize a Large Language Model (LLM) to summarize a long text.",
description="A block that summarizes long texts using a Large Language Model (LLM), with configurable focus topics and summary styles.",
categories={BlockCategory.AI, BlockCategory.TEXT},
input_schema=AITextSummarizerBlock.Input,
output_schema=AITextSummarizerBlock.Output,
@@ -1562,7 +1562,7 @@ class AIConversationBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="32a87eab-381e-4dd4-bdb8-4c47151be35a",
description="Advanced LLM call that takes a list of messages and sends them to the language model.",
description="A block that facilitates multi-turn conversations with a Large Language Model (LLM), maintaining context across message exchanges.",
categories={BlockCategory.AI},
input_schema=AIConversationBlock.Input,
output_schema=AIConversationBlock.Output,
@@ -1682,7 +1682,7 @@ class AIListGeneratorBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="9c0b0450-d199-458b-a731-072189dd6593",
description="Generate a list of values based on the given prompt using a Large Language Model (LLM).",
description="A block that creates lists of items based on prompts using a Large Language Model (LLM), with optional source data for context.",
categories={BlockCategory.AI, BlockCategory.TEXT},
input_schema=AIListGeneratorBlock.Input,
output_schema=AIListGeneratorBlock.Output,

View File

@@ -46,7 +46,7 @@ class PublishToMediumBlock(Block):
class Input(BlockSchemaInput):
author_id: BlockSecret = SecretField(
key="medium_author_id",
description="""The Medium AuthorID of the user. You can get this by calling the /me endpoint of the Medium API.\n\ncurl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" https://api.medium.com/v1/me" the response will contain the authorId field.""",
description="""The Medium AuthorID of the user. You can get this by calling the /me endpoint of the Medium API.\n\ncurl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" https://api.medium.com/v1/me\n\nThe response will contain the authorId field.""",
placeholder="Enter the author's Medium AuthorID",
)
title: str = SchemaField(

View File

@@ -50,7 +50,7 @@ class CreateTalkingAvatarVideoBlock(Block):
description="The voice provider to use", default="microsoft"
)
voice_id: str = SchemaField(
description="The voice ID to use, get list of voices [here](https://docs.agpt.co/server/d_id)",
description="The voice ID to use, see [available voice IDs](https://agpt.co/docs/platform/using-ai-services/d_id)",
default="en-US-JennyNeural",
)
presenter_id: str = SchemaField(

View File

@@ -242,7 +242,7 @@ async def test_smart_decision_maker_tracks_llm_stats():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -343,7 +343,7 @@ async def test_smart_decision_maker_parameter_validation():
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -409,7 +409,7 @@ async def test_smart_decision_maker_parameter_validation():
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -471,7 +471,7 @@ async def test_smart_decision_maker_parameter_validation():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -535,7 +535,7 @@ async def test_smart_decision_maker_parameter_validation():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -658,7 +658,7 @@ async def test_smart_decision_maker_raw_response_conversion():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -730,7 +730,7 @@ async def test_smart_decision_maker_raw_response_conversion():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -786,7 +786,7 @@ async def test_smart_decision_maker_raw_response_conversion():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
@@ -905,7 +905,7 @@ async def test_smart_decision_maker_agent_mode():
# Create a mock execution context
mock_execution_context = ExecutionContext(
human_in_the_loop_safe_mode=False,
safe_mode=False,
)
# Create a mock execution processor for agent mode tests
@@ -1027,7 +1027,7 @@ async def test_smart_decision_maker_traditional_mode_default():
# Create execution context
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests

View File

@@ -386,7 +386,7 @@ async def test_output_yielding_with_dynamic_fields():
outputs = {}
from backend.data.execution import ExecutionContext
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_processor = MagicMock()
async for output_name, output_value in block.run(
@@ -609,9 +609,7 @@ async def test_validation_errors_dont_pollute_conversation():
outputs = {}
from backend.data.execution import ExecutionContext
mock_execution_context = ExecutionContext(
human_in_the_loop_safe_mode=False
)
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a proper mock execution processor for agent mode
from collections import defaultdict

View File

@@ -104,7 +104,7 @@ async def get_accuracy_trends_and_alerts(
AND e."executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED')
{user_filter}
GROUP BY DATE(e."createdAt")
HAVING COUNT(*) >= 3 -- Need at least 3 executions per day
HAVING COUNT(*) >= 1 -- Include all days with at least 1 execution
),
trends AS (
SELECT

View File

@@ -474,7 +474,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
self.block_type = block_type
self.webhook_config = webhook_config
self.execution_stats: NodeExecutionStats = NodeExecutionStats()
self.is_sensitive_action: bool = False
self.requires_human_review: bool = False
if self.webhook_config:
if isinstance(self.webhook_config, BlockWebhookConfig):
@@ -637,9 +637,8 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
- should_pause: True if execution should be paused for review
- input_data_to_use: The input data to use (may be modified by reviewer)
"""
if not (
self.is_sensitive_action and execution_context.sensitive_action_safe_mode
):
# Skip review if not required or safe mode is disabled
if not self.requires_human_review or not execution_context.safe_mode:
return False, input_data
from backend.blocks.helpers.review import HITLReviewHelper

View File

@@ -81,8 +81,7 @@ class ExecutionContext(BaseModel):
This includes information needed by blocks, sub-graphs, and execution management.
"""
human_in_the_loop_safe_mode: bool = True
sensitive_action_safe_mode: bool = False
safe_mode: bool = True
user_timezone: str = "UTC"
root_execution_id: Optional[str] = None
parent_execution_id: Optional[str] = None
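The two former flags collapse into a single safe_mode switch; the call-site change, in isolation (a sketch matching the test updates elsewhere in this diff):
# Before: ExecutionContext(human_in_the_loop_safe_mode=False,
#                          sensitive_action_safe_mode=False)
ctx = ExecutionContext(safe_mode=False)
assert ctx.user_timezone == "UTC"  # other defaults are unchanged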
@@ -154,8 +153,14 @@ class GraphExecutionMeta(BaseDbModel):
nodes_input_masks: Optional[dict[str, BlockInput]]
preset_id: Optional[str]
status: ExecutionStatus
started_at: datetime
ended_at: datetime
started_at: Optional[datetime] = Field(
None,
description="When execution started running. Null if not yet started (QUEUED).",
)
ended_at: Optional[datetime] = Field(
None,
description="When execution finished. Null if not yet completed (QUEUED, RUNNING, INCOMPLETE, REVIEW).",
)
is_shared: bool = False
share_token: Optional[str] = None
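With started_at and ended_at now Optional, downstream consumers need a None guard; a sketch (an assumption, not repo code) of defensive duration math:
def execution_duration_seconds(meta: GraphExecutionMeta) -> float | None:
    # Returns None while the execution is still queued or running.
    if meta.started_at is None or meta.ended_at is None:
        return None
    return (meta.ended_at - meta.started_at).total_seconds()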
@@ -230,10 +235,8 @@ class GraphExecutionMeta(BaseDbModel):
@staticmethod
def from_db(_graph_exec: AgentGraphExecution):
now = datetime.now(timezone.utc)
# TODO: make started_at and ended_at optional
start_time = _graph_exec.startedAt or _graph_exec.createdAt
end_time = _graph_exec.updatedAt or now
start_time = _graph_exec.startedAt
end_time = _graph_exec.endedAt
try:
stats = GraphExecutionStats.model_validate(_graph_exec.stats)
@@ -903,6 +906,14 @@ async def update_graph_execution_stats(
if status:
update_data["executionStatus"] = status
# Set endedAt when execution reaches a terminal status
terminal_statuses = [
ExecutionStatus.COMPLETED,
ExecutionStatus.FAILED,
ExecutionStatus.TERMINATED,
]
if status in terminal_statuses:
update_data["endedAt"] = datetime.now(tz=timezone.utc)
where_clause: AgentGraphExecutionWhereInput = {"id": graph_exec_id}

View File

@@ -62,23 +62,7 @@ logger = logging.getLogger(__name__)
class GraphSettings(BaseModel):
human_in_the_loop_safe_mode: bool = True
sensitive_action_safe_mode: bool = False
@classmethod
def from_graph(
cls,
graph: "GraphModel",
hitl_safe_mode: bool | None = None,
sensitive_action_safe_mode: bool = False,
) -> "GraphSettings":
# Default to True if not explicitly set
if hitl_safe_mode is None:
hitl_safe_mode = True
return cls(
human_in_the_loop_safe_mode=hitl_safe_mode,
sensitive_action_safe_mode=sensitive_action_safe_mode,
)
human_in_the_loop_safe_mode: bool | None = None
class Link(BaseDbModel):
@@ -260,14 +244,10 @@ class BaseGraph(BaseDbModel):
return any(
node.block_id
for node in self.nodes
if node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
)
@computed_field
@property
def has_sensitive_action(self) -> bool:
return any(
node.block_id for node in self.nodes if node.block.is_sensitive_action
if (
node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
or node.block.requires_human_review
)
)
@property

View File

@@ -328,6 +328,8 @@ async def clear_business_understanding(user_id: str) -> bool:
def format_understanding_for_prompt(understanding: BusinessUnderstanding) -> str:
"""Format business understanding as text for system prompt injection."""
if not understanding:
return ""
sections = []
# User info section

View File

@@ -873,8 +873,11 @@ async def add_graph_execution(
settings = await gdb.get_graph_settings(user_id=user_id, graph_id=graph_id)
execution_context = ExecutionContext(
human_in_the_loop_safe_mode=settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=settings.sensitive_action_safe_mode,
safe_mode=(
settings.human_in_the_loop_safe_mode
if settings.human_in_the_loop_safe_mode is not None
else True
),
user_timezone=(
user.timezone if user.timezone != USER_TIMEZONE_NOT_SET else "UTC"
),
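The inline conditional implements a None-means-default rule; isolated for clarity (helper name is illustrative):
def resolve_safe_mode(hitl_setting: bool | None) -> bool:
    # An unset (None) setting falls back to safe mode on; explicit False opts out.
    return hitl_setting if hitl_setting is not None else True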

View File

@@ -386,7 +386,6 @@ async def test_add_graph_execution_is_repeatable(mocker: MockerFixture):
mock_user.timezone = "UTC"
mock_settings = mocker.MagicMock()
mock_settings.human_in_the_loop_safe_mode = True
mock_settings.sensitive_action_safe_mode = False
mock_udb.get_user_by_id = mocker.AsyncMock(return_value=mock_user)
mock_gdb.get_graph_settings = mocker.AsyncMock(return_value=mock_settings)
@@ -652,7 +651,6 @@ async def test_add_graph_execution_with_nodes_to_skip(mocker: MockerFixture):
mock_user.timezone = "UTC"
mock_settings = mocker.MagicMock()
mock_settings.human_in_the_loop_safe_mode = True
mock_settings.sensitive_action_safe_mode = False
mock_udb.get_user_by_id = mocker.AsyncMock(return_value=mock_user)
mock_gdb.get_graph_settings = mocker.AsyncMock(return_value=mock_settings)

View File

@@ -96,9 +96,9 @@ jina_credentials = APIKeyCredentials(
)
unreal_credentials = APIKeyCredentials(
id="66f20754-1b81-48e4-91d0-f4f0dd82145f",
provider="unreal",
provider="unreal_speech",
api_key=SecretStr(settings.secrets.unreal_speech_api_key),
title="Use Credits for Unreal",
title="Use Credits for Unreal Speech",
expires_at=None,
)
open_router_credentials = APIKeyCredentials(
@@ -216,6 +216,14 @@ webshare_proxy_credentials = UserPasswordCredentials(
title="Use Credits for Webshare Proxy",
)
openweathermap_credentials = APIKeyCredentials(
id="8b3d4e5f-6a7b-8c9d-0e1f-2a3b4c5d6e7f",
provider="openweathermap",
api_key=SecretStr(settings.secrets.openweathermap_api_key),
title="Use Credits for OpenWeatherMap",
expires_at=None,
)
DEFAULT_CREDENTIALS = [
ollama_credentials,
revid_credentials,
@@ -243,6 +251,7 @@ DEFAULT_CREDENTIALS = [
llama_api_credentials,
v0_credentials,
webshare_proxy_credentials,
openweathermap_credentials,
]
SYSTEM_CREDENTIAL_IDS = {cred.id for cred in DEFAULT_CREDENTIALS}
@@ -346,11 +355,17 @@ class IntegrationCredentialsStore:
all_credentials.append(zerobounce_credentials)
if settings.secrets.google_maps_api_key:
all_credentials.append(google_maps_credentials)
if settings.secrets.llama_api_key:
all_credentials.append(llama_api_credentials)
if settings.secrets.v0_api_key:
all_credentials.append(v0_credentials)
if (
settings.secrets.webshare_proxy_username
and settings.secrets.webshare_proxy_password
):
all_credentials.append(webshare_proxy_credentials)
if settings.secrets.openweathermap_api_key:
all_credentials.append(openweathermap_credentials)
return all_credentials
async def get_creds_by_id(

View File

@@ -60,8 +60,10 @@ class LateExecutionMonitor:
if not all_late_executions:
return "No late executions detected."
# Sort by created time (oldest first)
all_late_executions.sort(key=lambda x: x.started_at)
# Sort by started time (oldest first), with None values (unstarted) first
all_late_executions.sort(
key=lambda x: x.started_at or datetime.min.replace(tzinfo=timezone.utc)
)
num_total_late = len(all_late_executions)
num_queued = len(queued_late_executions)
@@ -74,7 +76,7 @@ class LateExecutionMonitor:
was_truncated = num_total_late > tuncate_size
late_execution_details = [
f"* `Execution ID: {exec.id}, Graph ID: {exec.graph_id}v{exec.graph_version}, User ID: {exec.user_id}, Status: {exec.status}, Created At: {exec.started_at.isoformat()}`"
f"* `Execution ID: {exec.id}, Graph ID: {exec.graph_id}v{exec.graph_version}, User ID: {exec.user_id}, Status: {exec.status}, Started At: {exec.started_at.isoformat() if exec.started_at else 'Not started'}`"
for exec in truncated_executions
]

View File

@@ -0,0 +1,8 @@
-- AlterTable
ALTER TABLE "AgentGraphExecution" ADD COLUMN "endedAt" TIMESTAMP(3);
-- Set endedAt to updatedAt for existing records with terminal status only
UPDATE "AgentGraphExecution"
SET "endedAt" = "updatedAt"
WHERE "endedAt" IS NULL
AND "executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED');

View File

@@ -450,6 +450,7 @@ model AgentGraphExecution {
createdAt DateTime @default(now())
updatedAt DateTime? @updatedAt
startedAt DateTime?
endedAt DateTime?
isDeleted Boolean @default(false)

View File

@@ -0,0 +1,864 @@
#!/usr/bin/env python3
"""
Block Documentation Generator
Generates markdown documentation for all blocks from code introspection.
Preserves manually-written content between marker comments.
Usage:
# Generate all docs
poetry run python scripts/generate_block_docs.py
# Check mode for CI (exits 1 if stale)
poetry run python scripts/generate_block_docs.py --check
# Verbose output
poetry run python scripts/generate_block_docs.py -v
"""
import argparse
import inspect
import logging
import re
import sys
from collections import defaultdict
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any
# Add backend to path for imports
backend_dir = Path(__file__).parent.parent
sys.path.insert(0, str(backend_dir))
logger = logging.getLogger(__name__)
# Default output directory relative to repo root
DEFAULT_OUTPUT_DIR = (
Path(__file__).parent.parent.parent.parent / "docs" / "integrations"
)
@dataclass
class FieldDoc:
"""Documentation for a single input/output field."""
name: str
description: str
type_str: str
required: bool
default: Any = None
advanced: bool = False
hidden: bool = False
placeholder: str | None = None
@dataclass
class BlockDoc:
"""Documentation data extracted from a block."""
id: str
name: str
class_name: str
description: str
categories: list[str]
category_descriptions: dict[str, str]
inputs: list[FieldDoc]
outputs: list[FieldDoc]
block_type: str
source_file: str
contributors: list[str] = field(default_factory=list)
# Category to human-readable name mapping
CATEGORY_DISPLAY_NAMES = {
"AI": "AI and Language Models",
"BASIC": "Basic Operations",
"TEXT": "Text Processing",
"SEARCH": "Search and Information Retrieval",
"SOCIAL": "Social Media and Content",
"DEVELOPER_TOOLS": "Developer Tools",
"DATA": "Data Processing",
"LOGIC": "Logic and Control Flow",
"COMMUNICATION": "Communication",
"INPUT": "Input/Output",
"OUTPUT": "Input/Output",
"MULTIMEDIA": "Media Generation",
"PRODUCTIVITY": "Productivity",
"HARDWARE": "Hardware",
"AGENT": "Agent Integration",
"CRM": "CRM Services",
"SAFETY": "AI Safety",
"ISSUE_TRACKING": "Issue Tracking",
"MARKETING": "Marketing",
}
# Category to doc file mapping (for grouping related blocks)
CATEGORY_FILE_MAP = {
"BASIC": "basic",
"TEXT": "text",
"AI": "llm",
"SEARCH": "search",
"DATA": "data",
"LOGIC": "logic",
"COMMUNICATION": "communication",
"MULTIMEDIA": "multimedia",
"PRODUCTIVITY": "productivity",
}
def class_name_to_display_name(class_name: str) -> str:
"""Convert BlockClassName to 'Block Class Name'."""
# Remove 'Block' suffix (only at the end, not all occurrences)
name = class_name.removesuffix("Block")
# Insert space before capitals
name = re.sub(r"([a-z])([A-Z])", r"\1 \2", name)
# Handle consecutive capitals (e.g., 'HTTPRequest' -> 'HTTP Request')
name = re.sub(r"([A-Z]+)([A-Z][a-z])", r"\1 \2", name)
return name.strip()
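Illustrative expectations for the helper above (assumed examples, not an actual test file):
assert class_name_to_display_name("StoreValueBlock") == "Store Value"
assert class_name_to_display_name("HTTPRequestBlock") == "HTTP Request"
assert class_name_to_display_name("AITextGeneratorBlock") == "AI Text Generator"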
def type_to_readable(type_schema: dict[str, Any] | Any) -> str:
"""Convert JSON schema type to human-readable string."""
if not isinstance(type_schema, dict):
return str(type_schema) if type_schema else "Any"
if "anyOf" in type_schema:
# Union type - show options
any_of = type_schema["anyOf"]
if not isinstance(any_of, list):
return "Any"
options = []
for opt in any_of:
if isinstance(opt, dict) and opt.get("type") == "null":
continue
options.append(type_to_readable(opt))
if not options:
return "None"
if len(options) == 1:
return options[0]
return " | ".join(options)
if "allOf" in type_schema:
all_of = type_schema["allOf"]
if not isinstance(all_of, list) or not all_of:
return "Any"
return type_to_readable(all_of[0])
schema_type = type_schema.get("type")
if schema_type == "array":
items = type_schema.get("items", {})
item_type = type_to_readable(items)
return f"List[{item_type}]"
if schema_type == "object":
if "additionalProperties" in type_schema:
additional_props = type_schema["additionalProperties"]
# additionalProperties: true means any value type is allowed
if additional_props is True:
return "Dict[str, Any]"
value_type = type_to_readable(additional_props)
return f"Dict[str, {value_type}]"
# Check if it's a specific model
title = type_schema.get("title", "Object")
return title
if schema_type == "string":
if "enum" in type_schema:
return " | ".join(f'"{v}"' for v in type_schema["enum"])
if "format" in type_schema:
return f"str ({type_schema['format']})"
return "str"
if schema_type == "integer":
return "int"
if schema_type == "number":
return "float"
if schema_type == "boolean":
return "bool"
if schema_type == "null":
return "None"
# Fallback
return type_schema.get("title", schema_type or "Any")
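A few conversions the branches above produce (inputs assumed for illustration):
assert type_to_readable({"type": "array", "items": {"type": "string"}}) == "List[str]"
assert type_to_readable({"anyOf": [{"type": "string"}, {"type": "null"}]}) == "str"
assert type_to_readable({"type": "object", "additionalProperties": True}) == "Dict[str, Any]"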
def safe_get(d: Any, key: str, default: Any = None) -> Any:
"""Safely get a value from a dict-like object."""
if isinstance(d, dict):
return d.get(key, default)
return default
def file_path_to_title(file_path: str) -> str:
"""Convert file path to a readable title.
Examples:
"github/issues.md" -> "GitHub Issues"
"basic.md" -> "Basic"
"llm.md" -> "LLM"
"google/sheets.md" -> "Google Sheets"
"""
# Special case replacements (applied after title casing)
TITLE_FIXES = {
"Llm": "LLM",
"Github": "GitHub",
"Api": "API",
"Ai": "AI",
"Oauth": "OAuth",
"Url": "URL",
"Ci": "CI",
"Pr": "PR",
"Gmb": "GMB", # Google My Business
"Hubspot": "HubSpot",
"Linkedin": "LinkedIn",
"Tiktok": "TikTok",
"Youtube": "YouTube",
}
def apply_fixes(text: str) -> str:
# Split into words, fix each word, rejoin
words = text.split()
fixed_words = [TITLE_FIXES.get(word, word) for word in words]
return " ".join(fixed_words)
path = Path(file_path)
name = path.stem # e.g., "issues" or "sheets"
# Get parent dir if exists
parent = path.parent.name if path.parent.name != "." else None
# Title case and apply fixes
if parent:
parent_title = apply_fixes(parent.replace("_", " ").title())
name_title = apply_fixes(name.replace("_", " ").title())
return f"{parent_title} {name_title}"
return apply_fixes(name.replace("_", " ").title())
def extract_block_doc(block_cls: type) -> BlockDoc:
"""Extract documentation data from a block class."""
block = block_cls.create()
# Get source file
try:
source_file = inspect.getfile(block_cls)
# Make relative to blocks directory
blocks_dir = Path(source_file).parent
while blocks_dir.name != "blocks" and blocks_dir.parent != blocks_dir:
blocks_dir = blocks_dir.parent
source_file = str(Path(source_file).relative_to(blocks_dir.parent))
except (TypeError, ValueError):
source_file = "unknown"
# Extract input fields
input_schema = block.input_schema.jsonschema()
input_properties = safe_get(input_schema, "properties", {})
if not isinstance(input_properties, dict):
input_properties = {}
required_raw = safe_get(input_schema, "required", [])
# Handle edge cases where required might not be a list
if isinstance(required_raw, (list, set, tuple)):
required_inputs = set(required_raw)
else:
required_inputs = set()
inputs = []
for field_name, field_schema in input_properties.items():
if not isinstance(field_schema, dict):
continue
# Skip credentials fields in docs (they're auto-handled)
if "credentials" in field_name.lower():
continue
inputs.append(
FieldDoc(
name=field_name,
description=safe_get(field_schema, "description", ""),
type_str=type_to_readable(field_schema),
required=field_name in required_inputs,
default=safe_get(field_schema, "default"),
advanced=safe_get(field_schema, "advanced", False) or False,
hidden=safe_get(field_schema, "hidden", False) or False,
placeholder=safe_get(field_schema, "placeholder"),
)
)
# Extract output fields
output_schema = block.output_schema.jsonschema()
output_properties = safe_get(output_schema, "properties", {})
if not isinstance(output_properties, dict):
output_properties = {}
outputs = []
for field_name, field_schema in output_properties.items():
if not isinstance(field_schema, dict):
continue
outputs.append(
FieldDoc(
name=field_name,
description=safe_get(field_schema, "description", ""),
type_str=type_to_readable(field_schema),
required=True, # Outputs are always produced
hidden=safe_get(field_schema, "hidden", False) or False,
)
)
# Get category info (sort for deterministic ordering since it's a set)
categories = []
category_descriptions = {}
for cat in sorted(block.categories, key=lambda c: c.name):
categories.append(cat.name)
category_descriptions[cat.name] = cat.value
# Get contributors
contributors = []
for contrib in block.contributors:
contributors.append(contrib.name if hasattr(contrib, "name") else str(contrib))
return BlockDoc(
id=block.id,
name=class_name_to_display_name(block.name),
class_name=block.name,
description=block.description,
categories=categories,
category_descriptions=category_descriptions,
inputs=inputs,
outputs=outputs,
block_type=block.block_type.value,
source_file=source_file,
contributors=contributors,
)
def generate_anchor(name: str) -> str:
"""Generate markdown anchor from block name."""
return name.lower().replace(" ", "-").replace("(", "").replace(")", "")
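For example (input assumed):
assert generate_anchor("Gmail Read (Beta)") == "gmail-read-beta"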
def extract_manual_content(existing_content: str) -> dict[str, str]:
"""Extract content between MANUAL markers from existing file."""
manual_sections = {}
# Pattern: <!-- MANUAL: section_name -->content<!-- END MANUAL -->
pattern = r"<!-- MANUAL: (\w+) -->\s*(.*?)\s*<!-- END MANUAL -->"
matches = re.findall(pattern, existing_content, re.DOTALL)
for section_name, content in matches:
manual_sections[section_name] = content.strip()
return manual_sections
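A self-contained round-trip of the marker format (sample content assumed):
sample = """## My Block
<!-- MANUAL: how_it_works -->
Explains the block internals.
<!-- END MANUAL -->"""
assert extract_manual_content(sample) == {"how_it_works": "Explains the block internals."}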
def generate_block_markdown(
block: BlockDoc,
manual_content: dict[str, str] | None = None,
) -> str:
"""Generate markdown documentation for a single block."""
manual_content = manual_content or {}
lines = []
# All blocks use ## heading, sections use ### (consistent siblings)
lines.append(f"## {block.name}")
lines.append("")
# What it is (full description)
lines.append(f"### What it is")
lines.append(block.description or "No description available.")
lines.append("")
# How it works (manual section)
lines.append(f"### How it works")
how_it_works = manual_content.get(
"how_it_works", "_Add technical explanation here._"
)
lines.append("<!-- MANUAL: how_it_works -->")
lines.append(how_it_works)
lines.append("<!-- END MANUAL -->")
lines.append("")
# Inputs table (auto-generated)
visible_inputs = [f for f in block.inputs if not f.hidden]
if visible_inputs:
lines.append(f"### Inputs")
lines.append("")
lines.append("| Input | Description | Type | Required |")
lines.append("|-------|-------------|------|----------|")
for inp in visible_inputs:
required = "Yes" if inp.required else "No"
desc = inp.description or "-"
type_str = inp.type_str or "-"
# Normalize newlines and escape pipes for valid table syntax
desc = desc.replace("\n", " ").replace("|", "\\|")
type_str = type_str.replace("|", "\\|")
lines.append(f"| {inp.name} | {desc} | {type_str} | {required} |")
lines.append("")
# Outputs table (auto-generated)
visible_outputs = [f for f in block.outputs if not f.hidden]
if visible_outputs:
lines.append(f"### Outputs")
lines.append("")
lines.append("| Output | Description | Type |")
lines.append("|--------|-------------|------|")
for out in visible_outputs:
desc = out.description or "-"
type_str = out.type_str or "-"
# Normalize newlines and escape pipes for valid table syntax
desc = desc.replace("\n", " ").replace("|", "\\|")
type_str = type_str.replace("|", "\\|")
lines.append(f"| {out.name} | {desc} | {type_str} |")
lines.append("")
# Possible use case (manual section)
lines.append(f"### Possible use case")
use_case = manual_content.get("use_case", "_Add practical use case examples here._")
lines.append("<!-- MANUAL: use_case -->")
lines.append(use_case)
lines.append("<!-- END MANUAL -->")
lines.append("")
lines.append("---")
lines.append("")
return "\n".join(lines)
def get_block_file_mapping(blocks: list[BlockDoc]) -> dict[str, list[BlockDoc]]:
"""
Map blocks to their documentation files.
Returns dict of {relative_file_path: [blocks]}
"""
file_mapping = defaultdict(list)
for block in blocks:
# Determine file path based on source file or category
source_path = Path(block.source_file)
# If source is in a subdirectory (e.g., google/gmail.py), use that structure
if len(source_path.parts) > 2: # blocks/subdir/file.py
subdir = source_path.parts[1] # e.g., "google"
# Use the Python filename as the md filename
md_file = source_path.stem + ".md" # e.g., "gmail.md"
file_path = f"{subdir}/{md_file}"
else:
# Use category-based grouping for top-level blocks
primary_category = block.categories[0] if block.categories else "BASIC"
file_name = CATEGORY_FILE_MAP.get(primary_category, "misc")
file_path = f"{file_name}.md"
file_mapping[file_path].append(block)
return dict(file_mapping)
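A hypothetical check of the mapping rules above (BlockDoc fields abbreviated; the block_type value is an assumption):
doc = BlockDoc(
    id="x", name="Gmail Read", class_name="GmailReadBlock",
    description="", categories=["COMMUNICATION"], category_descriptions={},
    inputs=[], outputs=[], block_type="Standard",
    source_file="blocks/google/gmail.py",
)
assert get_block_file_mapping([doc]) == {"google/gmail.md": [doc]}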
def generate_overview_table(blocks: list[BlockDoc]) -> str:
"""Generate the overview table markdown (blocks.md)."""
lines = []
lines.append("# AutoGPT Blocks Overview")
lines.append("")
lines.append(
'AutoGPT uses a modular approach with various "blocks" to handle different tasks. These blocks are the building blocks of AutoGPT workflows, allowing users to create complex automations by combining simple, specialized components.'
)
lines.append("")
lines.append('!!! info "Creating Your Own Blocks"')
lines.append(" Want to create your own custom blocks? Check out our guides:")
lines.append(" ")
lines.append(
" - [Build your own Blocks](https://docs.agpt.co/platform/new_blocks/) - Step-by-step tutorial with examples"
)
lines.append(
" - [Block SDK Guide](https://docs.agpt.co/platform/block-sdk-guide/) - Advanced SDK patterns with OAuth, webhooks, and provider configuration"
)
lines.append("")
lines.append(
"Below is a comprehensive list of all available blocks, categorized by their primary function. Click on any block name to view its detailed documentation."
)
lines.append("")
# Group blocks by category
by_category = defaultdict(list)
for block in blocks:
primary_cat = block.categories[0] if block.categories else "BASIC"
by_category[primary_cat].append(block)
# Sort categories
category_order = [
"BASIC",
"DATA",
"TEXT",
"AI",
"SEARCH",
"SOCIAL",
"COMMUNICATION",
"DEVELOPER_TOOLS",
"MULTIMEDIA",
"PRODUCTIVITY",
"LOGIC",
"INPUT",
"OUTPUT",
"AGENT",
"CRM",
"SAFETY",
"ISSUE_TRACKING",
"HARDWARE",
"MARKETING",
]
# Track emitted display names to avoid duplicate headers
# (e.g., INPUT and OUTPUT both map to "Input/Output")
emitted_display_names: set[str] = set()
for category in category_order:
if category not in by_category:
continue
display_name = CATEGORY_DISPLAY_NAMES.get(category, category)
# Collect all blocks for this display name (may span multiple categories)
if display_name in emitted_display_names:
# Header already emitted; append rows to the existing table
cat_blocks = sorted(by_category[category], key=lambda b: b.name)
# Remove the trailing empty line, add rows, then re-add empty line
lines.pop()
for block in cat_blocks:
file_mapping = get_block_file_mapping([block])
file_path = list(file_mapping.keys())[0]
anchor = generate_anchor(block.name)
short_desc = (
block.description.split(".")[0]
if block.description
else "No description"
)
short_desc = short_desc.replace("\n", " ").replace("|", "\\|")
lines.append(f"| [{block.name}]({file_path}#{anchor}) | {short_desc} |")
lines.append("")
continue
emitted_display_names.add(display_name)
cat_blocks = sorted(by_category[category], key=lambda b: b.name)
lines.append(f"## {display_name}")
lines.append("")
lines.append("| Block Name | Description |")
lines.append("|------------|-------------|")
for block in cat_blocks:
# Determine link path
file_mapping = get_block_file_mapping([block])
file_path = list(file_mapping.keys())[0]
anchor = generate_anchor(block.name)
# Short description (first sentence)
short_desc = (
block.description.split(".")[0]
if block.description
else "No description"
)
short_desc = short_desc.replace("\n", " ").replace("|", "\\|")
lines.append(f"| [{block.name}]({file_path}#{anchor}) | {short_desc} |")
lines.append("")
return "\n".join(lines)
def load_all_blocks_for_docs() -> list[BlockDoc]:
"""Load all blocks and extract documentation."""
from backend.blocks import load_all_blocks
block_classes = load_all_blocks()
blocks = []
for _block_id, block_cls in block_classes.items():
try:
block_doc = extract_block_doc(block_cls)
blocks.append(block_doc)
except Exception as e:
logger.warning(f"Failed to extract docs for {block_cls.__name__}: {e}")
return blocks
def write_block_docs(
output_dir: Path,
blocks: list[BlockDoc],
verbose: bool = False,
) -> dict[str, str]:
"""
Write block documentation files.
Returns dict of {file_path: content} for all generated files.
"""
output_dir = Path(output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
file_mapping = get_block_file_mapping(blocks)
generated_files = {}
for file_path, file_blocks in file_mapping.items():
full_path = output_dir / file_path
# Create subdirectories if needed
full_path.parent.mkdir(parents=True, exist_ok=True)
# Load existing content for manual section preservation
existing_content = ""
if full_path.exists():
existing_content = full_path.read_text()
# Always generate title from file path (with fixes applied)
file_title = file_path_to_title(file_path)
# Extract existing file description if present (preserve manual content)
file_header_pattern = (
r"^# .+?\n<!-- MANUAL: file_description -->\n(.*?)\n<!-- END MANUAL -->"
)
file_header_match = re.search(file_header_pattern, existing_content, re.DOTALL)
if file_header_match:
file_description = file_header_match.group(1)
else:
file_description = "_Add a description of this category of blocks._"
# Generate file header
file_header = f"# {file_title}\n"
file_header += "<!-- MANUAL: file_description -->\n"
file_header += f"{file_description}\n"
file_header += "<!-- END MANUAL -->\n"
# Generate content for each block
content_parts = []
for block in sorted(file_blocks, key=lambda b: b.name):
# Extract manual content specific to this block
# Match block heading (h2) and capture until --- separator
block_pattern = rf"(?:^|\n)## {re.escape(block.name)}\s*\n(.*?)(?=\n---|\Z)"
block_match = re.search(block_pattern, existing_content, re.DOTALL)
if block_match:
manual_content = extract_manual_content(block_match.group(1))
else:
manual_content = {}
content_parts.append(
generate_block_markdown(
block,
manual_content,
)
)
full_content = file_header + "\n" + "\n".join(content_parts)
generated_files[str(file_path)] = full_content
if verbose:
print(f" Writing {file_path} ({len(file_blocks)} blocks)")
full_path.write_text(full_content)
# Generate overview file
overview_content = generate_overview_table(blocks)
overview_path = output_dir / "README.md"
generated_files["README.md"] = overview_content
overview_path.write_text(overview_content)
if verbose:
print(" Writing README.md (overview)")
return generated_files
def check_docs_in_sync(output_dir: Path, blocks: list[BlockDoc]) -> bool:
"""
Check if generated docs match existing docs.
Returns True if in sync, False otherwise.
"""
output_dir = Path(output_dir)
file_mapping = get_block_file_mapping(blocks)
all_match = True
out_of_sync_details: list[tuple[str, list[str]]] = []
for file_path, file_blocks in file_mapping.items():
full_path = output_dir / file_path
if not full_path.exists():
block_names = [b.name for b in sorted(file_blocks, key=lambda b: b.name)]
print(f"MISSING: {file_path}")
print(f" Blocks: {', '.join(block_names)}")
out_of_sync_details.append((file_path, block_names))
all_match = False
continue
existing_content = full_path.read_text()
# Always generate title from file path (with fixes applied)
file_title = file_path_to_title(file_path)
# Extract existing file description if present (preserve manual content)
file_header_pattern = (
r"^# .+?\n<!-- MANUAL: file_description -->\n(.*?)\n<!-- END MANUAL -->"
)
file_header_match = re.search(file_header_pattern, existing_content, re.DOTALL)
if file_header_match:
file_description = file_header_match.group(1)
else:
file_description = "_Add a description of this category of blocks._"
# Generate expected file header
file_header = f"# {file_title}\n"
file_header += "<!-- MANUAL: file_description -->\n"
file_header += f"{file_description}\n"
file_header += "<!-- END MANUAL -->\n"
# Extract manual content from existing file
manual_sections_by_block = {}
for block in file_blocks:
block_pattern = rf"(?:^|\n)## {re.escape(block.name)}\s*\n(.*?)(?=\n---|\Z)"
block_match = re.search(block_pattern, existing_content, re.DOTALL)
if block_match:
manual_sections_by_block[block.name] = extract_manual_content(
block_match.group(1)
)
# Generate expected content and check each block individually
content_parts = []
mismatched_blocks = []
for block in sorted(file_blocks, key=lambda b: b.name):
manual_content = manual_sections_by_block.get(block.name, {})
expected_block_content = generate_block_markdown(
block,
manual_content,
)
content_parts.append(expected_block_content)
# Check if this specific block's section exists and matches
# Include the --- separator to match generate_block_markdown output
block_pattern = rf"(?:^|\n)(## {re.escape(block.name)}\s*\n.*?\n---\n)"
block_match = re.search(block_pattern, existing_content, re.DOTALL)
if not block_match:
mismatched_blocks.append(f"{block.name} (missing)")
elif block_match.group(1).strip() != expected_block_content.strip():
mismatched_blocks.append(block.name)
expected_content = file_header + "\n" + "\n".join(content_parts)
if existing_content.strip() != expected_content.strip():
print(f"OUT OF SYNC: {file_path}")
if mismatched_blocks:
print(f" Affected blocks: {', '.join(mismatched_blocks)}")
out_of_sync_details.append((file_path, mismatched_blocks))
all_match = False
# Check overview
overview_path = output_dir / "README.md"
if overview_path.exists():
existing_overview = overview_path.read_text()
expected_overview = generate_overview_table(blocks)
if existing_overview.strip() != expected_overview.strip():
print("OUT OF SYNC: README.md (overview)")
print(" The blocks overview table needs regeneration")
out_of_sync_details.append(("README.md", ["overview table"]))
all_match = False
else:
print("MISSING: README.md (overview)")
out_of_sync_details.append(("README.md", ["overview table"]))
all_match = False
# Check for unfilled manual sections
unfilled_patterns = [
"_Add a description of this category of blocks._",
"_Add technical explanation here._",
"_Add practical use case examples here._",
]
files_with_unfilled = []
for file_path in file_mapping.keys():
full_path = output_dir / file_path
if full_path.exists():
content = full_path.read_text()
unfilled_count = sum(1 for p in unfilled_patterns if p in content)
if unfilled_count > 0:
files_with_unfilled.append((file_path, unfilled_count))
if files_with_unfilled:
print("\nWARNING: Files with unfilled manual sections:")
for file_path, count in sorted(files_with_unfilled):
print(f" {file_path}: {count} unfilled section(s)")
print(
f"\nTotal: {len(files_with_unfilled)} files with unfilled manual sections"
)
return all_match
def main():
parser = argparse.ArgumentParser(
description="Generate block documentation from code introspection"
)
parser.add_argument(
"--output-dir",
type=Path,
default=DEFAULT_OUTPUT_DIR,
help="Output directory for generated docs",
)
parser.add_argument(
"--check",
action="store_true",
help="Check if docs are in sync (for CI), exit 1 if not",
)
parser.add_argument(
"-v",
"--verbose",
action="store_true",
help="Verbose output",
)
args = parser.parse_args()
logging.basicConfig(
level=logging.DEBUG if args.verbose else logging.INFO,
format="%(levelname)s: %(message)s",
)
print("Loading blocks...")
blocks = load_all_blocks_for_docs()
print(f"Found {len(blocks)} blocks")
if args.check:
print(f"Checking docs in {args.output_dir}...")
in_sync = check_docs_in_sync(args.output_dir, blocks)
if in_sync:
print("All documentation is in sync!")
sys.exit(0)
else:
print("\n" + "=" * 60)
print("Documentation is out of sync!")
print("=" * 60)
print("\nTo fix this, run one of the following:")
print("\n Option 1 - Run locally:")
print(
" cd autogpt_platform/backend && poetry run python scripts/generate_block_docs.py"
)
print("\n Option 2 - Ask Claude Code to run it:")
print(' "Run the block docs generator script to sync documentation"')
print("\n" + "=" * 60)
sys.exit(1)
else:
print(f"Generating docs to {args.output_dir}...")
write_block_docs(
args.output_dir,
blocks,
verbose=args.verbose,
)
print("Done!")
if __name__ == "__main__":
main()
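As a quick reference, here is a minimal sketch of the manual-content round-trip the script relies on (illustrative only; the import assumes it is run from `autogpt_platform/backend/`, as in the tests below):

```python
# Illustrative sketch; import path is an assumption based on the tests below.
from scripts.generate_block_docs import extract_manual_content

section = """### How it works
<!-- MANUAL: how_it_works -->
Fetches the page and extracts the text.
<!-- END MANUAL -->
"""

# Regeneration re-emits this text between the same markers, which is
# how hand-written docs survive repeated generator runs.
print(extract_manual_content(section))
# {'how_it_works': 'Fetches the page and extracts the text.'}
```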

View File

@@ -0,0 +1,208 @@
#!/usr/bin/env python3
"""Tests for the block documentation generator."""
import pytest
from scripts.generate_block_docs import (
class_name_to_display_name,
extract_manual_content,
generate_anchor,
type_to_readable,
)
class TestClassNameToDisplayName:
"""Tests for class_name_to_display_name function."""
def test_simple_block_name(self):
assert class_name_to_display_name("PrintBlock") == "Print"
def test_multi_word_block_name(self):
assert class_name_to_display_name("GetWeatherBlock") == "Get Weather"
def test_consecutive_capitals(self):
assert class_name_to_display_name("HTTPRequestBlock") == "HTTP Request"
def test_ai_prefix(self):
assert class_name_to_display_name("AIConditionBlock") == "AI Condition"
def test_no_block_suffix(self):
assert class_name_to_display_name("SomeClass") == "Some Class"
class TestTypeToReadable:
"""Tests for type_to_readable function."""
def test_string_type(self):
assert type_to_readable({"type": "string"}) == "str"
def test_integer_type(self):
assert type_to_readable({"type": "integer"}) == "int"
def test_number_type(self):
assert type_to_readable({"type": "number"}) == "float"
def test_boolean_type(self):
assert type_to_readable({"type": "boolean"}) == "bool"
def test_array_type(self):
result = type_to_readable({"type": "array", "items": {"type": "string"}})
assert result == "List[str]"
def test_object_type(self):
result = type_to_readable({"type": "object", "title": "MyModel"})
assert result == "MyModel"
def test_anyof_with_null(self):
result = type_to_readable({"anyOf": [{"type": "string"}, {"type": "null"}]})
assert result == "str"
def test_anyof_multiple_types(self):
result = type_to_readable({"anyOf": [{"type": "string"}, {"type": "integer"}]})
assert result == "str | int"
def test_enum_type(self):
result = type_to_readable(
{"type": "string", "enum": ["option1", "option2", "option3"]}
)
assert result == '"option1" | "option2" | "option3"'
def test_none_input(self):
assert type_to_readable(None) == "Any"
def test_non_dict_input(self):
assert type_to_readable("string") == "string"
class TestExtractManualContent:
"""Tests for extract_manual_content function."""
def test_extract_how_it_works(self):
content = """
### How it works
<!-- MANUAL: how_it_works -->
This is how it works.
<!-- END MANUAL -->
"""
result = extract_manual_content(content)
assert result == {"how_it_works": "This is how it works."}
def test_extract_use_case(self):
content = """
### Possible use case
<!-- MANUAL: use_case -->
Example use case here.
<!-- END MANUAL -->
"""
result = extract_manual_content(content)
assert result == {"use_case": "Example use case here."}
def test_extract_multiple_sections(self):
content = """
<!-- MANUAL: how_it_works -->
How it works content.
<!-- END MANUAL -->
<!-- MANUAL: use_case -->
Use case content.
<!-- END MANUAL -->
"""
result = extract_manual_content(content)
assert result == {
"how_it_works": "How it works content.",
"use_case": "Use case content.",
}
def test_empty_content(self):
result = extract_manual_content("")
assert result == {}
def test_no_markers(self):
result = extract_manual_content("Some content without markers")
assert result == {}
class TestGenerateAnchor:
"""Tests for generate_anchor function."""
def test_simple_name(self):
assert generate_anchor("Print") == "print"
def test_multi_word_name(self):
assert generate_anchor("Get Weather") == "get-weather"
def test_name_with_parentheses(self):
assert generate_anchor("Something (Optional)") == "something-optional"
def test_already_lowercase(self):
assert generate_anchor("already lowercase") == "already-lowercase"
class TestIntegration:
"""Integration tests that require block loading."""
def test_load_blocks(self):
"""Test that blocks can be loaded successfully."""
import logging
import sys
from pathlib import Path
logging.disable(logging.CRITICAL)
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import load_all_blocks_for_docs
blocks = load_all_blocks_for_docs()
assert len(blocks) > 0, "Should load at least one block"
def test_block_doc_has_required_fields(self):
"""Test that extracted block docs have required fields."""
import logging
import sys
from pathlib import Path
logging.disable(logging.CRITICAL)
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import load_all_blocks_for_docs
blocks = load_all_blocks_for_docs()
block = blocks[0]
assert hasattr(block, "id")
assert hasattr(block, "name")
assert hasattr(block, "description")
assert hasattr(block, "categories")
assert hasattr(block, "inputs")
assert hasattr(block, "outputs")
def test_file_mapping_is_deterministic(self):
"""Test that file mapping produces consistent results."""
import logging
import sys
from pathlib import Path
logging.disable(logging.CRITICAL)
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import (
get_block_file_mapping,
load_all_blocks_for_docs,
)
# Load blocks twice and compare mappings
blocks1 = load_all_blocks_for_docs()
blocks2 = load_all_blocks_for_docs()
mapping1 = get_block_file_mapping(blocks1)
mapping2 = get_block_file_mapping(blocks2)
# Check same files are generated
assert set(mapping1.keys()) == set(mapping2.keys())
# Check same block counts per file
for file_path in mapping1:
assert len(mapping1[file_path]) == len(mapping2[file_path])
if __name__ == "__main__":
pytest.main([__file__, "-v"])

View File

@@ -11,7 +11,6 @@
"forked_from_version": null,
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"id": "graph-123",
"input_schema": {
"properties": {},

View File

@@ -11,7 +11,6 @@
"forked_from_version": null,
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"id": "graph-123",
"input_schema": {
"properties": {},

View File

@@ -27,8 +27,6 @@
"properties": {}
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"trigger_setup_info": null,
"new_output": false,
"can_access_graph": true,
@@ -36,8 +34,7 @@
"is_favorite": false,
"recommended_schedule_cron": null,
"settings": {
"human_in_the_loop_safe_mode": true,
"sensitive_action_safe_mode": false
"human_in_the_loop_safe_mode": null
},
"marketplace_listing": null
},
@@ -68,8 +65,6 @@
"properties": {}
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"trigger_setup_info": null,
"new_output": false,
"can_access_graph": false,
@@ -77,8 +72,7 @@
"is_favorite": false,
"recommended_schedule_cron": null,
"settings": {
"human_in_the_loop_safe_mode": true,
"sensitive_action_safe_mode": false
"human_in_the_loop_safe_mode": null
},
"marketplace_listing": null
}

View File

@@ -51,6 +51,8 @@ export function AnalyticsResultsTable({ results }: Props) {
"Execution ID",
"Status",
"Score",
"Started At",
"Ended At",
"Summary Text",
"Error Message",
];
@@ -62,6 +64,8 @@ export function AnalyticsResultsTable({ results }: Props) {
result.exec_id,
result.status,
result.score?.toString() || "",
result.started_at ? new Date(result.started_at).toLocaleString() : "",
result.ended_at ? new Date(result.ended_at).toLocaleString() : "",
`"${(result.summary_text || "").replace(/"/g, '""')}"`, // Escape quotes in summary
`"${(result.error_message || "").replace(/"/g, '""')}"`, // Escape quotes in error
]);
@@ -248,15 +252,13 @@ export function AnalyticsResultsTable({ results }: Props) {
)}
</td>
<td className="px-4 py-3">
{(result.summary_text || result.error_message) && (
<Button
variant="ghost"
size="small"
onClick={() => toggleRowExpansion(result.exec_id)}
>
<EyeIcon size={16} />
</Button>
)}
<Button
variant="ghost"
size="small"
onClick={() => toggleRowExpansion(result.exec_id)}
>
<EyeIcon size={16} />
</Button>
</td>
</tr>
@@ -264,6 +266,44 @@ export function AnalyticsResultsTable({ results }: Props) {
<tr>
<td colSpan={7} className="bg-gray-50 px-4 py-3">
<div className="space-y-3">
{/* Timestamps section */}
<div className="grid grid-cols-2 gap-4 border-b border-gray-200 pb-3">
<div>
<Text
variant="body"
className="text-xs font-medium text-gray-600"
>
Started At:
</Text>
<Text
variant="body"
className="text-sm text-gray-700"
>
{result.started_at
? new Date(
result.started_at,
).toLocaleString()
: "—"}
</Text>
</div>
<div>
<Text
variant="body"
className="text-xs font-medium text-gray-600"
>
Ended At:
</Text>
<Text
variant="body"
className="text-sm text-gray-700"
>
{result.ended_at
? new Date(result.ended_at).toLocaleString()
: "—"}
</Text>
</div>
</div>
{result.summary_text && (
<div>
<Text

View File

@@ -541,7 +541,19 @@ export function ExecutionAnalyticsForm() {
{/* Accuracy Trends Display */}
{trendsData && (
<div className="space-y-4">
<h3 className="text-lg font-semibold">Execution Accuracy Trends</h3>
<div className="flex items-start justify-between">
<h3 className="text-lg font-semibold">Execution Accuracy Trends</h3>
<div className="rounded-md bg-blue-50 px-3 py-2 text-xs text-blue-700">
<p className="font-medium">
Chart Filters (matches monitoring system):
</p>
<ul className="mt-1 list-inside list-disc space-y-1">
<li>Only days with at least 1 execution with a correctness score</li>
<li>Last 30 days</li>
<li>Averages calculated from scored executions only</li>
</ul>
</div>
</div>
{/* Alert Section */}
{trendsData.alert && (

View File

@@ -31,18 +31,10 @@ export function AgentSettingsModal({
}
}
const {
currentHITLSafeMode,
showHITLToggle,
handleHITLToggle,
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
isPending,
shouldShowToggle,
} = useAgentSafeMode(agent);
const { currentSafeMode, isPending, hasHITLBlocks, handleToggle } =
useAgentSafeMode(agent);
if (!shouldShowToggle) return null;
if (!hasHITLBlocks) return null;
return (
<Dialog
@@ -65,48 +57,23 @@ export function AgentSettingsModal({
)}
<Dialog.Content>
<div className="space-y-6">
{showHITLToggle && (
<div className="flex w-full flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Human-in-the-loop approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at human-in-the-loop blocks and wait
for your review before continuing
</Text>
</div>
<Switch
checked={currentHITLSafeMode || false}
onCheckedChange={handleHITLToggle}
disabled={isPending}
className="mt-1"
/>
<div className="flex w-full flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">Require human approval</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause and wait for your review before
continuing
</Text>
</div>
<Switch
checked={currentSafeMode || false}
onCheckedChange={handleToggle}
disabled={isPending}
className="mt-1"
/>
</div>
)}
{showSensitiveActionToggle && (
<div className="flex w-full flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Sensitive action approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at sensitive action blocks and wait for
your review before continuing
</Text>
</div>
<Switch
checked={currentSensitiveActionSafeMode}
onCheckedChange={handleSensitiveActionToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
)}
</div>
</div>
</Dialog.Content>
</Dialog>

View File

@@ -13,16 +13,8 @@ interface Props {
}
export function SelectedSettingsView({ agent, onClearSelectedRun }: Props) {
const {
currentHITLSafeMode,
showHITLToggle,
handleHITLToggle,
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
isPending,
shouldShowToggle,
} = useAgentSafeMode(agent);
const { currentSafeMode, isPending, hasHITLBlocks, handleToggle } =
useAgentSafeMode(agent);
return (
<SelectedViewLayout agent={agent}>
@@ -42,51 +34,24 @@ export function SelectedSettingsView({ agent, onClearSelectedRun }: Props) {
</div>
<div className={`${AGENT_LIBRARY_SECTION_PADDING_X} space-y-6`}>
{shouldShowToggle ? (
<>
{showHITLToggle && (
<div className="flex w-full max-w-2xl flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Human-in-the-loop approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at human-in-the-loop blocks and
wait for your review before continuing
</Text>
</div>
<Switch
checked={currentHITLSafeMode || false}
onCheckedChange={handleHITLToggle}
disabled={isPending}
className="mt-1"
/>
</div>
{hasHITLBlocks ? (
<div className="flex w-full max-w-2xl flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">Require human approval</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause and wait for your review before
continuing
</Text>
</div>
)}
{showSensitiveActionToggle && (
<div className="flex w-full max-w-2xl flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Sensitive action approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at sensitive action blocks and wait
for your review before continuing
</Text>
</div>
<Switch
checked={currentSensitiveActionSafeMode}
onCheckedChange={handleSensitiveActionToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
)}
</>
<Switch
checked={currentSafeMode || false}
onCheckedChange={handleToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
) : (
<div className="rounded-xl border border-zinc-100 bg-white p-6">
<Text variant="body" className="text-muted-foreground">

View File

@@ -173,8 +173,9 @@ export function OldAgentLibraryView() {
if (agentRuns.length > 0) {
// select latest run
const latestRun = agentRuns.reduce((latest, current) => {
if (latest.started_at && !current.started_at) return current;
else if (!latest.started_at) return latest;
if (!latest.started_at && !current.started_at) return latest;
if (!latest.started_at) return current;
if (!current.started_at) return latest;
return latest.started_at > current.started_at ? latest : current;
}, agentRuns[0]);
selectRun(latestRun.id as GraphExecutionID);

View File

@@ -184,9 +184,11 @@ export function AgentRunsSelectorList({
))}
{agentPresets.length > 0 && <Separator className="my-1" />}
{agentRuns
.toSorted(
(a, b) => b.started_at.getTime() - a.started_at.getTime(),
)
.toSorted((a, b) => {
const aTime = a.started_at?.getTime() ?? 0;
const bTime = b.started_at?.getTime() ?? 0;
return bTime - aTime;
})
.map((run) => (
<AgentRunSummaryCard
className={listItemClasses}
@@ -199,7 +201,7 @@ export function AgentRunsSelectorList({
?.name
: null) ?? agent.name
}
timestamp={run.started_at}
timestamp={run.started_at ?? undefined}
selected={selectedView.id === run.id}
onClick={() => onSelectRun(run.id)}
onDelete={() => doDeleteRun(run as GraphExecutionMeta)}

View File

@@ -120,9 +120,11 @@ export const AgentFlowList = ({
lastRun =
runCount == 0
? null
: _flowRuns.reduce((a, c) =>
a.started_at > c.started_at ? a : c,
);
: _flowRuns.reduce((a, c) => {
const aTime = a.started_at?.getTime() ?? 0;
const cTime = c.started_at?.getTime() ?? 0;
return aTime > cTime ? a : c;
});
}
return { flow, runCount, lastRun };
})
@@ -130,10 +132,9 @@ export const AgentFlowList = ({
if (!a.lastRun && !b.lastRun) return 0;
if (!a.lastRun) return 1;
if (!b.lastRun) return -1;
return (
b.lastRun.started_at.getTime() -
a.lastRun.started_at.getTime()
);
const bTime = b.lastRun.started_at?.getTime() ?? 0;
const aTime = a.lastRun.started_at?.getTime() ?? 0;
return bTime - aTime;
})
.map(({ flow, runCount, lastRun }) => (
<TableRow

View File

@@ -29,7 +29,10 @@ export const FlowRunsStatus: React.FC<{
: statsSince;
const filteredFlowRuns =
statsSinceTimestamp != null
? executions.filter((fr) => fr.started_at.getTime() > statsSinceTimestamp)
? executions.filter(
(fr) =>
fr.started_at && fr.started_at.getTime() > statsSinceTimestamp,
)
: executions;
return (

View File

@@ -98,40 +98,43 @@ export const FlowRunsTimeline = ({
<Scatter
key={flow.id}
data={executions
.filter((e) => e.graph_id == flow.graph_id)
.filter((e) => e.graph_id == flow.graph_id && e.started_at)
.map((e) => ({
...e,
time:
e.started_at.getTime() + (e.stats?.node_exec_time ?? 0) * 1000,
(e.started_at?.getTime() ?? 0) +
(e.stats?.node_exec_time ?? 0) * 1000,
_duration: e.stats?.node_exec_time ?? 0,
}))}
name={flow.name}
fill={`hsl(${(hashString(flow.id) * 137.5) % 360}, 70%, 50%)`}
/>
))}
{executions.map((execution) => (
<Line
key={execution.id}
type="linear"
dataKey="_duration"
data={[
{
...execution,
time: execution.started_at.getTime(),
_duration: 0,
},
{
...execution,
time: execution.ended_at.getTime(),
_duration: execution.stats?.node_exec_time ?? 0,
},
]}
stroke={`hsl(${(hashString(execution.graph_id) * 137.5) % 360}, 70%, 50%)`}
strokeWidth={2}
dot={false}
legendType="none"
/>
))}
{executions
.filter((e) => e.started_at && e.ended_at)
.map((execution) => (
<Line
key={execution.id}
type="linear"
dataKey="_duration"
data={[
{
...execution,
time: execution.started_at!.getTime(),
_duration: 0,
},
{
...execution,
time: execution.ended_at!.getTime(),
_duration: execution.stats?.node_exec_time ?? 0,
},
]}
stroke={`hsl(${(hashString(execution.graph_id) * 137.5) % 360}, 70%, 50%)`}
strokeWidth={2}
dot={false}
legendType="none"
/>
))}
<Legend
content={<ScrollableLegend />}
wrapperStyle={{

View File

@@ -98,7 +98,11 @@ const Monitor = () => {
...(selectedFlow
? executions.filter((v) => v.graph_id == selectedFlow.graph_id)
: executions),
].sort((a, b) => b.started_at.getTime() - a.started_at.getTime())}
].sort((a, b) => {
const aTime = a.started_at?.getTime() ?? 0;
const bTime = b.started_at?.getTime() ?? 0;
return bTime - aTime;
})}
selectedRun={selectedRun}
onSelectRun={(r) => setSelectedRun(r.id == selectedRun?.id ? null : r)}
/>

View File

@@ -116,6 +116,9 @@ export default function UserIntegrationsPage() {
"63a6e279-2dc2-448e-bf57-85776f7176dc", // ZeroBounce
"9aa1bde0-4947-4a70-a20c-84daa3850d52", // Google Maps
"d44045af-1c33-4833-9e19-752313214de2", // Llama API
"c4e6d1a0-3b5f-4789-a8e2-9b123456789f", // V0 by Vercel
"a5b3c7d9-2e4f-4a6b-8c1d-9e0f1a2b3c4d", // Webshare Proxy
"8b3d4e5f-6a7b-8c9d-0e1f-2a3b4c5d6e7f", // OpenWeatherMap
],
[],
);

View File

@@ -6383,11 +6383,6 @@
"title": "Has Human In The Loop",
"readOnly": true
},
"has_sensitive_action": {
"type": "boolean",
"title": "Has Sensitive Action",
"readOnly": true
},
"trigger_setup_info": {
"anyOf": [
{ "$ref": "#/components/schemas/GraphTriggerInfo" },
@@ -6404,7 +6399,6 @@
"output_schema",
"has_external_trigger",
"has_human_in_the_loop",
"has_sensitive_action",
"trigger_setup_info"
],
"title": "BaseGraph"
@@ -7154,6 +7148,20 @@
"error_message": {
"anyOf": [{ "type": "string" }, { "type": "null" }],
"title": "Error Message"
},
"started_at": {
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Started At"
},
"ended_at": {
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Ended At"
}
},
"type": "object",
@@ -7260,14 +7268,20 @@
},
"status": { "$ref": "#/components/schemas/AgentExecutionStatus" },
"started_at": {
"type": "string",
"format": "date-time",
"title": "Started At"
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Started At",
"description": "When execution started running. Null if not yet started (QUEUED)."
},
"ended_at": {
"type": "string",
"format": "date-time",
"title": "Ended At"
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Ended At",
"description": "When execution finished. Null if not yet completed (QUEUED, RUNNING, INCOMPLETE, REVIEW)."
},
"is_shared": {
"type": "boolean",
@@ -7301,8 +7315,6 @@
"nodes_input_masks",
"preset_id",
"status",
"started_at",
"ended_at",
"stats",
"outputs"
],
@@ -7401,14 +7413,20 @@
},
"status": { "$ref": "#/components/schemas/AgentExecutionStatus" },
"started_at": {
"type": "string",
"format": "date-time",
"title": "Started At"
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Started At",
"description": "When execution started running. Null if not yet started (QUEUED)."
},
"ended_at": {
"type": "string",
"format": "date-time",
"title": "Ended At"
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Ended At",
"description": "When execution finished. Null if not yet completed (QUEUED, RUNNING, INCOMPLETE, REVIEW)."
},
"is_shared": {
"type": "boolean",
@@ -7437,8 +7455,6 @@
"nodes_input_masks",
"preset_id",
"status",
"started_at",
"ended_at",
"stats"
],
"title": "GraphExecutionMeta"
@@ -7485,14 +7501,20 @@
},
"status": { "$ref": "#/components/schemas/AgentExecutionStatus" },
"started_at": {
"type": "string",
"format": "date-time",
"title": "Started At"
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Started At",
"description": "When execution started running. Null if not yet started (QUEUED)."
},
"ended_at": {
"type": "string",
"format": "date-time",
"title": "Ended At"
"anyOf": [
{ "type": "string", "format": "date-time" },
{ "type": "null" }
],
"title": "Ended At",
"description": "When execution finished. Null if not yet completed (QUEUED, RUNNING, INCOMPLETE, REVIEW)."
},
"is_shared": {
"type": "boolean",
@@ -7531,8 +7553,6 @@
"nodes_input_masks",
"preset_id",
"status",
"started_at",
"ended_at",
"stats",
"outputs",
"node_executions"
@@ -7609,11 +7629,6 @@
"title": "Has Human In The Loop",
"readOnly": true
},
"has_sensitive_action": {
"type": "boolean",
"title": "Has Sensitive Action",
"readOnly": true
},
"trigger_setup_info": {
"anyOf": [
{ "$ref": "#/components/schemas/GraphTriggerInfo" },
@@ -7637,7 +7652,6 @@
"output_schema",
"has_external_trigger",
"has_human_in_the_loop",
"has_sensitive_action",
"trigger_setup_info",
"credentials_input_schema"
],
@@ -7716,11 +7730,6 @@
"title": "Has Human In The Loop",
"readOnly": true
},
"has_sensitive_action": {
"type": "boolean",
"title": "Has Sensitive Action",
"readOnly": true
},
"trigger_setup_info": {
"anyOf": [
{ "$ref": "#/components/schemas/GraphTriggerInfo" },
@@ -7745,7 +7754,6 @@
"output_schema",
"has_external_trigger",
"has_human_in_the_loop",
"has_sensitive_action",
"trigger_setup_info",
"credentials_input_schema"
],
@@ -7754,14 +7762,8 @@
"GraphSettings": {
"properties": {
"human_in_the_loop_safe_mode": {
"type": "boolean",
"title": "Human In The Loop Safe Mode",
"default": true
},
"sensitive_action_safe_mode": {
"type": "boolean",
"title": "Sensitive Action Safe Mode",
"default": false
"anyOf": [{ "type": "boolean" }, { "type": "null" }],
"title": "Human In The Loop Safe Mode"
}
},
"type": "object",
@@ -7919,16 +7921,6 @@
"title": "Has External Trigger",
"description": "Whether the agent has an external trigger (e.g. webhook) node"
},
"has_human_in_the_loop": {
"type": "boolean",
"title": "Has Human In The Loop",
"description": "Whether the agent has human-in-the-loop blocks"
},
"has_sensitive_action": {
"type": "boolean",
"title": "Has Sensitive Action",
"description": "Whether the agent has sensitive action blocks"
},
"trigger_setup_info": {
"anyOf": [
{ "$ref": "#/components/schemas/GraphTriggerInfo" },
@@ -7975,8 +7967,6 @@
"output_schema",
"credentials_input_schema",
"has_external_trigger",
"has_human_in_the_loop",
"has_sensitive_action",
"new_output",
"can_access_graph",
"is_latest_version",

View File

@@ -50,7 +50,9 @@ export function ActivityItem({ execution }: Props) {
execution.status === AgentExecutionStatus.QUEUED;
if (isActiveStatus) {
const timeAgo = formatTimeAgo(execution.started_at.toString());
const timeAgo = execution.started_at
? formatTimeAgo(execution.started_at.toString())
: "recently";
const statusText =
execution.status === AgentExecutionStatus.QUEUED ? "queued" : "running";
return [
@@ -61,7 +63,9 @@ export function ActivityItem({ execution }: Props) {
// Handle all other statuses with time display
const timeAgo = execution.ended_at
? formatTimeAgo(execution.ended_at.toString())
: formatTimeAgo(execution.started_at.toString());
: execution.started_at
? formatTimeAgo(execution.started_at.toString())
: "recently";
let statusText = "ended";
switch (execution.status) {

View File

@@ -20,15 +20,11 @@ function hasHITLBlocks(graph: GraphModel | LibraryAgent | Graph): boolean {
if ("has_human_in_the_loop" in graph) {
return !!graph.has_human_in_the_loop;
}
return false;
}
function hasSensitiveActionBlocks(
graph: GraphModel | LibraryAgent | Graph,
): boolean {
if ("has_sensitive_action" in graph) {
return !!graph.has_sensitive_action;
if (isLibraryAgent(graph)) {
return graph.settings?.human_in_the_loop_safe_mode !== null;
}
return false;
}
@@ -44,9 +40,7 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
const graphId = getGraphId(graph);
const isAgent = isLibraryAgent(graph);
const showHITLToggle = hasHITLBlocks(graph);
const showSensitiveActionToggle = hasSensitiveActionBlocks(graph);
const shouldShowToggle = showHITLToggle || showSensitiveActionToggle;
const shouldShowToggle = hasHITLBlocks(graph);
const { mutateAsync: updateGraphSettings, isPending } =
usePatchV1UpdateGraphSettings();
@@ -62,37 +56,27 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
},
);
const [localHITLSafeMode, setLocalHITLSafeMode] = useState<boolean>(true);
const [localSensitiveActionSafeMode, setLocalSensitiveActionSafeMode] =
useState<boolean>(false);
const [isLocalStateLoaded, setIsLocalStateLoaded] = useState<boolean>(false);
const [localSafeMode, setLocalSafeMode] = useState<boolean | null>(null);
useEffect(() => {
if (!isAgent && libraryAgent) {
setLocalHITLSafeMode(
libraryAgent.settings?.human_in_the_loop_safe_mode ?? true,
);
setLocalSensitiveActionSafeMode(
libraryAgent.settings?.sensitive_action_safe_mode ?? false,
);
setIsLocalStateLoaded(true);
const backendValue = libraryAgent.settings?.human_in_the_loop_safe_mode;
if (backendValue !== undefined) {
setLocalSafeMode(backendValue);
}
}
}, [isAgent, libraryAgent]);
const currentHITLSafeMode = isAgent
? (graph.settings?.human_in_the_loop_safe_mode ?? true)
: localHITLSafeMode;
const currentSafeMode = isAgent
? graph.settings?.human_in_the_loop_safe_mode
: localSafeMode;
const currentSensitiveActionSafeMode = isAgent
? (graph.settings?.sensitive_action_safe_mode ?? false)
: localSensitiveActionSafeMode;
const isStateUndetermined = isAgent
? graph.settings?.human_in_the_loop_safe_mode == null
: isLoading || localSafeMode === null;
const isHITLStateUndetermined = isAgent
? false
: isLoading || !isLocalStateLoaded;
const handleHITLToggle = useCallback(async () => {
const newSafeMode = !currentHITLSafeMode;
const handleToggle = useCallback(async () => {
const newSafeMode = !currentSafeMode;
try {
await updateGraphSettings({
@@ -101,7 +85,7 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
});
if (!isAgent) {
setLocalHITLSafeMode(newSafeMode);
setLocalSafeMode(newSafeMode);
}
if (isAgent) {
@@ -117,62 +101,37 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
queryClient.invalidateQueries({ queryKey: ["v2", "executions"] });
toast({
title: `HITL safe mode ${newSafeMode ? "enabled" : "disabled"}`,
title: `Safe mode ${newSafeMode ? "enabled" : "disabled"}`,
description: newSafeMode
? "Human-in-the-loop blocks will require manual review"
: "Human-in-the-loop blocks will proceed automatically",
duration: 2000,
});
} catch (error) {
handleToggleError(error, isAgent, toast);
}
}, [
currentHITLSafeMode,
graphId,
isAgent,
graph.id,
updateGraphSettings,
queryClient,
toast,
]);
const isNotFoundError =
error instanceof Error &&
(error.message.includes("404") || error.message.includes("not found"));
const handleSensitiveActionToggle = useCallback(async () => {
const newSafeMode = !currentSensitiveActionSafeMode;
try {
await updateGraphSettings({
graphId,
data: { sensitive_action_safe_mode: newSafeMode },
});
if (!isAgent) {
setLocalSensitiveActionSafeMode(newSafeMode);
}
if (isAgent) {
queryClient.invalidateQueries({
queryKey: getGetV2GetLibraryAgentQueryOptions(graph.id.toString())
.queryKey,
if (!isAgent && isNotFoundError) {
toast({
title: "Safe mode not available",
description:
"To configure safe mode, please save this graph to your library first.",
variant: "destructive",
});
} else {
toast({
title: "Failed to update safe mode",
description:
error instanceof Error
? error.message
: "An unexpected error occurred.",
variant: "destructive",
});
}
queryClient.invalidateQueries({
queryKey: ["v1", "graphs", graphId, "executions"],
});
queryClient.invalidateQueries({ queryKey: ["v2", "executions"] });
toast({
title: `Sensitive action safe mode ${newSafeMode ? "enabled" : "disabled"}`,
description: newSafeMode
? "Sensitive action blocks will require manual review"
: "Sensitive action blocks will proceed automatically",
duration: 2000,
});
} catch (error) {
handleToggleError(error, isAgent, toast);
}
}, [
currentSensitiveActionSafeMode,
currentSafeMode,
graphId,
isAgent,
graph.id,
@@ -182,53 +141,11 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
]);
return {
// HITL safe mode
currentHITLSafeMode,
showHITLToggle,
isHITLStateUndetermined,
handleHITLToggle,
// Sensitive action safe mode
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
// General
currentSafeMode,
isPending,
shouldShowToggle,
// Backwards compatibility
currentSafeMode: currentHITLSafeMode,
isStateUndetermined: isHITLStateUndetermined,
handleToggle: handleHITLToggle,
hasHITLBlocks: showHITLToggle,
isStateUndetermined,
handleToggle,
hasHITLBlocks: shouldShowToggle,
};
}
function handleToggleError(
error: unknown,
isAgent: boolean,
toast: ReturnType<typeof useToast>["toast"],
) {
const isNotFoundError =
error instanceof Error &&
(error.message.includes("404") || error.message.includes("not found"));
if (!isAgent && isNotFoundError) {
toast({
title: "Safe mode not available",
description:
"To configure safe mode, please save this graph to your library first.",
variant: "destructive",
});
} else {
toast({
title: "Failed to update safe mode",
description:
error instanceof Error
? error.message
: "An unexpected error occurred.",
variant: "destructive",
});
}
}

View File

@@ -327,8 +327,8 @@ export type GraphExecutionMeta = {
| "FAILED"
| "INCOMPLETE"
| "REVIEW";
started_at: Date;
ended_at: Date;
started_at: Date | null;
ended_at: Date | null;
stats: {
error: string | null;
cost: number;

View File

@@ -0,0 +1 @@
# Video editing blocks

View File

@@ -1,12 +1,15 @@
[flake8]
max-line-length = 88
extend-ignore = E203
exclude =
.tox,
__pycache__,
*.pyc,
.env
venv*/*,
.venv/*,
reports/*,
dist/*,
data/*,
.env,
venv*,
.venv,
reports,
dist,
data,
.benchmark_workspaces,
.autogpt,

classic/CLAUDE.md
View File

@@ -0,0 +1,291 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
AutoGPT Classic is an experimental, **unsupported** project demonstrating autonomous GPT-4 operation. Dependencies will not be updated, and the codebase contains known vulnerabilities. This is preserved for educational/historical purposes.
## Repository Structure
```
classic/
├── pyproject.toml # Single consolidated Poetry project
├── poetry.lock # Single lock file
├── forge/
│ └── forge/ # Core agent framework package
├── original_autogpt/
│ └── autogpt/ # AutoGPT agent package
├── direct_benchmark/
│ └── direct_benchmark/ # Benchmark harness package
└── benchmark/ # Challenge definitions (data, not code)
```
All packages are managed by a single `pyproject.toml` at the classic/ root.
## Common Commands
### Setup & Install
```bash
# Install everything from classic/ directory
cd classic
poetry install
```
### Running Agents
```bash
# Run forge agent
poetry run python -m forge
# Run original autogpt server
poetry run serve --debug
# Run autogpt CLI
poetry run autogpt
```
Agents run on `http://localhost:8000` by default.
### Benchmarking
```bash
# Run benchmarks
poetry run direct-benchmark run
# Run specific strategies and models
poetry run direct-benchmark run \
--strategies one_shot,rewoo \
--models claude \
--parallel 4
# Run a single test
poetry run direct-benchmark run --tests ReadFile
# List available commands
poetry run direct-benchmark --help
```
### Testing
```bash
poetry run pytest # All tests
poetry run pytest forge/tests/ # Forge tests only
poetry run pytest original_autogpt/tests/ # AutoGPT tests only
poetry run pytest -k test_name # Single test by name
poetry run pytest path/to/test.py # Specific test file
poetry run pytest --cov # With coverage
```
### Linting & Formatting
Run from the classic/ directory:
```bash
# Format everything (recommended to run together)
poetry run black . && poetry run isort .
# Check formatting (CI-style, no changes)
poetry run black --check . && poetry run isort --check-only .
# Lint
poetry run flake8 # Style linting
# Type check
poetry run pyright # Type checking (some errors are expected in infrastructure code)
```
Note: Always run linters over the entire directory, not specific files, for best results.
## Architecture
### Forge (Core Framework)
The `forge` package is the foundation that other components depend on:
- `forge/agent/` - Agent implementation and protocols
- `forge/llm/` - Multi-provider LLM integrations (OpenAI, Anthropic, Groq, LiteLLM)
- `forge/components/` - Reusable agent components
- `forge/file_storage/` - File system abstraction
- `forge/config/` - Configuration management
### Original AutoGPT
- `original_autogpt/autogpt/app/` - CLI application entry points
- `original_autogpt/autogpt/agents/` - Agent implementations
- `original_autogpt/autogpt/agent_factory/` - Agent creation logic
### Direct Benchmark
Benchmark harness for testing agent performance:
- `direct_benchmark/direct_benchmark/` - CLI and harness code
- `benchmark/agbenchmark/challenges/` - Test cases organized by category (code, retrieval, data, etc.)
- Reports generated in `direct_benchmark/reports/`
### Package Structure
All three packages are included in a single Poetry project. Imports are fully qualified:
- `from forge.agent.base import BaseAgent`
- `from autogpt.agents.agent import Agent`
- `from direct_benchmark.harness import BenchmarkHarness`
## Code Style
- Python 3.12 target
- Line length: 88 characters (Black default)
- Black for formatting, isort for imports (profile="black")
- Type hints with Pyright checking
## Testing Patterns
- Async support via pytest-asyncio
- Fixtures defined in `conftest.py` files provide: `tmp_project_root`, `storage`, `config`, `llm_provider`, `agent`
- Tests requiring API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY) will skip if not set
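For example, that skip behavior is typically expressed with a standard `skipif` marker (an illustrative pattern; the actual `conftest.py` may gate tests differently):

```python
import os

import pytest

# Illustrative only; the real conftest.py may implement this differently.
requires_openai = pytest.mark.skipif(
    not os.getenv("OPENAI_API_KEY"),
    reason="OPENAI_API_KEY not set",
)

@requires_openai
def test_completion_roundtrip(llm_provider):  # fixture from conftest.py
    ...
```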
## Environment Setup
Copy `.env.example` to `.env` in the relevant directory and add your API keys:
```bash
cp .env.example .env
# Edit .env with your OPENAI_API_KEY, etc.
```
## Workspaces
Agents operate within a **workspace** - a directory containing all agent data and files. The workspace root defaults to the current working directory.
### Workspace Structure
```
{workspace}/
├── .autogpt/
│ ├── autogpt.yaml # Workspace-level permissions
│ ├── ap_server.db # Agent Protocol database (server mode)
│ └── agents/
│ └── AutoGPT-{agent_id}/
│ ├── state.json # Agent profile, directives, action history
│ ├── permissions.yaml # Agent-specific permission overrides
│ └── workspace/ # Agent's sandboxed working directory
```
### Key Concepts
- **Multiple agents** can coexist in the same workspace (each gets its own subdirectory)
- **File access** is sandboxed to the agent's `workspace/` directory by default
- **State persistence** - agent state saves to `state.json` and survives across sessions
- **Storage backends** - supports local filesystem, S3, and GCS (via `FILE_STORAGE_BACKEND` env var)
### Specifying a Workspace
```bash
# Default: uses current directory
cd /path/to/my/project && poetry run autogpt
# Or specify explicitly via CLI (if supported)
poetry run autogpt --workspace /path/to/workspace
```
## Settings Location
Configuration uses a **layered system** with three levels (in order of precedence):
### 1. Environment Variables (Global)
Loaded from `.env` file in the working directory:
```bash
# Required
OPENAI_API_KEY=sk-...
# Optional LLM settings
SMART_LLM=gpt-4o # Model for complex reasoning
FAST_LLM=gpt-4o-mini # Model for simple tasks
EMBEDDING_MODEL=text-embedding-3-small
# Optional search providers (for web search component)
TAVILY_API_KEY=tvly-...
SERPER_API_KEY=...
GOOGLE_API_KEY=...
GOOGLE_CUSTOM_SEARCH_ENGINE_ID=...
# Optional infrastructure
LOG_LEVEL=DEBUG # DEBUG, INFO, WARNING, ERROR
DATABASE_STRING=sqlite:///agent.db # Agent Protocol database
PORT=8000 # Server port
FILE_STORAGE_BACKEND=local # local, s3, or gcs
```
### 2. Workspace Settings (`{workspace}/.autogpt/autogpt.yaml`)
Workspace-wide permissions that apply to **all agents** in this workspace:
```yaml
allow:
- read_file({workspace}/**)
- write_to_file({workspace}/**)
- list_folder({workspace}/**)
- web_search(*)
deny:
- read_file(**.env)
- read_file(**.env.*)
- read_file(**.key)
- read_file(**.pem)
- execute_shell(rm -rf:*)
- execute_shell(sudo:*)
```
Auto-generated with sensible defaults if missing.
### 3. Agent Settings (`{workspace}/.autogpt/agents/{id}/permissions.yaml`)
Agent-specific permission overrides:
```yaml
allow:
- execute_python(*)
- web_search(*)
deny:
- execute_shell(*)
```
## Permissions
The permission system uses **pattern matching** with a **first-match-wins** evaluation order.
### Permission Check Order
1. Agent deny list → **Block**
2. Workspace deny list → **Block**
3. Agent allow list → **Allow**
4. Workspace allow list → **Allow**
5. Session denied list → **Block** (commands denied during this session)
6. **Prompt user** → Interactive approval (if in interactive mode)
### Pattern Syntax
Format: `command_name(glob_pattern)`
| Pattern | Description |
|---------|-------------|
| `read_file({workspace}/**)` | Read any file in workspace (recursive) |
| `write_to_file({workspace}/*.txt)` | Write only .txt files in workspace root |
| `execute_shell(python:**)` | Execute Python commands only |
| `execute_shell(git:*)` | Execute any git command |
| `web_search(*)` | Allow all web searches |
Special tokens:
- `{workspace}` - Replaced with actual workspace path
- `**` - Matches any path including `/`
- `*` - Matches any characters except `/`
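To make the pattern semantics and the first-match-wins order concrete, here is a minimal, self-contained sketch (an illustration only, not forge's actual matcher; the rule contents are made up):

```python
import re

# Illustrative sketch only -- the real matcher in forge may differ.
# Translates a pattern such as "read_file({workspace}/**)" into a regex
# where "**" crosses "/" boundaries and "*" does not.
def compile_pattern(pattern: str, workspace: str) -> re.Pattern:
    pattern = pattern.replace("{workspace}", workspace)
    parts, i = [], 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            parts.append(".*")
            i += 2
        elif pattern[i] == "*":
            parts.append("[^/]*")
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    return re.compile("^" + "".join(parts) + "$")

# First-match-wins walk over the rule sources, in the documented order.
def check(command: str, ws: str, rule_sets: list[tuple[list[str], bool]]):
    for rules, verdict in rule_sets:
        if any(compile_pattern(p, ws).match(command) for p in rules):
            return verdict
    return None  # undecided -> fall through to the interactive prompt

rule_sets = [
    (["read_file(**.env)"], False),         # agent deny
    (["execute_shell(sudo:*)"], False),     # workspace deny
    (["web_search(*)"], True),              # agent allow
    (["read_file({workspace}/**)"], True),  # workspace allow
]
print(check("read_file(/ws/src/app.py)", "/ws", rule_sets))  # True
print(check("read_file(/ws/.env)", "/ws", rule_sets))        # False
```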
### Interactive Approval Scopes
When prompted for permission, users can choose:
| Scope | Effect |
|-------|--------|
| **Once** | Allow this one time only (not saved) |
| **Agent** | Always allow for this agent (saves to agent `permissions.yaml`) |
| **Workspace** | Always allow for all agents (saves to `autogpt.yaml`) |
| **Deny** | Deny this command (saves to appropriate deny list) |
### Default Security
Out of the box, the following are **denied by default**:
- Reading sensitive files (`.env`, `.key`, `.pem`)
- Destructive shell commands (`rm -rf`, `sudo`)
- Operations outside the workspace directory

View File

@@ -1,182 +0,0 @@
## CLI Documentation
This document describes how to interact with the project's CLI (Command Line Interface). It includes the types of outputs you can expect from each command. Note that the `agent stop` command will terminate any process running on port 8000.
### 1. Entry Point for the CLI
Running the `./run` command without any parameters will display the help message, which provides a list of available commands and options. Additionally, you can append `--help` to any command to view help information specific to that command.
```sh
./run
```
**Output**:
```
Usage: cli.py [OPTIONS] COMMAND [ARGS]...
Options:
--help Show this message and exit.
Commands:
agent Commands to create, start and stop agents
benchmark Commands to start the benchmark and list tests and categories
setup Installs dependencies needed for your system.
```
If you need assistance with any command, simply add the `--help` parameter to the end of your command, like so:
```sh
./run COMMAND --help
```
This will display a detailed help message regarding that specific command, including a list of any additional options and arguments it accepts.
### 2. Setup Command
```sh
./run setup
```
**Output**:
```
Setup initiated
Installation has been completed.
```
This command initializes the setup of the project.
### 3. Agents Commands
**a. List All Agents**
```sh
./run agent list
```
**Output**:
```
Available agents: 🤖
🐙 forge
🐙 autogpt
```
Lists all the available agents.
**b. Create a New Agent**
```sh
./run agent create my_agent
```
**Output**:
```
🎉 New agent 'my_agent' created and switched to the new directory in agents folder.
```
Creates a new agent named 'my_agent'.
**c. Start an Agent**
```sh
./run agent start my_agent
```
**Output**:
```
... (ASCII Art representing the agent startup)
[Date and Time] [forge.sdk.db] [DEBUG] 🐛 Initializing AgentDB with database_string: sqlite:///agent.db
[Date and Time] [forge.sdk.agent] [INFO] 📝 Agent server starting on http://0.0.0.0:8000
```
Starts the 'my_agent' and displays startup ASCII art and logs.
**d. Stop an Agent**
```sh
./run agent stop
```
**Output**:
```
Agent stopped
```
Stops the running agent.
### 4. Benchmark Commands
**a. List Benchmark Categories**
```sh
./run benchmark categories list
```
**Output**:
```
Available categories: 📚
📖 code
📖 safety
📖 memory
... (and so on)
```
Lists all available benchmark categories.
**b. List Benchmark Tests**
```sh
./run benchmark tests list
```
**Output**:
```
Available tests: 📚
📖 interface
🔬 Search - TestSearch
🔬 Write File - TestWriteFile
... (and so on)
```
Lists all available benchmark tests.
**c. Show Details of a Benchmark Test**
```sh
./run benchmark tests details TestWriteFile
```
**Output**:
```
TestWriteFile
-------------
Category: interface
Task: Write the word 'Washington' to a .txt file
... (and other details)
```
Displays the details of the 'TestWriteFile' benchmark test.
**d. Start Benchmark for the Agent**
```sh
./run benchmark start my_agent
```
**Output**:
```
(more details about the testing process are shown whilst the tests are running)
============= 13 failed, 1 passed in 0.97s ============...
```
Displays the results of the benchmark tests on 'my_agent'.

View File

@@ -2,7 +2,7 @@
ARG BUILD_TYPE=dev
# Use an official Python base image from the Docker Hub
FROM python:3.10-slim AS autogpt-base
FROM python:3.12-slim AS autogpt-base
# Install browsers
RUN apt-get update && apt-get install -y \
@@ -34,9 +34,6 @@ COPY original_autogpt/pyproject.toml original_autogpt/poetry.lock ./
# Include forge so it can be used as a path dependency
COPY forge/ ../forge
# Include frontend
COPY frontend/ ../frontend
# Set the entrypoint
ENTRYPOINT ["poetry", "run", "autogpt"]
CMD []

View File

@@ -1,173 +0,0 @@
# Quickstart Guide
> For the complete getting-started experience, see the [tutorial series](https://aiedge.medium.com/autogpt-forge-e3de53cc58ec).
Welcome to the Quickstart Guide! This guide will walk you through setting up, building, and running your own AutoGPT agent. Whether you're a seasoned AI developer or just starting out, this guide will provide you with the steps to jumpstart your journey in AI development with AutoGPT.
## System Requirements
This project supports Linux (Debian-based), Mac, and Windows Subsystem for Linux (WSL). If you use a Windows system, you must install WSL. You can find the installation instructions for WSL [here](https://learn.microsoft.com/en-us/windows/wsl/).
## Getting Setup
1. **Fork the Repository**
To fork the repository, follow these steps:
- Navigate to the main page of the repository.
![Repository](../docs/content/imgs/quickstart/001_repo.png)
- In the top-right corner of the page, click Fork.
![Create Fork UI](../docs/content/imgs/quickstart/002_fork.png)
- On the next page, select your GitHub account to create the fork.
- Wait for the forking process to complete. You now have a copy of the repository in your GitHub account.
2. **Clone the Repository**
To clone the repository, you need to have Git installed on your system. If you don't have Git installed, download it from [here](https://git-scm.com/downloads). Once you have Git installed, follow these steps:
- Open your terminal.
- Navigate to the directory where you want to clone the repository.
- Run the `git clone` command for the fork you just created.
![Clone the Repository](../docs/content/imgs/quickstart/003_clone.png)
- Then open your project in your IDE.
![Open the Project in your IDE](../docs/content/imgs/quickstart/004_ide.png)
3. **Set up the Project**
Next, we need to set up the required dependencies. We have a tool to help you perform the common tasks on the repo.
It can be accessed by typing `./run` in the terminal.
The first command you need to use is `./run setup`. This will guide you through setting up your system.
Initially, you will get instructions for installing Flutter and Chrome and setting up your GitHub access token, as shown in the following image:
![Setup the Project](../docs/content/imgs/quickstart/005_setup.png)
### For Windows Users
If you're a Windows user and experience issues after installing WSL, follow the steps below to resolve them.
#### Update WSL
Run the following command in PowerShell or Command Prompt. It will:
1. Enable the optional WSL and Virtual Machine Platform components.
2. Download and install the latest Linux kernel.
3. Set WSL 2 as the default.
4. Download and install the Ubuntu Linux distribution (a reboot may be required).
```shell
wsl --install
```
For more detailed information and additional steps, refer to [Microsoft's WSL Setup Environment Documentation](https://learn.microsoft.com/en-us/windows/wsl/setup/environment).
#### Resolve FileNotFoundError or "No such file or directory" Errors
When you run `./run setup`, if you encounter errors like `No such file or directory` or `FileNotFoundError`, it might be because Windows-style line endings (CRLF, Carriage Return Line Feed) are not compatible with Unix/Linux-style line endings (LF, Line Feed).
To resolve this, you can use the `dos2unix` utility to convert the line endings in your script from CRLF to LF. Here's how to install and run `dos2unix` on the script:
```shell
sudo apt update
sudo apt install dos2unix
dos2unix ./run
```
After executing the above commands, running `./run setup` should work successfully.
#### Store Project Files within the WSL File System
If you continue to experience issues, consider storing your project files within the WSL file system instead of the Windows file system. This method avoids path translations and permissions issues and provides a more consistent development environment.
You can re-run the setup command at any time to check how far along your setup is.
When setup has been completed, the command will return an output like this:
![Setup Complete](../docs/content/imgs/quickstart/006_setup_complete.png)
## Creating Your Agent
After completing the setup, the next step is to create your agent template.
Execute the command `./run agent create YOUR_AGENT_NAME`, where `YOUR_AGENT_NAME` should be replaced with your chosen name.
Tips for naming your agent:
* Give it its own unique name, or name it after yourself
* Include an important aspect of your agent in the name, such as its purpose
Examples: `SwiftyosAssistant`, `PwutsPRAgent`, `MySuperAgent`
![Create an Agent](../docs/content/imgs/quickstart/007_create_agent.png)
## Running your Agent
Your agent can be started using the command: `./run agent start YOUR_AGENT_NAME`
This starts the agent on the URL: `http://localhost:8000/`
![Start the Agent](../docs/content/imgs/quickstart/009_start_agent.png)
The front end can be accessed from `http://localhost:8000/`; first, you must log in using either a Google account or your GitHub account.
![Login](../docs/content/imgs/quickstart/010_login.png)
Upon logging in, you will see a page with your task history down the left-hand side and the 'chat' window for sending tasks to your agent.
![Login](../docs/content/imgs/quickstart/011_home.png)
When you have finished with your agent or just need to restart it, use Ctrl-C to end the session. Then, you can re-run the start command.
If you are having issues and want to ensure the agent has been stopped, there is a `./run agent stop` command that kills the process using port 8000, which should be the agent.
## Benchmarking your Agent
The benchmarking system can also be accessed using the CLI:
```bash
agpt % ./run benchmark
Usage: cli.py benchmark [OPTIONS] COMMAND [ARGS]...
Commands to start the benchmark and list tests and categories
Options:
--help Show this message and exit.
Commands:
categories Benchmark categories group command
start Starts the benchmark command
tests Benchmark tests group command
agpt % ./run benchmark categories
Usage: cli.py benchmark categories [OPTIONS] COMMAND [ARGS]...
Benchmark categories group command
Options:
--help Show this message and exit.
Commands:
list List benchmark categories command
agpt % ./run benchmark tests
Usage: cli.py benchmark tests [OPTIONS] COMMAND [ARGS]...
Benchmark tests group command
Options:
--help Show this message and exit.
Commands:
details Benchmark test details command
list List benchmark tests command
```
The benchmark has been split into different categories of skills you can test your agent on. You can see what categories are available with
```bash
./run benchmark categories list
# And what tests are available with
./run benchmark tests list
```
![Login](../docs/content/imgs/quickstart/012_tests.png)
Finally, you can run the benchmark with
```bash
./run benchmark start YOUR_AGENT_NAME
```

View File

@@ -4,7 +4,7 @@ AutoGPT Classic was an experimental project to demonstrate autonomous GPT-4 oper
## Project Status
⚠️ **This project is unsupported, and dependencies will not be updated. It was an experiment that has concluded its initial research phase. If you want to use AutoGPT, you should use the [AutoGPT Platform](/autogpt_platform)**
**This project is unsupported, and dependencies will not be updated.** It was an experiment that has concluded its initial research phase. If you want to use AutoGPT, you should use the [AutoGPT Platform](/autogpt_platform).
For those interested in autonomous AI agents, we recommend exploring more actively maintained alternatives or referring to this codebase for educational purposes only.
@@ -16,37 +16,171 @@ AutoGPT Classic was one of the first implementations of autonomous AI agents - A
- Learn from the results and adjust its approach
- Chain multiple actions together to achieve an objective
## Key Features
- 🔄 Autonomous task chaining
- 🛠 Tool and API integration capabilities
- 💾 Memory management for context retention
- 🔍 Web browsing and information gathering
- 📝 File operations and content creation
- 🔄 Self-prompting and task breakdown
## Structure
The project is organized into several key components:
- `/benchmark` - Performance testing tools
- `/forge` - Core autonomous agent framework
- `/frontend` - User interface components
- `/original_autogpt` - Original implementation
```
classic/
├── pyproject.toml # Single consolidated Poetry project
├── poetry.lock # Single lock file
├── forge/ # Core autonomous agent framework
├── original_autogpt/ # Original implementation
├── direct_benchmark/ # Benchmark harness
└── benchmark/ # Challenge definitions (data)
```
## Getting Started
While this project is no longer actively maintained, you can still explore the codebase:
### Prerequisites
- Python 3.12+
- [Poetry](https://python-poetry.org/docs/#installation)
### Installation
1. Clone the repository:
```bash
# Clone the repository
git clone https://github.com/Significant-Gravitas/AutoGPT.git
cd classic
# Install everything
poetry install
```
2. Review the documentation:
- For reference, see the [documentation](https://docs.agpt.co); you can browse it as of this commit so that the docs match this version of the code.
- Check `CLI-USAGE.md` for command-line interface details
- Refer to `TROUBLESHOOTING.md` for common issues
### Configuration
Configuration uses a layered system:
1. **Environment variables** (`.env` file)
2. **Workspace settings** (`.autogpt/autogpt.yaml`)
3. **Agent settings** (`.autogpt/agents/{id}/permissions.yaml`)
Copy the example environment file and add your API keys:
```bash
cp .env.example .env
```
Key environment variables:
```bash
# Required
OPENAI_API_KEY=sk-...
# Optional LLM settings
SMART_LLM=gpt-4o # Model for complex reasoning
FAST_LLM=gpt-4o-mini # Model for simple tasks
# Optional search providers
TAVILY_API_KEY=tvly-...
SERPER_API_KEY=...
# Optional infrastructure
LOG_LEVEL=DEBUG
PORT=8000
FILE_STORAGE_BACKEND=local # local, s3, or gcs
```
### Running
All commands run from the `classic/` directory:
```bash
# Run forge agent
poetry run python -m forge
# Run original autogpt server
poetry run serve --debug
# Run autogpt CLI
poetry run autogpt
```
Agents run on `http://localhost:8000` by default.
### Benchmarking
```bash
poetry run direct-benchmark run
```
### Testing
```bash
poetry run pytest # All tests
poetry run pytest forge/tests/ # Forge tests only
poetry run pytest original_autogpt/tests/ # AutoGPT tests only
```
## Workspaces
Agents operate within a **workspace** directory that contains all agent data and files:
```
{workspace}/
├── .autogpt/
│ ├── autogpt.yaml # Workspace-level permissions
│ ├── ap_server.db # Agent Protocol database (server mode)
│ └── agents/
│ └── AutoGPT-{agent_id}/
│ ├── state.json # Agent profile, directives, history
│ ├── permissions.yaml # Agent-specific permissions
│ └── workspace/ # Agent's sandboxed working directory
```
- The workspace defaults to the current working directory
- Multiple agents can coexist in the same workspace
- Agent file access is sandboxed to their `workspace/` subdirectory
- State persists across sessions via `state.json`
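The sandboxing rule above can be made concrete with a short sketch. This is a minimal illustration, not the project's actual code: the function name `resolve_in_sandbox` and the way the sandbox root is derived from the layout shown above are assumptions.
```python
from pathlib import Path


# Hypothetical sketch: resolve a path an agent requests and verify that it
# stays inside the agent's sandboxed workspace/ subdirectory, following the
# directory layout shown above.
def resolve_in_sandbox(workspace_root: Path, agent_id: str, relative: str) -> Path:
    sandbox = (
        workspace_root / ".autogpt" / "agents" / f"AutoGPT-{agent_id}" / "workspace"
    ).resolve()
    candidate = (sandbox / relative).resolve()
    if not candidate.is_relative_to(sandbox):  # blocks "../" escape attempts
        raise PermissionError(f"{relative!r} escapes the agent's sandbox")
    return candidate
```
Resolving the path before checking is what defeats `..` traversal; a naive prefix check on the unresolved string would not.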
## Permissions
AutoGPT uses a **layered permission system** with pattern matching:
### Permission Files
| File | Scope | Location |
|------|-------|----------|
| `autogpt.yaml` | All agents in workspace | `.autogpt/autogpt.yaml` |
| `permissions.yaml` | Single agent | `.autogpt/agents/{id}/permissions.yaml` |
### Permission Format
```yaml
allow:
- read_file({workspace}/**) # Read any file in workspace
- write_to_file({workspace}/**) # Write any file in workspace
- web_search(*) # All web searches
deny:
- read_file(**.env) # Block .env files
- execute_shell(sudo:*) # Block sudo commands
```
### Check Order (First Match Wins)
1. Agent deny → Block
2. Workspace deny → Block
3. Agent allow → Allow
4. Workspace allow → Allow
5. Prompt user → Interactive approval
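To make the first-match-wins ordering concrete, here is a minimal sketch of the check loop. It is an illustration only: the rule format follows the YAML above, but the `fnmatch`-based matching and the layer dictionaries are assumptions, not the actual implementation.
```python
from fnmatch import fnmatch


def _rule_matches(rules: list[str], command: str, argument: str) -> bool:
    """Match rules like 'read_file(**.env)' against a command invocation."""
    for rule in rules:
        name, _, pattern = rule.partition("(")
        if name == command and fnmatch(argument, pattern.rstrip(")")):
            return True
    return False


# Hypothetical sketch of the layered, first-match-wins check order.
def check_permission(
    command: str, argument: str, agent: dict, workspace: dict
) -> str | None:
    if _rule_matches(agent.get("deny", []), command, argument):
        return "deny"  # 1. Agent deny -> Block
    if _rule_matches(workspace.get("deny", []), command, argument):
        return "deny"  # 2. Workspace deny -> Block
    if _rule_matches(agent.get("allow", []), command, argument):
        return "allow"  # 3. Agent allow -> Allow
    if _rule_matches(workspace.get("allow", []), command, argument):
        return "allow"  # 4. Workspace allow -> Allow
    return None  # 5. Fall through to interactive approval
```
A `None` result corresponds to step 5: the user is prompted interactively, with the approval scopes described next.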
### Interactive Approval
When prompted, users can approve commands with different scopes:
- **Once** - Allow this one time only
- **Agent** - Always allow for this agent
- **Workspace** - Always allow for all agents
- **Deny** - Block this command
### Default Security
Denied by default:
- Sensitive files (`.env`, `.key`, `.pem`)
- Destructive commands (`rm -rf`, `sudo`)
- Operations outside the workspace
## Security Notice
This codebase has **known vulnerabilities** and issues with its dependencies. It will not be updated to new dependencies. Use for educational purposes only.
## License
@@ -55,27 +189,3 @@ This project segment is licensed under the MIT License - see the [LICENSE](LICEN
## Documentation
Please refer to the [documentation](https://docs.agpt.co) for more detailed information about the project's architecture and concepts.
You can browse them as of this commit so that the docs match this version of the code.
## Historical Impact
AutoGPT Classic played a significant role in advancing the field of autonomous AI agents:
- Demonstrated practical implementation of AI autonomy
- Inspired numerous derivative projects and research
- Contributed to the development of AI agent architectures
- Helped identify key challenges in AI autonomy
## Security Notice
If you're studying this codebase, please understand this has KNOWN vulnerabilities and issues with its dependencies. It will not be updated to new dependencies.
## Community & Support
While active development has concluded:
- The codebase remains available for study and reference
- Historical discussions can be found in project issues
- Related research and developments continue in the broader AI agent community
## Acknowledgments
Thanks to all contributors who participated in this experimental project and helped advance the field of autonomous AI agents.

View File

@@ -1,4 +0,0 @@
AGENT_NAME=mini-agi
REPORTS_FOLDER="reports/mini-agi"
OPENAI_API_KEY="sk-" # for LLM eval
BUILD_SKILL_TREE=false # set to true to build the skill tree.

View File

@@ -1,12 +0,0 @@
[flake8]
max-line-length = 88
# Ignore rules that conflict with Black code style
extend-ignore = E203, W503
exclude =
__pycache__/,
*.pyc,
.pytest_cache/,
venv*/,
.venv/,
reports/,
agbenchmark/reports/,

View File

@@ -1,174 +0,0 @@
agbenchmark_config/workspace/
backend/backend_stdout.txt
reports/df*.pkl
reports/raw*
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
.idea/
.DS_Store
secrets.json
agbenchmark_config/challenges_already_beaten.json
agbenchmark_config/challenges/pri_*
agbenchmark_config/updates.json
agbenchmark_config/reports/*
agbenchmark_config/reports/success_rate.json
agbenchmark_config/reports/regression_tests.json

View File

@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2024 AutoGPT
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -1,25 +0,0 @@
# Auto-GPT Benchmarks
Built for the purpose of benchmarking the performance of agents regardless of how they work.
Objectively know how well your agent is performing in categories like code, retrieval, memory, and safety.
Save time and money while doing it through smart dependencies. The best part? It's all automated.
## Scores:
<img width="733" alt="Screenshot 2023-07-25 at 10 35 01 AM" src="https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks/assets/9652976/98963e0b-18b9-4b17-9a6a-4d3e4418af70">
## Ranking overall:
- 1- [Beebot](https://github.com/AutoPackAI/beebot)
- 2- [mini-agi](https://github.com/muellerberndt/mini-agi)
- 3- [Auto-GPT](https://github.com/Significant-Gravitas/AutoGPT)
## Detailed results:
<img width="733" alt="Screenshot 2023-07-25 at 10 42 15 AM" src="https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks/assets/9652976/39be464c-c842-4437-b28a-07d878542a83">
[Click here to see the results and the raw data!](https://docs.google.com/spreadsheets/d/1WXm16P2AHNbKpkOI0LYBpcsGG0O7D8HYTG5Uj0PaJjA/edit#gid=203558751)
More agents coming soon!

View File

@@ -1,69 +0,0 @@
## As a user
1. `pip install auto-gpt-benchmarks`
2. Add boilerplate code to run and kill agent
3. `agbenchmark`
- `--category challenge_category` to run tests in a specific category
- `--mock` to only run mock tests if they exist for each test
- `--noreg` to skip any tests that have passed in the past. When you run without this flag and a previously passing challenge fails, it will no longer be listed as a regression test
4. We call the boilerplate code for your agent
5. Show pass rate of tests, logs, and any other metrics
## Contributing
##### Diagrams: https://whimsical.com/agbenchmark-5n4hXBq1ZGzBwRsK4TVY7x
### To run the existing mocks
1. clone the repo `auto-gpt-benchmarks`
2. `pip install poetry`
3. `poetry shell`
4. `poetry install`
5. `cp .env_example .env`
6. `git submodule update --init --remote --recursive`
7. `uvicorn server:app --reload`
8. `agbenchmark --mock`
Keep config the same and watch the logs :)
### To run with mini-agi
1. Navigate to `auto-gpt-benchmarks/agent/mini-agi`
2. `pip install -r requirements.txt`
3. `cp .env_example .env`, set `PROMPT_USER=false` and add your `OPENAI_API_KEY=`. Set `MODEL="gpt-3.5-turbo"` if you don't have access to `gpt-4` yet. Also make sure you have Python 3.10+ installed
4. Set `AGENT_NAME=mini-agi` in the `.env` file, along with where you want your `REPORTS_FOLDER` to be
5. Make sure to follow the commands above, then run `agbenchmark` without the mock flag
- To add a dependency, run `poetry add <package>`.
Feel free to create PRs to merge with `main` at will (but also feel free to ask for review); if you can't, send a message in the R&D chat for access.
If you push at any point and break things (it'll happen to everyone), fix it ASAP. Step 1 is to revert `master` to the last working commit.
Let people know what the beautiful code you write does, and document everything well.
Share your progress :)
#### Dataset
Manually created challenges, existing challenges within Auto-GPT, and https://osu-nlp-group.github.io/Mind2Web/
## How do I add new agents to agbenchmark?
Example with smol developer.
1- Create a GitHub branch with your agent following the same pattern as this example:
https://github.com/smol-ai/developer/pull/114/files
2- Create the submodule and the GitHub workflow by following the same pattern as this example:
https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks/pull/48/files
## How do I run agent in different environments?
**To just use the benchmark for your agent**, `pip install` the package and run `agbenchmark`.
**For internal Auto-GPT CI runs**, specify the `AGENT_NAME` you want to use and set the `HOME_ENV`.
Ex. `AGENT_NAME=mini-agi`
**To develop your agent alongside the benchmark**, specify the `AGENT_NAME` you want to use and add it as a submodule to the repo.

View File

@@ -1,352 +0,0 @@
import logging
import os
import sys
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Optional
import click
from click_default_group import DefaultGroup
from dotenv import load_dotenv
from agbenchmark.config import AgentBenchmarkConfig
from agbenchmark.utils.logging import configure_logging
load_dotenv()
# try:
# if os.getenv("HELICONE_API_KEY"):
# import helicone # noqa
# helicone_enabled = True
# else:
# helicone_enabled = False
# except ImportError:
# helicone_enabled = False
class InvalidInvocationError(ValueError):
pass
logger = logging.getLogger(__name__)
BENCHMARK_START_TIME_DT = datetime.now(timezone.utc)
BENCHMARK_START_TIME = BENCHMARK_START_TIME_DT.strftime("%Y-%m-%dT%H:%M:%S+00:00")
# if helicone_enabled:
# from helicone.lock import HeliconeLockManager
# HeliconeLockManager.write_custom_property(
# "benchmark_start_time", BENCHMARK_START_TIME
# )
@click.group(cls=DefaultGroup, default_if_no_args=True)
@click.option("--debug", is_flag=True, help="Enable debug output")
def cli(
debug: bool,
) -> Any:
configure_logging(logging.DEBUG if debug else logging.INFO)
@cli.command(hidden=True)
def start():
raise DeprecationWarning(
"`agbenchmark start` is deprecated. Use `agbenchmark run` instead."
)
@cli.command(default=True)
@click.option(
"-N", "--attempts", default=1, help="Number of times to run each challenge."
)
@click.option(
"-c",
"--category",
multiple=True,
help="(+) Select a category to run.",
)
@click.option(
"-s",
"--skip-category",
multiple=True,
help="(+) Exclude a category from running.",
)
@click.option("--test", multiple=True, help="(+) Select a test to run.")
@click.option("--maintain", is_flag=True, help="Run only regression tests.")
@click.option("--improve", is_flag=True, help="Run only non-regression tests.")
@click.option(
"--explore",
is_flag=True,
help="Run only challenges that have never been beaten.",
)
@click.option(
"--no-dep",
is_flag=True,
help="Run all (selected) challenges, regardless of dependency success/failure.",
)
@click.option("--cutoff", type=int, help="Override the challenge time limit (seconds).")
@click.option("--nc", is_flag=True, help="Disable the challenge time limit.")
@click.option("--mock", is_flag=True, help="Run with mock")
@click.option("--keep-answers", is_flag=True, help="Keep answers")
@click.option(
"--backend",
is_flag=True,
help="Write log output to a file instead of the terminal.",
)
# @click.argument(
# "agent_path",
# type=click.Path(exists=True, file_okay=False, path_type=Path),
# required=False,
# )
def run(
maintain: bool,
improve: bool,
explore: bool,
mock: bool,
no_dep: bool,
nc: bool,
keep_answers: bool,
test: tuple[str],
category: tuple[str],
skip_category: tuple[str],
attempts: int,
cutoff: Optional[int] = None,
backend: Optional[bool] = False,
# agent_path: Optional[Path] = None,
) -> None:
"""
Run the benchmark on the agent in the current directory.
Options marked with (+) can be specified multiple times, to select multiple items.
"""
from agbenchmark.main import run_benchmark, validate_args
agbenchmark_config = AgentBenchmarkConfig.load()
logger.debug(f"agbenchmark_config: {agbenchmark_config.agbenchmark_config_dir}")
try:
validate_args(
maintain=maintain,
improve=improve,
explore=explore,
tests=test,
categories=category,
skip_categories=skip_category,
no_cutoff=nc,
cutoff=cutoff,
)
except InvalidInvocationError as e:
logger.error("Error: " + "\n".join(e.args))
sys.exit(1)
original_stdout = sys.stdout # Save the original standard output
exit_code = None
if backend:
with open("backend/backend_stdout.txt", "w") as f:
sys.stdout = f
exit_code = run_benchmark(
config=agbenchmark_config,
maintain=maintain,
improve=improve,
explore=explore,
mock=mock,
no_dep=no_dep,
no_cutoff=nc,
keep_answers=keep_answers,
tests=test,
categories=category,
skip_categories=skip_category,
attempts_per_challenge=attempts,
cutoff=cutoff,
)
sys.stdout = original_stdout
else:
exit_code = run_benchmark(
config=agbenchmark_config,
maintain=maintain,
improve=improve,
explore=explore,
mock=mock,
no_dep=no_dep,
no_cutoff=nc,
keep_answers=keep_answers,
tests=test,
categories=category,
skip_categories=skip_category,
attempts_per_challenge=attempts,
cutoff=cutoff,
)
sys.exit(exit_code)
@cli.command()
@click.option("--port", type=int, help="Port to run the API on.")
def serve(port: Optional[int] = None):
"""Serve the benchmark frontend and API on port 8080."""
import uvicorn
from agbenchmark.app import setup_fastapi_app
config = AgentBenchmarkConfig.load()
app = setup_fastapi_app(config)
# Run the FastAPI application using uvicorn
port = port or int(os.getenv("PORT", 8080))
uvicorn.run(app, host="0.0.0.0", port=port)
@cli.command()
def config():
"""Displays info regarding the present AGBenchmark config."""
from .utils.utils import pretty_print_model
try:
config = AgentBenchmarkConfig.load()
except FileNotFoundError as e:
click.echo(e, err=True)
return 1
pretty_print_model(config, include_header=False)
@cli.group()
def challenge():
logging.getLogger().setLevel(logging.WARNING)
@challenge.command("list")
@click.option(
"--all", "include_unavailable", is_flag=True, help="Include unavailable challenges."
)
@click.option(
"--names", "only_names", is_flag=True, help="List only the challenge names."
)
@click.option("--json", "output_json", is_flag=True)
def list_challenges(include_unavailable: bool, only_names: bool, output_json: bool):
"""Lists [available|all] challenges."""
import json
from tabulate import tabulate
from .challenges.builtin import load_builtin_challenges
from .challenges.webarena import load_webarena_challenges
from .utils.data_types import Category, DifficultyLevel
from .utils.utils import sorted_by_enum_index
DIFFICULTY_COLORS = {
difficulty: color
for difficulty, color in zip(
DifficultyLevel,
["black", "blue", "cyan", "green", "yellow", "red", "magenta", "white"],
)
}
CATEGORY_COLORS = {
category: f"bright_{color}"
for category, color in zip(
Category,
["blue", "cyan", "green", "yellow", "magenta", "red", "white", "black"],
)
}
# Load challenges
challenges = filter(
lambda c: c.info.available or include_unavailable,
[
*load_builtin_challenges(),
*load_webarena_challenges(skip_unavailable=False),
],
)
challenges = sorted_by_enum_index(
challenges, DifficultyLevel, key=lambda c: c.info.difficulty
)
if only_names:
if output_json:
click.echo(json.dumps([c.info.name for c in challenges]))
return
for c in challenges:
click.echo(
click.style(c.info.name, fg=None if c.info.available else "black")
)
return
if output_json:
click.echo(
json.dumps([json.loads(c.info.model_dump_json()) for c in challenges])
)
return
headers = tuple(
click.style(h, bold=True) for h in ("Name", "Difficulty", "Categories")
)
table = [
tuple(
v if challenge.info.available else click.style(v, fg="black")
for v in (
challenge.info.name,
(
click.style(
challenge.info.difficulty.value,
fg=DIFFICULTY_COLORS[challenge.info.difficulty],
)
if challenge.info.difficulty
else click.style("-", fg="black")
),
" ".join(
click.style(cat.value, fg=CATEGORY_COLORS[cat])
for cat in sorted_by_enum_index(challenge.info.category, Category)
),
)
)
for challenge in challenges
]
click.echo(tabulate(table, headers=headers))
@challenge.command()
@click.option("--json", is_flag=True)
@click.argument("name")
def info(name: str, json: bool):
from itertools import chain
from .challenges.builtin import load_builtin_challenges
from .challenges.webarena import load_webarena_challenges
from .utils.utils import pretty_print_model
for challenge in chain(
load_builtin_challenges(),
load_webarena_challenges(skip_unavailable=False),
):
if challenge.info.name != name:
continue
if json:
click.echo(challenge.info.model_dump_json())
break
pretty_print_model(challenge.info)
break
else:
click.echo(click.style(f"Unknown challenge '{name}'", fg="red"), err=True)
@cli.command()
def version():
"""Print version info for the AGBenchmark application."""
import toml
package_root = Path(__file__).resolve().parent.parent
pyproject = toml.load(package_root / "pyproject.toml")
version = pyproject["tool"]["poetry"]["version"]
click.echo(f"AGBenchmark version {version}")
if __name__ == "__main__":
cli()

View File

@@ -1,111 +0,0 @@
import logging
import time
from pathlib import Path
from typing import AsyncIterator, Optional
from agent_protocol_client import (
AgentApi,
ApiClient,
Configuration,
Step,
TaskRequestBody,
)
from agbenchmark.agent_interface import get_list_of_file_paths
from agbenchmark.config import AgentBenchmarkConfig
logger = logging.getLogger(__name__)
async def run_api_agent(
task: str,
config: AgentBenchmarkConfig,
timeout: int,
artifacts_location: Optional[Path] = None,
*,
mock: bool = False,
) -> AsyncIterator[Step]:
configuration = Configuration(host=config.host)
async with ApiClient(configuration) as api_client:
api_instance = AgentApi(api_client)
task_request_body = TaskRequestBody(input=task, additional_input=None)
start_time = time.time()
response = await api_instance.create_agent_task(
task_request_body=task_request_body
)
task_id = response.task_id
if artifacts_location:
logger.debug("Uploading task input artifacts to agent...")
await upload_artifacts(
api_instance, artifacts_location, task_id, "artifacts_in"
)
logger.debug("Running agent until finished or timeout...")
while True:
step = await api_instance.execute_agent_task_step(task_id=task_id)
yield step
if time.time() - start_time > timeout:
raise TimeoutError("Time limit exceeded")
if step and mock:
step.is_last = True
if not step or step.is_last:
break
if artifacts_location:
# In "mock" mode, we cheat by giving the correct artifacts to pass the test
if mock:
logger.debug("Uploading mock artifacts to agent...")
await upload_artifacts(
api_instance, artifacts_location, task_id, "artifacts_out"
)
logger.debug("Downloading agent artifacts...")
await download_agent_artifacts_into_folder(
api_instance, task_id, config.temp_folder
)
async def download_agent_artifacts_into_folder(
api_instance: AgentApi, task_id: str, folder: Path
):
artifacts = await api_instance.list_agent_task_artifacts(task_id=task_id)
for artifact in artifacts.artifacts:
# current absolute path of the directory of the file
if artifact.relative_path:
path: str = (
artifact.relative_path
if not artifact.relative_path.startswith("/")
else artifact.relative_path[1:]
)
folder = (folder / path).parent
if not folder.exists():
folder.mkdir(parents=True)
file_path = folder / artifact.file_name
logger.debug(f"Downloading agent artifact {artifact.file_name} to {folder}")
with open(file_path, "wb") as f:
content = await api_instance.download_agent_task_artifact(
task_id=task_id, artifact_id=artifact.artifact_id
)
f.write(content)
async def upload_artifacts(
api_instance: AgentApi, artifacts_location: Path, task_id: str, type: str
) -> None:
for file_path in get_list_of_file_paths(artifacts_location, type):
relative_path: Optional[str] = "/".join(
str(file_path).split(f"{type}/", 1)[-1].split("/")[:-1]
)
if not relative_path:
relative_path = None
await api_instance.upload_agent_task_artifacts(
task_id=task_id, file=str(file_path), relative_path=relative_path
)

View File

@@ -1,27 +0,0 @@
import os
import shutil
from pathlib import Path
from dotenv import load_dotenv
load_dotenv()
HELICONE_GRAPHQL_LOGS = os.getenv("HELICONE_GRAPHQL_LOGS", "").lower() == "true"
def get_list_of_file_paths(
challenge_dir_path: str | Path, artifact_folder_name: str
) -> list[Path]:
source_dir = Path(challenge_dir_path) / artifact_folder_name
if not source_dir.exists():
return []
return list(source_dir.iterdir())
def copy_challenge_artifacts_into_workspace(
challenge_dir_path: str | Path, artifact_folder_name: str, workspace: str | Path
) -> None:
file_paths = get_list_of_file_paths(challenge_dir_path, artifact_folder_name)
for file_path in file_paths:
if file_path.is_file():
shutil.copy(file_path, workspace)

View File

@@ -1,339 +0,0 @@
import datetime
import glob
import json
import logging
import sys
import time
import uuid
from collections import deque
from multiprocessing import Process
from pathlib import Path
from typing import Optional
import httpx
import psutil
from agent_protocol_client import AgentApi, ApiClient, ApiException, Configuration
from agent_protocol_client.models import Task, TaskRequestBody
from fastapi import APIRouter, FastAPI, HTTPException, Request, Response
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel, ConfigDict, ValidationError
from agbenchmark.challenges import ChallengeInfo
from agbenchmark.config import AgentBenchmarkConfig
from agbenchmark.reports.processing.report_types_v2 import (
BenchmarkRun,
Metrics,
RepositoryInfo,
RunDetails,
TaskInfo,
)
from agbenchmark.schema import TaskEvalRequestBody
from agbenchmark.utils.utils import write_pretty_json
sys.path.append(str(Path(__file__).parent.parent))
logger = logging.getLogger(__name__)
CHALLENGES: dict[str, ChallengeInfo] = {}
challenges_path = Path(__file__).parent / "challenges"
challenge_spec_files = deque(
glob.glob(
f"{challenges_path}/**/data.json",
recursive=True,
)
)
logger.debug("Loading challenges...")
while challenge_spec_files:
challenge_spec_file = Path(challenge_spec_files.popleft())
challenge_relpath = challenge_spec_file.relative_to(challenges_path.parent)
if challenge_relpath.is_relative_to("challenges/deprecated"):
continue
logger.debug(f"Loading {challenge_relpath}...")
try:
challenge_info = ChallengeInfo.model_validate_json(
challenge_spec_file.read_text()
)
except ValidationError as e:
if logging.getLogger().level == logging.DEBUG:
logger.warning(f"Spec file {challenge_relpath} failed to load:\n{e}")
logger.debug(f"Invalid challenge spec: {challenge_spec_file.read_text()}")
continue
if not challenge_info.eval_id:
challenge_info.eval_id = str(uuid.uuid4())
# this will sort all the keys of the JSON systematically
# so that the order is always the same
write_pretty_json(challenge_info.model_dump(), challenge_spec_file)
CHALLENGES[challenge_info.eval_id] = challenge_info
class BenchmarkTaskInfo(BaseModel):
task_id: str
start_time: datetime.datetime
challenge_info: ChallengeInfo
task_informations: dict[str, BenchmarkTaskInfo] = {}
def find_agbenchmark_without_uvicorn():
pids = []
for process in psutil.process_iter(
attrs=[
"pid",
"cmdline",
"name",
"username",
"status",
"cpu_percent",
"memory_info",
"create_time",
"cwd",
"connections",
]
):
try:
# Convert the process.info dictionary values to strings and concatenate them
full_info = " ".join([str(v) for k, v in process.as_dict().items()])
if "agbenchmark" in full_info and "uvicorn" not in full_info:
pids.append(process.pid)
except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
pass
return pids
class CreateReportRequest(BaseModel):
test: str
test_run_id: str
# category: Optional[str] = []
mock: Optional[bool] = False
model_config = ConfigDict(extra="forbid")
updates_list = []
origins = [
"http://localhost:8000",
"http://localhost:8080",
"http://127.0.0.1:5000",
"http://localhost:5000",
]
def stream_output(pipe):
for line in pipe:
print(line, end="")
def setup_fastapi_app(agbenchmark_config: AgentBenchmarkConfig) -> FastAPI:
from agbenchmark.agent_api_interface import upload_artifacts
from agbenchmark.challenges import get_challenge_from_source_uri
from agbenchmark.main import run_benchmark
configuration = Configuration(
host=agbenchmark_config.host or "http://localhost:8000"
)
app = FastAPI()
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
router = APIRouter()
@router.post("/reports")
def run_single_test(body: CreateReportRequest) -> dict:
pids = find_agbenchmark_without_uvicorn()
logger.info(f"pids already running with agbenchmark: {pids}")
logger.debug(f"Request to /reports: {body.model_dump()}")
# Start the benchmark in a separate thread
benchmark_process = Process(
target=lambda: run_benchmark(
config=agbenchmark_config,
tests=(body.test,),
mock=body.mock or False,
)
)
benchmark_process.start()
# Wait for the benchmark to finish, with a timeout of 200 seconds
timeout = 200
start_time = time.time()
while benchmark_process.is_alive():
if time.time() - start_time > timeout:
logger.warning(f"Benchmark run timed out after {timeout} seconds")
benchmark_process.terminate()
break
time.sleep(1)
else:
logger.debug(f"Benchmark finished running in {time.time() - start_time} s")
# List all folders in the current working directory
reports_folder = agbenchmark_config.reports_folder
folders = [folder for folder in reports_folder.iterdir() if folder.is_dir()]
# Sort the folders based on their names
sorted_folders = sorted(folders, key=lambda x: x.name)
# Get the last folder
latest_folder = sorted_folders[-1] if sorted_folders else None
# Read report.json from this folder
if latest_folder:
report_path = latest_folder / "report.json"
logger.debug(f"Getting latest report from {report_path}")
if report_path.exists():
with report_path.open() as file:
data = json.load(file)
logger.debug(f"Report data: {data}")
else:
raise HTTPException(
502,
"Could not get result after running benchmark: "
f"'report.json' does not exist in '{latest_folder}'",
)
else:
raise HTTPException(
504, "Could not get result after running benchmark: no reports found"
)
return data
@router.post("/agent/tasks", tags=["agent"])
async def create_agent_task(task_eval_request: TaskEvalRequestBody) -> Task:
"""
Creates a new task using the provided TaskEvalRequestBody and returns a Task.
Args:
task_eval_request: `TaskRequestBody` including an eval_id.
Returns:
Task: A new task with task_id, input, additional_input,
and empty lists for artifacts and steps.
Example:
Request (TaskEvalRequestBody defined in schema.py):
{
...,
"eval_id": "50da533e-3904-4401-8a07-c49adf88b5eb"
}
Response (Task defined in `agent_protocol_client.models`):
{
"task_id": "50da533e-3904-4401-8a07-c49adf88b5eb",
"input": "Write the word 'Washington' to a .txt file",
"artifacts": []
}
"""
try:
challenge_info = CHALLENGES[task_eval_request.eval_id]
async with ApiClient(configuration) as api_client:
api_instance = AgentApi(api_client)
task_input = challenge_info.task
task_request_body = TaskRequestBody(
input=task_input, additional_input=None
)
task_response = await api_instance.create_agent_task(
task_request_body=task_request_body
)
task_info = BenchmarkTaskInfo(
task_id=task_response.task_id,
start_time=datetime.datetime.now(datetime.timezone.utc),
challenge_info=challenge_info,
)
task_informations[task_info.task_id] = task_info
if input_artifacts_dir := challenge_info.task_artifacts_dir:
await upload_artifacts(
api_instance,
input_artifacts_dir,
task_response.task_id,
"artifacts_in",
)
return task_response
except ApiException as e:
logger.error(f"Error whilst trying to create a task:\n{e}")
logger.error(
"The above error was caused while processing request: "
f"{task_eval_request}"
)
raise HTTPException(500)
@router.post("/agent/tasks/{task_id}/steps")
async def proxy(request: Request, task_id: str):
timeout = httpx.Timeout(300.0, read=300.0) # 5 minutes
async with httpx.AsyncClient(timeout=timeout) as client:
# Construct the new URL
new_url = f"{configuration.host}/ap/v1/agent/tasks/{task_id}/steps"
# Forward the request
response = await client.post(
new_url,
content=await request.body(),
headers=dict(request.headers),
)
# Return the response from the forwarded request
return Response(content=response.content, status_code=response.status_code)
@router.post("/agent/tasks/{task_id}/evaluations")
async def create_evaluation(task_id: str) -> BenchmarkRun:
task_info = task_informations[task_id]
challenge = get_challenge_from_source_uri(task_info.challenge_info.source_uri)
try:
async with ApiClient(configuration) as api_client:
api_instance = AgentApi(api_client)
eval_results = await challenge.evaluate_task_state(
api_instance, task_id
)
eval_info = BenchmarkRun(
repository_info=RepositoryInfo(),
run_details=RunDetails(
command=f"agbenchmark --test={challenge.info.name}",
benchmark_start_time=(
task_info.start_time.strftime("%Y-%m-%dT%H:%M:%S+00:00")
),
test_name=challenge.info.name,
),
task_info=TaskInfo(
data_path=challenge.info.source_uri,
is_regression=None,
category=[c.value for c in challenge.info.category],
task=challenge.info.task,
answer=challenge.info.reference_answer or "",
description=challenge.info.description or "",
),
metrics=Metrics(
success=all(e.passed for e in eval_results),
success_percentage=(
100 * sum(e.score for e in eval_results) / len(eval_results)
if eval_results # avoid division by 0
else 0
),
attempted=True,
),
config={},
)
logger.debug(
f"Returning evaluation data:\n{eval_info.model_dump_json(indent=4)}"
)
return eval_info
except ApiException as e:
logger.error(f"Error {e} whilst trying to evaluate task: {task_id}")
raise HTTPException(500)
app.include_router(router, prefix="/ap/v1")
return app

View File

@@ -1,128 +0,0 @@
import json
import sys
from datetime import datetime
from pathlib import Path
from typing import Optional
from pydantic import Field, ValidationInfo, field_validator
from pydantic_settings import BaseSettings
def _calculate_info_test_path(base_path: Path, benchmark_start_time: datetime) -> Path:
"""
Calculates the path to the directory where the test report will be saved.
"""
# Ensure the reports path exists
base_path.mkdir(parents=True, exist_ok=True)
# Get current UTC date-time stamp
date_stamp = benchmark_start_time.strftime("%Y%m%dT%H%M%S")
# Default run name
run_name = "full_run"
# Map command-line arguments to their respective labels
arg_labels = {
"--test": None,
"--category": None,
"--maintain": "maintain",
"--improve": "improve",
"--explore": "explore",
}
# Identify the relevant command-line argument
for arg, label in arg_labels.items():
if arg in sys.argv:
test_arg = sys.argv[sys.argv.index(arg) + 1] if label is None else None
run_name = arg.strip("--")
if test_arg:
run_name = f"{run_name}_{test_arg}"
break
# Create the full new directory path with ISO standard UTC date-time stamp
report_path = base_path / f"{date_stamp}_{run_name}"
# Ensure the new directory is created
# FIXME: this is not a desirable side-effect of loading the config
report_path.mkdir(exist_ok=True)
return report_path
class AgentBenchmarkConfig(BaseSettings, extra="allow"):
"""
Configuration model and loader for the AGBenchmark.
Projects that want to use AGBenchmark should contain an agbenchmark_config folder
with a config.json file that - at minimum - specifies the `host` at which the
subject application exposes an Agent Protocol compliant API.
"""
agbenchmark_config_dir: Path = Field(exclude=True)
"""Path to the agbenchmark_config folder of the subject agent application."""
categories: list[str] | None = None
"""Categories to benchmark the agent for. If omitted, all categories are assumed."""
host: str
"""Host (scheme://address:port) of the subject agent application."""
reports_folder: Path = Field(None)
"""
Path to the folder where new reports should be stored.
Defaults to {agbenchmark_config_dir}/reports.
"""
@classmethod
def load(cls, config_dir: Optional[Path] = None) -> "AgentBenchmarkConfig":
config_dir = config_dir or cls.find_config_folder()
with (config_dir / "config.json").open("r") as f:
return cls(
agbenchmark_config_dir=config_dir,
**json.load(f),
)
@staticmethod
def find_config_folder(for_dir: Path = Path.cwd()) -> Path:
"""
Find the closest ancestor folder containing an agbenchmark_config folder,
and returns the path of that agbenchmark_config folder.
"""
current_directory = for_dir
while current_directory != Path("/"):
if (path := current_directory / "agbenchmark_config").exists():
if (path / "config.json").is_file():
return path
current_directory = current_directory.parent
raise FileNotFoundError(
"No 'agbenchmark_config' directory found in the path hierarchy."
)
@property
def config_file(self) -> Path:
return self.agbenchmark_config_dir / "config.json"
@field_validator("reports_folder", mode="before")
def set_reports_folder(cls, value: Path, info: ValidationInfo):
if not value:
return info.data["agbenchmark_config_dir"] / "reports"
return value
def get_report_dir(self, benchmark_start_time: datetime) -> Path:
return _calculate_info_test_path(self.reports_folder, benchmark_start_time)
@property
def regression_tests_file(self) -> Path:
return self.reports_folder / "regression_tests.json"
@property
def success_rate_file(self) -> Path:
return self.reports_folder / "success_rate.json"
@property
def challenges_already_beaten_file(self) -> Path:
return self.agbenchmark_config_dir / "challenges_already_beaten.json"
@property
def temp_folder(self) -> Path:
return self.agbenchmark_config_dir / "temp_folder"

Some files were not shown because too many files have changed in this diff.