Mirror of https://github.com/Significant-Gravitas/AutoGPT.git
Synced 2026-04-30 03:00:41 -04:00
At commit 80bfd64ffa0e4454be369ddca794e916b8d357d1 · 271 commits
---
**80bfd64ffa** — Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev
---
**0076ad2a1a** — hotfix(blocks): bump stagehand ^0.5.1 → ^3.4.0 to fix yanked litellm (#12539)

## Summary
**Critical CI fix** — litellm was compromised in a supply chain attack (versions 1.82.7/1.82.8 contained infostealer malware) and PyPI subsequently yanked many litellm versions, including the 1.7x range that stagehand 0.5.x depended on. This breaks `poetry lock` in CI for all PRs.

- Bump `stagehand` from `^0.5.1` to `^3.4.0` — Stagehand v3 is a Stainless-generated HTTP API client that **no longer depends on litellm**, completely removing litellm from our dependency tree
- Migrate stagehand blocks to use `AsyncStagehand` + the session-based API (`sessions.start`, `session.navigate/act/observe/extract`)
- Net reduction of ~430 lines in `poetry.lock` from dropping litellm and its transitive dependencies

## Why
All CI pipelines are blocked because `poetry lock` fails to resolve the yanked litellm versions that stagehand 0.5.x required.

## Test plan
- [x] CI passes (poetry lock resolves, backend tests green)
- [ ] Verify stagehand blocks still function with the new session-based API
---
**9381057079** — refactor(platform): rename SmartDecisionMakerBlock to OrchestratorBlock (#12511)

## Summary
- Renames `SmartDecisionMakerBlock` to `OrchestratorBlock` across the entire codebase
- The block supports iteration/agent mode and general tool orchestration, so "Smart Decision Maker" no longer accurately describes its capabilities
- Block UUID (`3b191d9f-356f-482d-8238-ba04b6d18381`) remains unchanged — fully backward compatible with existing graphs

## Changes
- Renamed block class, constants, file names, test files, docs, and frontend enum
- Updated copilot agent generator (helpers, validator, fixer) references
- Updated agent generation guide documentation
- No functional changes — pure rename refactor

### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made corresponding changes to the documentation
- [x] My changes do not generate new warnings or errors
- [x] New and existing unit tests pass locally with my changes

## Test plan
- [x] All pre-commit hooks pass (typecheck, lint, format)
- [x] Existing graphs with this block continue to load and execute (same UUID)
- [x] Agent mode / iteration mode works as before
- [x] Copilot agent generator correctly references the renamed block
---
**ee5382a064** — feat(copilot): add tool/block capability filtering to AutoPilotBlock (#12482)

## Summary
- Adds `CopilotPermissions` model (`copilot/permissions.py`) — a capability filter that restricts which tools and blocks the AutoPilot/Copilot may use during a single execution
- Exposes 4 new `advanced=True` fields on `AutoPilotBlock`: `tools`, `tools_exclude`, `blocks`, `blocks_exclude`
- Threads permissions through the full execution path: `AutoPilotBlock` → `collect_copilot_response` → `stream_chat_completion_sdk` → `run_block`
- Implements recursion inheritance via contextvar: sub-agent executions can only be *more* restrictive than their parent

## Design
**Tool filtering** (`tools` + `tools_exclude`):
- `tools_exclude=True` (default): `tools` is a **blacklist** — listed tools denied, all others allowed. Empty list = allow all.
- `tools_exclude=False`: `tools` is a **whitelist** — only listed tools are allowed.
- Users specify short names (`run_block`, `web_fetch`, `Read`, `Task`, …) — mapped to the full SDK format internally.
- Validated eagerly at block-run time with a clear error listing valid names.

**Block filtering** (`blocks` + `blocks_exclude`):
- Same semantics as tool filtering, applied inside `run_block` via contextvar.
- Each entry can be a full UUID, an 8-char partial UUID (first segment), or a case-insensitive block name.
- Validated against the live block registry; invalid identifiers surface a helpful error before the session is created.

**Recursion inheritance**:
- The `_inherited_permissions` contextvar stores the parent execution's permissions.
- On each `AutoPilotBlock.run()`, the child's permissions are merged with the parent via `merged_with_parent()` — effective allowed sets are intersected (tools) and the parent chain is kept for block checks.
- Sub-agents can never expand what the parent allowed.

## Test plan
- [x] 68 new unit tests in `copilot/permissions_test.py` and `blocks/autopilot_permissions_test.py`
- [x] Block identifier matching: full UUID, partial UUID, name, case-insensitivity
- [x] Tool allow/deny list semantics, including edge cases (empty list, unknown tool)
- [x] Parent/child merging and recursion ceiling correctness
- [x] `validate_tool_names` / `validate_block_identifiers` with mock block registry
- [x] `apply_tool_permissions` SDK tool-list integration
- [x] `AutoPilotBlock.run()` — invalid tool/block yields an error before session creation
- [x] `AutoPilotBlock.run()` — valid permissions forwarded to `execute_copilot`
- [x] Existing `AutoPilotBlock` block tests still pass (2/2)
- [x] All hooks pass (pyright, ruff, black, isort)
- [x] E2E: CoPilot chat works end-to-end with E2B sandbox (12s stream)
- [x] E2E: Permission fields render in Builder UI (Tools combobox, exclude toggles)
- [x] E2E: Agent with restricted permissions (whitelist web_fetch only) executes correctly
- [x] E2E: Permission values preserved through API round-trip
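The whitelist/blacklist semantics and the "child can only narrow" merge rule described in this PR can be sketched in a few lines. This is an illustrative model only: the names `ToolPermissions`, `is_allowed`, and the `parent` field are stand-ins, not the platform's actual `CopilotPermissions` API.

```python
# Hypothetical sketch of the tool allow/deny semantics plus parent
# inheritance described above. Not the real CopilotPermissions class.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class ToolPermissions:
    tools: frozenset = frozenset()
    tools_exclude: bool = True  # True = blacklist (default), False = whitelist
    parent: "Optional[ToolPermissions]" = None

    def is_allowed(self, tool: str) -> bool:
        if self.tools_exclude:
            own = tool not in self.tools  # blacklist: empty list allows everything
        else:
            own = tool in self.tools      # whitelist: only listed tools allowed
        # A child is only allowed what its parent also allows — it can
        # narrow the parent's set but never expand it.
        inherited = self.parent.is_allowed(tool) if self.parent else True
        return own and inherited
```

With this shape, a sub-agent that declares "allow all" (empty blacklist) under a whitelist-only parent still inherits the parent's restriction, matching the recursion-ceiling behavior the tests cover.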
---
**f01f668674** — fix(backend): support Responses API in SmartDecisionMakerBlock (#12489)

## Summary
- Fixes SmartDecisionMakerBlock conversation management to work with OpenAI's Responses API, which was introduced in #12099 (commit
---
**cbff3b53d3** — Revert "feat(backend): migrate OpenAI provider to Responses API" (#12490)

Reverts Significant-Gravitas/AutoGPT#12099

> [!NOTE]
> **Medium Risk**
> Reverts the OpenAI integration in `llm_call` from the Responses API back to `chat.completions`, which can change tool-calling, JSON-mode behavior, and token accounting across core AI blocks. The change is localized but touches the primary LLM execution path and associated tests/docs.
>
> **Overview**
> Reverts the OpenAI path in `backend/blocks/llm.py` from the Responses API back to `chat.completions`, including updating JSON-mode (`response_format`), tool handling, and usage extraction to match the Chat Completions response shape.
>
> Removes the now-unused `backend/util/openai_responses.py` helpers and their unit tests, updates LLM tests to mock `chat.completions.create`, and adds `gpt-3.5-turbo` to the supported model list, cost config, and LLM docs.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
---
**1240f38f75** — feat(backend): migrate OpenAI provider to Responses API (#12099)

## Summary
Migrates the OpenAI provider in the LLM block from `chat.completions.create` to `responses.create` — OpenAI's newer, unified API. Also removes the obsolete GPT-3.5-turbo model.

Resolves #11624
Linear: [OPEN-2911](https://linear.app/autogpt/issue/OPEN-2911/update-openai-calls-to-use-responsescreate)

## Changes
- **`backend/blocks/llm.py`** — OpenAI provider now uses `responses.create` exclusively. Removed the GPT-3.5-turbo enum entry and metadata.
- **`backend/util/openai_responses.py`** *(new)* — Helpers for the Responses API: tool format conversion, content/reasoning/usage/tool-call extraction.
- **`backend/util/openai_responses_test.py`** *(new)* — Unit tests for all helper functions.
- **`backend/data/block_cost_config.py`** — Removed the GPT-3.5 cost entry.
- **`docs/integrations/block-integrations/llm.md`** — Regenerated block docs.

## Key API differences handled

| Aspect | Chat Completions | Responses API |
|--------|-----------------|---------------|
| Messages param | `messages` | `input` |
| Max tokens param | `max_completion_tokens` | `max_output_tokens` |
| Usage fields | `prompt_tokens` / `completion_tokens` | `input_tokens` / `output_tokens` |
| Tool format | Nested under `function` key | Flat structure |

## Test plan
- [x] Unit tests for all `openai_responses.py` helpers
- [x] Existing LLM block tests updated for Responses API mocks
- [x] Regular OpenAI models work
- [x] Reasoning OpenAI models work
- [x] Non-OpenAI models work

---------

Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
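The renames in the differences table above can be captured as a small translation layer. This is a hedged sketch, not the PR's actual helpers in `backend/util/openai_responses.py`; the function names here are hypothetical, while the field names themselves come straight from the table.

```python
# Illustrative mapping of Chat Completions request/usage fields to their
# Responses API equivalents, per the table above. Hypothetical helper names.
def chat_kwargs_to_responses_kwargs(kwargs: dict) -> dict:
    """Rename Chat Completions request fields to Responses API fields."""
    mapping = {
        "messages": "input",                           # conversation payload
        "max_completion_tokens": "max_output_tokens",  # token-cap rename
    }
    return {mapping.get(k, k): v for k, v in kwargs.items()}


def normalize_usage(usage: dict, from_responses: bool) -> dict:
    """Return a provider-neutral usage dict from either API's shape."""
    if from_responses:
        return {"prompt": usage["input_tokens"], "completion": usage["output_tokens"]}
    return {"prompt": usage["prompt_tokens"], "completion": usage["completion_tokens"]}
```

Keeping the mapping in one place is what lets the rest of the block treat both response shapes uniformly, which is also why the revert above could be confined to a single file.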
---
**5d9a169e04** — feat(blocks): add AutoPilotBlock for invoking AutoPilot from graphs (#12439)

## Summary
- Adds `AutogptCopilotBlock`, which invokes the platform's copilot system (`stream_chat_completion_sdk`) directly from graph executions
- Enables sub-agent patterns: copilot can call this block recursively (with depth limiting via `contextvars`)
- Enables scheduled copilot execution through the agent executor system
- No user credentials needed — uses server-side copilot config

## Inputs/Outputs
**Inputs:** prompt, system_context, session_id (continuation), timeout, max_recursion_depth

**Outputs:** response text, tool_calls list, conversation_history JSON, session_id, token_usage

## Test plan
- [x] Block test passes (`test_available_blocks[AutogptCopilotBlock]`)
- [x] Pre-commit hooks pass (format, lint, typecheck)
- [ ] Manual test: add block to graph, send prompt, verify response
- [ ] Manual test: chain two copilot blocks with session_id to verify continuation
---
**e657472162** — feat(blocks): Add Nano Banana 2 to image generator, customizer, and editor blocks (#12218)

Requested by @Torantulino. Adds `google/nano-banana-2` (Gemini 3.1 Flash Image) support across all three image blocks.

### Changes
**`ai_image_customizer.py`**
- Add `NANO_BANANA_2 = "google/nano-banana-2"` to the `GeminiImageModel` enum
- Update the block description to reference Nano-Banana models generically

**`ai_image_generator_block.py`**
- Add `NANO_BANANA_2` to the `ImageGenModel` enum
- Add a generation branch (identical to NBP except for the model name)

**`flux_kontext.py` (AI Image Editor)**
- Rename `FluxKontextModelName` → `ImageEditorModel` (with a backwards-compatible alias)
- Add `NANO_BANANA_PRO` and `NANO_BANANA_2` to the editor
- Model-aware branching in `run_model()`: NB models use an `image_input` list (not `input_image`), omit `seed`, and add `output_format`

**`block_cost_config.py`**
- Add NB2 cost entries for all three blocks (14 credits, matching NBP)
- Add an NB Pro cost entry for the editor block
- Update editor block refs from `.PRO`/`.MAX` to `.FLUX_KONTEXT_PRO`/`.FLUX_KONTEXT_MAX`

Resolves SECRT-2047

---------

Co-authored-by: Torantulino <Torantulino@users.noreply.github.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
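The model-aware branching described for `run_model()` boils down to building a different request payload per model family. The builder below is a hedged sketch: the field names (`image_input` list vs `input_image`, no `seed` for NB models, extra `output_format`) follow the PR text, but the function itself and its prefix check are illustrative.

```python
# Hypothetical payload builder mirroring the branching described above.
# Field names come from the PR; the function is not the real run_model().
from typing import Optional


def build_editor_payload(model: str, prompt: str, image_url: str,
                         seed: Optional[int] = None) -> dict:
    payload: dict = {"prompt": prompt}
    if model.startswith("google/nano-banana"):
        payload["image_input"] = [image_url]  # NB models: list input, no seed
        payload["output_format"] = "png"
    else:
        payload["input_image"] = image_url    # Flux Kontext: single image
        if seed is not None:
            payload["seed"] = seed
    return payload
```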
---
**e32d258a7e** — feat(blocks): add AgentMail integration blocks (#12417)

## Summary
- Add a full AgentMail integration with blocks for managing inboxes, messages, threads, drafts, attachments, lists, and pods
- Includes shared provider configuration (`_config.py`) with API key authentication
- 8 block modules covering ~25 individual blocks across all AgentMail API surfaces

## Block Modules

| Module | Blocks |
|--------|--------|
| `inbox.py` | Create, Get, List, Update, Delete inboxes |
| `messages.py` | Send, Get, List, Delete messages + org-wide listing |
| `threads.py` | Get, List, Delete threads + org-wide listing |
| `drafts.py` | Create, Get, List, Update, Send, Delete drafts + org-wide listing |
| `attachments.py` | Download attachments |
| `lists.py` | Create, Get, List, Update, Delete mailing lists |
| `pods.py` | Create, Get, List, Update, Delete pods |

## Test plan
- [x] `poetry run pytest 'backend/blocks/test/test_block.py' -xvs` — all new blocks pass the standard block test suite
- [x] Test all blocks manually
---
**8892bcd230** — docs: Add workspace and media file architecture documentation (#11989)

### Changes 🏗️
- Added comprehensive architecture documentation at `docs/platform/workspace-media-architecture.md` covering:
  - Database models (`UserWorkspace`, `UserWorkspaceFile`)
  - `WorkspaceManager` API with session scoping
  - `store_media_file()` media normalization pipeline (input types, return formats)
  - Virus scanning responsibility boundaries
  - Decision tree for choosing `WorkspaceManager` vs `store_media_file()`
  - Configuration reference including `clamav_max_concurrency` and `clamav_mark_failed_scans_as_clean`
  - Common patterns with error handling examples
- Updated `autogpt_platform/backend/CLAUDE.md` with a "Workspace & Media Files" section referencing the new docs
- Removed a duplicate `scan_content_safe()` call from `WriteWorkspaceFileTool` — `WorkspaceManager.write_file()` already scans internally, so the tool was double-scanning every file
- Replaced the removed comment in `workspace.py` with an explicit ownership comment clarifying that `WorkspaceManager` is the single scanning boundary

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified `scan_content_safe()` is called inside `WorkspaceManager.write_file()` (workspace.py:186)
  - [x] Verified `store_media_file()` scans all input branches including local paths (file.py:351)
  - [x] Verified documentation accuracy against current source code after merge with dev
  - [x] CI checks all passing

> [!NOTE]
> **Low Risk**
> Mostly adds documentation and internal developer guidance; the only code change is a comment clarifying `WorkspaceManager.write_file()` as the single virus-scanning boundary, with no behavior change.
>
> **Overview**
> Adds a new `docs/platform/workspace-media-architecture.md` describing the Workspace storage layer vs the `store_media_file()` media pipeline, including session scoping and virus-scanning/persistence responsibility boundaries.
>
> Updates backend `CLAUDE.md` to point contributors to the new doc when working on CoPilot uploads/downloads or `WorkspaceManager`/`store_media_file()`, and clarifies in `WorkspaceManager.write_file()` (comment-only) that callers should not duplicate virus scanning.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
---
**ef446e4fe9** — feat(llm): Add Cohere Command A Family Models (#12339)

## Summary
Adds the Cohere Command A family of models to the AutoGPT Platform with proper pricing configuration.

## Models Added
- **Command A 03.2025**: Flagship model (256k context, 8k output) — 3 credits
- **Command A Translate 08.2025**: State-of-the-art translation (8k context, 8k output) — 3 credits
- **Command A Reasoning 08.2025**: First reasoning model (256k context, 32k output) — 6 credits
- **Command A Vision 07.2025**: First vision-capable model (128k context, 8k output) — 3 credits

## Changes
- Added 4 new `LlmModel` enum entries with proper OpenRouter model IDs
- Added `ModelMetadata` for each model with correct context windows, output limits, and price tiers
- Added pricing configuration in `block_cost_config.py`

## Testing
- [ ] Models appear in the AutoGPT Platform model selector
- [ ] Pricing is correctly applied when using the models

Resolves **SECRT-2083**
---
**7b1e8ed786** — feat(llm): Add Microsoft Phi-4 model support (#12342)

## Changes
- Added `MICROSOFT_PHI_4` to the `LlmModel` enum (`microsoft/phi-4`)
- Configured model metadata:
  - 16K context window
  - 16K max output tokens
  - OpenRouter provider
- Set cost tier: 1
  - Input: $0.06 per 1M tokens
  - Output: $0.14 per 1M tokens

## Details
Microsoft Phi-4 is a 14B-parameter model available through OpenRouter. This PR adds proper support in the autogpt_platform backend.

Resolves SECRT-2086
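The model PRs in this log all follow the same registration pattern: an enum entry plus a metadata record plus pricing. The sketch below is a simplified stand-in for that pattern using this PR's Phi-4 numbers; the class and dict names only loosely mirror the real `llm.py` structures.

```python
# Simplified illustration of the enum + metadata + pricing pattern the
# model-addition PRs describe. Stand-in names, real Phi-4 numbers from the PR.
from dataclasses import dataclass
from enum import Enum


class LlmModel(str, Enum):
    MICROSOFT_PHI_4 = "microsoft/phi-4"


@dataclass(frozen=True)
class ModelMetadata:
    provider: str
    context_window: int      # tokens
    max_output_tokens: int   # tokens
    cost_tier: int


MODEL_METADATA = {
    LlmModel.MICROSOFT_PHI_4: ModelMetadata(
        provider="open_router",
        context_window=16_384,
        max_output_tokens=16_384,
        cost_tier=1,
    ),
}


def token_cost_usd(prompt_tokens: int, completion_tokens: int) -> float:
    # Pricing from the PR: $0.06 / 1M input tokens, $0.14 / 1M output tokens.
    return prompt_tokens * 0.06 / 1e6 + completion_tokens * 0.14 / 1e6
```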
---
**3595c6e769** — feat(llm): add Perplexity Sonar Reasoning Pro model (#12341)

## Summary
Adds support for Perplexity's new reasoning model: `perplexity/sonar-reasoning-pro`

## Changes
- ✅ Added `PERPLEXITY_SONAR_REASONING_PRO` to the `LlmModel` enum
- ✅ Added model metadata (128K context window, 8K max output tokens, tier 2)
- ✅ Set pricing at 5 credits (matches the sonar-pro tier)

## Model Details
- **Model ID:** `perplexity/sonar-reasoning-pro`
- **Provider:** OpenRouter
- **Context Window:** 128,000 tokens
- **Max Output:** 8,000 tokens
- **Pricing:** $0.000002/token (prompt), $0.000008/token (completion)
- **Cost Tier:** 2 (5 credits)

## Testing
- ✅ Black formatting passed
- ✅ Ruff linting passed

Resolves SECRT-2084
---
**ade2baa58f** — feat(llm): Add Grok 3 model support (#12343)

## Summary
Adds support for xAI's Grok 3 model to AutoGPT.

## Changes
- Added `GROK_3` to the `LlmModel` enum with identifier `x-ai/grok-3`
- Configured model metadata:
  - Context window: 131,072 tokens (128k)
  - Max output: 32,768 tokens (32k)
  - Provider: OpenRouter
  - Creator: xAI
  - Price tier: 2 (mid-tier)
- Set the model cost to 3 credits (mid-tier pricing, between the fast models and Grok 4)
- Updated block documentation to include Grok 3 in the model lists

## Pricing Rationale
- **Grok 4**: 9 credits (tier 3 — premium, 256k context)
- **Grok 3**: 3 credits (tier 2 — mid-tier, 128k context) ← NEW
- **Grok 4 Fast / 4.1 Fast / Code Fast**: 1 credit (tier 1 — affordable)

Grok 3 is positioned as a mid-tier model, priced similarly to other tier 2 models.

## Testing
- [x] Code passes `black` formatting
- [x] Code passes `ruff` linting
- [x] Model metadata and cost configuration added
- [x] Documentation updated

Closes SECRT-2079
---
**89a5b3178a** — fix(llm): Update Gemini model lineup - add 3.1 models, deprecate 3 Pro Preview (#12331)

## 🔴 URGENT: Gemini 3 Pro Preview Shutdown - March 9, 2026
Google is shutting down Gemini 3 Pro Preview **tomorrow (March 9, 2026)**. This PR addresses SECRT-2067 by updating the Gemini model lineup to prevent disruption.

## Changes

### ✅ P0 - Critical (This Week)
- [x] **Remove/Replace Gemini 3 Pro Preview** → Migrated to 3.1 Pro Preview
- [x] **Add Gemini 3.1 Pro Preview** (released Feb 19, 2026)

### ✅ P1 - High Priority
- [x] **Add Gemini 3.1 Flash Lite Preview** (released Mar 3, 2026)
- [x] **Add Gemini 3 Flash Preview** (released Dec 17, 2025)

### ✅ P2 - Medium Priority
- [x] **Add Gemini 2.5 Pro (stable/GA)** (released Jun 17, 2025)

## Model Details

| Model | Context | Input Cost | Output Cost | Price Tier |
|-------|---------|------------|-------------|------------|
| **Gemini 3.1 Pro Preview** | 1.05M | $2.00/1M | $12.00/1M | 2 |
| **Gemini 3.1 Flash Lite Preview** | 1.05M | $0.25/1M | $1.50/1M | 1 |
| **Gemini 3 Flash Preview** | 1.05M | $0.50/1M | $3.00/1M | 1 |
| **Gemini 2.5 Pro (GA)** | 1.05M | $1.25/1M | $10.00/1M | 2 |
| ~~Gemini 3 Pro Preview~~ | ~~1.05M~~ | ~~$2.00/1M~~ | ~~$12.00/1M~~ | **DEPRECATED** |

## Migration Strategy
**Database Migration:** `20260308095500_migrate_deprecated_gemini_3_pro_preview`
- Automatically migrates all existing graphs using `google/gemini-3-pro-preview` to `google/gemini-3.1-pro-preview`
- Updates: AgentBlock, AgentGraphExecution, AgentNodeExecution, AgentGraph
- Zero user-facing disruption
- Migration runs on next deployment (before the March 9 shutdown)

## Testing
- [ ] Verify new models appear in the LLM block dropdown
- [ ] Test migration on staging database
- [ ] Confirm existing graphs using the deprecated model auto-migrate
- [ ] Validate cost calculations for new models

## References
- **Linear Issue:** [SECRT-2067](https://linear.app/autogpt/issue/SECRT-2067)
- **OpenRouter Models:** https://openrouter.ai/models/google
- **Google Deprecation Notice:** https://ai.google.dev/gemini-api/docs/deprecations

## Checklist
- [x] Models added to the `LlmModel` enum
- [x] Model metadata configured
- [x] Cost config updated
- [x] Database migration created
- [x] Deprecated model commented out (not removed, for historical reference)
- [ ] PR reviewed and approved
- [ ] Merged before the March 9, 2026 deadline

**Priority:** 🔴 Critical - Must merge before March 9, 2026
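Conceptually, the migration above is a find-and-replace of one model ID with its successor wherever graph data stores a model reference. The sketch below only models that remapping step: the JSON shape (`"model"` key on node input data) and the function name are assumptions for illustration; the real change is a database migration, not Python.

```python
# Conceptual sketch of the deprecated-model remap; the node-input shape
# and helper name are assumed, not taken from the actual migration.
DEPRECATED = "google/gemini-3-pro-preview"
REPLACEMENT = "google/gemini-3.1-pro-preview"


def migrate_node_inputs(node_inputs: dict) -> dict:
    """Return a copy of a node's input data with the model ID remapped."""
    migrated = dict(node_inputs)
    if migrated.get("model") == DEPRECATED:
        migrated["model"] = REPLACEMENT
    return migrated
```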
---
**34a2f9a0a2** — feat(llm): add Mistral flagship models (Large 3, Medium 3.1, Small 3.2, Codestral) (#12337)

## Summary
Adds four missing Mistral AI flagship models to address the critical coverage gap identified in [SECRT-2082](https://linear.app/autogpt/issue/SECRT-2082).

## Models Added

| Model | Context | Max Output | Price Tier | Use Case |
|-------|---------|------------|------------|----------|
| **Mistral Large 3** | 262K | None | 2 (Medium) | Flagship reasoning model, 41B active params (675B total), MoE architecture |
| **Mistral Medium 3.1** | 131K | None | 2 (Medium) | Balanced performance/cost, 8x cheaper than traditional large models |
| **Mistral Small 3.2** | 131K | 131K | 1 (Low) | Fast, cost-efficient, high-volume use cases |
| **Codestral 2508** | 256K | None | 1 (Low) | Code generation specialist (FIM, correction, test gen) |

## Problem
Previously, the platform only offered:
- Mistral Nemo (1 official model)
- dolphin-mistral (third-party Ollama fine-tune)

This left significant gaps in Mistral's lineup, particularly:
- No flagship reasoning model
- No balanced mid-tier option
- No code-specialized model
- Missing multimodal capabilities (Large 3, Medium 3.1, and Small 3.2 all support text+image)

## Changes
**File:** `autogpt_platform/backend/backend/blocks/llm.py`
- Added 4 enum entries to the `LlmModel` class
- Added 4 metadata entries to the `MODEL_METADATA` dict
- All models use the OpenRouter provider
- Follows the existing pattern for model additions

## Testing
- ✅ Enum values match OpenRouter model IDs
- ✅ Metadata follows the existing format
- ✅ Context windows verified from the OpenRouter API
- ✅ Price tiers assigned appropriately

## Closes
- SECRT-2082

**Note:** All models are available via OpenRouter and tested. This brings Mistral coverage in line with other major providers (OpenAI, Anthropic, Google).
---
**9f4caa7dfc** — feat(blocks): add and harden GitHub blocks for full-cycle development (#12334)

## Summary
- Add 8 new GitHub blocks: GetRepositoryInfo, ForkRepository, ListCommits, SearchCode, CompareBranches, GetRepositoryTree, MultiFileCommit, MergePullRequest
- Split `repo.py` (2094 lines, 19 blocks) into domain-specific modules: `repo.py`, `repo_branches.py`, `repo_files.py`, `commits.py`
- Concurrent blob creation via `asyncio.gather()` in MultiFileCommit
- URL-encode branch/ref params via `urllib.parse.quote()` for defense in depth
- Step-level error handling in the MultiFileCommit ref update, with a recovery SHA
- Collapse the FileOperation CREATE/UPDATE variants into UPSERT (the Git Trees API treats them identically)
- Add `ge=1, le=100` constraints on per_page SchemaFields
- Preserve the URL scheme in `prepare_pr_api_url`
- Handle null commit authors gracefully in ListCommits
- Add unit tests for `prepare_pr_api_url`, error-path tests for MergePR/MultiFileCommit, and FileOperation enum validation tests

## Test plan
- [ ] Block tests pass for all 19 GitHub blocks (CI: `test_available_blocks`)
- [ ] New test file `test_github_blocks.py` passes (prepare_pr_api_url, error paths, enum)
- [ ] `check-docs-sync` passes with regenerated docs
- [ ] pyright/ruff clean on all changed files
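The `urllib.parse.quote()` hardening mentioned above matters because Git branch names may legally contain `/` and other URL-significant characters. A small sketch, with a hypothetical helper name and an illustrative path layout:

```python
# Sketch of ref URL-encoding with urllib.parse.quote, as the summary
# describes. The helper name and exact path layout are illustrative.
from urllib.parse import quote


def ref_api_path(owner: str, repo: str, branch: str) -> str:
    # safe="" also percent-encodes "/", so "feature/x" stays one path
    # segment instead of silently becoming two.
    return f"/repos/{owner}/{repo}/git/refs/heads/{quote(branch, safe='')}"
```

Without `safe=""`, `quote` leaves `/` alone by default, which is exactly the character that breaks ref paths; that is the defense-in-depth point.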
---
**c7124a5240** — Add documentation for Google Gemini integration (#12283)

## Summary
Adds comprehensive documentation for the Google Gemini integration with AutoGPT.

## Changes
- Added setup instructions for the Gemini API
- Documented configuration options
- Added examples and best practices

## Related Issues
N/A — documentation improvement

## Testing
- Verified documentation accuracy
- Tested all code examples

## Checklist
- [x] Code follows project style
- [x] Documentation updated
- [x] Tests pass (if applicable)
---
**aa08063939** — refactor(backend/db): Improve & clean up Marketplace DB layer & API (#12284)

These changes were part of #12206, but are split out here for easier review. This is all primarily to make the v2 API work (#11678) possible and easier.

### Changes 🏗️
- Fix relations between `Profile`, `StoreListing`, and `AgentGraph`
- Redefine the `StoreSubmission` view with more efficient joins (100x speed-up on the dev DB) and more consistent field names
- Clean up query functions in `store/db.py`
- Clean up models in `store/model.py`
- Add missing fields to the `StoreAgent` and `StoreSubmission` views
- Rename the ambiguous `agent_id` -> `graph_id`
- Clean up API route definitions & docs in `store/routes.py`
  - Make routes more consistent
  - Avoid a collision edge case between `/agents/{username}/{agent_name}` and `/agents/{store_listing_version_id}/*`
- Replace all usages of the legacy `BackendAPI` for store endpoints with the generated client
- Remove scope requirements on public store endpoints in the v1 external API

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test all Marketplace views (including admin views)
  - [x] Download an agent from the marketplace
  - [x] Submit an agent to the Marketplace
  - [x] Approve/reject a Marketplace submission
---
**7c8c7bf395** — feat(llm): add Claude Sonnet 4.6 model (#12158)

## Summary
Adds Claude Sonnet 4.6 (`claude-sonnet-4-6`) to the platform.

## Model Details (from [Anthropic docs](https://www.anthropic.com/news/claude-sonnet-4-6))
- **API ID:** `claude-sonnet-4-6`
- **Pricing:** $3 / input MTok, $15 / output MTok (same as Sonnet 4.5)
- **Context window:** 200K tokens (1M beta)
- **Max output:** 64K tokens
- **Knowledge cutoff:** Aug 2025 (reliable), Jan 2026 (training data)

## Changes
- Added `CLAUDE_4_6_SONNET` to the `LlmModel` enum
- Added a metadata entry with correct context/output limits
- Updated Stagehand to use Sonnet 4.6 (better for browser automation tasks)

## Why
Sonnet 4.6 brings major improvements in coding, computer use, and reasoning. Developers with early access often prefer it even to Opus 4.5.

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
---
**a1cb3d2a91** — feat(blocks): Add Telegram blocks (#12141)

Add Telegram blocks that allow the use of [Telegram bots' API features](https://core.telegram.org/bots/features).

### Changes 🏗️
1. Credentials & API layer: bot token auth via `APIKeyCredentials`, plus helper functions for JSON API calls (`call_telegram_api`) and multipart file uploads (`call_telegram_api_with_file`)
2. Trigger blocks:
   - `TelegramMessageTriggerBlock` — receives messages (text, photo, voice, audio, document, video, edited message) with configurable event filters
   - `TelegramMessageReactionTriggerBlock` — fires on reaction changes (automatic in private chats; groups require admin)
3. Action blocks (11 total):
   - Send: Message, Photo, Voice, Audio, Document, Video
   - Reply to Message, Edit Message, Delete Message
   - Get File (download by file_id)
4. Webhook manager: registers/deregisters webhooks via Telegram's setWebhook API and validates incoming requests using the X-Telegram-Bot-Api-Secret-Token header
5. Provider registration: added TELEGRAM to the ProviderName enum and registered `TelegramWebhooksManager`
6. Media send blocks support both URL passthrough (Telegram fetches directly) and file upload for workspace/data-URI inputs

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Non-AI UUIDs
  - [x] Blocks work correctly
    - [x] SendTelegramMessageBlock
    - [x] SendTelegramPhotoBlock
    - [x] SendTelegramVoiceBlock
    - [x] SendTelegramAudioBlock
    - [x] SendTelegramDocumentBlock
    - [x] SendTelegramVideoBlock
    - [x] ReplyToTelegramMessageBlock
    - [x] GetTelegramFileBlock
    - [x] DeleteTelegramMessageBlock
    - [x] EditTelegramMessageBlock
  - [x] TelegramMessageTriggerBlock (works for every trigger type)
  - [x] TelegramMessageReactionTriggerBlock

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
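The webhook validation step in item 4 works because Telegram echoes back the `secret_token` you pass to `setWebhook` in the `X-Telegram-Bot-Api-Secret-Token` header of every update it delivers, so a handler can reject spoofed requests. A minimal sketch (the function name is illustrative; the header name is from the Telegram Bot API):

```python
# Sketch of webhook request validation via the secret-token header,
# using a constant-time comparison to avoid timing leaks.
import hmac


def is_valid_telegram_webhook(headers: dict, expected_secret: str) -> bool:
    received = headers.get("X-Telegram-Bot-Api-Secret-Token", "")
    return hmac.compare_digest(received, expected_secret)
```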
---
**ef42b17e3b** — docs: add Podman compatibility warning (#12120)

## Summary
Adds a warning to the Getting Started docs clarifying that **Podman and podman-compose are not supported**.

## Problem
Users on Windows using `podman-compose` instead of Docker get errors like:
```
Error: the specified Containerfile or Dockerfile does not exist, ..\..\autogpt_platform\backend\Dockerfile
```
This is because Podman handles relative paths differently than Docker, causing incorrect path resolution on Windows.

## Solution
- Added a clear warning section (`### ⚠️ Podman Not Supported`) after the Windows WSL 2 notes
- Explains the error users might see
- Directs them to install Docker Desktop instead

Closes #11358
|
|
647c8ed8d4 |
feat(backend/blocks): enhance list concatenation with advanced operations (#12105)
## Summary Enhances the existing `ConcatenateListsBlock` and adds five new companion blocks for comprehensive list manipulation, addressing issue #11139 ("Implement block to concatenate lists"). ### Changes - **Enhanced `ConcatenateListsBlock`** with optional deduplication (`deduplicate`) and None-value filtering (`remove_none`), plus an output `length` field - **New `FlattenListBlock`**: Recursively flattens nested list structures with configurable `max_depth` - **New `InterleaveListsBlock`**: Round-robin interleaving of elements from multiple lists - **New `ZipListsBlock`**: Zips corresponding elements from multiple lists with support for padding to longest or truncating to shortest - **New `ListDifferenceBlock`**: Computes set difference between two lists (regular or symmetric) - **New `ListIntersectionBlock`**: Finds common elements between two lists, preserving order ### Helper Utilities Extracted reusable helper functions for validation, flattening, deduplication, interleaving, chunking, and statistics computation to support the blocks and enable future reuse. 
### Test Coverage Comprehensive test suite with 188 test functions across 29 test classes covering: - Built-in block test harness validation for all 6 blocks - Manual edge-case tests for each block (empty inputs, large lists, mixed types, nested structures) - Internal method tests for all block classes - Unit tests for all helper utility functions Closes #11139 ## Test plan - [x] All files pass Python syntax validation (`ast.parse`) - [x] Built-in `test_input`/`test_output` tests defined for all blocks - [x] Manual tests cover edge cases: empty lists, large lists, mixed types, nested structures, deduplication, None removal - [x] Helper function tests validate all utility functions independently - [x] All block IDs are valid UUID4 - [x] Block categories set to `BlockCategory.BASIC` for consistency with existing list blocks <!-- greptile_comment --> <h2>Greptile Overview</h2> <details><summary><h3>Greptile Summary</h3></summary> Enhanced `ConcatenateListsBlock` with deduplication and None-filtering options, and added five new list manipulation blocks (`FlattenListBlock`, `InterleaveListsBlock`, `ZipListsBlock`, `ListDifferenceBlock`, `ListIntersectionBlock`) with comprehensive helper functions and test coverage. **Key Changes:** - Enhanced `ConcatenateListsBlock` with `deduplicate` and `remove_none` options, plus `length` output field - Added `FlattenListBlock` for recursively flattening nested lists with configurable `max_depth` - Added `InterleaveListsBlock` for round-robin element interleaving - Added `ZipListsBlock` with support for padding/truncation - Added `ListDifferenceBlock` and `ListIntersectionBlock` for set operations - Extracted 12 reusable helper functions for validation, flattening, deduplication, etc. 
- Comprehensive test suite with 188 test functions covering edge cases

**Minor Issues:**

- Helper function `_deduplicate_list` has redundant logic in the `else` branch that duplicates the `if` branch
- Three helper functions (`_filter_empty_collections`, `_compute_list_statistics`, `_chunk_list`) are defined but unused - consider removing unless planned for future use
- The `_make_hashable` function uses `hash(repr(item))` for unhashable types, which correctly treats structurally identical dicts/lists as duplicates

</details>
<details><summary><h3>Confidence Score: 4/5</h3></summary>

- Safe to merge with minor style improvements recommended
- The implementation is well-structured with comprehensive test coverage (188 tests), proper error handling, and follows existing block patterns. All blocks use valid UUID4 IDs and correct categories. The helper functions provide good code reuse. The minor issues are purely stylistic (redundant code, unused helpers) and don't affect functionality or safety.
- No files require special attention - both files are well-tested and follow project conventions

</details>
<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant Block as List Block
    participant Helper as Helper Functions
    participant Output
    User->>Block: Input (lists/parameters)
    Block->>Helper: _validate_all_lists()
    Helper-->>Block: validation result
    alt validation fails
        Block->>Output: error message
    else validation succeeds
        Block->>Helper: _concatenate_lists_simple() / _flatten_nested_list() / etc.
        Helper-->>Block: processed result
        opt deduplicate enabled
            Block->>Helper: _deduplicate_list()
            Helper-->>Block: deduplicated result
        end
        opt remove_none enabled
            Block->>Helper: _filter_none_values()
            Helper-->>Block: filtered result
        end
        Block->>Output: result + length
    end
    Output-->>User: Block outputs
```

</details>

<sub>Last reviewed commit: a6d5445</sub>

<!-- greptile_other_comments_section -->
<sub>(2/5) Greptile learns from your feedback when you react with thumbs up/down!</sub>
<!-- /greptile_comment -->

---------

Co-authored-by: Otto <otto@agpt.co> |
||
|
|
f9f358c526 |
feat(mcp): Add MCP tool block with OAuth, tool discovery, and standard credential integration (#12011)
## Summary

<img width="1000" alt="image" src="https://github.com/user-attachments/assets/18e8ef34-d222-453c-8b0a-1b25ef8cf806" />
<img width="250" alt="image" src="https://github.com/user-attachments/assets/ba97556c-09c5-4f76-9f4e-49a2e8e57468" />
<img width="250" alt="image" src="https://github.com/user-attachments/assets/68f7804a-fe74-442d-9849-39a229c052cf" />
<img width="250" alt="image" src="https://github.com/user-attachments/assets/700690ba-f9fe-4726-8871-3bfbab586001" />

Full-stack MCP (Model Context Protocol) tool block integration that allows users to connect to any MCP server, discover available tools, authenticate via OAuth, and execute tools — all through the standard AutoGPT credential system.

### Backend

- **MCPToolBlock** (`blocks/mcp/block.py`): New block using `CredentialsMetaInput` pattern with optional credentials (`default={}`), supporting both authenticated (OAuth) and public MCP servers. Includes auto-lookup fallback for backward compatibility.
- **MCP Client** (`blocks/mcp/client.py`): HTTP transport with JSON-RPC 2.0, tool discovery, tool execution with robust error handling (type-checked error fields, non-JSON response handling)
- **MCP OAuth Handler** (`blocks/mcp/oauth.py`): RFC 8414 discovery, dynamic per-server OAuth with PKCE, token storage and refresh via `raise_for_status=True`
- **MCP API Routes** (`api/features/mcp/routes.py`): `discover-tools`, `oauth/login`, `oauth/callback` endpoints with credential cleanup, defensive OAuth metadata validation
- **Credential system integration**:
  - `CredentialsMetaInput` model_validator normalizes legacy `"ProviderName.MCP"` format from Python 3.13's `str(StrEnum)` change
  - `CredentialsFieldInfo.combine()` supports URL-based credential discrimination (each MCP server gets its own credential entry)
  - `aggregate_credentials_inputs` checks block schema defaults for credential optionality
  - Executor normalizes credential data for both Pydantic and JSON schema validation paths
  - Chat credential matching handles MCP server URL filtering
  - `provider_matches()` helper used consistently for Python 3.13 StrEnum compatibility
- **Pre-run validation**: `_validate_graph_get_errors` now calls `get_missing_input()` for custom block-level validation (MCP tool arguments)
- **Security**: HTML tag stripping loop to prevent XSS bypass, SSRF protection (removed trusted_origins)

### Frontend

- **MCPToolDialog** (`MCPToolDialog.tsx`): Full tool discovery UI — enter server URL, authenticate if needed, browse tools, select tool and configure
- **OAuth popup** (`oauth-popup.ts`): Shared utility supporting cross-origin MCP OAuth flows with BroadcastChannel + localStorage fallback
- **Credential integration**: MCP-specific OAuth flow in `useCredentialsInput`, server URL filtering in `useCredentials`, MCP callback page
- **CredentialsSelect**: Auto-selects first available credential instead of defaulting to "None", credentials listed before "None" in dropdown
- **Node rendering**: Dynamic tool input schema rendering on MCP nodes, proper handling in both legacy and new flow editors
- **Block title persistence**: `customized_name` set at block creation for both MCP and Agent blocks — no fallback logic needed, titles survive save/load reliably
- **Stable credential ordering**: Removed `sortByUnsetFirst` that caused credential inputs to jump when selected

### Tests (~2060 lines)

- Unit tests: block, client, tool execution
- Integration tests: mock MCP server with auth
- OAuth flow tests
- API endpoint tests
- Credential combining/optionality tests
- E2e tests (skipped in CI, run manually)

## Key Design Decisions

1. **Optional credentials via `default={}`**: MCP servers can be public (no auth) or private (OAuth). The `credentials` field has `default={}` making it optional at the schema level, so public servers work without prompting for credentials.
2. **URL-based credential discrimination**: Each MCP server URL gets its own credential entry in the "Run agent" form (via `discriminator="server_url"`), so agents using multiple MCP servers prompt for each independently.
3. **Model-level normalization**: Python 3.13 changed `str(StrEnum)` to return `"ClassName.MEMBER"`. Rather than scattering fixes across the codebase, a Pydantic `model_validator(mode="before")` on `CredentialsMetaInput` handles normalization centrally, and `provider_matches()` handles lookups.
4. **Credential auto-select**: `CredentialsSelect` component defaults to the first available credential and notifies the parent state, ensuring credentials are pre-filled in the "Run agent" dialog without requiring manual selection.
5. **customized_name for block titles**: Both MCP and Agent blocks set `customized_name` in metadata at creation time. This eliminates convoluted runtime fallback logic (`agent_name`, hostname extraction) — the title is persisted once and read directly.

## Test plan

- [x] Unit/integration tests pass (68 MCP + 11 graph = 79 tests)
- [x] Manual: MCP block with public server (DeepWiki) — no credentials needed, tools discovered and executable
- [x] Manual: MCP block with OAuth server (Linear, Sentry) — OAuth flow prompts correctly
- [x] Manual: "Run agent" form shows correct credential requirements per MCP server
- [x] Manual: Credential auto-selects when exactly one matches, pre-selects first when multiple exist
- [x] Manual: Credential ordering stays stable when selecting/deselecting
- [x] Manual: MCP block title persists after save and refresh
- [x] Manual: Agent block title persists after save and refresh (via customized_name)
- [ ] Manual: Shared agent with MCP block prompts new user for credentials

---------

Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: Ubbe <hi@ubbe.dev> |
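The model-level normalization in design decision 3 can be sketched as a plain function. This is a hypothetical simplification (`normalize_provider` is not the actual validator, and the assumption that enum values are the lowercase member names is mine):

```python
def normalize_provider(value):
    """Normalize legacy 'ProviderName.MEMBER' strings, as produced by
    str(StrEnum) on newer Python, down to the bare enum value.
    Assumes enum values are the lowercase member names (e.g. 'mcp')."""
    if isinstance(value, str) and value.startswith("ProviderName."):
        # Drop the class-name prefix and lowercase the member name
        return value.split(".", 1)[1].lower()
    return value
```

In the real implementation this logic would live in a Pydantic `model_validator(mode="before")` on `CredentialsMetaInput`, so every deserialization path is normalized in one place.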
||
|
|
cb166dd6fb |
feat(blocks): Store sandbox files to workspace (#12073)
Store files created by sandbox blocks (Claude Code, Code Executor) to
the user's workspace for persistence across runs.
### Changes 🏗️
- **New `sandbox_files.py` utility** (`backend/util/sandbox_files.py`)
- Shared module for extracting files from E2B sandboxes
- Stores files to workspace via `store_media_file()` (includes virus
scanning, size limits)
- Returns `SandboxFileOutput` with path, content, and `workspace_ref`
- **Claude Code block** (`backend/blocks/claude_code.py`)
- Added `workspace_ref` field to `FileOutput` schema
- Replaced inline `_extract_files()` with shared utility
- Files from working directory now stored to workspace automatically
- **Code Executor block** (`backend/blocks/code_executor.py`)
- Added `files` output field to `ExecuteCodeBlock.Output`
- Creates `/output` directory in sandbox before execution
- Extracts all files (text + binary) from `/output` after execution
- Updated `execute_code()` to support file extraction with
`extract_files` param
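The per-file output shape described above (path, content, workspace reference) might be modeled roughly like this; any field semantics beyond the three names listed in the PR are assumptions:

```python
from dataclasses import dataclass


@dataclass
class SandboxFileOutput:
    """Sketch of the per-file result returned after sandbox extraction."""
    path: str            # path of the file inside the sandbox
    content: str         # file content (text, or encoded for binaries)
    workspace_ref: str   # persistent reference, e.g. "workspace://<file-id>"
```

The `workspace_ref` is what survives sandbox disposal, while `content` preserves backward compatibility for graphs that read file contents inline.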
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create agent with Claude Code block, have it create a file, verify
`workspace_ref` in output
- [x] Create agent with Code Executor block, write file to `/output`,
verify `workspace_ref` in output
- [x] Verify files persist in workspace after sandbox disposal
- [x] Verify binary files (images, etc.) work correctly in Code Executor
- [x] Verify existing graphs using `content` field still work (backward
compat)
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
No configuration changes required - this is purely additive backend
code.
---
**Related:** Closes SECRT-1931
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Adds automatic extraction and workspace storage of sandbox-written
files (including binaries for code execution), which can affect output
payload size, performance, and file-handling edge cases.
>
> **Overview**
> **Sandbox blocks now persist generated files to workspace.** A new
shared utility (`backend/util/sandbox_files.py`) extracts files from an
E2B sandbox (scoped by a start timestamp) and stores them via
`store_media_file`, returning `SandboxFileOutput` with `workspace_ref`.
>
> `ClaudeCodeBlock` replaces its inline file-scraping logic with this
utility and updates the `files` output schema to include
`workspace_ref`.
>
> `ExecuteCodeBlock` adds a `files` output and extends the executor
mixin to optionally extract/store files (text + binary) when an
`execution_context` is provided; related mocks/tests and docs are
updated accordingly.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
|
||
|
|
113e87a23c |
refactor(backend): Reduce circular imports (#12068)
I'm getting circular import issues because there is a lot of cross-importing between `backend.data`, `backend.blocks`, and other modules. This change reduces block-related cross-imports and thus the risk of breaking circular imports.

### Changes 🏗️

- Strip down `backend.data.block`
  - Move `Block` base class and related class/enum defs to `backend.blocks._base`
  - Move `is_block_auth_configured` to `backend.blocks._utils`
  - Move `get_blocks()`, `get_io_block_ids()` etc. to `backend.blocks` (`__init__.py`)
  - Update imports everywhere
- Remove unused and poorly typed `Block.create()`
  - Change usages from `block_cls.create()` to `block_cls()`
- Improve typing of `load_all_blocks` and `get_blocks`
- Move cross-import of `backend.api.features.library.model` from `backend/data/__init__.py` to `backend/data/integrations.py`
- Remove deprecated attribute `NodeModel.webhook`
  - Re-generate OpenAPI spec and fix frontend usage
- Eliminate module-level `backend.blocks` import from `blocks/agent.py`
- Eliminate module-level `backend.data.execution` and `backend.executor.manager` imports from `blocks/helpers/review.py`
- Replace `BlockInput` with `GraphInput` for graph inputs

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI static type-checking + tests should be sufficient for this |
||
|
|
36aeb0b2b3 |
docs(blocks): clarify HumanInTheLoop output descriptions for agent builder (#12069)
## Problem

The agent builder (LLM) misinterprets the HumanInTheLoop block outputs. It thinks `approved_data` and `rejected_data` will yield status strings like "APPROVED" or "REJECTED" instead of understanding that the actual input data passes through. This leads to unnecessary complexity - the agent builder adds comparison blocks to check for status strings that don't exist.

## Solution

Enriched the block docstring and all input/output field descriptions to make it explicit that:

1. The output is the actual data itself, not a status string
2. The routing is determined by which output pin fires
3. How to use the block correctly (connect downstream blocks to appropriate output pins)

## Changes

- Updated block docstring with clear "How it works" and "Example usage" sections
- Enhanced `data` input description to explain data flow
- Enhanced `name` input description for reviewer context
- Enhanced `approved_data` output to explicitly state it's NOT a status string
- Enhanced `rejected_data` output to explicitly state it's NOT a status string
- Enhanced `review_message` output for clarity

## Testing

Documentation-only change to schema descriptions. No functional changes.

Fixes SECRT-1930

<!-- greptile_comment -->
<h2>Greptile Overview</h2>
<details><summary><h3>Greptile Summary</h3></summary>

Enhanced documentation for the `HumanInTheLoopBlock` to clarify how output pins work. The key improvement explicitly states that output pins (`approved_data` and `rejected_data`) yield the actual input data, not status strings like "APPROVED" or "REJECTED". This prevents the agent builder (LLM) from misinterpreting the block's behavior and adding unnecessary comparison blocks.

**Key changes:**

- Added "How it works" and "Example usage" sections to the block docstring
- Clarified that routing is determined by which output pin fires, not by comparing output values
- Enhanced all input/output field descriptions with explicit data flow explanations
- Emphasized that downstream blocks should be connected to the appropriate output pin based on desired workflow path

This is a documentation-only change with no functional modifications to the code logic.

</details>
<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with no risk
- Documentation-only change that accurately reflects the existing code behavior. No functional changes, no runtime impact, and the enhanced descriptions correctly explain how the block outputs work based on verification of the implementation code.
- No files require special attention

</details>
<!-- greptile_other_comments_section -->
<!-- /greptile_comment -->

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co> |
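The pin-routing semantics the docs clarify can be sketched as a generator of `(pin_name, value)` pairs. This is an illustrative model of the behavior, not the block's actual implementation:

```python
def human_review_outputs(data, approved, message=""):
    """Model of the block's outputs: the ORIGINAL data flows out of
    exactly one pin; no "APPROVED"/"REJECTED" status string is emitted."""
    if approved:
        yield "approved_data", data
    else:
        yield "rejected_data", data
    if message:
        yield "review_message", message
```

Downstream blocks connect to one pin or the other; there is nothing to string-compare, which is exactly the misunderstanding the enriched descriptions are meant to prevent.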
||
|
|
85b6520710 |
feat(blocks): Add video editing blocks (#11796)
<!-- Clearly explain the need for these changes: -->
This PR adds general-purpose video editing blocks for the AutoGPT
Platform, enabling automated video production workflows like documentary
creation, marketing videos, tutorial assembly, and content repurposing.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
**New blocks added in `backend/blocks/video/`:**
- `VideoDownloadBlock` - Download videos from URLs (YouTube, Vimeo, news
sites, direct links) using yt-dlp
- `VideoClipBlock` - Extract time segments from videos with start/end
time validation
- `VideoConcatBlock` - Merge multiple video clips with optional
transitions (none, crossfade, fade_black)
- `VideoTextOverlayBlock` - Add text overlays/captions with positioning
and timing options
- `VideoNarrationBlock` - Generate AI narration via ElevenLabs and mix
with video audio (replace, mix, or ducking modes)
**Dependencies required:**
- `yt-dlp` - For video downloading
- `moviepy` - For video editing operations
**Implementation details:**
- All blocks follow the SDK pattern with proper error handling and
exception chaining
- Proper resource cleanup in `finally` blocks to prevent memory leaks
- Input validation (e.g., end_time > start_time)
- Test mocks included for CI
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Blocks follow the SDK pattern with
`BlockSchemaInput`/`BlockSchemaOutput`
- [x] Resource cleanup is implemented in `finally` blocks
- [x] Exception chaining is properly implemented
- [x] Input validation is in place
- [x] Test mocks are provided for CI environments
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
N/A - No configuration changes required.
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Adds new multimedia blocks that invoke ffmpeg/MoviePy and introduces
new external dependencies (plus container packages), which can impact
runtime stability and resource usage; download/overlay blocks are
present but disabled due to sandbox/policy concerns.
>
> **Overview**
> Adds a new `backend.blocks.video` module with general-purpose video
workflow blocks (download, clip, concat w/ transitions, loop, add-audio,
text overlay, and ElevenLabs-powered narration), including shared
utilities for codec selection, filename cleanup, and an ffmpeg-based
chapter-strip workaround for MoviePy.
>
> Extends credentials/config to support ElevenLabs
(`ELEVENLABS_API_KEY`, provider enum, system credentials, and cost
config) and adds new dependencies (`elevenlabs`, `yt-dlp`) plus Docker
runtime packages (`ffmpeg`, `imagemagick`).
>
> Improves file/reference handling end-to-end by embedding MIME types in
`workspace://...#mime` outputs and updating frontend rendering to detect
video vs image from MIME fragments (and broaden supported audio/video
extensions), with optional enhanced output rendering behind a feature
flag in the legacy builder UI.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
|
||
|
|
bfa942e032 |
feat(platform): Add Claude Opus 4.6 model support (#11983)
## Summary

Adds support for Anthropic's newly released Claude Opus 4.6 model.

## Changes

- Added `claude-opus-4-6` to the `LlmModel` enum
- Added model metadata: 200K context window (1M beta), **128K max output tokens**
- Added block cost config (same pricing tier as Opus 4.5: $5/MTok input, $25/MTok output)
- Updated chat config default model to Claude Opus 4.6

## Model Details

From [Anthropic's docs](https://docs.anthropic.com/en/docs/about-claude/models):

- **API ID:** `claude-opus-4-6`
- **Context window:** 200K tokens (1M beta)
- **Max output:** 128K tokens (up from 64K on Opus 4.5)
- **Extended thinking:** Yes
- **Adaptive thinking:** Yes (new, Opus 4.6 exclusive)
- **Knowledge cutoff:** May 2025 (reliable), Aug 2025 (training)
- **Pricing:** $5/MTok input, $25/MTok output (same as Opus 4.5)

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com> |
||
|
|
3ca2387631 |
feat(blocks): Implement Text Encode block (#11857)
## Summary
Implements a `TextEncoderBlock` that encodes plain text into escape
sequences (the reverse of `TextDecoderBlock`).
## Changes
### Block Implementation
- Added `encoder_block.py` with `TextEncoderBlock` in
`autogpt_platform/backend/backend/blocks/`
- Uses `codecs.encode(text, "unicode_escape").decode("utf-8")` for
encoding
- Mirrors the structure and patterns of the existing `TextDecoderBlock`
- Categorised as `BlockCategory.TEXT`
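The encoding call described above comes straight from the block; wrapped in a standalone function (the `encode_text` name is illustrative, not the block's API), it behaves like this:

```python
import codecs


def encode_text(text):
    """Encode plain text into escape sequences, the reverse of
    TextDecoderBlock's unicode_escape decoding."""
    return codecs.encode(text, "unicode_escape").decode("utf-8")


print(encode_text("hello\nworld"))  # hello\nworld (literal backslash-n)
print(encode_text("café"))          # caf\xe9
```

Real newlines, tabs, and non-ASCII characters come out as their literal escape sequences, which is what makes the output safe to embed in JSON payloads or config files.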
### Documentation
- Added Text Encoder section to
`docs/integrations/block-integrations/text.md` (the auto-generated docs
file for TEXT category blocks)
- Expanded "How it works" with technical details on the encoding method,
validation, and edge cases
- Added 3 structured use cases per docs guidelines: JSON payload
preparation, Config/ENV generation, Snapshot fixtures
- Added Text Encoder to the overview table in
`docs/integrations/README.md`
- Removed standalone `encoder_block.md` (TEXT category blocks belong in
`text.md` per `CATEGORY_FILE_MAP` in `generate_block_docs.py`)
### Documentation Formatting (CodeRabbit feedback)
- Added blank lines around markdown tables (MD058)
- Added `text` language tags to fenced code blocks (MD040)
- Restructured use case section with bold headings per coding guidelines
## How Docs Were Synced
The `check-docs-sync` CI job runs `poetry run python
scripts/generate_block_docs.py --check` which expects blocks to be
documented in category-grouped files. Since `TextEncoderBlock` uses
`BlockCategory.TEXT`, the `CATEGORY_FILE_MAP` maps it to `text.md` — not
a standalone file. The block entry was added to `text.md` following the
exact format used by the generator (with `<!-- MANUAL -->` markers for
hand-written sections).
## Related Issue
Fixes #11111
---------
Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: lif <19658300+majiayu000@users.noreply.github.com>
Co-authored-by: Aryan Kaul <134673289+aryancodes1@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nick Tindle <nick@ntindle.com>
|
||
|
|
4f908d5cb3 |
fix(platform): Improve Linear Search Block [SECRT-1880] (#11967)
## Summary

Implements [SECRT-1880](https://linear.app/autogpt/issue/SECRT-1880) - Improve Linear Search Block

## Changes

### Models (`models.py`)

- Added `State` model with `id`, `name`, and `type` fields for workflow state information
- Added `state: State | None` field to `Issue` model

### API Client (`_api.py`)

- Updated `try_search_issues()` to:
  - Add `max_results` parameter (default 10, was ~50) to reduce token usage
  - Add `team_id` parameter for team filtering
  - Return `createdAt`, `state`, `project`, and `assignee` fields in results
- Fixed `try_get_team_by_name()` to return a descriptive error message when the team is not found, instead of crashing with `IndexError`

### Block (`issues.py`)

- Added `max_results` input parameter (1-100, default 10)
- Added `team_name` input parameter for optional team filtering
- Added `error` output field for graceful error handling
- Added categories (`PRODUCTIVITY`, `ISSUE_TRACKING`)
- Updated test fixtures to include new fields

## Breaking Changes

| Change | Before | After | Mitigation |
|--------|--------|-------|------------|
| Default result count | ~50 | 10 | Users can set `max_results` up to 100 if needed |

## Non-Breaking Changes

- `state` field added to `Issue` (optional, defaults to `None`)
- `max_results` param added (has default value)
- `team_name` param added (optional, defaults to `None`)
- `error` output added (follows established pattern from GitHub blocks)

## Testing

- [x] Format/lint checks pass
- [x] Unit test fixtures updated

Resolves SECRT-1880

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com> |
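The `try_get_team_by_name()` fix (returning a descriptive error instead of raising `IndexError` on an empty result) follows a pattern like this hypothetical simplification:

```python
def first_team_or_error(teams, name):
    """Return (team, None) on success, or (None, error_message) when the
    search came back empty, rather than letting teams[0] raise IndexError."""
    if not teams:
        return None, f"Team '{name}' not found"
    return teams[0], None
```

The caller can then surface the error string through the block's `error` output pin instead of crashing the run.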
||
|
|
7ee94d986c |
docs: add credentials prerequisites to create-basic-agent guide (#11913)
## Summary

Addresses #11785 - users were encountering `openai_api_key_credentials` errors when following the create-basic-agent guide because it didn't mention the need to configure API credentials before using AI blocks.

## Changes

Added a **Prerequisites** section to `docs/platform/create-basic-agent.md` explaining:

- **Cloud users:** Go to Profile → Integrations to add API keys
- **Self-hosted (Docker):** Add keys to `autogpt_platform/backend/.env` and restart services

Also added a note that the Calculator example doesn't need credentials, making it a good first test.

## Related

- Issue: #11785 |
||
|
|
de0ec3d388 |
chore(llm): remove deprecated Claude 3.7 Sonnet model with migration and defensive handling (#11841)
## Summary

Remove `claude-3-7-sonnet-20250219` from LLM model definitions ahead of Anthropic's API retirement, with comprehensive migration and defensive error handling.

## Background

Anthropic is retiring Claude 3.7 Sonnet (`claude-3-7-sonnet-20250219`) on **February 19, 2026 at 9:00 AM PT**. This PR removes the model from the platform and migrates existing users to prevent service interruptions.

## Changes

### Code Changes

- Remove `CLAUDE_3_7_SONNET` enum member from `LlmModel` in `llm.py`
- Remove corresponding `ModelMetadata` entry
- Remove `CLAUDE_3_7_SONNET` from `StagehandRecommendedLlmModel` enum
- Remove `CLAUDE_3_7_SONNET` from block cost config
- Add `CLAUDE_4_5_SONNET` to `StagehandRecommendedLlmModel` enum
- Update Stagehand block defaults from `CLAUDE_3_7_SONNET` to `CLAUDE_4_5_SONNET` (staying in Claude family)
- Add defensive error handling in `CredentialsFieldInfo.discriminate()` for deprecated model values

### Database Migration

- Adds migration `20260126120000_migrate_claude_3_7_to_4_5_sonnet`
- Migrates `AgentNode.constantInput` model references
- Migrates `AgentNodeExecutionInputOutput.data` preset overrides

### Documentation

- Updated `docs/integrations/block-integrations/llm.md` to remove deprecated model
- Updated `docs/integrations/block-integrations/stagehand/blocks.md` to remove deprecated model and add Claude 4.5 Sonnet

## Notes

- Agent JSON files in `autogpt_platform/backend/agents/` still reference this model in their provider mappings. These are auto-generated and should be regenerated separately.

## Testing

- [ ] Verify LLM block still functions with remaining models
- [ ] Confirm no import errors in affected files
- [ ] Verify migration runs successfully
- [ ] Verify deprecated model gives helpful error message instead of KeyError |
||
|
|
4cd5da678d |
refactor(claude): Split autogpt_platform/CLAUDE.md into project-specific files (#11788)
Split `autogpt_platform/CLAUDE.md` into project-specific files, to make the scope of the instructions clearer.

Also, some minor improvements:

- Change references to other Markdown files to @file/path.md syntax that Claude recognizes
- Update ambiguous/incorrect/outdated instructions
- Remove trailing slashes
- Fix broken file path references in other docs (including comments) |
||
|
|
7668c17d9c |
feat(platform): add User Workspace for persistent CoPilot file storage (#11867)
Implements persistent User Workspace storage for CoPilot, enabling
blocks to save and retrieve files across sessions. Files are stored in
session-scoped virtual paths (`/sessions/{session_id}/`).
Fixes SECRT-1833
### Changes 🏗️
**Database & Storage:**
- Add `UserWorkspace` and `UserWorkspaceFile` Prisma models
- Implement `WorkspaceStorageBackend` abstraction (GCS for cloud, local
filesystem for self-hosted)
- Add `workspace_id` and `session_id` fields to `ExecutionContext`
**Backend API:**
- Add REST endpoints: `GET/POST /api/workspace/files`, `GET/DELETE
/api/workspace/files/{id}`, `GET /api/workspace/files/{id}/download`
- Add CoPilot tools: `list_workspace_files`, `read_workspace_file`,
`write_workspace_file`
- Integrate workspace storage into `store_media_file()` - returns
`workspace://file-id` references
**Block Updates:**
- Refactor all file-handling blocks to use unified `ExecutionContext`
parameter
- Update media-generating blocks to persist outputs to workspace
(AIImageGenerator, AIImageCustomizer, FluxKontext, TalkingHead, FAL
video, Bannerbear, etc.)
**Frontend:**
- Render `workspace://` image references in chat via proxy endpoint
- Add "AI cannot see this image" overlay indicator
**CoPilot Context Mapping:**
- Session = Agent (graph_id) = Run (graph_exec_id)
- Files scoped to `/sessions/{session_id}/`
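The session-scoped path convention and `workspace://` references above can be sketched with two tiny helpers (hypothetical names, not the actual `WorkspaceManager` API):

```python
def session_file_path(session_id, filename):
    """Workspace files are scoped under /sessions/{session_id}/ per the
    CoPilot context mapping (session = agent = run)."""
    return f"/sessions/{session_id}/{filename}"


def workspace_ref(file_id):
    """Blocks return opaque references like 'workspace://<file-id>'
    instead of inlining base64 content."""
    return f"workspace://{file_id}"
```

Keeping the reference opaque lets the frontend resolve it through a proxy/download endpoint while the chat transcript stays small.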
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Create CoPilot session, generate image with AIImageGeneratorBlock
- [ ] Verify image returns `workspace://file-id` (not base64)
- [ ] Verify image renders in chat with visibility indicator
- [ ] Verify workspace files persist across sessions
- [ ] Test list/read/write workspace files via CoPilot tools
- [ ] Test local storage backend for self-hosted deployments
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
🤖 Generated with [Claude Code](https://claude.ai/code)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Medium Risk**
> Introduces a new persistent file-storage surface area (DB tables,
storage backends, download API, and chat tools) and rewires
`store_media_file()`/block execution context across many blocks, so
regressions could impact file handling, access control, or storage
costs.
>
> **Overview**
> Adds a **persistent per-user Workspace** (new
`UserWorkspace`/`UserWorkspaceFile` models plus `WorkspaceManager` +
`WorkspaceStorageBackend` with GCS/local implementations) and wires it
into the API via a new `/api/workspace/files/{file_id}/download` route
(including header-sanitized `Content-Disposition`) and shutdown
lifecycle hooks.
>
> Extends `ExecutionContext` to carry execution identity +
`workspace_id`/`session_id`, updates executor tooling to clone
node-specific contexts, and updates `run_block` (CoPilot) to create a
session-scoped workspace and synthetic graph/run/node IDs.
>
> Refactors `store_media_file()` to require `execution_context` +
`return_format` and to support `workspace://` references; migrates many
media/file-handling blocks and related tests to the new API and to
persist generated media as `workspace://...` (or fall back to data URIs
outside CoPilot), and adds CoPilot chat tools for
listing/reading/writing/deleting workspace files with safeguards against
context bloat.
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
|
||
|
|
82d7134fc6 |
feat(blocks): Add ClaudeCodeBlock for executing tasks via Claude Code in E2B sandbox (#11761)
Introduces a new ClaudeCodeBlock that enables execution of coding tasks using Anthropic's Claude Code in an E2B sandbox. This block unlocks powerful agentic coding capabilities: Claude Code can autonomously create files, install packages, run commands, and build complete applications within a secure sandboxed environment.
### Changes 🏗️
- New file backend/blocks/claude_code.py:
- ClaudeCodeBlock - Execute tasks using Claude Code in an E2B sandbox
- Dual credential support: E2B API key (sandbox) + Anthropic API key (Claude Code)
- Session continuation support via session_id, sandbox_id, and conversation_history
- Automatic file extraction with path, relative_path, name, and content fields
- Configurable timeout, setup commands, and working directory
- dispose_sandbox option to keep the sandbox alive for multi-turn conversations
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create and execute ClaudeCodeBlock with a simple prompt ("Create a hello world HTML file")
- [x] Verify files output includes correct path, relative_path, name, and content
- [x] Test session continuation by passing session_id and sandbox_id back
- [x] Build "Any API → Instant App" demo agent combining Firecrawl + ClaudeCodeBlock + GitHub blocks
- [x] Verify generated files are pushed to GitHub with correct folder structure using relative_path
Here are two example agents I made that can be used to test this block. They require GitHub, Anthropic, and E2B access via API keys set by the user on the platform being tested (dev).
The first agent is my Any API → Instant App: "Transform any API documentation into a fully functional web application. Just provide a docs URL and get a complete, ready-to-deploy app pushed to a new GitHub repository."
[Any API → Instant App_v36.json](https://github.com/user-attachments/files/24600326/Any.API.Instant.App_v36.json)
The second agent is my Idea to project: "Simply enter your coding project's idea and this agent will make all of the base initial code needed for you to start working on that project and place it on GitHub for you!"
[Idea to project_v11.json](https://github.com/user-attachments/files/24600346/Idea.to.project_v11.json)
If you have any questions or issues, let me know.
References:
https://e2b.dev/blog/python-guide-run-claude-code-in-an-e2b-sandbox
https://github.com/e2b-dev/e2b-cookbook/tree/main/examples/anthropic-claude-code-in-sandbox-python
https://code.claude.com/docs/en/cli-reference
I tried to use E2B's "anthropic-claude-code" template, but it kept complaining it was out of date, so the block manually spins up an E2B instance and installs the latest Claude Code instead. |
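The session-continuation fields called out above can be sketched as a two-turn exchange. This is a hypothetical illustration: the field names (prompt, dispose_sandbox, session_id, sandbox_id) come from the PR description, while the literal values and result shape are made up.

```python
# Hypothetical two-turn exchange with the block; field names from the PR
# description, values and result shape invented for illustration.
first_turn = {
    "prompt": "Create a hello world HTML file",
    "dispose_sandbox": False,  # keep the sandbox alive for a follow-up turn
}

# Suppose the first run yielded these outputs:
first_result = {"session_id": "sess-123", "sandbox_id": "sbx-456"}

second_turn = {
    "prompt": "Now add some CSS styling",
    "session_id": first_result["session_id"],  # continue the same conversation
    "sandbox_id": first_result["sandbox_id"],  # reuse the running sandbox
}
```

Setting `dispose_sandbox` to `False` on the first turn is what makes the second turn possible against the same environment.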
||
|
|
90466908a8 |
refactor(docs): restructure platform docs for GitBook and remove MkDo… (#11825)
<!-- Clearly explain the need for these changes: -->
We hit some discrepancies when merging into the docs site; this PR fixes them.
### Changes 🏗️
Updates paths and adds some guides.
<!-- Concisely describe all of the changes made in this pull request:
-->
Updates the docs to match the current site structure.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] deploy it and validate
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Aligns block integrations documentation with GitBook.
>
> - Changes generator default output to
`docs/integrations/block-integrations` and writes overview `README.md`
and `SUMMARY.md` at `docs/integrations/`
> - Adds GitBook frontmatter and hint syntax to overview; prefixes block
links with `block-integrations/`
> - Introduces `generate_summary_md` to build GitBook navigation
(including optional `guides/`)
> - Preserves per-block manual sections and adds optional `extras` +
file-level `additional_content`
> - Updates sync checker to validate parent `README.md` and `SUMMARY.md`
> - Rewrites `docs/integrations/README.md` with GitBook frontmatter and
updated links; adds `docs/integrations/SUMMARY.md`
> - Adds new guides: `guides/llm-providers.md`,
`guides/voice-providers.md`
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
|
||
|
|
2169b433c9 |
feat(backend/blocks): add ConcatenateListsBlock (#11567)
# feat(backend/blocks): add ConcatenateListsBlock
## Description
This PR implements a new block `ConcatenateListsBlock` that concatenates
multiple lists into a single list. This addresses the "good first issue"
for implementing a list concatenation block in the platform/blocks area.
The block takes a list of lists as input and combines all elements in
order into a single concatenated list. This is useful for workflows that
need to merge data from multiple sources or combine results from
different operations.
### Changes 🏗️
- **Added `ConcatenateListsBlock` class** in
`autogpt_platform/backend/backend/blocks/data_manipulation.py`
- Input: `lists: List[List[Any]]` - accepts a list of lists to
concatenate
- Output: `concatenated_list: List[Any]` - returns a single concatenated
list
- Error output: `error: str` - provides clear error messages for invalid
input types
- Block ID: `3cf9298b-5817-4141-9d80-7c2cc5199c8e`
- Category: `BlockCategory.BASIC` (consistent with other list
manipulation blocks)
- **Added comprehensive test suite** in
`autogpt_platform/backend/test/blocks/test_concatenate_lists.py`
- Tests using built-in `test_input`/`test_output` validation
- Manual test cases covering edge cases (empty lists, single list, empty
input)
- Error handling tests for invalid input types
- Category consistency verification
- All tests passing
- **Implementation details:**
- Uses `extend()` method for efficient list concatenation
- Preserves element order from all input lists
- **Runtime type validation**: Explicitly checks `isinstance(lst, list)`
before calling `extend()` to prevent:
- Strings being iterated character-by-character (e.g., `extend("abc")` →
`['a', 'b', 'c']`)
- Non-iterable types causing `TypeError` (e.g., `extend(1)`)
- Clear error messages indicating which index has invalid input
- Handles edge cases: empty lists, empty input, single list, None values
- Follows existing block patterns and conventions
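A minimal, platform-independent sketch of the concatenation and validation logic described above (the real block wraps this in the platform's Block/Schema machinery and yields an `error` output instead of raising):

```python
from typing import Any, List

def concatenate_lists(lists: List[List[Any]]) -> List[Any]:
    """Concatenate the given lists into one, preserving element order."""
    result: List[Any] = []
    for i, lst in enumerate(lists):
        # Runtime type check, as described above: without it,
        # extend("abc") would yield ['a', 'b', 'c'] and extend(1)
        # would raise a bare TypeError.
        if not isinstance(lst, list):
            raise TypeError(
                f"Input at index {i} is not a list: {type(lst).__name__}"
            )
        result.extend(lst)
    return result
```

For example, `concatenate_lists([[1, 2, 3], [4, 5, 6]])` returns `[1, 2, 3, 4, 5, 6]`, while a string element fails fast with a message naming the offending index.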
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run `poetry run pytest test/blocks/test_concatenate_lists.py -v` -
all tests pass
- [x] Verified block can be imported and instantiated
- [x] Tested with built-in test cases (4 test scenarios)
- [x] Tested manual edge cases (empty lists, single list, empty input)
- [x] Tested error handling for invalid input types
- [x] Verified category is `BASIC` for consistency
- [x] Verified no linting errors
- [x] Confirmed block follows same patterns as other blocks in
`data_manipulation.py`
#### Code Quality:
- [x] Code follows existing patterns and conventions
- [x] Type hints are properly used
- [x] Documentation strings are clear and descriptive
- [x] Runtime type validation implemented
- [x] Error handling with clear error messages
- [x] No linting errors
- [x] Prisma client generated successfully
### Testing
**Test Results:**
```
test/blocks/test_concatenate_lists.py::test_concatenate_lists_block_builtin_tests PASSED
test/blocks/test_concatenate_lists.py::test_concatenate_lists_manual PASSED
============================== 2 passed in 8.35s ==============================
```
**Test Coverage:**
- Basic concatenation: `[[1, 2, 3], [4, 5, 6]]` → `[1, 2, 3, 4, 5, 6]`
- Mixed types: `[["a", "b"], ["c"], ["d", "e", "f"]]` → `["a", "b", "c",
"d", "e", "f"]`
- Empty list handling: `[[1, 2], []]` → `[1, 2]`
- Empty input: `[]` → `[]`
- Single list: `[[1, 2, 3]]` → `[1, 2, 3]`
- Error handling: Invalid input types (strings, non-lists) produce clear
error messages
- Category verification: Confirmed `BlockCategory.BASIC` for consistency
### Review Feedback Addressed
- **Category Consistency**: Changed from `BlockCategory.DATA` to
`BlockCategory.BASIC` to match other list manipulation blocks
(`AddToListBlock`, `FindInListBlock`, etc.)
- **Type Robustness**: Added explicit runtime validation with
`isinstance(lst, list)` check before calling `extend()` to prevent:
- Strings being iterated character-by-character
- Non-iterable types causing `TypeError`
- **Error Handling**: Added `error` output field with clear, descriptive
error messages indicating which index has invalid input
- **Test Coverage**: Added test case for error handling with invalid
input types
### Related Issues
- Addresses: "Implement block to concatenate lists" (good first issue,
platform/blocks, hacktoberfest)
### Notes
- This is a straightforward data manipulation block that doesn't require
external dependencies
- The block will be automatically discovered by the block loading system
- No database or configuration changes required
- Compatible with existing workflow system
- All review feedback has been addressed and incorporated
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Adds a new list utility and updates docs.
>
> - **New block**: `ConcatenateListsBlock` in
`backend/blocks/data_manipulation.py`
> - Input `lists: List[List[Any]]`; outputs `concatenated_list` or
`error`
> - Skips `None` entries; emits error for non-list items; preserves
order
> - **Docs**: Adds "Concatenate Lists" section to
`docs/integrations/basic.md` and links it in
`docs/integrations/README.md`
> - **Contributor guide**: New `docs/CLAUDE.md` with manual doc section
guidelines
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
|
||
|
|
c1a1767034 |
feat(docs): Add block documentation auto-generation system (#11707)
- Add generate_block_docs.py script that introspects block code to
generate markdown
- Support manual content preservation via <!-- MANUAL: --> markers
- Add migrate_block_docs.py to preserve existing manual content from git
HEAD
- Add CI workflow (docs-block-sync.yml) to fail if docs drift from code
- Add Claude PR review workflow (docs-claude-review.yml) for doc changes
- Add manual LLM enhancement workflow (docs-enhance.yml)
- Add GitBook configuration (.gitbook.yaml, SUMMARY.md)
- Fix non-deterministic category ordering (categories is a set)
- Add comprehensive test suite (32 tests)
- Generate docs for 444 blocks with 66 preserved manual sections
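One plausible shape for the manual-content preservation step mentioned above. Note the PR only specifies the `<!-- MANUAL: -->` opener; the closing marker and the helper below are assumptions for illustration.

```python
import re

# The "<!-- MANUAL: ... -->" opener is from the PR; the "END MANUAL"
# closer and this extraction helper are illustrative assumptions.
MANUAL_RE = re.compile(
    r"<!-- MANUAL: (?P<name>[\w-]+) -->(?P<body>.*?)<!-- END MANUAL -->",
    re.DOTALL,
)

def extract_manual_sections(markdown: str) -> dict:
    """Collect hand-written sections so regeneration can re-insert them."""
    return {m["name"]: m["body"] for m in MANUAL_RE.finditer(markdown)}
```

The generator would run this over the existing file before regenerating, then splice each named section back into the fresh output.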
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Extensively test code generation for the docs pages
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Introduces an automated documentation pipeline for blocks and
integrates it into CI.
>
> - Adds `scripts/generate_block_docs.py` (+ tests) to introspect blocks
and generate `docs/integrations/**`, preserving `<!-- MANUAL: -->`
sections
> - New CI workflows: **docs-block-sync** (fails if docs drift),
**docs-claude-review** (AI review for block/docs PRs), and
**docs-enhance** (optional LLM improvements)
> - Updates existing Claude workflows to use `CLAUDE_CODE_OAUTH_TOKEN`
instead of `ANTHROPIC_API_KEY`
> - Improves numerous block descriptions/typos and links across backend
blocks to standardize docs output
> - Commits initial generated docs including
`docs/integrations/README.md` and many provider/category pages
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
|
||
|
|
e80e4d9cbb |
ci: update dev from gitbook (#11757)
<!-- Clearly explain the need for these changes: -->
Syncs GitBook changes that were made via the UI.
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> **Docs sync from GitBook**
>
> - Updates `docs/home/README.md` with a new Developer Platform landing
page (cards, links to Platform, Integrations, Contribute, Discord,
GitHub) and metadata/cover settings
> - Adds `docs/home/SUMMARY.md` defining the table of contents linking
to `README.md`
> - No application/runtime code changes
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
|
||
|
|
43cbe2e011 |
feat!(blocks): Add Reddit OAuth2 integration and advanced Reddit blocks (#11623)
Replaces user/password Reddit credentials with OAuth2, adds
RedditOAuthHandler, and updates Reddit blocks to support OAuth2
authentication. Introduces new blocks for creating posts, fetching post
details, searching, editing posts, and retrieving subreddit info.
Updates test credentials and input handling to use OAuth2 tokens.
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
Rebuilds the Reddit blocks to support OAuth2 rather than requiring users
to provide their username and password.
This is done by swapping from script-based to web-based authentication on
the Reddit side, facilitated by Reddit's approval of an OAuth app
on the account `ntindle`.
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Build a super agent
- [x] Upload the super agent and a video of it working
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Introduces full Reddit OAuth2 support and substantially expands Reddit
capabilities across the platform.
>
> - Adds `RedditOAuthHandler` with token exchange, refresh, revoke;
registers handler in `integrations/oauth/__init__.py`
> - Refactors Reddit blocks to use `OAuth2Credentials` and `praw` via
refresh tokens; updates models (e.g., `post_id`, richer outputs) and
adds `strip_reddit_prefix`
> - New blocks: create/edit/delete posts, post/get/delete comments,
reply to comments, get post details, user posts (self/others), search,
inbox, subreddit info/rules/flairs, send messages
> - Updates default `settings.config.reddit_user_agent` and test
credentials; minor `.branchlet.json` addition
> - Docs: clarifies block error-handling with
`BlockInputError`/`BlockExecutionError` guidance
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
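The token refresh that the `RedditOAuthHandler` performs can be sketched against Reddit's standard OAuth2 endpoint. The endpoint URL and grant type follow Reddit's documented OAuth2 contract; the helper itself is illustrative, not the handler's actual code.

```python
# Reddit's documented token endpoint; the helper below is an illustrative
# sketch, not the RedditOAuthHandler's real implementation.
REDDIT_TOKEN_URL = "https://www.reddit.com/api/v1/access_token"

def build_refresh_request(refresh_token: str) -> dict:
    """Form body for refreshing an expired access token.

    The real request is POSTed to REDDIT_TOKEN_URL with HTTP Basic auth
    using the OAuth app's client id and secret.
    """
    return {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
    }
```

With the refreshed token in hand, the blocks hand it to `praw` instead of a username/password pair, which is what the script-to-web authentication swap enables.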
|
||
|
|
a318832414 |
feat(docs): update dev from gitbook changes (#11740)
<!-- Clearly explain the need for these changes: -->
The gitbook branch has changes that need to be synced to dev.
### Changes 🏗️
Pull changes from gitbook into dev
<!-- Concisely describe all of the changes made in this pull request:
-->
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Migrates documentation to GitBook and removes the old MkDocs setup.
>
> - Removes MkDocs configuration and infra: `docs/mkdocs.yml`,
`docs/netlify.toml`, `docs/overrides/main.html`,
`docs/requirements.txt`, and JS assets (`_javascript/mathjax.js`,
`_javascript/tablesort.js`)
> - Updates `docs/content/contribute/index.md` to describe GitBook
workflow (gitbook branch, editing, previews, and `SUMMARY.md`)
> - Adds GitBook navigation file `docs/platform/SUMMARY.md` and a new
platform overview page `docs/platform/what-is-autogpt-platform.md`
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
|
||
|
|
7ee28197a3 |
docs(gitbook): sync documentation updates with dev branch (#11709)
## Summary Sync GitBook documentation changes from the gitbook branch to dev. This PR contains comprehensive documentation updates including new assets, content restructuring, and infrastructure improvements. ## Changes 🏗️ ### Documentation Updates - **New GitBook Assets**: Added 9 new documentation images and screenshots - Platform overview images (AGPT_Platform.png, Banner_image.png) - Feature illustrations (Contribute.png, Integrations.png, hosted.jpg, no-code.jpg, api-reference.jpg) - Screenshots and examples for better user guidance - **Content Updates**: Enhanced README.md and SUMMARY.md with improved structure and navigation - **Visual Documentation**: Added comprehensive visual guides for platform features ### Infrastructure - **Cloudflare Worker**: Added redirect handler for docs.agpt.co → agpt.co/docs migration - Complete URL mapping for 71+ redirect patterns - Handles platform blocks restructuring and edge cases - Ready for deployment to Cloudflare Workers ### Merge Conflict Resolution - **Clean merge from dev**: Successfully merged dev's major backend restructuring (server/ → api/) - **File resurrection fix**: Removed files that were accidentally resurrected during merge conflict resolution - Cleaned up BuilderActionButton.tsx (deleted in dev) - Cleaned up old PreviewBanner.tsx location (moved in dev) - Synced pnpm-lock.yaml and layout.tsx with dev's current state ## Technical Details This PR represents a careful synchronization that: 1. **Preserves all GitBook documentation work** while staying current with dev 2. **Maintains clean diff**: Only documentation-related changes remain after merge cleanup 3. **Resolves merge conflicts**: Handled major backend API restructuring without breaking docs 4. 
**Infrastructure ready**: Cloudflare Worker ready for docs migration deployment ## Files Changed - `docs/`: GitBook documentation assets and content - `autogpt_platform/cloudflare_worker.js`: Docs infrastructure for URL redirects ## Validation - ✅ All TypeScript compilation errors resolved - ✅ Pre-commit hooks passing (Prettier, TypeCheck) - ✅ Only documentation changes remain in diff vs dev - ✅ Cloudflare Worker tested with comprehensive URL mapping - ✅ No non-documentation code changes after cleanup ## Deployment Notes The Cloudflare Worker can be deployed via: ```bash # Cloudflare Dashboard → Workers → Create → Paste code → Add route docs.agpt.co/* ``` This completes the GitBook synchronization and prepares for docs site migration. --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: bobby.gaffin <bobby.gaffin@agpt.co> Co-authored-by: Bently <Github@bentlybro.com> Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com> Co-authored-by: Swifty <craigswift13@gmail.com> Co-authored-by: Ubbe <hi@ubbe.dev> Co-authored-by: Reinier van der Leer <pwuts@agpt.co> Co-authored-by: Claude <noreply@anthropic.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Lluis Agusti <hi@llu.lu> |
||
|
|
3dbc03e488 |
feat(platform): OAuth API & Single Sign-On (#11617)
We want to provide Single Sign-On for multiple AutoGPT apps that use the Platform as their backend. ### Changes 🏗️ Backend: - DB + logic + API for OAuth flow (w/ tests) - DB schema additions for OAuth apps, codes, and tokens - Token creation/validation/management logic - OAuth flow endpoints (app info, authorize, token exchange, introspect, revoke) - E2E OAuth API integration tests - Other OAuth-related endpoints (upload app logo, list owned apps, external `/me` endpoint) - App logo asset management - Adjust external API middleware to support auth with access token - Expired token clean-up job - Add `OAUTH_TOKEN_CLEANUP_INTERVAL_HOURS` setting (optional) - `poetry run oauth-tool`: dev tool to test the OAuth flows and register new OAuth apps - `poetry run export-api-schema`: dev tool to quickly export the OpenAPI schema (much quicker than spinning up the backend) Frontend: - Frontend UI for app authorization (`/auth/authorize`) - Re-redirect after login/signup - Frontend flow to batch-auth integrations on request of the client app (`/auth/integrations/setup-wizard`) - Debug `CredentialInputs` component - Add `/profile/oauth-apps` management page - Add `isOurProblem` flag to `ErrorCard` to hide action buttons when the error isn't our fault - Add `showTitle` flag to `CredentialsInput` to hide built-in title for layout reasons DX: - Add [API guide](https://github.com/Significant-Gravitas/AutoGPT/blob/pwuts/sso/docs/content/platform/integrating/api-guide.md) and [OAuth guide](https://github.com/Significant-Gravitas/AutoGPT/blob/pwuts/sso/docs/content/platform/integrating/oauth-guide.md) ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Manually verify test coverage of OAuth API tests - Test `/auth/authorize` using `poetry run oauth-tool test-server` - [x] Works - [x] Looks okay - Test `/auth/integrations/setup-wizard` using 
`poetry run oauth-tool test-server` - [x] Works - [x] Looks okay - Test `/profile/oauth-apps` page - [x] All owned OAuth apps show up - [x] Enabling/disabling apps works - [ ] ~~Uploading logos works~~ can only test this once deployed to dev #### For configuration changes: - [x] `.env.default` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) |
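The token-exchange step among the OAuth flow endpoints above can be sketched as a standard authorization-code grant. Field names follow the generic OAuth2 grant (RFC 6749); the platform's exact endpoint paths and request schema live in the PR's API/OAuth guides and may differ.

```python
# Illustrative authorization-code exchange payload (standard OAuth2 field
# names); the platform's actual request schema may differ in detail.
def build_token_exchange(
    code: str, client_id: str, client_secret: str, redirect_uri: str
) -> dict:
    """Form body a client app sends to trade an auth code for tokens."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }
```

The introspect and revoke endpoints listed above then operate on the access token this exchange returns.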
||
|
|
37b3e4e82e |
feat(blocks)!: Update Exa search block to match latest API specification (#11185)
BREAKING CHANGE: Removed deprecated use_auto_prompt field from Input
schema. Existing workflows using this field will need to be updated to
use the type field set to "auto" instead.
## Summary of Changes 📝
This PR comprehensively updates all Exa search blocks to match the
latest Exa API specification and adds significant new functionality
through the Websets API integration.
### Core API Updates 🔄
- **Migration to Exa SDK**: Replaced manual API calls with the official
`exa_py` AsyncExa SDK across all blocks for better reliability and
maintainability
- **Removed deprecated fields**: Eliminated
`use_auto_prompt`/`useAutoprompt` field (breaking change)
- **Fixed incomplete field definitions**: Corrected `user_location`
field definition
- **Added new input fields**: Added `moderation` and `context` fields
for enhanced content filtering
### Enhanced Content Settings 🛠️
- **Text field improvements**: Support both boolean and advanced object
configurations
- **New content options**:
- Added `livecrawl` settings (never, fallback, always, preferred)
- Added `subpages` support for deeper content retrieval
- Added `extras` settings for links and images
- Added `context` settings for additional contextual information
- **Updated settings**: Enhanced `highlight` and `summary`
configurations with new query and schema options
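As a rough illustration, the enhanced content settings might be expressed as follows; the key names are indicative of the options listed above, not the platform's exact schema.

```python
# Hypothetical contents configuration combining the options listed above;
# key names are indicative, not the exact schema.
contents_settings = {
    "text": {"max_characters": 2000},     # boolean or advanced object
    "livecrawl": "fallback",              # never | fallback | always | preferred
    "subpages": 2,                        # retrieve linked subpages
    "extras": {"links": 5, "images": 3},  # extra links and images
    "context": True,                      # additional contextual information
}
```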
### Comprehensive Cost Tracking 💰
- Added detailed cost tracking models:
- `CostDollars` for monetary costs
- `CostCredits` for API credit tracking
- `CostDuration` for time-based costs
- New output fields: `request_id`, `resolved_search_type`,
`cost_dollars`
- Improved response handling to conditionally yield fields based on
availability
### New Websets API Integration 🚀
Added eight new specialized blocks for Exa's Websets API:
- **`websets.py`**: Core webset management (create, get, list, delete)
- **`websets_search.py`**: Search operations within websets
- **`websets_items.py`**: Individual item management (add, get, update,
delete)
- **`websets_enrichment.py`**: Data enrichment operations
- **`websets_import_export.py`**: Bulk import/export functionality
- **`websets_monitor.py`**: Monitor and track webset changes
- **`websets_polling.py`**: Poll for updates and changes
### New Special-Purpose Blocks 🎯
- **`code_context.py`**: Code search capabilities for finding relevant
code snippets from open source repositories, documentation, and Stack
Overflow
- **`research.py`**: Asynchronous research capabilities that explore the
web, gather sources, synthesize findings, and return structured results
with citations
### Code Organization Improvements 📁
- **Removed legacy code**: Deleted `model.py` file containing deprecated
API models
- **Centralized helpers**: Consolidated shared models and utilities in
`helpers.py`
- **Improved modularity**: Each webset operation is now in its own
dedicated file
### Other Changes 🔧
- Updated `.gitignore` for better development workflow
- Updated `CLAUDE.md` with project-specific instructions
- Updated documentation in `docs/content/platform/new_blocks.md` with
error handling, data models, and file input guidelines
- Improved webhook block implementations with SDK integration
### Files Changed 📂
- **Modified (11 files)**:
- `.gitignore`
- `autogpt_platform/CLAUDE.md`
- `autogpt_platform/backend/backend/blocks/exa/answers.py`
- `autogpt_platform/backend/backend/blocks/exa/contents.py`
- `autogpt_platform/backend/backend/blocks/exa/helpers.py`
- `autogpt_platform/backend/backend/blocks/exa/search.py`
- `autogpt_platform/backend/backend/blocks/exa/similar.py`
- `autogpt_platform/backend/backend/blocks/exa/webhook_blocks.py`
- `autogpt_platform/backend/backend/blocks/exa/websets.py`
- `docs/content/platform/new_blocks.md`
- **Added (8 files)**:
- `autogpt_platform/backend/backend/blocks/exa/code_context.py`
- `autogpt_platform/backend/backend/blocks/exa/research.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_enrichment.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_import_export.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_items.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_monitor.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_polling.py`
- `autogpt_platform/backend/backend/blocks/exa/websets_search.py`
- **Deleted (1 file)**:
- `autogpt_platform/backend/backend/blocks/exa/model.py`
### Migration Guide 🚦
For users with existing workflows using the deprecated `use_auto_prompt`
field:
1. Remove the `use_auto_prompt` field from your input configuration
2. Set the `type` field to `ExaSearchTypes.AUTO` (or "auto" in JSON) to
achieve the same behavior
3. Review any custom content settings as the structure has been enhanced
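The migration in steps 1–2 boils down to a one-field change. Below is a hypothetical before/after input payload; only `use_auto_prompt` and `type` come from this PR, the other fields are filler.

```python
# Hypothetical block-input payloads showing the breaking change;
# only use_auto_prompt/type are from the PR, the query value is filler.
old_input = {
    "query": "recent robotics papers",
    "use_auto_prompt": True,  # deprecated field, now removed
}

new_input = {
    "query": "recent robotics papers",
    "type": "auto",  # replaces use_auto_prompt=True (ExaSearchTypes.AUTO)
}
```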
### Testing Recommendations ✅
- Test existing workflows to ensure they handle the breaking change
- Verify cost tracking fields are properly returned
- Test new content settings options (livecrawl, subpages, extras,
context)
- Validate websets functionality if using the new Websets API blocks
🤖 Generated with [Claude Code](https://claude.com/claude-code)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] made + ran a test agent for the blocks and flows between them
[Exa
Tests_v44.json](https://github.com/user-attachments/files/23226143/Exa.Tests_v44.json)
<!-- CURSOR_SUMMARY -->
---
> [!NOTE]
> Migrates Exa blocks to AsyncExa SDK, adds comprehensive
Websets/research/code-context blocks, updates existing
search/content/answers/similar, deletes legacy models, adjusts
tests/docs; breaking: remove `use_auto_prompt` in favor of
`type="auto"`.
>
> - **Backend — Exa integration (SDK migration & BREAKING)**:
> - Replace manual HTTP calls with `exa_py.AsyncExa` across `search`,
`similar`, `contents`, `answers`, and webhooks; richer outputs
(citations, context, costs, resolved search type).
> - BREAKING: remove `Input.use_auto_prompt`; use `type = "auto"`.
> - Centralize models/utilities in `exa/helpers.py` (content settings,
cost models, result mappers).
> - **New Blocks**:
> - **Websets**: management (`websets.py`), searches, items,
enrichments, imports/exports, monitors, polling (new files under
`exa/websets_*`).
> - **Research**: async research task create/get/wait/list
(`exa/research.py`).
> - **Code Context**: code snippet/context retrieval
(`exa/code_context.py`).
> - **Removals**:
> - Delete deprecated `exa/model.py`.
> - **Docs & DX**:
> - Update `docs/new_blocks.md` (error handling, models, file input) and
`CLAUDE.md`; ignore backend logs in `.gitignore`.
> - **Frontend Tests**:
> - Split/extend “e” block tests and improve block add robustness in
Playwright (`build.spec.ts`, `build.page.ts`).
>
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
|
||
|
|
2f8cdf62ba |
feat(backend): Standardize error handling with BlockSchemaInput & BlockSchemaOutput base class (#11257)
<!-- Clearly explain the need for these changes: --> This PR addresses the need for consistent error handling across all blocks in the AutoGPT platform. Previously, each block had to manually define an `error` field in their output schema, leading to code duplication and potential inconsistencies. Some blocks might forget to include the error field, making error handling unpredictable. ### Changes 🏗️ <!-- Concisely describe all of the changes made in this pull request: --> - **Created `BlockSchemaOutput` base class**: New base class that extends `BlockSchema` with a standardized `error` field - **Created `BlockSchemaInput` base class**: Added for consistency and future extensibility - **Updated 140+ block implementations**: Changed all block `Output` classes from `class Output(BlockSchema):` to `class Output(BlockSchemaOutput):` - **Removed manual error field definitions**: Eliminated hundreds of duplicate `error: str = SchemaField(...)` definitions - **Updated type annotations**: Changed `Block[BlockSchema, BlockSchema]` to `Block[BlockSchemaInput, BlockSchemaOutput]` throughout the codebase - **Fixed imports**: Added `BlockSchemaInput` and `BlockSchemaOutput` imports to all relevant files - **Maintained backward compatibility**: Updated `EmptySchema` to inherit from `BlockSchemaOutput` **Key Benefits:** - Consistent error handling across all blocks - Reduced code duplication (removed ~200 lines of repetitive error field definitions) - Type safety improvements with distinct input/output schema types - Blocks can still override error field with more specific descriptions when needed ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Verified `poetry run format` passes (all linting, formatting, and type checking) - [x] Tested block instantiation works correctly (MediaDurationBlock, 
UnrealTextToSpeechBlock) - [x] Confirmed error fields are automatically present in all updated blocks - [x] Verified block loading system works (successfully loads 353+ blocks) - [x] Tested backward compatibility with EmptySchema - [x] Confirmed blocks can still override error field with custom descriptions - [x] Validated core schema inheritance chain works correctly #### For configuration changes: - [x] `.env.default` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) *Note: No configuration changes were needed for this refactoring.* 🤖 Generated with [Claude Code](https://claude.ai/code) --------- Co-authored-by: Claude <noreply@anthropic.com> Co-authored-by: Lluis Agusti <hi@llu.lu> Co-authored-by: Ubbe <hi@ubbe.dev> |
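The inheritance pattern described above can be sketched with plain classes. The real `BlockSchema` types are pydantic models with `SchemaField` metadata, so the details differ; this only shows the shape of the hierarchy.

```python
# Plain-class sketch of the inheritance chain; the real classes are
# pydantic models in the backend, so details differ.
class BlockSchema:
    """Shared base for block input/output schemas."""

class BlockSchemaInput(BlockSchema):
    """Input base class, reserved for future shared fields."""

class BlockSchemaOutput(BlockSchema):
    """Output base class carrying the standardized error field."""
    error: str = ""

class MyBlockOutput(BlockSchemaOutput):
    """Example block output: inherits `error` instead of redefining it."""
    result: str = ""
```

A block that needs a more specific error description can still override the inherited field, as the PR notes.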
||
|
|
90af8f8e1a |
feat(backend): Add language fallback for YouTube transcription block (#11057)
## Problem
The YouTube transcription block would fail when attempting to transcribe videos that only had transcripts available in non-English languages. Even when usable transcripts existed in other languages, the block raised a `NoTranscriptFound` error because it only requested English transcripts.

**Example video that would fail:** https://www.youtube.com/watch?v=3AMl5d2NKpQ (only has Hungarian transcripts)

**Error message:**
```
Could not retrieve a transcript for the video https://www.youtube.com/watch?v=3AMl5d2NKpQ!
No transcripts were found for any of the requested language codes: ('en',)
For this video (3AMl5d2NKpQ) transcripts are available in the following languages:
(GENERATED)
 - hu ("Hungarian (auto-generated)")
```
## Solution
Implemented intelligent language fallback in the `TranscribeYoutubeVideoBlock.get_transcript()` method:
1. **First**, tries to fetch the English transcript (maintains backward compatibility)
2. **If English is unavailable**, lists all available transcripts and selects the first one using this priority:
   - Manually created transcripts (any language)
   - Auto-generated transcripts (any language)
3. **Only fails** if no transcripts exist at all

**Example behavior:**
```python
# Before: video with only a Hungarian transcript
get_transcript("3AMl5d2NKpQ")  # ❌ Raises NoTranscriptFound

# After: video with only a Hungarian transcript
get_transcript("3AMl5d2NKpQ")  # ✅ Returns the Hungarian transcript
```
## Changes
- **Modified** `backend/blocks/youtube.py`: Added try/except logic to fall back to any available language when English is not found
- **Added** `test/blocks/test_youtube.py`: Comprehensive test suite covering URL extraction, language fallback, transcript preferences, and error handling (7 tests)
- **Updated** `docs/content/platform/blocks/youtube.md`: Documented the language fallback behavior and transcript priority order
## Testing
- ✅ All 7 new unit tests pass
- ✅ Block integration test passes
- ✅ Full test suite: 621 passed, 0 failed (no regressions)
- ✅ Code formatting and linting pass
## Impact
This fix enables the YouTube transcription block to work with international content while maintaining full backward compatibility:
- ✅ Videos in any language can now be transcribed
- ✅ English is still preferred when available
- ✅ No breaking changes to existing functionality
- ✅ Graceful degradation to available languages

Fixes #10637
Fixes https://linear.app/autogpt/issue/OPEN-2626
<details>
<summary>Original prompt</summary>

> Issue Title: if theres only one lanague available for transcribe youtube return that langage not an error
> Issue Description: `Could not retrieve a transcript for the video https://www.youtube.com/watch?v=3AMl5d2NKpQ! This is most likely caused by: No transcripts were found for any of the requested language codes: ('en',) For this video (3AMl5d2NKpQ) transcripts are available in the following languages: (MANUALLY CREATED) None (GENERATED) - hu ("Hungarian (auto-generated)") (TRANSLATION LANGUAGES) None If you are sure that the described cause is not responsible for this error and that a transcript should be retrievable, please create an issue at https://github.com/jdepoix/youtube-transcript-api/issues. Please add which version of youtube_transcript_api you are using and provide the information needed to replicate the error. Also make sure that there are no open issues which already describe your problem!`
> you can use this video to test: https://www.youtube.com/watch?v=3AMl5d2NKpQ
> Fixes https://linear.app/autogpt/issue/OPEN-2626/if-theres-only-one-lanague-available-for-transcribe-youtube-return
>
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
>
> Comment by User :
> This comment thread is synced to a corresponding [GitHub issue](https://github.com/Significant-Gravitas/AutoGPT/issues/10637). All replies are displayed in both locations.
</details>

---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
|
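The fallback priority this PR describes can be sketched as a small, self-contained helper. This is illustrative only: `Transcript` is a stand-in for the objects returned by youtube-transcript-api's `list_transcripts()`, and `pick_transcript` is a hypothetical name, not the block's actual code.

```python
from dataclasses import dataclass


@dataclass
class Transcript:
    """Stand-in for youtube-transcript-api's Transcript object."""
    language_code: str
    is_generated: bool


def pick_transcript(available: list) -> Transcript:
    # 1. Prefer English (preserves the old behavior)
    for t in available:
        if t.language_code == "en":
            return t
    # 2. Fall back to any manually created transcript
    for t in available:
        if not t.is_generated:
            return t
    # 3. Fall back to any auto-generated transcript
    for t in available:
        if t.is_generated:
            return t
    # 4. Only fail when no transcripts exist at all
    raise LookupError("No transcripts available for this video")


# Video with only a Hungarian auto-generated transcript
only_hu = [Transcript("hu", is_generated=True)]
print(pick_transcript(only_hu).language_code)  # hu
```

With this order, a video carrying only a Hungarian auto-generated transcript resolves to `hu` instead of raising, while videos with an English transcript behave exactly as before.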
||
|
|
9bc9b53b99 |
fix(backend): Add channel ID support to SendDiscordMessageBlock for consistency with other Discord blocks (#11055)
## Problem
The `SendDiscordMessageBlock` only accepted channel names, while other
Discord blocks like `SendDiscordFileBlock` and `SendDiscordEmbedBlock`
accept both channel IDs and channel names. This inconsistency made it
difficult to use channel IDs with the message-sending block, even though
ID-based targeting is often more reliable and direct than name-based lookup.
## Solution
Updated `SendDiscordMessageBlock` to accept both channel IDs and channel
names through the `channel_name` field, matching the implementation
pattern used in other Discord blocks.
### Changes Made
1. **Enhanced channel resolution logic** to try parsing the input as a
channel ID first, then fall back to name-based search:
```python
# Try to parse as channel ID first
try:
channel_id = int(channel_name)
channel = client.get_channel(channel_id)
except ValueError:
# Not an ID, treat as channel name
# ... search guilds for matching channel name
```
2. **Updated field descriptions** to clarify the dual functionality:
- `channel_name`: Now describes that it accepts "Channel ID or channel
name"
- `server_name`: Clarified as "only needed if using channel name"
3. **Added type checking** to ensure the resolved channel can send
messages before attempting to send
4. **Updated documentation** to reflect the new capability
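Taken together, the resolution order in the steps above can be sketched with library-agnostic stand-ins. The `Guild`/`Channel` classes and the `resolve_channel` helper below are hypothetical simplifications of discord.py's real objects, not the block's actual code; the real block additionally verifies the resolved channel can receive messages before sending.

```python
from dataclasses import dataclass


@dataclass
class Channel:  # stand-in for a Discord text channel
    id: int
    name: str


@dataclass
class Guild:  # stand-in for a Discord server
    name: str
    channels: list


def resolve_channel(guilds: list, channel_name: str, server_name: str = "") -> Channel:
    """Resolve channel_name as an ID first; otherwise fall back to a
    name search, scoped to server_name when one is given."""
    try:
        channel_id = int(channel_name)  # input looks like a channel ID
    except ValueError:
        channel_id = None  # not numeric: treat as a channel name
    for guild in guilds:
        if channel_id is None and server_name and guild.name != server_name:
            continue  # name-based lookup only searches the named server
        for ch in guild.channels:
            if ch.id == channel_id or (channel_id is None and ch.name == channel_name):
                return ch
    raise LookupError(f"Channel {channel_name!r} not found")


guilds = [Guild("my-server", [Channel(123456789012345678, "general")])]
resolve_channel(guilds, "123456789012345678")        # by ID, no server needed
resolve_channel(guilds, "general", "my-server")      # by name, scoped to server
```

Because the ID path is tried first and silently falls through to the name search, existing name-based workflows keep working unchanged.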
## Backward Compatibility
✅ **Fully backward compatible**: The field name remains `channel_name`
(not renamed), and all existing workflows using channel names will
continue to work exactly as before.
✅ **New capability**: Users can now also provide channel IDs (e.g.,
`"123456789012345678"`) for more direct channel targeting.
## Testing
- All existing tests pass, including `SendDiscordMessageBlock` and all
other Discord block tests
- Implementation verified to match the pattern used in
`SendDiscordFileBlock` and `SendDiscordEmbedBlock`
- Code passes all linting, formatting, and type checking
Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10909
<details>
<summary>Original prompt</summary>
> Issue Title: SendDiscordMessage needs to take a channel id as an
option under channelname the same as the other discord blocks
> Issue Description: with how we can process the other discord blocks we
should do the same here with the identifiers being allowed to be a
channel name or id. we can't rename the field though or that will break
backwards compatibility
> Fixes
https://linear.app/autogpt/issue/OPEN-2701/senddiscordmessage-needs-to-take-a-channel-id-as-an-option-under
>
>
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
>
> Comment by User 055a3053-5ab6-449a-bcfa-990768594185:
> the ones with boxes around them need confirmed for lables but yeah its
related but not dupe
>
> Comment by User 264d7bf4-db2a-46fa-a880-7d67b58679e6:
> this might be a duplicate since there is a related ticket but not sure
>
> Comment by User :
> This comment thread is synced to a corresponding [GitHub
issue](https://github.com/Significant-Gravitas/AutoGPT/issues/10909).
All replies are displayed in both locations.
>
>
</details>
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* New Features
* Send Discord Message block now accepts a channel ID in addition to
channel name.
* Server name is only required when using a channel name.
* Improved channel detection and validation with clearer errors if the
channel isn’t found.
* Documentation
* Updated block documentation to reflect support for channel ID or name
and clarify when server name is needed.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Bently <Github@bentlybro.com>
|
||
|
|
c5b90f7b09 |
feat(platform): Simplify running of core docker services (#11113)
Co-authored-by: vercel[bot] <35613825+vercel[bot]@users.noreply.github.com> |