Introduces a new module with blocks for Google Docs operations, including reading, creating, appending, inserting, formatting, exporting, sharing, and managing public access. Updates dependencies in pyproject.toml and poetry.lock to support these features.
https://github.com/user-attachments/assets/3597366b-a9eb-4f8e-8a0a-5a0bc8ebc09b
### Changes 🏗️
Adds a set of basic Google Docs blocks, plus a dependency to use them with Markdown.
Block | Description | Key Features
-- | -- | --
**Read & Create** | |
GoogleDocsReadBlock | Read content from a Google Doc | Returns text content, title, revision ID
GoogleDocsCreateBlock | Create a new Google Doc | Title, optional initial content
GoogleDocsGetMetadataBlock | Get document metadata | Title, revision ID, locale, suggested modes
GoogleDocsGetStructureBlock | Get document structure with indexes | Flat segments or detailed hierarchy; shows start/end indexes
**Plain Text Operations** | |
GoogleDocsAppendPlainTextBlock | Append plain text to end | No formatting applied
GoogleDocsInsertPlainTextBlock | Insert plain text at position | Requires index; no formatting
GoogleDocsFindReplacePlainTextBlock | Find and replace plain text | Case-sensitive option; no formatting on replacement
**Markdown Operations** | (ideal for LLM/AI output) |
GoogleDocsAppendMarkdownBlock | Append Markdown to end | Full formatting via gravitas-md2gdocs
GoogleDocsInsertMarkdownAtBlock | Insert Markdown at position | Requires index
GoogleDocsReplaceAllWithMarkdownBlock | Replace entire doc with Markdown | Clears and rewrites
GoogleDocsReplaceRangeWithMarkdownBlock | Replace index range with Markdown | Requires start/end index
GoogleDocsReplaceContentWithMarkdownBlock | Find text and replace with Markdown | Text-based search; great for templates
**Structural Operations** | |
GoogleDocsInsertTableBlock | Insert a table | Rows/columns OR content array; optional Markdown in cells
GoogleDocsInsertPageBreakBlock | Insert a page break | Position index (0 = end)
GoogleDocsDeleteContentBlock | Delete content range | Requires start/end index
GoogleDocsFormatTextBlock | Apply formatting to text range | Bold, italic, underline, font size/color, etc.
**Export & Sharing** | |
GoogleDocsExportBlock | Export to different formats | PDF, DOCX, TXT, HTML, RTF, ODT, EPUB
GoogleDocsShareBlock | Share with specific users | Reader, commenter, writer, owner roles
GoogleDocsSetPublicAccessBlock | Set public access level | Private, anyone with link (view/comment/edit)
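Under the hood, the Markdown blocks lean on `gravitas-md2gdocs` to translate Markdown into Google Docs API requests. A minimal sketch of that flow, assuming a `to_requests(markdown)` helper and the standard `googleapiclient` Docs client (not the actual block implementation):

```python
# Hedged sketch: append Markdown to a Google Doc. The exact
# gravitas_md2gdocs.to_requests signature is an assumption.
from googleapiclient.discovery import build
import gravitas_md2gdocs

def append_markdown(creds, document_id: str, markdown: str) -> None:
    service = build("docs", "v1", credentials=creds)
    # Translate Markdown into Docs API request objects.
    requests = gravitas_md2gdocs.to_requests(markdown)
    # Apply them in a single atomic batchUpdate call.
    service.documents().batchUpdate(
        documentId=document_id, body={"requests": requests}
    ).execute()
```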
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Build, run, verify, and upload a block super test
- [x] [Google Docs Super
Agent_v8.json](https://github.com/user-attachments/files/24134215/Google.Docs.Super.Agent_v8.json)
works
---
> [!NOTE]
> Adds end-to-end Google Docs capabilities under
`backend/blocks/google/docs.py`, including rich Markdown support.
>
> - New blocks: read/create docs; plain-text
`append`/`insert`/`find_replace`/`delete`; text `format`;
`insert_table`; `insert_page_break`; `get_metadata`; `get_structure`
> - Markdown-powered blocks (via `gravitas_md2gdocs.to_requests`):
`append_markdown`, `insert_markdown_at`, `replace_all_with_markdown`,
`replace_range_with_markdown`, `replace_content_with_markdown`
> - Export and sharing: `export` (PDF/DOCX/TXT/HTML/RTF/ODT/EPUB),
`share` (user roles), `set_public_access`
> - Dependency updates: add `gravitas-md2gdocs` to `pyproject.toml` and
update `poetry.lock`
---------
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
## Summary
Added support for parsing YouTube Shorts URLs (`youtube.com/shorts/...`)
in the TranscribeYoutubeVideoBlock to extract video IDs correctly.
## Changes
- Modified the `_extract_video_id` method in `youtube.py` to handle the Shorts URL format (sketched below)
- Added test cases for YouTube Shorts URL extraction
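A minimal sketch of Shorts-aware ID extraction (the real `_extract_video_id` may be structured differently):

```python
# Hedged sketch: extract the video ID from watch, youtu.be, and Shorts URLs.
from urllib.parse import urlparse, parse_qs

def extract_video_id(url: str) -> str:
    parsed = urlparse(url)
    if parsed.path.startswith("/shorts/"):
        # youtube.com/shorts/<id>: the new case handled by this PR
        return parsed.path.split("/")[2]
    if parsed.hostname == "youtu.be":
        return parsed.path.lstrip("/")
    # Classic watch URL: youtube.com/watch?v=<id>
    return parse_qs(parsed.query)["v"][0]
```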
## Related Issue
Fixes #11500
## Test Plan
- [x] Added unit tests for YouTube Shorts URL extraction
- [x] Verified existing YouTube URL formats still work
- [x] CI should pass all existing tests
---------
Co-authored-by: Ubbe <hi@ubbe.dev>
We'll soon need a more feature-complete external API. To make way for this, I'm moving some files around so that:
- We can more easily create new versions of our external API
- The file structure of our internal API is more homogeneous
These changes are quite opinionated, but IMO in any case they're better
than the chaotic structure we have now.
### Changes 🏗️
- Move `backend/server` -> `backend/api`
- Move `backend/server/routers` + `backend/server/v2` ->
`backend/api/features`
- Change absolute sibling imports to relative imports
- Move `backend/server/v2/AutoMod` -> `backend/executor/automod`
- Combine `backend/server/routers/analytics_*test.py` ->
`backend/api/features/analytics_test.py`
- Sort OpenAPI spec file
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- CI tests
- [x] Clicking around in the app -> no obvious breakage
We want to provide Single Sign-On for multiple AutoGPT apps that use the
Platform as their backend.
### Changes 🏗️
Backend:
- DB + logic + API for OAuth flow (w/ tests)
- DB schema additions for OAuth apps, codes, and tokens
- Token creation/validation/management logic
- OAuth flow endpoints (app info, authorize, token exchange, introspect, revoke; token exchange sketched below)
- E2E OAuth API integration tests
- Other OAuth-related endpoints (upload app logo, list owned apps,
external `/me` endpoint)
- App logo asset management
- Adjust external API middleware to support auth with access token
- Expired token clean-up job
- Add `OAUTH_TOKEN_CLEANUP_INTERVAL_HOURS` setting (optional)
- `poetry run oauth-tool`: dev tool to test the OAuth flows and register
new OAuth apps
- `poetry run export-api-schema`: dev tool to quickly export the OpenAPI
schema (much quicker than spinning up the backend)
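For client apps, the token-exchange step of the flow looks roughly like this (a sketch following standard OAuth 2.0; the endpoint path and response fields are assumptions, see the API/OAuth guides below for the actual contract):

```python
# Hedged sketch of the authorization-code exchange from a client app.
# The /api/oauth/token path is an assumption, not the documented endpoint.
import requests

def exchange_code_for_tokens(
    code: str, client_id: str, client_secret: str, redirect_uri: str
) -> dict:
    resp = requests.post(
        "https://platform.example.com/api/oauth/token",
        data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": client_id,
            "client_secret": client_secret,
            "redirect_uri": redirect_uri,
        },
    )
    resp.raise_for_status()
    return resp.json()  # expected to carry access and refresh tokens
```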
Frontend:
- Frontend UI for app authorization (`/auth/authorize`)
- Re-redirect after login/signup
- Frontend flow to batch-auth integrations on request of the client app
(`/auth/integrations/setup-wizard`)
- Debug `CredentialInputs` component
- Add `/profile/oauth-apps` management page
- Add `isOurProblem` flag to `ErrorCard` to hide action buttons when the
error isn't our fault
- Add `showTitle` flag to `CredentialsInput` to hide built-in title for
layout reasons
DX:
- Add [API
guide](https://github.com/Significant-Gravitas/AutoGPT/blob/pwuts/sso/docs/content/platform/integrating/api-guide.md)
and [OAuth
guide](https://github.com/Significant-Gravitas/AutoGPT/blob/pwuts/sso/docs/content/platform/integrating/oauth-guide.md)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manually verify test coverage of OAuth API tests
- Test `/auth/authorize` using `poetry run oauth-tool test-server`
- [x] Works
- [x] Looks okay
- Test `/auth/integrations/setup-wizard` using `poetry run oauth-tool
test-server`
- [x] Works
- [x] Looks okay
- Test `/profile/oauth-apps` page
- [x] All owned OAuth apps show up
- [x] Enabling/disabling apps works
- [ ] ~~Uploading logos works~~ can only test this once deployed to dev
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
This PR adds a collection of pre-built store agents that can be loaded
into test databases for development and testing purposes.
### Changes 🏗️
- Add 17 exported agent JSON files in `backend/agents/` directory
- Add `StoreAgent_rows.csv` containing store listing metadata (titles,
descriptions, categories, images)
- Add `load_store_agents.py` script to load agents into the test
database
- Add `load-store-agents` Makefile target for easy execution
**Included Agents:**
- Flux AI Image Generator
- YouTube Transcription Scraper
- Decision Maker Lead Finder
- Smart Meeting Prep
- Automated Support Agent
- Unspirational Poster Maker
- AI Video Generator
- Automated SEO Blog Writer
- Lead Finder (Local Businesses)
- LinkedIn Post Generator
- YouTube to LinkedIn Post Converter
- Personal Newsletter
- Email Scout - Contact Finder Assistant
- YouTube Video to SEO Blog Writer
- AI Webpage Copy Improver
- Domain Name Finder
- AI Function
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run `make load-store-agents` and verify agents are loaded into the
database
- [x] Verify store listings appear correctly with metadata from CSV
- [x] Confirm no sensitive information (API keys, secrets) is included
in the exported agents
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
No configuration changes required - this only adds test data and a
loading script.
YouTube can block cloud IPs, which causes the YouTube transcribe blocks to stop working. This PR adds a Webshare proxy to get around this issue.
### Changes 🏗️
- Add a Webshare proxy to the YouTube transcribe block
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have tested this works locally using the proxy
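The wiring follows youtube-transcript-api's documented Webshare support; a minimal sketch, with the credentials threading simplified:

```python
# Hedged sketch: fetch transcripts through a Webshare residential proxy.
from youtube_transcript_api import YouTubeTranscriptApi
from youtube_transcript_api.proxies import WebshareProxyConfig

def get_transcript_via_proxy(video_id: str, username: str, password: str):
    api = YouTubeTranscriptApi(
        proxy_config=WebshareProxyConfig(
            proxy_username=username,  # from the webshare_proxy credentials
            proxy_password=password,
        )
    )
    return api.fetch(video_id)
```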
---
> [!NOTE]
> Routes YouTube transcript fetching through Webshare proxy using
user/password credentials, wiring in provider enum, settings, default
credentials, and updated tests.
>
> - **Blocks** (`backend/blocks/youtube.py`):
> - Use `WebshareProxyConfig` with `YouTubeTranscriptApi` to fetch
transcripts via proxy.
> - Add `credentials` input (`user_password` for `webshare_proxy`);
include test credentials and mocks.
> - Update method signatures: `get_transcript(video_id, credentials)`
and `run(..., *, credentials, ...)`.
> - Change description to indicate proxy usage; add logging.
> - **Integrations**:
> - Providers (`backend/integrations/providers.py`): add
`ProviderName.WEBSHARE_PROXY`.
> - Credentials store (`backend/integrations/credentials_store.py`): add
`webshare_proxy` `UserPasswordCredentials`; include in
`DEFAULT_CREDENTIALS` and conditionally in `get_all_creds`.
> - **Settings** (`backend/util/settings.py`): add secrets
`webshare_proxy_username` and `webshare_proxy_password`.
> - **Tests** (`test/blocks/test_youtube.py`): update to pass
credentials and assert proxy config; add custom-credentials test; adjust
fallback/priority tests.
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
- Resolves #11316
- Durable fix to replace #11318
### Changes 🏗️
- Expand graph execution permissions check
- Don't require library membership for execution as sub-graph
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Can run sub-agent with non-latest graph version
- [x] Can run sub-agent that is available in Marketplace but not added
to Library
This PR addresses the need for consistent error handling across all
blocks in the AutoGPT platform. Previously, each block had to manually
define an `error` field in their output schema, leading to code
duplication and potential inconsistencies. Some blocks might forget to
include the error field, making error handling unpredictable.
### Changes 🏗️
- **Created `BlockSchemaOutput` base class**: New base class that
extends `BlockSchema` with a standardized `error` field
- **Created `BlockSchemaInput` base class**: Added for consistency and
future extensibility
- **Updated 140+ block implementations**: Changed all block `Output`
classes from `class Output(BlockSchema):` to `class
Output(BlockSchemaOutput):`
- **Removed manual error field definitions**: Eliminated hundreds of
duplicate `error: str = SchemaField(...)` definitions
- **Updated type annotations**: Changed `Block[BlockSchema,
BlockSchema]` to `Block[BlockSchemaInput, BlockSchemaOutput]` throughout
the codebase
- **Fixed imports**: Added `BlockSchemaInput` and `BlockSchemaOutput`
imports to all relevant files
- **Maintained backward compatibility**: Updated `EmptySchema` to
inherit from `BlockSchemaOutput`
**Key Benefits:**
- Consistent error handling across all blocks
- Reduced code duplication (removed ~200 lines of repetitive error field
definitions)
- Type safety improvements with distinct input/output schema types
- Blocks can still override error field with more specific descriptions
when needed
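In sketch form, the pattern looks like this (field names besides `error` are illustrative):

```python
# Hedged sketch of the standardized output schema pattern.
from backend.data.block import BlockSchema
from backend.data.model import SchemaField

class BlockSchemaOutput(BlockSchema):
    # Inherited by every block Output instead of being redefined per block.
    error: str = SchemaField(description="Error message if the block fails")

class Output(BlockSchemaOutput):
    # Only block-specific fields remain; `error` comes from the base class.
    result: str = SchemaField(description="The block's result")
```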
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified `poetry run format` passes (all linting, formatting, and
type checking)
- [x] Tested block instantiation works correctly (MediaDurationBlock,
UnrealTextToSpeechBlock)
- [x] Confirmed error fields are automatically present in all updated
blocks
- [x] Verified block loading system works (successfully loads 353+
blocks)
- [x] Tested backward compatibility with EmptySchema
- [x] Confirmed blocks can still override error field with custom
descriptions
- [x] Validated core schema inheritance chain works correctly
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
*Note: No configuration changes were needed for this refactoring.*
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
Reverts the change to provider blocks for storing them in the DB.
### Changes 🏗️
- Revert the change
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have reverted the merge
## Problem
The YouTube transcription block would fail when attempting to transcribe
videos that only had transcripts available in non-English languages.
Even when usable transcripts existed in other languages, the block would
raise a `NoTranscriptFound` error because it only requested English
transcripts.
**Example video that would fail:**
https://www.youtube.com/watch?v=3AMl5d2NKpQ (only has Hungarian
transcripts)
**Error message:**
```
Could not retrieve a transcript for the video https://www.youtube.com/watch?v=3AMl5d2NKpQ!
No transcripts were found for any of the requested language codes: ('en',)
For this video (3AMl5d2NKpQ) transcripts are available in the following languages:
(GENERATED) - hu ("Hungarian (auto-generated)")
```
## Solution
Implemented intelligent language fallback in the
`TranscribeYoutubeVideoBlock.get_transcript()` method:
1. **First**, tries to fetch English transcript (maintains backward
compatibility)
2. **If English unavailable**, lists all available transcripts and
selects the first one using this priority:
- Manually created transcripts (any language)
- Auto-generated transcripts (any language)
3. **Only fails** if no transcripts exist at all
**Example behavior:**
```python
# Before: Video with only Hungarian transcript
get_transcript("3AMl5d2NKpQ") # ❌ Raises NoTranscriptFound
# After: Video with only Hungarian transcript
get_transcript("3AMl5d2NKpQ") # ✅ Returns Hungarian transcript
```
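A sketch of the fallback order (the actual block code may differ in detail):

```python
# Hedged sketch: prefer English, then manual transcripts, then generated ones.
from youtube_transcript_api import NoTranscriptFound, YouTubeTranscriptApi

def get_transcript_with_fallback(video_id: str):
    transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
    try:
        # 1. Try English first for backward compatibility.
        return transcript_list.find_transcript(["en"]).fetch()
    except NoTranscriptFound:
        # 2. Manually created transcripts sort before auto-generated ones.
        for transcript in sorted(transcript_list, key=lambda t: t.is_generated):
            return transcript.fetch()
        # 3. Nothing available at all: re-raise the original error.
        raise
```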
## Changes
- **Modified** `backend/blocks/youtube.py`: Added try/except logic to fall back to any available language when English is not found
- **Added** `test/blocks/test_youtube.py`: Comprehensive test suite
covering URL extraction, language fallback, transcript preferences, and
error handling (7 tests)
- **Updated** `docs/content/platform/blocks/youtube.md`: Documented the
language fallback behavior and transcript priority order
## Testing
- ✅ All 7 new unit tests pass
- ✅ Block integration test passes
- ✅ Full test suite: 621 passed, 0 failed (no regressions)
- ✅ Code formatting and linting pass
## Impact
This fix enables the YouTube transcription block to work with
international content while maintaining full backward compatibility:
- ✅ Videos in any language can now be transcribed
- ✅ English is still preferred when available
- ✅ No breaking changes to existing functionality
- ✅ Graceful degradation to available languages
Fixes #10637
Fixes https://linear.app/autogpt/issue/OPEN-2626
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
## Summary
Implement selective rollout of payment functionality using LaunchDarkly
feature flags to enable gradual deployment to pilot users.
- Add `ENABLE_PLATFORM_PAYMENT` flag to control credit system behavior
- Update `get_user_credit_model` to use user-specific flag evaluation
- Replace hardcoded `NEXT_PUBLIC_SHOW_BILLING_PAGE` with LaunchDarkly
flag
- Enable payment UI components only for flagged users
- Maintain backward compatibility with existing beta credit system
- Default to beta monthly credits when flag is disabled
- Fix tests to work with new async credit model function
## Key Changes
### Backend
- **Credit Model Selection**: The `get_user_credit_model()` function now
takes a `user_id` parameter and uses LaunchDarkly to determine which
credit model to return:
- Flag enabled → `UserCredit` (payment system enabled, no monthly
refills)
- Flag disabled → `BetaUserCredit` (current behavior with monthly
refills)
- **Flag Integration**: Added `ENABLE_PLATFORM_PAYMENT` flag and
integrated LaunchDarkly evaluation throughout the credit system
- **API Updates**: All credit-related endpoints now use the
user-specific credit model instead of a global instance
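A sketch of the selection logic (the flag-evaluation helper is a stand-in for the real LaunchDarkly integration):

```python
# Hedged sketch: per-user credit model selection behind a feature flag.
class UserCredit:       # payment system enabled, no monthly refills
    ...

class BetaUserCredit:   # current behavior with monthly refills
    ...

async def flag_is_enabled(flag_key: str, user_id: str) -> bool:
    ...  # assumed LaunchDarkly per-user evaluation helper

async def get_user_credit_model(user_id: str):
    if await flag_is_enabled("ENABLE_PLATFORM_PAYMENT", user_id):
        return UserCredit()
    return BetaUserCredit()
```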
### Frontend
- **Dynamic UI**: Payment-related components (billing page, wallet
refill) now show/hide based on the LaunchDarkly flag
- **Removed Environment Variable**: Replaced
`NEXT_PUBLIC_SHOW_BILLING_PAGE` with runtime flag evaluation
### Testing
- **Test Fixes**: Updated all tests that referenced the removed global
`_user_credit_model` to use proper mocking of the new async function
## Deployment Strategy
This implementation enables a controlled rollout:
1. Deploy with flag disabled (default) - no behavior change for existing
users
2. Enable flag for pilot/beta users via LaunchDarkly dashboard
3. Monitor usage and feedback from pilot users
4. Gradually expand to more users
5. Eventually enable for all users once validated
## Test Plan
- [x] Unit tests pass for credit system components
- [x] Payment UI components show/hide correctly based on flag
- [x] Default behavior (flag disabled) maintains current functionality
- [x] Flag enabled users get payment system without monthly refills
- [x] Admin credit operations work correctly
- [x] Backward compatibility maintained
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
- Rename wallet and update design
- Update tasks and add Hidden Tasks section
- Update onboarding backend code and related db migration
- Add progress bar for some tasks
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All tasks can be finished
- [x] Finished tasks add correct amount of credits
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Introduces a Notion Read Page block that fetches a page by ID via the
Notion REST API. This is a first step toward Notion integration in the
AutoGPT Platform.
Motivation: Notion was not integrated yet. I'm starting with a small block to add capability incrementally.
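At its core the block wraps a single REST call; a minimal sketch against the public Notion API (the block itself routes the token through the platform's credentials system):

```python
# Hedged sketch: fetch a Notion page by ID via the public REST API.
import requests

def read_notion_page(page_id: str, token: str) -> dict:
    resp = requests.get(
        f"https://api.notion.com/v1/pages/{page_id}",
        headers={
            "Authorization": f"Bearer {token}",
            "Notion-Version": "2022-06-28",  # pinned API version
        },
    )
    resp.raise_for_status()
    return resp.json()  # the page object, including its properties
```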
### Notes
- I used the Todoist block implementation as a reference, since I'm a beginner.
- This is my first PR here
- The block passed `docker compose run --rm rest_server pytest -q`
successfully
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
### Test plan
- [x] Ran `docker compose run --rm rest_server pytest -q
backend/blocks/test/test_block.py -k notion`
- [x] Confirmed tests passed (2 passed, 652 deselected, warnings only).
- [x] Ran poetry run format to fix linters and tests
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Our API key generation, storage, and verification system has a couple of
issues that need to be ironed out before full-scale deployment.
### Changes 🏗️
- Move from unsalted SHA256 to salted Scrypt hashing for API keys (sketched below)
- Avoid false-negative API key validation due to prefix collision
- Refactor API key management code for clarity
- [refactor(backend): Clean up API key DB & API code
(#10797)](https://github.com/Significant-Gravitas/AutoGPT/pull/10797)
- Rename models and properties in `backend.data.api_key` for clarity
- Eliminate redundant/custom/boilerplate error handling/wrapping in API
key endpoint call stack
- Remove redundant/inaccurate `response_model` declarations from API key
endpoints
Dependencies for `autogpt_libs`:
- Add `cryptography` as a dependency
- Add `pyright` as a dev dependency
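The hashing change amounts to something like this sketch using `cryptography`'s Scrypt KDF (the cost parameters are assumptions, not the values used in `autogpt_libs`):

```python
# Hedged sketch: salted Scrypt hashing and verification for API keys.
import os
from cryptography.exceptions import InvalidKey
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def hash_api_key(api_key: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per key, stored alongside the hash
    kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
    return salt, kdf.derive(api_key.encode())

def verify_api_key(api_key: str, salt: bytes, expected_hash: bytes) -> bool:
    kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
    try:
        kdf.verify(api_key.encode(), expected_hash)
        return True
    except InvalidKey:
        return False
```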
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Performing these actions through the UI (still) works:
- [x] Creating an API key
- [x] Listing owned API keys
- [x] Deleting an owned API key
- [x] Newly created API key can be used in Swagger UI
- [x] Existing API key can be used in Swagger UI
- [x] Existing API key is re-encrypted with salt on use
Moving to auto-generated frontend types caused returned block data to no longer be properly typed.
### Changes 🏗️
- Add `BlockInfo` model and `get_info` function that returns it to the
`Block` class, including costs
- Move `BlockCost` and `BlockCostType` to `block.py` to prevent circular
imports
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Endpoints using the new type work correctly
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
- Resolves https://github.com/Significant-Gravitas/AutoGPT/issues/10511
In this PR, I've added backend endpoints and a frontend UI for edit functionality on the Agent Dashboard. Users can now update their store submission if its status is `PENDING` or `APPROVED`, but not if it is `REJECTED` or `DRAFT`. Changes to a pending submission are applied to the same version, while changes to an approved submission create a new store listing version.
Backend works something like this:
<img width="866" height="832" alt="Screenshot 2025-08-15 at 9 39 02 AM"
src="https://github.com/user-attachments/assets/209c60ac-8350-43c1-ba4c-7378d95ecba7"
/>
### Changes
- I’ve updated the `StoreSubmission` view to include `video_url` and
`categories`.
- I’ve added a new frontend UI for editing submissions.
- I’ve created an endpoint for editing submissions.
- I’ve added more end-to-end tests to ensure the edit submission
functionality works as expected.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have checked manually, everything is working perfectly.
- [x] All e2e tests are also passing.
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: neo <neo.dowithless@gmail.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Lluis Agusti <hi@llu.lu>
I have added e2e tests for the agent dashboard page. These include tests like:
- dashboard page loads successfully
- submit agent button works correctly
- agent table displays data correctly
- agent table actions work correctly
I’ve also updated the e2e test script to include some static agent submissions, so I can test whether they load on the frontend.
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All tests are working perfectly locally
<img width="469" height="177" alt="Screenshot 2025-08-08 at 12 13 42 PM"
src="https://github.com/user-attachments/assets/5e37afc3-c151-476a-84de-0a06f44a0722"
/>
In this PR, I’ve added library page tests.
### Changes
I’ve added 9 tests: 8 for normal flows and 1 for checking edge cases.
Test names are something like:
- Library navigation is accessible from the navbar.
- The library page loads successfully.
- Agents are visible, and cards work correctly.
- Pagination works correctly.
- Sorting works correctly.
- Searching works correctly.
- Pagination while searching works correctly.
- Uploading an agent works correctly.
- Edge case: Search edge cases and error handling behave correctly.
Other than that, I’ve added a new utility that uses the build page to
help us create users at the start, which we could use to test the
library page.
- All tests are passing locally
<img width="514" height="465" alt="Screenshot 2025-07-12 at 11 13 41 AM"
src="https://github.com/user-attachments/assets/7a46c437-7db5-458b-b99a-4fa0d479866f"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All library tests are working locally and on CI perfectly.
This PR updates the existing E2E test data script to support the
creation of featured creators and featured agents. Previously, these
entities were not included, which limited our ability to fully test
certain flows during Playwright E2E testing.
### Changes
- Added logic to create featured creators
- Added logic to create featured agents
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All tests are passing locally after updating the data script.
Need: The Gmail integration had several parsing issues that were causing data loss and workflow incompatibilities:
1. Email recipient parsing only captured the first recipient, losing CC/BCC and multiple TO recipients
2. Email body parsing was inconsistent between blocks, sometimes showing "This email does not contain a readable body" for valid emails
3. Type mismatches between blocks caused serialization issues when connecting them in workflows (lists being converted to string representations like `"[\"email@example.com\"]"`)
### Changes 🏗️
1. Enhanced Email Model:
   - Added `cc` and `bcc` fields to capture all recipients
   - Changed the `to` field from string to list for consistency
   - Now captures all recipients instead of just the first one
2. Improved Email Parsing:
   - Updated GmailReadBlock and GmailGetThreadBlock to parse all recipients using `getaddresses()` (sketched below)
   - Unified email body parsing logic across blocks with robust multipart handling
   - Added support for HTML to plain text conversion
   - Fixed handling of emails with attachments as body content
3. Fixed Block Compatibility:
   - Updated GmailSendBlock and GmailCreateDraftBlock to accept lists for recipient fields
   - Added validation to ensure at least one recipient is provided
   - All blocks now consistently use lists for recipient fields, preventing serialization issues
4. Updated Test Data:
   - Modified all test inputs/outputs to use the new list format for recipients
   - Ensures tests reflect the new data structure
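The recipient parsing in item 2 builds on the standard library's `getaddresses()`; a minimal sketch:

```python
# Hedged sketch: capture every recipient from raw Gmail headers.
from email.utils import getaddresses

headers = {
    "To": "Alice <alice@example.com>, bob@example.com",
    "Cc": "carol@example.com",
}
# getaddresses() takes a list of raw header values and yields a
# (display_name, address) tuple for every recipient it finds.
to_addrs = [addr for _, addr in getaddresses([headers.get("To", "")])]
cc_addrs = [addr for _, addr in getaddresses([headers.get("Cc", "")])]
print(to_addrs)  # ['alice@example.com', 'bob@example.com']
print(cc_addrs)  # ['carol@example.com']
```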
### Checklist 📋
#### For code changes:
- I have clearly listed my changes in the PR description
- I have made a test plan
- I have tested my changes according to the test plan:
- Run existing Gmail block unit tests with poetry run test
- Create a workflow that reads emails with multiple recipients and
verify all TO, CC, BCC
recipients are captured
- Test email body parsing with plain text, HTML, and multipart emails
- Connect GmailReadBlock → GmailSendBlock in a workflow and verify
recipient data flows
correctly
- Connect GmailReplyBlock → GmailSendBlock and verify no serialization
errors occur
- Test sending emails with multiple recipients via GmailSendBlock
- Test creating drafts with multiple recipients via
GmailCreateDraftBlock
- Verify backwards compatibility by testing with single recipient
strings (should now
require lists)
- Create from scratch and execute an agent with at least 3 blocks
- Import an agent from file upload, and confirm it executes correctly
- Upload agent to marketplace
- Import an agent from marketplace and confirm it executes correctly
- Edit an agent from monitor, and confirm it executes correctly
### Breaking Change Note
The `to` field in GmailSendBlock and GmailCreateDraftBlock now requires a list instead of accepting both string and list. Existing workflows using strings will need to be updated to use lists (e.g., `["email@example.com"]` instead of `"email@example.com"`).
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
To allow for a simpler dev experience, the new SDK now auto discovers
providers and registers them. However, the OAuth system was still
requiring these credentials to be hardcoded in the settings object.
This PR changes that to verify the env var is present during
registration and then allows the OAuth system to load them from the env.
### Changes 🏗️
- **OAuth Registration**: Modified `ProviderBuilder.with_oauth(...)` to check that OAuth env vars exist during registration (sketched below)
- **OAuth Loading**: Updated OAuth system to load credentials from env
vars if not using secrets
- **Block Filtering**: Added `is_block_auth_configured()` function to
check if a block has valid authorization options configured at runtime
- **Test Updates**: Fixed failing SDK registry tests to properly mock
environment variables for OAuth registration
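In sketch form, the registration-time check might look like this (the builder internals and `is_block_auth_configured()` signature are assumptions):

```python
# Hedged sketch of env-var verification during provider registration.
import os

class ProviderBuilder:
    def __init__(self, name: str):
        self.name = name
        self.oauth_configured = False

    def with_oauth(self, client_id_env: str, client_secret_env: str):
        # Check the env vars exist at registration; the OAuth system later
        # loads the actual values from the environment, not from settings.
        self.oauth_configured = all(
            os.getenv(v) for v in (client_id_env, client_secret_env)
        )
        return self

def is_block_auth_configured(provider: ProviderBuilder) -> bool:
    # Blocks whose provider has no working auth option are filtered out.
    return provider.oauth_configured
```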
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified OAuth system checks that env vars exist during provider
registration
- [x] Confirmed OAuth system can use env vars directly without requiring
hardcoded secrets
- [x] Tested that blocks with unconfigured OAuth providers are filtered
out
- [x] All SDK registry tests pass with proper env var mocking
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
- OAuth providers now require their client ID and secret env vars to be
set for registration
- No changes required to `.env.example` or `docker-compose.yml`
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Overview
This PR adds comprehensive Airtable integration to the AutoGPT platform,
enabling users to seamlessly connect their Airtable bases with AutoGPT
workflows for powerful no-code automation capabilities.
## Why Airtable Integration?
Airtable is one of the most popular no-code databases used by teams for
project management, CRMs, inventory tracking, and countless other use
cases. This integration brings significant value:
- **Data Automation**: Automate data entry, updates, and synchronization
between Airtable and other services
- **Workflow Triggers**: React to changes in Airtable bases with
webhook-based triggers
- **Schema Management**: Programmatically create and manage Airtable
table structures
- **Bulk Operations**: Efficiently process large amounts of data with
batch create/update/delete operations
## Key Features
### 🔌 Webhook Trigger
- **AirtableWebhookTriggerBlock**: Listens for changes in Airtable bases
and triggers workflows
- Supports filtering by table, view, and specific fields
- Includes webhook signature validation for security
### 📊 Record Operations
- **AirtableCreateRecordsBlock**: Create single or multiple records (up
to 10 at once)
- **AirtableUpdateRecordsBlock**: Update existing records with upsert
support
- **AirtableDeleteRecordsBlock**: Delete single or multiple records
- **AirtableGetRecordBlock**: Retrieve specific record details
- **AirtableListRecordsBlock**: Query records with filtering, sorting,
and pagination
### 🏗️ Schema Management
- **AirtableCreateTableBlock**: Create new tables with custom field
definitions
- **AirtableUpdateTableBlock**: Modify table properties
- **AirtableAddFieldBlock**: Add new fields to existing tables
- **AirtableUpdateFieldBlock**: Update field properties
## Technical Implementation Details
### Authentication
- Supports both API Key and OAuth authentication methods
- OAuth implementation includes proper token refresh handling
- Credentials are securely managed through the platform's credential
system
### Webhook Security
- Added `credentials` parameter to WebhooksManager interface for proper
signature validation
- HMAC-SHA256 signature verification ensures webhook authenticity
- Webhook cursor tracking prevents duplicate event processing
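The signature check itself is plain HMAC-SHA256; a minimal sketch (Airtable's exact header and encoding format is omitted):

```python
# Hedged sketch: verify an incoming webhook payload's HMAC-SHA256 signature.
import hashlib
import hmac

def is_valid_signature(secret: bytes, payload: bytes, signature: str) -> bool:
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature)
```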
### API Integration
- Comprehensive API client (`_api.py`) with full type safety
- Proper error handling and response validation
- Support for all Airtable field types and operations
## Changes 🏗️
### Added Blocks:
- AirtableWebhookTriggerBlock
- AirtableCreateRecordsBlock
- AirtableDeleteRecordsBlock
- AirtableGetRecordBlock
- AirtableListRecordsBlock
- AirtableUpdateRecordsBlock
- AirtableAddFieldBlock
- AirtableCreateTableBlock
- AirtableUpdateFieldBlock
- AirtableUpdateTableBlock
### Modified Files:
- Updated WebhooksManager interface to support credential-based
validation
- Modified all webhook handlers to support the new interface
## Test Plan 📋
### Manual Testing Performed:
1. **Authentication Testing**
- ✅ Verified API key authentication works correctly
- ✅ Tested OAuth flow including token refresh
- ✅ Confirmed credentials are properly encrypted and stored
2. **Webhook Testing**
- ✅ Created webhook subscriptions for different table events
- ✅ Verified signature validation prevents unauthorized requests
- ✅ Tested cursor tracking to ensure no duplicate events
- ✅ Confirmed webhook cleanup on block deletion
3. **Record Operations Testing**
- ✅ Created single and batch records with various field types
- ✅ Updated records with and without upsert functionality
- ✅ Listed records with filtering, sorting, and pagination
- ✅ Deleted single and multiple records
- ✅ Retrieved individual record details
4. **Schema Management Testing**
- ✅ Created tables with multiple field types
- ✅ Added fields to existing tables
- ✅ Updated table and field properties
- ✅ Verified proper error handling for invalid field types
5. **Error Handling Testing**
- ✅ Tested with invalid credentials
- ✅ Verified proper error messages for API limits
- ✅ Confirmed graceful handling of network errors
### Security Considerations 🔒
1. **API Key Management**
- API keys are stored encrypted in the credential system
- Keys are never logged or exposed in error messages
- Credentials are passed securely through the execution context
2. **Webhook Security**
- HMAC-SHA256 signature validation on all incoming webhooks
- Webhook URLs use secure ingress endpoints
- Proper cleanup of webhooks when blocks are deleted
3. **OAuth Security**
- OAuth tokens are securely stored and refreshed
- Scopes are limited to necessary permissions
- Token refresh happens automatically before expiration
## Configuration Requirements
No additional environment variables or configuration changes are
required. The integration uses the existing credential management
system.
## Checklist 📋
#### For code changes:
- [x] I have read the [contributing
instructions](https://github.com/Significant-Gravitas/AutoGPT/blob/master/.github/CONTRIBUTING.md)
- [x] Confirmed that `make lint` passes
- [x] Confirmed that `make test` passes
- [x] Updated documentation where needed
- [x] Added/updated tests for new functionality
- [x] Manually tested all blocks with real Airtable bases
- [x] Verified backwards compatibility of webhook interface changes
#### Security:
- [x] No hard-coded secrets or sensitive information
- [x] Proper input validation on all user inputs
- [x] Secure credential handling throughout
## Changes 🏗️
- Make docker + deps cache actually work on the FE CI
- Run the E2E test data script before Playwright
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI is faster in repeated runs ( _uses cache_ )
- [x] Test data script runs successfully
### For configuration changes:
None
Currently, we only create a library entry for the top-most graph when importing a graph from an exported file.
This can cause complications, as there is no way to remove its library entry.
### Changes 🏗️
Create the library entry for all the subgraphs during the import
process.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Export an agent with subgraphs and import it back.
### Changes
- Introduced a new script to generate test data for end-to-end (E2E)
tests using API functions, ensuring compatibility with future model
changes.
- The script creates test users, agent blocks, graphs, profiles, library
agents, presets, API keys, and store submissions.
- Utilizes external services for image and video URLs, and includes
error handling for data creation processes.
- Provides a summary of created data upon completion, enhancing the
testing framework for the AutoGPT platform.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test scripts are working perfectly and not breaking anything. Data
is also correctly visible in the database.
This PR adds Excel file support to CSV processing and enhances text file
reading capabilities.
### Changes 🏗️
**ReadSpreadsheetBlock (formerly ReadCsvBlock):**
- Renamed `ReadCsvBlock` to `ReadSpreadsheetBlock` for better clarity
- Added Excel file support (.xlsx, .xls) with automatic conversion to CSV using pandas (sketched below)
- Renamed the `file_in` parameter to `file_input` for consistency
- Excel files are automatically detected by extension and converted to
CSV format
- Maintains all existing CSV processing functionality (delimiters,
headers, etc.)
- Graceful error handling when pandas library is not available
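A sketch of the extension-based conversion described above (simplified; the block works on platform-managed media files rather than raw paths):

```python
# Hedged sketch: convert Excel input to CSV text, degrading gracefully
# when pandas is not installed.
def to_csv_text(file_path: str) -> str:
    if file_path.lower().endswith((".xlsx", ".xls")):
        try:
            import pandas as pd
        except ImportError as e:
            raise RuntimeError("Excel support requires pandas") from e
        # Read the first sheet and re-serialize it as CSV.
        return pd.read_excel(file_path).to_csv(index=False)
    with open(file_path, encoding="utf-8") as f:
        return f.read()
```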
**FileReadBlock:**
- Enhanced text file reading with advanced chunking capabilities
- Added parameters: `skip_size`, `skip_rows`, `row_limit`, `size_limit`,
`delimiter`
- Supports both character-based and row-based processing
- Chunked output for large files based on size limits
- Proper file handling with UTF-8 and latin-1 encoding fallbacks
- Uses `store_media_file` for secure file processing (URLs, data URIs,
local paths)
- Fixed test input to use data URI instead of non-existent file
**General Improvements:**
- Consistent parameter naming across blocks (`file_input`)
- Enhanced error handling and validation
- Comprehensive test coverage
- All existing functionality preserved
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Both ReadSpreadsheetBlock and FileReadBlock instantiate correctly
- [x] ReadSpreadsheetBlock processes CSV data with existing
functionality
- [x] FileReadBlock reads text files with data URI input
- [x] All block tests pass (457 passed, 83 skipped)
- [x] No linting errors in modified files
- [x] Excel support gracefully handles missing pandas dependency
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
*Note: No configuration changes required for this PR.*
The `GmailReadBlock._get_email_body()` method was only inspecting the
top-level payload and a single `text/plain` part, causing it to return
the fallback string "This email does not contain a text body." for most
Gmail messages. This occurred because Gmail messages are typically
wrapped in `multipart/alternative` or other multipart containers, which
the original implementation couldn't handle.
This critical issue made the Gmail integration unusable for reading
email body content, as virtually every real Gmail message uses multipart
MIME structures.
### Changes
#### Core Implementation:
- **Replaced simple `_get_email_body()` with recursive multipart
parser** that can walk through nested MIME structures
- **Added `_walk_for_body()` method** for recursive traversal of email
parts with depth limiting (max 10 levels)
- **Implemented safe base64 decoding** with automatic padding correction
in `_decode_base64()`
- **Added attachment body support** via `_download_attachment_body()`
for emails where body content is stored as attachments
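The padding fix is small but load-bearing; a sketch of the safe decode (names are illustrative):

```python
# Hedged sketch: padding-safe decoding of Gmail's URL-safe base64 bodies.
import base64

def decode_base64(data: str) -> str:
    # Gmail often omits trailing '=' padding; restore it so the length
    # is a multiple of four before decoding.
    data += "=" * (-len(data) % 4)
    return base64.urlsafe_b64decode(data).decode("utf-8", errors="replace")
```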
#### Email Format Support:
- **HTML to text conversion** using `html2text` library for HTML-only
emails
- **Multipart/alternative handling** with preference for `text/plain`
over `text/html`
- **Nested multipart structure support** (e.g., `multipart/mixed`
containing `multipart/alternative`)
- **Single-part email support** (maintains backward compatibility)
#### Dependencies & Testing:
- **Added `html2text = "^2024.2.26"`** to `pyproject.toml` for HTML
conversion
- **Created comprehensive unit tests** in `test/blocks/test_gmail.py`
covering all email types and edge cases
- **Added error handling and graceful fallbacks** for malformed data and
missing dependencies
#### Security & Performance:
- **Recursion depth limiting** prevents infinite loops on malformed
email structures
- **Exception handling** ensures graceful degradation when API calls
fail
- **Efficient tree traversal** with early returns for better performance
### Checklist
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<details>
<summary>Test Plan</summary>
- **Single-part text/plain emails** - Verified correct extraction of
plain text content
- **Multipart/alternative emails** - Tested preference for plain text
over HTML when both available
- **HTML-only emails** - Confirmed HTML to text conversion works
correctly
- **Nested multipart structures** - Tested deeply nested
`multipart/mixed` containing `multipart/alternative`
- **Attachment-based body content** - Verified downloading and decoding
of body stored as attachments
- **Base64 padding edge cases** - Tested malformed base64 data with
missing padding
- **Recursion depth limits** - Confirmed protection against infinite
recursion
- **Error handling scenarios** - Tested graceful fallbacks for API
failures and missing dependencies
- **Backward compatibility** - Ensured existing functionality remains
unchanged for edge cases
- **Integration testing** - Ran standalone verification script with 100%
test pass rate
</details>
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Configuration Changes</summary>
- Added `html2text` dependency to `pyproject.toml` - no environment or
infrastructure changes required
- No changes to ports, services, secrets, or databases
- Fully backward compatible with existing Gmail API configuration
</details>
---------
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
### Summary
Optimizes performance of the platform's store and creator functionality by adding targeted database indexes and implementing materialized views to reduce query execution time.
### Changes 🏗️
**Database Performance Optimizations:**
- Added strategic database indexes for `StoreListing`,
`StoreListingVersion`, `StoreListingReview`, `AgentGraphExecution`, and
`Profile` tables
- Implemented materialized views (`mv_agent_run_counts`,
`mv_review_stats`) to cache expensive aggregation queries
- Optimized `StoreAgent` and `Creator` views to use materialized views
and improved query patterns
- Added automated refresh function with 15-minute scheduling for
materialized views (when pg_cron extension is available)
**Key Performance Improvements:**
- Filtered indexes on approved store listings to speed up marketplace
queries
- GIN index on categories for faster category-based searches
- Composite indexes for common query patterns (e.g., listing + version
lookups)
- Pre-computed agent run counts and review statistics to eliminate
expensive aggregations
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified migration runs successfully without errors
- [x] Confirmed materialized views are created and populated correctly
- [x] Tested StoreAgent and Creator view queries return expected results
- [x] Validated automatic refresh function works properly
- [x] Confirmed rollback migration successfully removes all changes
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Note:** No configuration changes were required as this is purely a
database schema optimization.
## Block Development SDK - Simplifying Block Creation
### Problem
Currently, creating a new block requires manual updates to **5+ files**
scattered across the codebase:
- `backend/data/block_cost_config.py` - Manually add block costs
- `backend/integrations/credentials_store.py` - Add default credentials
- `backend/integrations/providers.py` - Register new providers
- `backend/integrations/oauth/__init__.py` - Register OAuth handlers
- `backend/integrations/webhooks/__init__.py` - Register webhook
managers
This creates significant friction for developers, increases the chance
of configuration errors, and makes the platform difficult to scale.
### Solution
This PR introduces a **Block Development SDK** that provides:
- Single import for all block development needs: `from backend.sdk
import *`
- Automatic registration of all block configurations
- Zero external file modifications required
- Provider-based configuration with inheritance
### Changes 🏗️
#### 1. **New SDK Module** (`backend/sdk/`)
- **`__init__.py`**: Unified exports of 68+ block development components
- **`registry.py`**: Central auto-registration system for all block
configurations
- **`builder.py`**: `ProviderBuilder` class for fluent provider
configuration
- **`provider.py`**: Provider configuration management
- **`cost_integration.py`**: Automatic cost application system
#### 2. **Provider Builder Pattern**
```python
# Configure once, use everywhere
my_provider = (
    ProviderBuilder("my-service")
    .with_api_key("MY_SERVICE_API_KEY", "My Service API Key")
    .with_base_cost(5, BlockCostType.RUN)
    .build()
)
```
#### 3. **Automatic Cost System**
- Provider base costs automatically applied to all blocks using that
provider
- Override with `@cost` decorator for block-specific pricing
- Tiered pricing support with cost filters
#### 4. **Dynamic Provider Support**
- Modified `ProviderName` enum to accept any string via `_missing_`
method
- No more manual enum updates for new providers
#### 5. **Application Integration**
- Added `sync_all_provider_costs()` to `initialize_blocks()` for
automatic cost registration
- Maintains full backward compatibility with existing blocks
#### 6. **Comprehensive Examples** (`backend/blocks/examples/`)
- `simple_example_block.py` - Basic block structure
- `example_sdk_block.py` - Provider with credentials
- `cost_example_block.py` - Various cost patterns
- `advanced_provider_example.py` - Custom API clients
- `example_webhook_sdk_block.py` - Webhook configuration
#### 7. **Extensive Testing**
- 6 new test modules with 30+ test cases
- Integration tests for all SDK features
- Cost calculation verification
- Provider registration tests
### Before vs After
**Before SDK:**
```python
# 1. Multiple complex imports
from backend.data.block import Block, BlockCategory, BlockOutput
from backend.data.model import SchemaField, CredentialsField
# ... many more imports
# 2. Update block_cost_config.py
BLOCK_COSTS[MyBlock] = [BlockCost(...)]
# 3. Update credentials_store.py
DEFAULT_CREDENTIALS.append(...)
# 4. Update providers.py enum
# 5. Update oauth/__init__.py
# 6. Update webhooks/__init__.py
```
**After SDK:**
```python
from backend.sdk import *

# Everything configured in one place
my_provider = (
    ProviderBuilder("my-service")
    .with_api_key("MY_API_KEY", "My API Key")
    .with_base_cost(10, BlockCostType.RUN)
    .build()
)

class MyBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = my_provider.credentials_field()
        data: String = SchemaField(description="Input data")

    class Output(BlockSchema):
        result: String = SchemaField(description="Result")

# That's it! No external files to modify
```
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created new blocks using SDK pattern with provider configuration
- [x] Verified automatic cost registration for provider-based blocks
- [x] Tested cost override with @cost decorator
- [x] Confirmed custom providers work without enum modifications
- [x] Verified all example blocks execute correctly
- [x] Tested backward compatibility with existing blocks
- [x] Ran all SDK tests (30+ tests, all passing)
- [x] Created blocks with credentials and verified authentication
- [x] Tested webhook block configuration
- [x] Verified application startup with auto-registration
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
(no changes needed)
- [x] `docker-compose.yml` is updated or already compatible with my
changes (no changes needed)
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
### Impact
- **Developer Experience**: Block creation time reduced from hours to
minutes
- **Maintainability**: All block configuration in one place
- **Scalability**: Support hundreds of blocks without enum updates
- **Type Safety**: Full IDE support with proper type hints
- **Testing**: Easier to test blocks in isolation
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Currently, we don't have a secure way to pass Authorization headers when
calling the `SendWebRequestBlock`.
This hinders the integration of third-party applications that do not yet
have native block support.
### Changes 🏗️
Add Host-scoped credentials support for the newly introduced
SendAuthenticatedWebRequestBlock.
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/0d3d577a-2b9b-4f0f-9377-0e00a069ba37"
/>
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/a59b9f16-c89c-453d-a628-1df0dfd60fb5"
/>
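Conceptually, a host-scoped credential is attached only to requests whose host matches its scope. A minimal sketch of the idea, with illustrative names and storage format (not the block's actual implementation):
```python
from urllib.parse import urlparse

def apply_host_scoped_credential(
    url: str, headers: dict[str, str], host_credentials: dict[str, str]
) -> dict[str, str]:
    # Attach the Authorization header only when the request host exactly
    # matches the host the credential is scoped to, so the secret can
    # never leak to a different host through the same block.
    host = urlparse(url).hostname or ""
    token = host_credentials.get(host)
    return {**headers, "Authorization": f"Bearer {token}"} if token else headers
```
With a credential scoped to `api.openai.com`, a request to `https://api.openai.com/v1/images/edits` gets the header, while requests to any other host never see the secret.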
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Uses `https://api.openai.com/v1/images/edits` through
SendWebRequestBlock by passing the API key through host-scoped
credentials.
An anti-virus scan is added to each file upload step on the platform
before the file is sent to cloud storage or local machine storage.
### Changes 🏗️
* Added ClamAV service
* Added AV file scan on each upload step
* Added tests & documentation
* Made the step mandatory even in local development
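The scan itself boils down to streaming the uploaded bytes to the ClamAV daemon before storage. A minimal sketch using the `clamd` client (the service host/port and the helper name are assumptions):
```python
import io

import clamd  # Python client for the ClamAV daemon

def scan_or_reject(data: bytes) -> None:
    # Host/port assumed to match the ClamAV service added in docker-compose.
    cd = clamd.ClamdNetworkSocket(host="clamav", port=3310)
    status, signature = cd.instream(io.BytesIO(data))["stream"]
    if status == "FOUND":
        raise ValueError(f"Upload rejected: virus detected ({signature})")
```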
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Tried using FileUploadBlock & AgentFileInputBlock
## Changes 🏗️
The test data script is not working locally; this should fix it 🤞🏽
- Fixed `agentId` → `agentGraphId` field references in preset matching
logic
- Fixed `agentId` → `agentGraphId` field references in store listing
graph lookup
- Added graph uniqueness logic to prevent duplicate library agents per
user
- Improved data consistency by ensuring proper foreign key relationships
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified script runs without database schema errors
- [x] Confirmed foreign key relationships are properly maintained
- [x] Tested that library agents use unique graphs per user
- [x] Validated preset matching uses correct field references
This pull request adds support for setting up (webhook-)triggered agents
in the Library. It contains changes throughout the entire stack to make
everything work in the various phases of a triggered agent's lifecycle:
setup, execution, updates, deletion.
Setting up agents with webhook triggers was previously only possible in
the Builder, limiting their use to the agent's creator. To make this
work in the Library, this change stores the trigger information on the
previously introduced `AgentPreset` instead of on the graph's nodes, to
which only the graph's creator has access.
- Initial ticket: #10111
- Builds on #9786
### Changes 🏗️
Frontend:
- Amend the Library's `AgentRunDraftView` to handle creating and editing
Presets
- Add `hideIfSingleCredentialAvailable` parameter to `CredentialsInput`
- Add multi-select support to `TypeBasedInput`
- Add Presets section to `AgentRunsSelectorList`
- Amend `AgentRunSummaryCard` for use for Presets
- Add `AgentStatusChip` to display general agent status (for now: Active
/ Inactive / Error)
- Add Preset loading logic and create/update/delete handlers logic to
`AgentRunsPage`
- Rename `IconClose` to `IconCross`
API:
- Add `LibraryAgent` properties `has_external_trigger`,
`trigger_setup_info`, `credentials_input_schema`
- Add `POST /library/agents/{library_agent_id}/setup_trigger` endpoint
- Remove redundant parameters from `POST
/library/presets/{preset_id}/execute` endpoint
Backend:
- Add `POST /library/agents/{library_agent_id}/setup_trigger` endpoint
- Extract non-node-related logic from `on_node_activate` into
`setup_webhook_for_block`
- Add webhook-related logic to `update_preset` and `delete_preset`
endpoints
- Amend webhook infrastructure to work with AgentPresets
- Add preset trigger support to webhook ingress endpoint
- Amend executor stack to work with passed-in node input
(`nodes_input_masks`, generalized from `node_credentials_input_map`;
shape sketched after this list)
- Amend graph validation to work with passed-in node input
- Add `AgentPreset`->`IntegrationWebhook` relation
- Add `WebhookWithRelations` model
- Change behavior of `BaseWebhooksManager.get_manual_webhook(..)` to
avoid unnecessary changes to the webhook URL: ignore `events` when
finding a matching webhook, and update its `events` if necessary
- Fix & improve `AgentPreset` API, models, and DB logic
- Add `isDeleted` filter to get/list queries
- Add `user_id` attribute to `LibraryAgentPreset` model
- Add separate `credentials` property to `LibraryAgentPreset` model
- Fix `library_db.update_preset(..)` replacement of existing
`InputPresets`
- Make `library_db.update_preset(..)` more usage-friendly with separate
parameters for updateable properties
- Add `user_id` checks to various DB functions
- Fix error handling in various endpoints
- Fix cache race condition on `load_webhook_managers()`
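For clarity, a sketch of the `nodes_input_masks` shape as assumed here (illustrative, not taken verbatim from the code):
```python
from typing import Any

# Per-node input overrides, applied on top of each node's stored inputs
# at execution time. Keys and values below are made up for illustration.
nodes_input_masks: dict[str, dict[str, Any]] = {
    "node-id-1": {"credentials": {"id": "...", "provider": "github"}},
    "node-id-2": {"spreadsheet_id": "..."},
}
# node_credentials_input_map only carried credentials; the generalized
# mask lets a preset pre-fill arbitrary node inputs as well.
```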
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Test existing functionality
- [x] Auto-setup and -teardown of webhooks on save in the builder still
works
- [x] Running an agent normally from the Library still works
- Test new functionality
- [x] Setting up a trigger in the Library
- [x] Updating a trigger in the Library
- [x] Disabling and re-enabling a trigger in the Library
- [x] Deleting a trigger in the Library
- [x] Triggers set up in the Library result in a new run when the
webhook receives a payload
This change introduces async execution for blocks and the execution
engine. Parallelism is now achieved through single-process asynchronous
execution instead of process concurrency.
### Changes 🏗️
* Support async execution in the graph executor
* Remove process creation for node execution
* Update all blocks to support async execution
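A hedged sketch of what the per-block migration looks like, assuming a block's `run` yields `(output_name, value)` pairs (`do_work_async` is hypothetical):
```python
class MyBlock(Block):
    # Before: a synchronous generator, parallelized via worker processes
    # def run(self, input_data: Input, **kwargs) -> BlockOutput:
    #     yield "result", do_work(input_data.data)

    # After: an async generator, interleaved on the single event loop
    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
        yield "result", await do_work_async(input_data.data)
```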
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual graph executions, tested many of the impacted blocks.
This pull request introduces a comprehensive backend testing guide and
adds new tests for analytics logging and various API endpoints, focusing
on snapshot testing. It also includes corresponding snapshot files for
these tests. Below are the most significant changes:
### Documentation Updates:
* Added a detailed `TESTING.md` file to the backend, providing a guide
for running tests, snapshot testing, writing API route tests, and best
practices. It includes examples for mocking, fixtures, and CI/CD
integration.
### Analytics Logging Tests:
* Implemented tests for logging raw metrics and analytics in
`analytics_test.py`, covering success scenarios, various input values,
invalid requests, and complex nested data. These tests utilize snapshot
testing for response validation.
* Added snapshot files for analytics logging tests, including responses
for success cases, various metric values, and complex analytics data.
[[1]](diffhunk://#diff-654bc5aa1951008ec5c110a702279ef58709ee455ba049b9fa825fa60f7e3869R1-R3)
[[2]](diffhunk://#diff-e0a434b107abc71aeffb7d7989dbfd8f466b5e53f8dea25a87937ec1b885b122R1-R3)
[[3]](diffhunk://#diff-dd0bc0b72264de1a0c0d3bd0c54ad656061317f425e4de461018ca51a19171a0R1-R3)
[[4]](diffhunk://#diff-63af007073db553d04988544af46930458a768544cabd08412265e0818320d11R1-R30)
### Snapshot Files for API Endpoints:
* Added snapshot files for various API endpoint tests, such as:
- Graph-related operations (`graphs_get_single_response`,
`graphs_get_all_response`, `blocks_get_all_response`).
[[1]](diffhunk://#diff-b25dba271606530cfa428c00073d7e016184a7bb22166148ab1726b3e113dda8R1-R29)
[[2]](diffhunk://#diff-1054e58ec3094715660f55bfba1676d65b6833a81a91a08e90ad57922444d056R1-R31)
[[3]](diffhunk://#diff-cfd403ab6f3efc89188acaf993d85e6f792108d1740c7e7149eb05efb73d918dR1-R14)
- User-related operations (`auth_get_or_create_user_response`,
`auth_update_email_response`).
[[1]](diffhunk://#diff-49e65ab1eb6af4d0163a6c54ed10be621ce7336b2ab5d47d47679bfaefdb7059R1-R5)
[[2]](diffhunk://#diff-ac1216f96878bd4356454c317473654d5d5c7c180125663b80b0b45aa5ab52cbR1-R3)
- Credit-related operations (`credits_get_balance_response`,
`credits_get_auto_top_up_response`, `credits_top_up_request_response`).
[[1]](diffhunk://#diff-189488f8da5be74d80ac3fb7f84f1039a408573184293e9ba2e321d535c57cddR1-R3)
[[2]](diffhunk://#diff-ba3c4a6853793cbed24030cdccedf966d71913451ef8eb4b2c4f426ef18ed87aR1-R4)
[[3]](diffhunk://#diff-43d7daa0c82070a9b6aee88a774add8e87533e630bbccbac5a838b7a7ae56a75R1-R3)
- Graph execution and deletion (`blocks_execute_response`,
`graphs_delete_response`).
[[1]](diffhunk://#diff-a2ade7d646ad85a2801e7ff39799a925a612548a1cdd0ed99b44dd870d1465b5R1-R12)
[[2]](diffhunk://#diff-c0d1cd0a8499ee175ce3007c3a87ba5f3235ce02d38ce837560b36a44fdc4a22R1-R3)
## Summary
- add pytest-snapshot to backend dev requirements
- snapshot server route response JSONs
- mention how to update stored snapshots
## Testing
- `poetry run format`
- `poetry run test`
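For reference, a minimal sketch of the snapshot pattern using the `pytest-snapshot` plugin's `snapshot` fixture (route path and snapshot directory are illustrative):
```python
import json

def test_get_balance(snapshot, test_client):
    snapshot.snapshot_dir = "snapshots"
    response = test_client.get("/credits")  # route path illustrative
    # Serialize deterministically so the stored snapshot diffs cleanly
    snapshot.assert_match(
        json.dumps(response.json(), indent=2, sort_keys=True),
        "credits_get_balance_response.json",
    )
```
After an intentional response change, stored snapshots can be refreshed with `poetry run pytest --snapshot-update`.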
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] run poetry run test
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
- Part of #9307
- ❗ Blocks #9541
### Changes 🏗️
Backend:
- Fix+improve `GET /library/presets` (`list_presets`) endpoint
- Fix pagination
- Add `graph_id` filter parameter
- Allow partial preset updates: `PUT /presets/{preset_id}` -> `PATCH
/presets/{preset_id}`
- Allow creating preset from graph execution through `POST /presets`
- Clean up models & DB functions
- Split `upsert_preset` into `create_preset` + `update_preset`
- Add `LibraryAgentPresetUpdatable`
- Replace `CreateLibraryAgentPresetRequest` with
`LibraryAgentPresetCreatable`
- Use `LibraryAgentPresetCreatable` as base class for
`LibraryAgentPreset`
- Remove redundant `set_is_deleted_for_library_agent(..)`
- Improve log statements
- Improve DB statements (e.g. by using unique keys where possible)
Frontend:
- Add timestamp parsing logic to library agent preset endpoints
- Brand `LibraryAgentPreset.id` + references
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI green
- Since these changes don't affect existing front-end functionality, no
additional testing is needed.
Suppose we have a pin of type `list[list[int]]`, and we want to insert
a new value directly at the first index of the first list (e.g.
`list[0][0] = X`) through a dynamic pin. This is translated into
`list_$_0_$_0`, which the system does not currently support.
### Changes 🏗️
Add support for nested dynamic pins for list, object, and dict.
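An illustrative sketch of how such a pin name can be resolved into a nested write (not the actual implementation):
```python
def apply_dynamic_pin(target: dict, pin_name: str, value) -> None:
    # Split "list_$_0_$_0" into base key "list" and path [0, 0],
    # then walk down and write at the addressed position.
    base, *path = pin_name.split("_$_")
    keys = [int(k) if k.isdigit() else k for k in path]
    obj = target[base]
    for key in keys[:-1]:
        obj = obj[key]
    obj[keys[-1]] = value

data = {"list": [[1, 2], [3, 4]]}
apply_dynamic_pin(data, "list_$_0_$_0", 9)
assert data["list"][0][0] == 9
```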
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] lots of unit tests
- [x] Tried inserting the value directly into the `value` nested field
on the Google Sheets Write block.
<img width="371" alt="image"
src="https://github.com/user-attachments/assets/0a5e7213-b0e0-4fce-9e89-b39f7a583582"
/>
- Resolves #9987
### Changes 🏗️
- Split `pin_url(..)` out of `validate_url(..)` and call
`extra_url_validator` in between
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] GitHub Read Pull Request Block works with "Include PR Changes"
enabled
The Library Agent credentials UX (#9789) currently doesn't work for
sub-graphs.
### Changes 🏗️
- Include sub-graphs in generating `Graph.credentials_input_schema`
- Propagate `node_credentials_input_map` into `AgentExecutionBlock`
executions
- Fix: also apply `node_credentials_input_map` in `_enqueue_next_nodes`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Import a graph with sub-graphs that need credentials
- Run this agent from the Library
- [x] -> Should work
Using sync code in an async route often blocks the event loop, which
impacts stability.
The current RPC system only provides a synchronous client for calling
service endpoints.
The scope of this PR is to provide a fully decoupled signature between
client and server, allowing the client to mix and match async and sync
calls in client code without changing the async/sync nature of the
server.
### Changes 🏗️
* Add support for flexible async/sync RPC client.
* Migrate scheduler client to all-async client.
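A hedged sketch of the decoupled shape (names are illustrative, not the actual RPC API): the client exposes both a sync and an async option over the same server endpoint, so callers pick per call site:
```python
import asyncio

class ServiceClient:
    def _request(self, method: str, **kwargs):
        ...  # plain blocking call to the service endpoint

    def call(self, method: str, **kwargs):
        # Sync option: fine in worker threads and sync code paths.
        return self._request(method, **kwargs)

    async def acall(self, method: str, **kwargs):
        # Async option: run the blocking call in a thread so the
        # event loop is never blocked, regardless of the server side.
        return await asyncio.to_thread(self._request, method, **kwargs)
```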
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Scheduler route test.
- [x] Modified service_test.py
- [x] Run normal agent executions
<!-- Clearly explain the need for these changes: -->
We need a way to refund people who spend money on agents without manual
DB actions.
### Changes 🏗️
- Adds a bunch of refund functionality for users
- Adds reasons and admin ID for actions
- Adds admin support to the DB manager
- Adds admin panel UI for refunds
- Cleans up pagination controls
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test by importing dev db as baseline
- [x] Add transactions on top for "refund", and make sure all existing
transactions work
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Currently, execution tasks are not properly distributed between
executors because we have to send each execution request to a specific
execution server.
The execution manager now accepts execution requests from the message
queue. Thus, we can remove the synchronous RPC system from this service,
let it focus on executing agents, and not spare any process for an HTTP
API interface.
This also reduces the risk of the execution service being too busy to
accept new execution requests.
### Changes 🏗️
* Remove the RPC system in Agent Executor
* Allow cancellation of an execution that is still waiting in the queue
(by preventing it from being executed).
* Make a unified helper for adding an execution request to the system
and move other execution-related helper functions into
`executor/utils.py`.
* Remove non-DB connections (Redis / RabbitMQ) in the Database Manager
and let clients manage these themselves.
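A minimal sketch of the queue-driven intake using `aio_pika` (the queue name, connection URL, and the `is_cancelled`/`execute_graph` helpers are assumptions for illustration):
```python
import json

import aio_pika

async def consume_execution_requests() -> None:
    connection = await aio_pika.connect_robust("amqp://guest:guest@rabbitmq/")
    async with connection:
        channel = await connection.channel()
        await channel.set_qos(prefetch_count=1)
        queue = await channel.declare_queue("graph_executions", durable=True)
        async with queue.iterator() as messages:
            async for message in messages:
                async with message.process():  # ack on successful handling
                    request = json.loads(message.body)
                    if is_cancelled(request["graph_exec_id"]):
                        continue  # cancelled while queued: never executed
                    await execute_graph(request)
```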
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Existing CI, some agent runs