Mirror of https://github.com/Significant-Gravitas/AutoGPT.git
Synced 2026-04-30 03:00:41 -04:00
Latest commit: cf2dccd29c1d593e0870fbb8ce259451fbd44f20
7049 commits

cf2dccd29c | part-1

c5d1a96790 | feat(frontend): implement infinite scrolling for LibraryAgentList component

- Added an InfiniteScroll component to LibraryAgentList for an improved agent loading experience.
- Removed the deprecated scroll threshold logic from useLibraryAgentList.
- Updated the data fetching logic to handle pagination and loading states more effectively.
- Cleaned up unused variables and improved code readability.

This enhancement allows for a smoother user experience when browsing agents, as more agents are loaded dynamically as the user scrolls.

1c3fa804d4 | feat(backend): add timeout guard for locked_transaction used for credit transactions (#10528)

## Summary
This PR adds a timeout guard to the `locked_transaction` function used for credit transactions, to prevent indefinite blocking and improve reliability.

## Changes
- Modified `locked_transaction` in `/backend/backend/data/db.py` to add proper timeout handling
- Set `lock_timeout` and `statement_timeout` to prevent indefinite blocking
- Updated the function signature to use a default timeout parameter
- Added a comprehensive docstring explaining the locking mechanism

## Motivation
The previous implementation could block indefinitely if a lock couldn't be acquired, which could cause issues in production environments, especially for critical credit transactions.

## Testing
- Existing tests pass
- The timeout mechanism ensures transactions won't hang indefinitely
- Advisory locks are properly released on commit/rollback

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>

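The locking mechanism above can be sketched roughly as follows. This is a minimal illustration, not the actual code: the function name, the 5-second default, and the statement ordering are assumptions; the real implementation lives in `backend/backend/data/db.py`.

```python
# Sketch of a timeout-guarded advisory-lock transaction (names and defaults
# are assumptions, not the platform's real implementation).

def locked_transaction_sql(lock_key: int, timeout_seconds: float = 5.0) -> list[str]:
    """Statements a timeout-guarded locked transaction would run first.

    SET LOCAL scopes both timeouts to the current transaction, so they are
    discarded automatically on commit or rollback, and pg_advisory_xact_lock
    is released the same way.
    """
    timeout_ms = int(timeout_seconds * 1000)
    return [
        f"SET LOCAL lock_timeout = '{timeout_ms}ms'",       # fail fast instead of blocking on the lock
        f"SET LOCAL statement_timeout = '{timeout_ms}ms'",  # bound every statement in the transaction
        f"SELECT pg_advisory_xact_lock({lock_key})",        # serialize concurrent credit transactions
    ]

statements = locked_transaction_sql(12345)
```

Because both settings use `SET LOCAL`, the guard cannot leak into other transactions on the same connection.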
69d873debc | fix(backend): improve executor reliability and error handling (#10526)

This PR improves the reliability of the executor system by addressing several race conditions and improving error handling throughout the execution pipeline.

### Changes 🏗️
- **Consolidated exception handling**: Now using `BaseException` to properly catch all types of interruptions, including `CancelledError` and `SystemExit`
- **Atomic stats updates**: Moved node execution stats updates to be atomic with graph stats updates to prevent race conditions
- **Improved cleanup handling**: Added proper timeout handling (3600s) for stuck executions during cleanup
- **Fixed concurrent update race conditions**: Node execution updates are now properly synchronized with graph execution updates
- **Better error propagation**: Improved error type preservation and status management throughout the execution chain
- **Graph resumption support**: Added proper handling for resuming terminated and failed graph executions
- **Removed deprecated methods**: Removed `update_node_execution_stats` in favor of atomic updates

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Execute a graph with multiple nodes and verify stats are updated correctly
  - [x] Cancel a running graph execution and verify proper cleanup
  - [x] Simulate node failures and verify error propagation
  - [x] Test graph resumption after termination/failure
  - [x] Verify no race conditions in concurrent node execution updates

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>

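The consolidated exception handling above can be illustrated with a small sketch: catching `BaseException` (rather than `Exception`) also covers interruptions like `CancelledError` and `SystemExit`, so failure information is recorded for every kind of abort. `run_node` and `record_failure` are illustrative names, not the executor's real API.

```python
import asyncio

# Catching BaseException records CancelledError/SystemExit too, then
# re-raises so cancellation still propagates (illustrative sketch).

def run_node(work, record_failure):
    try:
        return work()
    except BaseException as exc:
        record_failure(type(exc).__name__)  # preserve the error type for stats
        raise                               # never swallow the interruption

failures: list[str] = []

def cancelled_work():
    raise asyncio.CancelledError()

try:
    run_node(cancelled_work, failures.append)
except asyncio.CancelledError:
    pass
```

A plain `except Exception` would miss `CancelledError` here, because it derives from `BaseException` directly.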
4283798dc2 | feat: Avoid REST & DatabaseManager services serving traffic when the DB is not yet connected (#10522)

Sometimes the service starts receiving traffic before it is connected to the DB, causing those requests to fail.

### Changes 🏗️
Make the `/health_check` endpoint also check the database connection.

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Existing CI, manual test

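The health-check change above can be sketched as follows. The callable-based shape and the 200/503 status pairing are assumptions for illustration; the real endpoint is the service's `/health_check` route.

```python
# Sketch: a health check that also verifies DB connectivity, so an
# orchestrator withholds traffic until the database is reachable.

def health_check(check_db) -> tuple[int, dict]:
    try:
        check_db()  # e.g. run `SELECT 1` against the connection pool
    except Exception as exc:
        return 503, {"status": "unhealthy", "reason": str(exc)}
    return 200, {"status": "healthy"}

ok_status, _ = health_check(lambda: None)

def not_connected():
    raise ConnectionError("database not connected")

bad_status, bad_body = health_check(not_connected)
```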
326c4a9e0c | feat(frontend): Add marketplace creator page tests (#10429)

- Resolves https://github.com/Significant-Gravitas/AutoGPT/issues/10428
- Depends on https://github.com/Significant-Gravitas/AutoGPT/pull/10427
- Review this PR once this issue is fixed: https://github.com/Significant-Gravitas/AutoGPT/issues/10404

I've created additional tests for the creators marketplace page:

- User can access a creator's page when logged out.
- User can access a creator's page when logged in.
- Creator page details are visible.
- Navigation in the "agents by" sections works.

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All added tests pass

7705cf243c | refactor(frontend): Update data fetching strategy in marketplace main page (#10520)

With this PR, we’re changing the data fetching strategy on the
marketplace page. We’re now using autogenerated React queries.
### Changes
- Split render logic and hook logic.
- Updated the data fetching strategy.
- Fixed the featured agents and featured creators sections showing items that aren't actually set to `isFeatured: true`.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All marketplace E2E tests are working.
  - [x] I've tested all the links and checked that everything renders perfectly on the marketplace page.
8331dabf6a | feat(backend): Make agent graph execution retriable and its failure visible (#10518)

Make agent graph execution durable by making it retriable. When retries fail, the error should be made visible in the UI.

<img width="900" height="495" alt="image" src="https://github.com/user-attachments/assets/70e3e117-31e7-4704-8bdf-1802c6afc70b" />
<img width="900" height="407" alt="image" src="https://github.com/user-attachments/assets/78ca6c28-6cc2-4aff-bfa9-9f94b7f89f77" />

### Changes 🏗️
- Make `_on_graph_execution` retriable
- Increase retry count for failing db-manager RPCs
- Add test coverage for RPC failure retry

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Allow graph execution retry

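The retry-then-surface behavior described above can be sketched with a small wrapper: retry a few times, and if every attempt fails, re-raise the last error instead of swallowing it so the failure stays visible. The attempt count and zero backoff are assumptions, not the platform's actual values.

```python
import time

# Illustrative retry wrapper in the spirit of a retriable _on_graph_execution.

def run_with_retry(fn, attempts: int = 3, delay_seconds: float = 0.0):
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_error = exc              # remember the failure
            time.sleep(delay_seconds)    # back off before the next attempt
    raise last_error                     # exhausted: make the failure visible

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient RPC failure")
    return "done"

result = run_with_retry(flaky)
```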
e632549175 | feat(backend): Add AI-generated activity status for agent executions (#10487)

## Summary
- Adds AI-generated activity status summaries for agent execution results
- Provides users with conversational, non-technical summaries of what their agents accomplished
- Includes comprehensive execution data analysis with honest failure reporting

## Changes Made
- **Backend**: Added `ActivityStatusGenerator` module with async LLM integration
- **Database**: Extended `GraphExecutionStats` and `Stats` models with an `activity_status` field
- **Frontend**: Added a "Smart Agent Execution Summary" display with a disclaimer tooltip
- **Settings**: Added an `execution_enable_ai_activity_status` toggle (disabled by default)
- **Testing**: Comprehensive test suite with 12 test cases covering all scenarios

## Key Features
- Collects execution data including graph structure, node relations, errors, and I/O samples
- Generates user-friendly summaries from a first-person perspective
- Honest reporting of failures and invalid inputs (no sugar-coating)
- Payload optimization for LLM context limits
- Full async implementation with proper error handling

## Test Plan
- [x] All existing tests pass
- [x] New comprehensive test suite covers success/failure scenarios
- [x] Feature toggle testing (enabled/disabled states)
- [x] Frontend integration displays correctly
- [x] Error handling and edge cases covered

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>

878f61aaf4 | fix(test): Enhance E2E test data script to include featured creators and agents (#10517)

This PR updates the existing E2E test data script to support the
creation of featured creators and featured agents. Previously, these
entities were not included, which limited our ability to fully test
certain flows during Playwright E2E testing.
### Changes
- Added logic to create featured creators
- Added logic to create featured agents
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All tests are passing locally after updating the data script.

e371ef853a | feat(frontend): Add main marketplace page tests and page object structure (#10427)

- Resolves https://github.com/Significant-Gravitas/AutoGPT/issues/10426
- Review this PR once this issue is fixed: https://github.com/Significant-Gravitas/AutoGPT/issues/10404

I've created additional tests for the main page, divided into two parts: basic functionality and edge cases.

**Basic functionality:**
- Users can access the marketplace page when logged out.
- Users can access the marketplace page when logged in.
- Featured agents, top agents, and featured creators are visible.
- Users can navigate and interact with marketplace elements.
- The complete search flow works correctly.

**Edge cases:**
- Searching for a non-existent item shows no results.

### Changes
- Introduced a new test suite for the marketplace, covering basic functionality and edge cases.
- Implemented the MarketplacePage class to encapsulate interactions with the marketplace page.
- Added utility functions for assertions, including visibility checks and URL matching.
- Enhanced the LoginPage class with a goto method for navigation.
- Established a comprehensive search flow test to validate search functionality.

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All added tests pass

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Ubbe <hi@ubbe.dev>

d323dc2821 | feat(backend): optimize processing of queues in notif service (#10513)

The notification service was running an inefficient polling loop that constantly checked each queue sequentially with 1-second timeouts, even when queues were empty. This caused:

- High CPU usage from continuous polling
- Sequential processing that blocked queues from being processed in parallel
- Unnecessary delays from timeout-based polling instead of event-driven consumption
- Poor throughput (500-2,000 messages/second) compared to potential (8,000-12,000 messages/second)

## Changes 🏗️
- Replaced the polling-based `_run_queue()` with an event-driven `_consume_queue()` using async iterators
- Implemented concurrent queue consumption using `asyncio.gather()` instead of sequential processing
- Added QoS settings (`prefetch_count=10`) to control memory usage
- Improved error handling with the `message.process()` context manager for automatic ack/nack
- Added graceful shutdown that properly cancels all consumer tasks
- Removed the unused `QueueEmpty` import

## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  - [ ] Deploy to a test environment and monitor CPU usage
  - [ ] Verify all queue types (immediate, admin, batch, summary) process messages correctly
  - [ ] Test graceful shutdown with messages in flight
  - [ ] Monitor that the database management service remains stable
  - [ ] Check logs for proper consumer startup messages
  - [ ] Verify messages are properly acked/nacked on success/failure

Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>

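The polling-to-event-driven change above can be modeled with `asyncio.Queue` standing in for the AMQP queues: each consumer awaits messages (no timeout polling), all queues are consumed concurrently via `asyncio.gather()`, and `None` serves as a shutdown sentinel. The real service uses async message iterators with `prefetch_count=10`, which this only approximates.

```python
import asyncio

# Event-driven, concurrent queue consumption (rough model of the change).

async def consume_queue(queue: asyncio.Queue, handler, results: list) -> None:
    while True:
        message = await queue.get()  # suspends until a message arrives: no busy polling
        if message is None:          # graceful-shutdown sentinel
            return
        results.append(handler(message))

async def main() -> list:
    immediate: asyncio.Queue = asyncio.Queue()
    batch: asyncio.Queue = asyncio.Queue()
    results: list = []
    for queue, items in ((immediate, ["welcome", "alert"]), (batch, ["digest"])):
        for item in items:
            queue.put_nowait(item)
        queue.put_nowait(None)
    # Consume every queue concurrently instead of checking them one by one.
    await asyncio.gather(
        consume_queue(immediate, str.upper, results),
        consume_queue(batch, str.upper, results),
    )
    return results

processed = asyncio.run(main())
```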
686d811062 | feat(backend): Agent Executor reliability; make RPC to DB manager durable (#10516)

Some failures in DB RPCs can cause agent execution failure. This change minimizes that chance of error.

### Changes 🏗️
- Enable request retry
- Increase transaction timeout
- Use better typing on the DB query
- Gracefully handle insufficient balance

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Manual tests

216762575c | fix(frontend): publish agent improvements (#10515)

## Changes 🏗️
- Moved the API call from `usePublishAgentModal` to `useAgentInfoStep` for better encapsulation: overall cleaner state management via [state colocation](https://kentcdodds.com/blog/state-colocation-will-make-your-react-app-faster)
- Added loading states with a spinner on the submit button during the API call
- Removed redundant validation: now relies entirely on zod schema validation
- All thumbnails now use a 16:9 (`aspect-video`) aspect ratio for consistency
- Highlight selected thumbnails with a blue border
- Table alignment fixes
- Renamed the `Edit` action to `View` to better reflect the content of the modal that appears when clicked

## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] API calls work with loading states in the Agent Info step
  - [x] Image aspect ratios are consistent across all components
  - [x] Form validation works through the zod schema only

### For configuration changes:
None

054c20abdc | feat(backend): Add new LLM models (#10512)

## Summary
This adds 10 new LLMs to the platform. I have added the model names, metadata such as max input and output, and the price for each model:
- GROK_4
- KIMI_K2
- QWEN3_235B_A22B_THINKING
- QWEN3_CODER
- GEMINI_2_5_FLASH
- GEMINI_2_0_FLASH
- GEMINI_2_5_FLASH_LITE_PREVIEW
- GEMINI_2_0_FLASH_LITE
- DEEPSEEK_R1_0528
- GPT41_MINI
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tested and verified that all the models work

f542995a15 | fix(frontend): publish agent modal refactor + improvements (#10479)

## Changes 🏗️

### Why these changes
We had a high-priority bug where the publish agent modal wouldn't open when clicking `Edit` in the Dashboard Creator page table. The create form was also buggy. When looking into the code, I noticed it was pretty messy, so I refactored it:

- [x] separation of concerns (split render / hook logic)
- [x] split into sub-components (`PublishAgentModal/components`)
- [x] colocated state (moved state into the modal steps rather than keeping everything top-level)
- [x] used the new Design System components

Overall, we end up with a cleaner and more stable experience ✨

### E2E tests
I also added E2E tests 🤖 to catch future regressions in this modal. For now, they cover the first 2 steps; image upload and publishing aren't covered yet, as that wasn't working locally (might iterate on that later).

### Step 1 – Select Agent
<img width="1161" height="859" alt="Screenshot 2025-07-29 at 16 12 46" src="https://github.com/user-attachments/assets/a4949fb0-1a44-4926-a374-51eefadef063" />

### Step 2 – Agent Info Form
<img width="1061" height="804" alt="Screenshot 2025-07-29 at 16 03 11" src="https://github.com/user-attachments/assets/b9a45bda-18ea-4844-b52c-db499f45193e" />

### Step 3 – Agent Review
<img width="1480" height="867" alt="Screenshot 2025-07-29 at 16 11 07" src="https://github.com/user-attachments/assets/248bdf58-886d-43f3-a37a-35fd1a83e566" />

## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Open the modal through the Account menu (`Publish Agent`)
  - [x] Complete the form and check validation errors
  - [x] Add images and generate an image
  - [x] Publish the agent
  - [x] The agent shows up in the table
  - [x] Open an agent under review in the table (click `Edit` in the actions)
  - [x] It opens the modal on the 3rd step (review step)

### For configuration changes:
None

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>

b56da4586d | chore(backend): Remove grok-beta llm (#10510)

In my last PR, https://github.com/Significant-Gravitas/AutoGPT/pull/10508, I forgot to remove the Grok-beta LLM; it has been deprecated, so it needs removing from the platform.

9b94a7d39a | chore(backend): Remove deprecated LLM models (#10508)

I went through and tested all 59 LLMs on the platform and found 5 that were deprecated or no longer available, so I removed them. I built an agent with 59 LLM call blocks, set each LLM, and ran it; several replies came back saying the models were deprecated, so I removed those models.

<img width="1804" height="887" alt="image" src="https://github.com/user-attachments/assets/907776e1-b491-465d-8219-e86c98559e41" />

Models removed:
- O1_PREVIEW
- MIXTRAL_8X7B
- EVA_QWEN_2_5_32B
- PERPLEXITY_LLAMA_3_1_SONAR_LARGE_128K_ONLINE
- QWEN_QWQ_32B_PREVIEW

df399e5c51 | feat(blocks): Add Firecrawl Integration for Web Scraping and Data Extraction (#10494)

### Changes 🏗️
This PR adds Firecrawl integration to AutoGPT, providing powerful web scraping and data extraction capabilities.

**New Blocks Added:** ⚠️ All these blocks are synchronous, so they take a while to finish; this allows a simpler agent workflow.

- **Firecrawl Scrape Block**: Scrapes single web pages with various output formats (Markdown, HTML, JSON, screenshots)
- **Firecrawl Crawl Block**: Crawls entire websites following links, with customizable depth and filters
- **Firecrawl Extract Block**: Extracts structured data from web pages using AI-powered prompts
- **Firecrawl Map Block**: Maps website structure and returns a list of all discovered URLs
- **Firecrawl Search Block**: Searches Google and scrapes the results

**Key Features:**
- Advanced anti-blocking technology to bypass scraping protections
- Multiple output formats including Markdown, HTML, JSON, and screenshots
- AI-powered data extraction with custom prompts and schemas
- Configurable crawling depth and URL filtering
- Built-in caching and rate limiting
- Google search integration for discovering relevant content

**Use Cases:**
- Web data extraction for research and analysis
- Content monitoring and change tracking
- Competitive intelligence gathering
- SEO analysis and website mapping
- Automated data collection workflows

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified all Firecrawl blocks appear in the UI
  - [x] Tested scraping various websites with different formats
  - [x] Tested crawling with depth limits and URL filters
  - [x] Tested data extraction with custom prompts
  - [x] Verified error handling for invalid URLs and API failures
  - [x] Tested authentication with the Firecrawl API key
  - [x] Confirmed proper rate limiting and caching behavior

<img width="1025" height="1027" alt="Screenshot 2025-07-30 at 15 20 28" src="https://github.com/user-attachments/assets/7b94d3cf-7a0e-4d09-a9c5-24c4e8a3b660" />

# Example Agent
[FC Testing_v12.json](https://github.com/user-attachments/files/21510608/FC.Testing_v12.json)

b429505c14 | feat(backend): Enable Ayrshare Instagram support (#10504)

## Summary
- Enabled the Instagram posting block that was previously disabled
- The block provides comprehensive Instagram-specific posting options including stories, reels, posts, user tagging, and location tagging
- Improved parameter types and validation for a better user experience

## Changes 🏗️
- Removed `disabled=True` from the Instagram posting block to enable functionality
- Updated parameter types from required to optional with proper `None` defaults for better flexibility
- Added validation for Instagram reel options to ensure all required fields are provided together
- Improved user tag validation with better error messages
- Added support for:
  - Instagram Stories (24-hour expiration)
  - Instagram Reels with audio, thumbnails, and feed sharing options
  - Alt text for accessibility
  - Location tagging via Facebook Page ID
  - User tagging with coordinate support
  - Collaborator tagging
  - Auto-resize functionality

## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified the Instagram block is now available in the block list

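The "reel options must be provided together" validation mentioned above can be sketched as an all-or-nothing check. The field names below are invented for illustration and may not match the Ayrshare block's actual parameters.

```python
# Hypothetical all-or-nothing validation for reel options (field names assumed).

REEL_FIELDS = ("audio_name", "thumbnail_url", "share_reels_feed")

def validate_reel_options(options: dict) -> None:
    provided = [field for field in REEL_FIELDS if options.get(field) is not None]
    # Either no reel fields are set, or every one of them must be.
    if provided and len(provided) != len(REEL_FIELDS):
        missing = sorted(set(REEL_FIELDS) - set(provided))
        raise ValueError(f"Instagram reel options require all fields; missing: {missing}")

validate_reel_options({})                              # no reel fields: fine
validate_reel_options({f: "x" for f in REEL_FIELDS})   # all fields: fine

try:
    validate_reel_options({"audio_name": "track.mp3"})  # partial: rejected
    partial_accepted = True
except ValueError:
    partial_accepted = False
```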
68974dc6da | feat(backend): Enable Ayrshare YouTube support (#10505)

## Summary
- Enabled the YouTube posting block that was previously disabled
- The block provides comprehensive YouTube-specific posting options including titles, visibility settings, thumbnails, playlists, tags, and more

## Changes 🏗️
- Removed `disabled=True` from the YouTube posting block to enable functionality
- Added full YouTube API integration with all supported options:
  - Video title and description
  - Visibility settings (private/public/unlisted)
  - Thumbnail support
  - Playlist management
  - Video tags and categories
  - YouTube Shorts support
  - Subtitle/caption support
  - Country-based targeting
  - Synthetic media disclosure

## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified the YouTube block is now available in the block list

https://github.com/user-attachments/assets/d4459f15-fe57-47bf-8459-f06f1af45ad6

<img width="374" height="593" alt="Screenshot 2025-07-31 at 11 26 29" src="https://github.com/user-attachments/assets/4dcf30dd-439c-4a44-b56a-640832d6c550" />

903a3b80b4 | Merge branch 'master' into dev (autogpt-platform-beta-v0.6.18)

e3fa8f6ce9 | fix(frontend): agent activity links (2) (#10488)

## Changes 🏗️
My previous PR, https://github.com/Significant-Gravitas/AutoGPT/pull/10480, didn't fully resolve the issue of broken links sometimes appearing for some runs in the Agent Activity dropdown.

- Fixed the logic (verified with a deployment in dev)
- Simplified the logic, making fewer API calls
- If we have an execution without a clear agent ID, we display it but don't link to it
- Re-generated API types (had to update a call in dashboard agents because of it)

## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run agents
  - [x] Runs appear correctly in the activity dropdown without broken links

### For configuration changes:
None

9ca75d93f7 | Revert "fix(backend): Fix Google OAuth token revocation" (#10493)

Reverts Significant-Gravitas/AutoGPT#10491, as revoking one token also expires any and all others associated with the same account.

48f756136e | fix(backend): Fix Google OAuth token revocation (#10491)

- Resolves #10489

### Changes 🏗️
- Fix Google OAuth token revocation
- Fix credentials object conversion in `GoogleOAuthHandler`

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Google OAuth flow still works
  - [x] Deleting Google OAuth credentials works; token revocation doesn't error

bb71492c8f | feat(blocks): Add Wolfram Alpha LLM API Block (#10486)

### Changes 🏗️
This PR adds a new Wolfram Alpha block that integrates with Wolfram's LLM API endpoint:

- **Ask Wolfram Block**: Allows users to ask questions to Wolfram Alpha and get structured answers
- **API Integration**: Implements the Wolfram LLM API endpoint (`/api/v1/llm-api`) for natural language queries
- **Simple Authentication**: Uses App ID based authentication via API key credentials
- **Error Handling**: Proper error handling for API failures with descriptive error messages

The block enables users to leverage Wolfram Alpha's computational knowledge engine for:
- Mathematical calculations and explanations
- Scientific data and facts
- Unit conversions
- Historical information
- And many other knowledge-based queries

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified the block appears in the UI and can be added to workflows
  - [x] Tested API authentication with a valid Wolfram App ID
  - [x] Tested various query types (math, science, general knowledge)
  - [x] Verified error handling for invalid credentials
  - [x] Confirmed proper response formatting

**Configuration changes:**
- Users need to add `WOLFRAM_APP_ID` to their environment variables or provide it through the UI credentials field

02f5e92167 | feat(blocks): Add Airtable Integration with Base Management (#10485)

### Changes 🏗️
This PR adds Airtable integration to AutoGPT with the following blocks:

- **List Bases Block**: Lists all Airtable bases accessible to the authenticated user
- **Create Base Block**: Creates new Airtable bases with a specified workspace and name

<img width="1294" height="879" alt="Screenshot 2025-07-30 at 11 03 43" src="https://github.com/user-attachments/assets/0729e2e8-b254-4ed6-9481-1c87a09fb1c8" />

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] Tested the Create Base block
- [x] Tested the List Bases block

b08761816a | feat(backend): add getting user profile, drafts; update send email to use multiple to, cc, bcc (#10482)

## Need
The Gmail integration had several parsing issues that were causing data loss and workflow incompatibilities:

1. Email recipient parsing only captured the first recipient, losing CC/BCC and multiple TO recipients
2. Email body parsing was inconsistent between blocks, sometimes showing "This email does not contain a readable body" for valid emails
3. Type mismatches between blocks caused serialization issues when connecting them in workflows (lists being converted to string representations like `"[\"email@example.com\"]"`)

## Changes 🏗️
1. Enhanced Email Model:
   - Added `cc` and `bcc` fields to capture all recipients
   - Changed the `to` field from string to list for consistency
   - Now captures all recipients instead of just the first one
2. Improved Email Parsing:
   - Updated GmailReadBlock and GmailGetThreadBlock to parse all recipients using `getaddresses()`
   - Unified email body parsing logic across blocks with robust multipart handling
   - Added support for HTML to plain text conversion
   - Fixed handling of emails with attachments as body content
3. Fixed Block Compatibility:
   - Updated GmailSendBlock and GmailCreateDraftBlock to accept lists for recipient fields
   - Added validation to ensure at least one recipient is provided
   - All blocks now consistently use lists for recipient fields, preventing serialization issues
4. Updated Test Data:
   - Modified all test inputs/outputs to use the new list format for recipients
   - Ensures tests reflect the new data structure

## Checklist 📋
For code changes:
- I have clearly listed my changes in the PR description
- I have made a test plan
- I have tested my changes according to the test plan:
  - Run existing Gmail block unit tests with `poetry run test`
  - Create a workflow that reads emails with multiple recipients and verify all TO, CC, BCC recipients are captured
  - Test email body parsing with plain text, HTML, and multipart emails
  - Connect GmailReadBlock → GmailSendBlock in a workflow and verify recipient data flows correctly
  - Connect GmailReplyBlock → GmailSendBlock and verify no serialization errors occur
  - Test sending emails with multiple recipients via GmailSendBlock
  - Test creating drafts with multiple recipients via GmailCreateDraftBlock
  - Verify backwards compatibility by testing with single recipient strings (should now require lists)
  - Create from scratch and execute an agent with at least 3 blocks
  - Import an agent from file upload, and confirm it executes correctly
  - Upload an agent to the marketplace
  - Import an agent from the marketplace and confirm it executes correctly
  - Edit an agent from monitor, and confirm it executes correctly

## Breaking Change
The `to` field in GmailSendBlock and GmailCreateDraftBlock now requires a list instead of accepting both string and list. Existing workflows using strings will need to be updated to use lists (e.g., `["email@example.com"]` instead of `"email@example.com"`).

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>

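The recipient-parsing fix above leans on the standard library rather than taking only the first address: `email.utils.getaddresses()` expands every recipient in a header, so To/CC/BCC can each become a proper list. The header values below are illustrative.

```python
from email.utils import getaddresses

# getaddresses() returns (display_name, address) pairs for every recipient
# in the given header values, not just the first one.

to_header = "Alice <alice@example.com>, bob@example.com"
cc_header = "Carol <carol@example.com>"

to_field = [addr for _name, addr in getaddresses([to_header])]
cc_field = [addr for _name, addr in getaddresses([cc_header])]
```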
7373b472de | fix(frontend): agent activity sometimes broken links (#10480)

## Changes 🏗️
Fix the issue where the agent activity dropdown would sometimes show links to agent runs that are not available in the library: only show runs that can be verified in the library. Also improve the display of the agent name 🤔

## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run agents
  - [x] There are no empty links in the activity dropdown
  - [x] Agent names look good 💅🏽

### For configuration changes:
None

a37fac31b5 |
fix(backend): Fix LLM blocks call tracking (#10483)
### Changes 🏗️ This PR fixes an issue where LLM blocks (particularly AITextSummarizerBlock) were not properly tracking `llm_call_count` in their execution statistics, despite correctly tracking token counts. **Root Cause**: The `finally` block in `AIStructuredResponseGeneratorBlock.run()` that sets `llm_call_count` was executing after the generator returned, meaning the stats weren't available when `merge_llm_stats()` was called by dependent blocks. **Changes made**: - **Fixed stats tracking timing**: Moved `llm_call_count` and `llm_retry_count` tracking to execute before successful return statements in `AIStructuredResponseGeneratorBlock.run()` - **Removed problematic finally block**: Eliminated the finally block that was setting stats after function return - **Added comprehensive tests**: Created extensive test suite for LLM stats tracking across all AI blocks - **Added SmartDecisionMaker stats tracking**: Fixed missing LLM stats tracking in SmartDecisionMakerBlock - **Fixed type errors**: Added appropriate type ignore comments for test mock objects **Files affected**: - `backend/blocks/llm.py`: Fixed stats tracking timing in AIStructuredResponseGeneratorBlock - `backend/blocks/smart_decision_maker.py`: Added missing LLM stats tracking - `backend/blocks/test/test_llm.py`: Added comprehensive LLM stats tracking tests - `backend/blocks/test/test_smart_decision_maker.py`: Added LLM stats tracking test and fixed circular imports ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Created comprehensive unit tests for all LLM blocks stats tracking - [x] Verified AITextSummarizerBlock now correctly tracks llm_call_count (was 0, now shows actual call count) - [x] Verified AIStructuredResponseGeneratorBlock properly tracks stats with retries - [x] Verified SmartDecisionMakerBlock now tracks LLM usage stats - [x] Verified all 
existing tests still pass - [x] Ran `poetry run format` to ensure code formatting - [x] All 11 LLM and SmartDecisionMaker tests pass #### For configuration changes: - [x] `.env.example` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) **Note**: No configuration changes were needed for this fix. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com> Co-authored-by: Claude <noreply@anthropic.com> |
||
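The stats-timing bug fixed above comes down to a general property of Python generators: code in a `finally` block runs only when the generator is finalized, which happens after the consumer has already read the yielded values. A hypothetical minimal model (not the actual block code) makes the before/after difference visible:

```python
# Hypothetical minimal model of the bug (not the actual block code):
# a generator's `finally` runs only when the generator is finalized,
# which is after the consumer has already read the yielded values.
stats = {"llm_call_count": 0}

def run_buggy():
    try:
        yield "response"
    finally:
        stats["llm_call_count"] += 1  # too late for per-yield merging

def run_fixed():
    stats["llm_call_count"] += 1      # recorded before yielding
    yield "response"

def consume(gen):
    """Snapshot stats as each output arrives, like merge_llm_stats()
    being called by a dependent block mid-stream."""
    snapshots = []
    for _ in gen:
        snapshots.append(stats["llm_call_count"])
    return snapshots

buggy = consume(run_buggy())   # count not yet visible at merge time
stats["llm_call_count"] = 0
fixed = consume(run_fixed())   # count visible at merge time
```

With the buggy version the dependent block observes a count of 0 at merge time; with the fix it observes 1, which matches the "was 0, now shows actual call count" observation in the test plan.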
|
|
b9c7642cfc |
feat(backend): Introduce http client refresh on repeated error (#10481)
HTTP requests can fail when the DNS is messed up. Sometimes this kind of issue requires a client reset. ### Changes 🏗️ Introduce HTTP client refresh on repeated error ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Manual run, added tests |
||
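The refresh-on-repeated-error idea can be sketched as a thin wrapper: after N consecutive failures, throw the client away and build a fresh one, discarding stale state such as cached DNS results and pooled connections. This is an illustration under assumed names (`make_client`, the threshold), not the backend's actual request helper:

```python
# Hedged sketch: rebuild the HTTP client after N consecutive failures so
# stale state (cached DNS, pooled connections) is discarded. The names
# and threshold are illustrative, not the actual backend code.
class RefreshingClient:
    def __init__(self, make_client, max_consecutive_errors=3):
        self._make_client = make_client
        self._max_errors = max_consecutive_errors
        self._errors = 0
        self._client = make_client()

    def request(self, *args, **kwargs):
        try:
            result = self._client.request(*args, **kwargs)
        except Exception:
            self._errors += 1
            if self._errors >= self._max_errors:
                self._client = self._make_client()  # fresh client
                self._errors = 0
            raise
        self._errors = 0  # any success resets the failure streak
        return result
```

The error still propagates to the caller; the refresh only ensures the next attempt starts from a clean client.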
|
|
83f96b75c7 |
fix(backend): Ensure we only present working auth options on blocks (#10454)
To allow for a simpler dev experience, the new SDK now auto discovers providers and registers them. However, the OAuth system was still requiring these credentials to be hardcoded in the settings object. This PR changes that to verify the env var is present during registration and then allows the OAuth system to load them from the env. ### Changes 🏗️ - **OAuth Registration**: Modified `ProviderBuilder.with_oauth(..)` to check OAuth env vars exist during registration - **OAuth Loading**: Updated OAuth system to load credentials from env vars if not using secrets - **Block Filtering**: Added `is_block_auth_configured()` function to check if a block has valid authorization options configured at runtime - **Test Updates**: Fixed failing SDK registry tests to properly mock environment variables for OAuth registration ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Verified OAuth system checks that env vars exist during provider registration - [x] Confirmed OAuth system can use env vars directly without requiring hardcoded secrets - [x] Tested that blocks with unconfigured OAuth providers are filtered out - [x] All SDK registry tests pass with proper env var mocking #### For configuration changes: - [x] `.env.example` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) - OAuth providers now require their client ID and secret env vars to be set for registration - No changes required to `.env.example` or `docker-compose.yml` --------- Co-authored-by: Reinier van der Leer <pwuts@agpt.co> |
||
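The registration-time check described above can be sketched as follows; `with_oauth` and the env-var naming convention here are assumptions for demonstration, not the SDK's actual API. The idea is to fail the OAuth registration up front when the env vars are absent, so blocks never advertise an auth option that cannot work:

```python
# Illustrative sketch (names are assumptions, not the SDK's actual API):
# only register a provider's OAuth option when its env vars exist.
import os

def oauth_env_configured(provider: str) -> bool:
    return bool(
        os.getenv(f"{provider.upper()}_CLIENT_ID")
        and os.getenv(f"{provider.upper()}_CLIENT_SECRET")
    )

def with_oauth(provider: str, registry: dict) -> bool:
    """Register the provider's OAuth option only if its env vars exist."""
    if not oauth_env_configured(provider):
        return False  # a runtime check would hide this auth option
    registry[provider] = "oauth"
    return True
```

At block-serving time, a check along the lines of `is_block_auth_configured()` can then filter out blocks whose only auth options failed this test.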
|
|
a60abe5cfe |
feat(frontend): agent activity improvements (#10462)
## Changes 🏗️ There is a bug where the agent activity dropdown bubble only shows up to `6` even if there are `50 running agents`. We only display the last 6 runs in the dropdown, but the bubble badge count should reflect all running agents. On top of that we added the option to search runs by agent name when you have more than 6 recent runs: https://github.com/user-attachments/assets/931e3db7-5715-48d1-b4df-22490fae9de0 - Also make the dropdown items a link ( `a` ) so that you can command-click them to open runs in new tabs. - Keep up to `400` executions on the state ( worst-case load test ) - Each execution object is relatively small (ID, status, timestamps, agent info) - 400 objects × ~`1KB` each = negligible memory footprint of ~`400KB` - Always display running agents at the top - Only display runs from the last week on the dropdown - the agent library page contains the historical runs; this is just to show the recent ones ### Code changes - **Added count tracking** - the `NotificationState` interface now includes separate count fields (`activeCount`, `recentCompletionsCount`, `recentFailuresCount`) to track the actual numbers independent of display limits. - **Dual array system:** - the `categorizeExecutions` function now creates: - unlimited arrays for counting all executions - limited arrays (sliced to 6 items) for dropdown display - Updated all helper functions to properly maintain both the display arrays and the count fields. 
- Component uses actual counts - `<AgentActivityDropdown />` component now uses `activeCount` for the badge and hover hint instead of `activeExecutions.length` ## Checklist 📋 ### For code changes - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Log in and navigate to library or build - [x] Start running agents like there is no tomorrow - [x] The badge shows the correct agent execution count ( i.e. 10 ) - [x] The dropdown only displays the 6 most recent - [x] You can command-click on the runs and they open in new tabs ### For configuration changes None |
||
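The actual fix above is in the frontend TypeScript; the dual-array idea itself is language-agnostic, so here is a sketch of it in Python with field names invented for the example: count over all executions, display only the newest six.

```python
# Language-agnostic sketch of the dual-array fix (the real code is the
# frontend's `categorizeExecutions`; field names here are illustrative):
# badge counts use the FULL list, the dropdown shows a capped slice.
DISPLAY_LIMIT = 6

def categorize_executions(executions):
    running = [e for e in executions if e["status"] == "RUNNING"]
    newest_first = sorted(
        executions, key=lambda e: e["started_at"], reverse=True
    )
    return {
        "active_count": len(running),             # badge: real count
        "display": newest_first[:DISPLAY_LIMIT],  # dropdown: at most 6
    }
```

Keeping the count and the display slice as separate outputs is what prevents the badge from silently capping at 6.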
|
|
55af487589 |
fix(backend): Revert RabbitMQ consumer heartbeat mechanism (#10477)
The heartbeat mechanism doesn't seem to work at the moment. ### Changes 🏗️ Revert the RabbitMQ consumer heartbeat mechanism ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Run agents |
||
|
|
04f8cd60d7 |
feat(blocks): Add WordPress integration with OAuth and create post block (#10464)
This PR adds WordPress integration to AutoGPT platform, enabling users to create posts on WordPress.com and Jetpack-enabled sites. ### Changes 🏗️ **OAuth Implementation:** - Added WordPress OAuth2 handler (`_oauth.py`) supporting both single blog and global access tokens - Implemented OAuth flow without PKCE (as WordPress doesn't require it) - Added token validation endpoint support - Server-side tokens don't expire, eliminating the need for refresh in most cases **API Integration:** - Created WordPress API client (`_api.py`) with Pydantic models for type safety - Implemented `create_post` function with full support for WordPress post features - Added helper functions for token validation and generic API requests - Fixed response models to handle WordPress API's mixed data types **WordPress Block:** - Created `WordPressCreatePostBlock` in `blog.py` with minimal user-facing options - Exposed fields: site, title, content, excerpt, slug, author, categories, tags, featured_image, media_urls - Posts are published immediately by default - Integrated with platform's OAuth credential system ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] OAuth URL generation works correctly for single blog and global access - [x] Token exchange and validation functions handle WordPress API responses - [x] Create post block properly transforms input data to API format - [x] Response models handle mixed data types from WordPress API The WordPress OAuth provider needs to be configured with client ID and secret from WordPress.com application settings. |
||
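For orientation, the create-post call described above has roughly this request shape against the WordPress.com v1.1 REST API; the endpoint path follows the public API docs, but the helper name and defaults here are assumptions for illustration, not the PR's `_api.py` client:

```python
# Sketch only: request shape for creating a post via the WordPress.com
# REST API (v1.1). Helper name and defaults are illustrative.
API_BASE = "https://public-api.wordpress.com/rest/v1.1"

def build_create_post_request(site, token, title, content, **extra):
    url = f"{API_BASE}/sites/{site}/posts/new"
    headers = {"Authorization": f"Bearer {token}"}
    # "publish" matches the block's publish-immediately default
    payload = {"title": title, "content": content, "status": "publish"}
    payload.update(extra)  # e.g. excerpt, slug, categories, tags
    return url, headers, payload
```

The block then only needs to POST the payload with the OAuth bearer token from the platform's credential system.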
|
|
95650ee346 |
feat(backend): improve gmail blocks (#10475)
<!-- Clearly explain the need for these changes: --> I'm working with these blocks and found some much-needed improvements ### Changes 🏗️ - Outputs labels from emails - Types the outputs - Reorders the yielding of the smart decision block <!-- Concisely describe all of the changes made in this pull request: --> ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Build a test agent with labels, sending, reading emails + smart decision maker |
||
|
|
4d05a27388 |
feat(backend): Avoid executor over-consuming messages when it's fully occupied (#10449)
When we run multiple instances of the executor, some of the executors can oversubscribe the messages and end up queuing the agent execution request instead of letting another executor handle the job. This change solves the problem. ### Changes 🏗️ * Reject execution request when the executor is full. * Improve `active_graph_runs` tracking for better horizontal scaling heuristics. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Manual graph execution & CI |
||
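The reject-when-full behavior can be sketched as a consumer callback that nacks and requeues when the local pool is saturated, letting the broker redeliver the execution request to a less busy executor instance. Names follow pika conventions; the real consumer code differs:

```python
# Hedged sketch of "reject execution request when the executor is full":
# nack + requeue so the broker redelivers to another executor instance
# instead of the message piling up in a local queue.
def on_message(channel, method, body, active_graph_runs, capacity):
    if len(active_graph_runs) >= capacity:
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
        return False  # not accepted here; another instance will take it
    active_graph_runs[method.delivery_tag] = body  # tracked for scaling
    channel.basic_ack(delivery_tag=method.delivery_tag)
    return True
```

Tracking `active_graph_runs` explicitly is also what gives the horizontal-scaling heuristics an accurate occupancy signal.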
|
|
b71d0ec805 |
feat(backend): Ensure json column value serializable & remove excessive transaction and locking (#10441)
### Changes 🏗️ 1. Json columns have to be JSON-serializable, but sometimes the data is not, so `SafeJson` is introduced to make sure that the data being loaded can be serialized to a string and back before persisting into the database. 2. Locks & transactions were being used in cases where they're not needed, which reduces database & Redis performance. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] CI tests |
||
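The core of the `SafeJson` idea is a round trip through the JSON encoder with a string fallback, so a Json column never receives a value it cannot serialize. A minimal sketch (the real helper may differ in its fallback policy):

```python
# Minimal sketch of the SafeJson idea: force values through a JSON round
# trip with a string fallback before persisting. The actual helper may
# use a different fallback policy.
import json

def safe_json(value):
    # `default=str` stringifies anything json can't encode natively
    return json.loads(json.dumps(value, default=str))
```

Anything the encoder cannot handle natively (datetimes, complex numbers, custom objects) is coerced to its string form instead of raising at write time.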
|
|
f7c1906364 |
feat(frontend, backend): Publish Agent Dialog Agent List Pagination (#10023)
We want scrolling for agent dialog list - Based on #9833 ### Changes 🏗️ - adds backend support for paginating this content - adds frontend support for scrolling pagination <!-- Concisely describe all of the changes made in this pull request: --> ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] test UI for this --------- Co-authored-by: Venkat Sai Kedari Nath Gandham <154089422+Kedarinath1502@users.noreply.github.com> Co-authored-by: Claude <claude@users.noreply.github.com> Co-authored-by: Claude <noreply@anthropic.com> |
||
|
|
9171c6d984 |
fix(frontend/builder): Prevent bad graph reloads (#10459)
- Resolves #10458 Improve logic in `useAgentGraph`: - Correctly handle unset `flowVersion` in checks in hooks - Prevent unnecessary WebSocket re-connects - Remove redundant WebSocket connection management logic - Untangle hooks for initial load and set-up - Simplify block filtering logic - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - Edit an agent in the builder - [x] WebSocket doesn't re-connect unnecessarily - [x] Graph doesn't reset on WebSocket re-connect - [x] Graph doesn't reset on LaunchDarkly re-connect |
||
|
|
7ea4077dc6 |
fix(frontend/builder): Prevent bad graph reloads (#10459)
- Resolves #10458 ### Changes 🏗️ Improve logic in `useAgentGraph`: - Correctly handle unset `flowVersion` in checks in hooks - Prevent unnecessary WebSocket re-connects - Remove redundant WebSocket connection management logic - Untangle hooks for initial load and set-up - Simplify block filtering logic ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - Edit an agent in the builder - [x] WebSocket doesn't re-connect unnecessarily - [x] Graph doesn't reset on WebSocket re-connect - [x] Graph doesn't reset on LaunchDarkly re-connect |
||
|
|
3c62ca23df | Improve clarity | ||
|
|
8321677a43 |
chore(frontend/deps): Update 12 dependencies (#10451)
Bumps the production-dependencies group with 12 updates in the /autogpt_platform/frontend directory: | Package | From | To | | --- | --- | --- | | [@hookform/resolvers](https://github.com/react-hook-form/resolvers) | `5.1.1` | `5.2.0` | | [@next/third-parties](https://github.com/vercel/next.js/tree/HEAD/packages/third-parties) | `15.3.5` | `15.4.4` | | [@sentry/nextjs](https://github.com/getsentry/sentry-javascript) | `9.35.0` | `9.42.0` | | [@supabase/supabase-js](https://github.com/supabase/supabase-js) | `2.50.3` | `2.52.1` | | [@tanstack/react-query](https://github.com/TanStack/query/tree/HEAD/packages/react-query) | `5.81.5` | `5.83.0` | | [@xyflow/react](https://github.com/xyflow/xyflow/tree/HEAD/packages/react) | `12.8.1` | `12.8.2` | | [dotenv](https://github.com/motdotla/dotenv) | `17.2.0` | `17.2.1` | | [framer-motion](https://github.com/motiondivision/motion) | `12.23.0` | `12.23.9` | | [next](https://github.com/vercel/next.js) | `15.3.5` | `15.4.4` | | [react-hook-form](https://github.com/react-hook-form/react-hook-form) | `7.60.0` | `7.61.1` | | [react-shepherd](https://github.com/shepherd-pro/shepherd) | `6.1.8` | `6.1.9` | | [shepherd.js](https://github.com/shepherd-pro/shepherd) | `14.5.0` | `14.5.1` | Updates `@hookform/resolvers` from 5.1.1 to 5.2.0 <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/react-hook-form/resolvers/releases"><code>@hookform/resolvers</code>'s releases</a>.</em></p> <blockquote> <h2>v5.2.0</h2> <h1><a href="https://github.com/react-hook-form/resolvers/compare/v5.1.1...v5.2.0">5.2.0</a> (2025-07-25)</h1> <h3>Features</h3> <ul> <li><strong>ajv:</strong> add ajv-formats for ajvResolver (<a href="https://redirect.github.com/react-hook-form/resolvers/issues/797">#797</a>) (<a href=" |
||
|
|
d991b4fb8c | Update README.md | ||
|
|
079d7c2c8e | Update README.md | ||
|
|
f44920ca25 | Update README.md | ||
|
|
03cf392f05 |
chore(backend/deps, libs/deps): Bump redis from 5.2.x to 6.2.0 (#10177)
Bumps [redis](https://github.com/redis/redis-py) from 5.2.1 to 6.2.0, for both `autogpt_libs` and `backend`. Also, additional fixes in `autogpt_libs/pyproject.toml`: - Move `redis` from dev dependencies to prod dependencies - Fix author info - Sort dependencies > [!NOTE] > Of course dependabot wouldn't do this on its own; this PR has been taken over and augmented by @Pwuts <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/redis/redis-py/releases">redis's releases</a>.</em></p> <blockquote> <h2>6.2.0</h2> <h1>Changes</h1> <h2>🚀 New Features</h2> <ul> <li>Add <code>dynamic_startup_nodes</code> parameter to async RedisCluster (<a href="https://redirect.github.com/redis/redis-py/issues/3646">#3646</a>)</li> <li>Support RESP3 with <code>hiredis-py</code> parser (<a href="https://redirect.github.com/redis/redis-py/issues/3648">#3648</a>)</li> <li>[Async] Support for transactions in async <code>RedisCluster</code> client (<a href="https://redirect.github.com/redis/redis-py/issues/3649">#3649</a>)</li> </ul> <h2>🐛 Bug Fixes</h2> <ul> <li>Revert wrongly changed default value for <code>check_hostname</code> when instantiating <code>RedisSSLContext</code> (<a href="https://redirect.github.com/redis/redis-py/issues/3655">#3655</a>)</li> <li>Fixed potential deadlock from unexpected <code>__del__</code> call (<a href="https://redirect.github.com/redis/redis-py/issues/3654">#3654</a>)</li> </ul> <h2>🧰 Maintenance</h2> <ul> <li>Update <code>search_json_examples.ipynb</code>: Fix the old import <code>indexDefinition</code> -> <code>index_definition</code> (<a href="https://redirect.github.com/redis/redis-py/issues/3652">#3652</a>)</li> <li>Remove mandatory update of the CHANGES file for new PRs. 
Changes file will be kept for history for versions < 4.0.0 (<a href="https://redirect.github.com/redis/redis-py/issues/3645">#3645</a>)</li> <li>Dropping <code>Python 3.8</code> support as it has reached end of life (<a href="https://redirect.github.com/redis/redis-py/issues/3657">#3657</a>)</li> <li>fix(doc): update Python print output in json doctests (<a href="https://redirect.github.com/redis/redis-py/issues/3658">#3658</a>)</li> <li>Update redis-entraid dependency (<a href="https://redirect.github.com/redis/redis-py/issues/3661">#3661</a>)</li> </ul> <h2></h2> <p>We'd like to thank all the contributors who worked on this release! <a href="https://github.com/JCornat"><code>@JCornat</code></a> <a href="https://github.com/ShubhamKaudewar"><code>@ShubhamKaudewar</code></a> <a href="https://github.com/uglide"><code>@uglide</code></a> <a href="https://github.com/petyaslavova"><code>@petyaslavova</code></a> <a href="https://github.com/vladvildanov"><code>@vladvildanov</code></a></p> <h2>v6.1.1</h2> <h1>Changes</h1> <h2>🐛 Bug Fixes</h2> <ul> <li>Revert wrongly changed default value for <code>check_hostname</code> when instantiating <code>RedisSSLContext</code> (<a href="https://redirect.github.com/redis/redis-py/issues/3655">#3655</a>)</li> <li>Fixed potential deadlock from unexpected <code>__del__</code> call (<a href="https://redirect.github.com/redis/redis-py/issues/3654">#3654</a>)</li> </ul> <h2></h2> <p>We'd like to thank all the contributors who worked on this release! 
<a href="https://github.com/vladvildanov"><code>@vladvildanov</code></a> <a href="https://github.com/petyaslavova"><code>@petyaslavova</code></a></p> <h2>6.1.0</h2> <h1>Changes</h1> <h2>🚀 New Features</h2> <ul> <li>Support for transactions in <code>RedisCluster</code> client (<a href="https://redirect.github.com/redis/redis-py/issues/3611">#3611</a>)</li> <li>Add equality and hashability to <code>Retry</code> and backoff classes (<a href="https://redirect.github.com/redis/redis-py/issues/3628">#3628</a>)</li> </ul> <h2>🐛 Bug Fixes</h2> <ul> <li>Fix RedisCluster <code>ssl_check_hostname</code> not set to connections. For SSL verification with <code>ssl_cert_reqs="none"</code>, check_hostname is set to <code>False</code> (<a href="https://redirect.github.com/redis/redis-py/issues/3637">#3637</a>) <strong>Important</strong>: The default value for the <code>check_hostname</code> field of <code>RedisSSLContext</code> has been changed as part of this PR - this is a breaking change and should not be introduced in minor versions - unfortunately, it is part of the current release. The breaking change is reverted in the next release to fix the behavior --> 6.2.0</li> <li>Prevent RuntimeError while reinitializing clusters - sync and async (<a href="https://redirect.github.com/redis/redis-py/issues/3633">#3633</a>)</li> <li>Add equality and hashability to <code>Retry</code> and backoff classes (<a href="https://redirect.github.com/redis/redis-py/issues/3628">#3628</a>) - fixes integration with Django RQ</li> <li>Fix <code>AttributeError</code> on <code>ClusterPipeline</code> (<a href="https://redirect.github.com/redis/redis-py/issues/3634">#3634</a>)</li> </ul> <h2>🧰 Maintenance</h2> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href=" |
||
|
|
29d4b4f347 |
fix(frontend): socket logout handling (#10445)
## Changes 🏗️ - Close websocket connections gracefully during logout ( _whether from another tab or not_ ) - Also fixed an error on `HeroSection` that shows when the onboarding is disabled locally ( `null` ) - Uncomment legit tests about connecting/saving agents - Centralise local storage usage through a single service with typed keys ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Login in 3 tabs ( 1 builder, 1 marketplace, 1 agent run view ) - [x] Logout from the marketplace tab - [x] The other tabs show logout state gracefully without toasts or errors - [x] Websocket connections are closed ( _devtools console shows that_ ) ### For configuration changes: None |
||
|
|
39fe22f7e7 |
feat(block): Add Ayrshare integration for social media posting (#9946)
This PR implements a comprehensive Ayrshare social media integration for AutoGPT Platform, enabling users to post content across multiple social media platforms through a unified interface. Ayrshare provides a single API to manage posts across Facebook, Twitter/X, LinkedIn, Instagram, YouTube, TikTok, Pinterest, Reddit, Telegram, Google My Business, Bluesky, Snapchat, and Threads. The integration addresses the need for social media automation and content distribution workflows within AutoGPT agents, allowing users to: - Connect their social media accounts via SSO - Post content with platform-specific options and constraints - Schedule posts across multiple platforms simultaneously - Handle platform-specific media requirements and validation ⚠️ To simplify the review process, all except the Twitter post block have been commented out; future PRs will uncomment the other platforms so we can test them in isolation. ### Changes 🏗️ #### Backend Integration (`backend/integrations/ayrshare.py`) - **AyrshareClient**: Complete API client implementation with post creation, profile management, and JWT generation - **SocialPlatform enum**: Comprehensive platform definitions for all supported social networks - **Response models**: PostResponse, ProfileResponse, JWTResponse for type-safe API interactions - **Error handling**: Custom AyrshareAPIException with proper HTTP status code handling #### Social Media Posting Blocks (`backend/blocks/ayrshare/post.py`) - **BaseAyrshareInput**: Shared input schema with common fields (post text, media URLs, scheduling, etc.) 
- **Platform-specific blocks**: 13 dedicated posting blocks, each with platform-specific validation and options: - PostToFacebookBlock: Carousel, Reels, Stories, targeting, alt text - PostToXBlock: Threads, polls, long posts, premium features, subtitles - PostToLinkedInBlock: Document support, visibility controls, audience targeting - PostToInstagramBlock: Stories, Reels, user tags, collaborators - PostToYouTubeBlock: Video uploads, playlists, visibility, country targeting - PostToPinterestBlock: Pins, carousels, board management - PostToTikTokBlock: Video/image posts, AI labeling, brand content - PostToRedditBlock: Basic posting functionality - PostToTelegramBlock: GIF handling, mentions - PostToGMBBlock: Event/offer posts, call-to-action buttons - PostToBlueskyBlock: Character limit validation, alt text - PostToSnapchatBlock: Story types, video thumbnails - PostToThreadsBlock: Hashtag restrictions, carousel support #### Helper Models - **CarouselItem**: Facebook carousel configuration - **CallToAction, EventDetails, OfferDetails**: Google My Business post types - **InstagramUserTag**: Instagram user tagging with coordinates - **LinkedInTargeting**: LinkedIn audience targeting options - **PinterestCarouselOption**: Pinterest carousel image options - **YouTubeTargeting**: YouTube country blocking/allowing #### Authentication & SSO (`backend/server/integrations/router.py`) - **SSO endpoint**: `/integrations/ayrshare/sso_url` for account linking - **Profile management**: Automatic profile creation and key management - **JWT generation**: Secure token generation for social media account linking - **Platform allowlist**: Configured access to all supported social platforms #### Frontend Integration (`frontend/src/components/CustomNode.tsx`) - **AYRSHARE block type**: New BlockUIType.AYRSHARE for Ayrshare-specific nodes - **SSO button**: "Connect Social Media Accounts" with loading states - **Handle generation**: Special handling for Ayrshare blocks with SSO integration 
#### Configuration - **Environment variables**: Added AYRSHARE_API_KEY and AYRSHARE_JWT_KEY to .env.example - **Block registration**: All Ayrshare blocks registered in AYRSHARE_NODE_IDS array #### Type Safety & Error Handling - **Modern typing**: Updated to use `list`, `dict`, `Any` instead of legacy typing - **Comprehensive validation**: Platform-specific constraints (character limits, media counts, file types) - **User-friendly errors**: Clear error messages for validation failures and API errors ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: **Test Plan:** **Backend API Testing:** - [x] Verify AyrshareClient initializes correctly with API key - [x] Test JWT generation for SSO authentication - [x] Test profile creation and management - [x] Verify all 13 posting blocks are properly registered - [x] Test platform-specific validation rules for each block - [x] Verify error handling for missing credentials and API failures **Frontend Integration Testing:** - [x] Verify AYRSHARE block type renders correctly in flow editor - [x] Test SSO button functionality and popup window behavior - [x] Confirm loading states work properly during authentication - [x] Verify input handles generate correctly for Ayrshare blocks - [x] Test platform-specific input fields and validation **End-to-End Workflow Testing:** - [x] Create agent with Ayrshare posting blocks - [x] Test SSO flow: click "Connect Social Media Accounts" button - [x] Verify popup opens with Ayrshare authentication page - [x] Test social media account linking process - [x] Create posts with various platform-specific options: - [x] X (Twitter) - tested basic posting with image - [ ] Test scheduling functionality across platforms - [x] Verify media upload constraints and validation - [ ] Test error handling for invalid inputs and failed posts **Error Case Testing:** - [ ] Test 
behavior with missing AYRSHARE_API_KEY configuration - [ ] Test invalid social media credentials handling - [ ] Test network failure scenarios - [ ] Verify platform-specific validation error messages - [ ] Test character limit enforcement per platform - [ ] Test media file type and size restrictions **Security Testing:** - [x] Verify JWT tokens are properly generated and validated - [x] Test profile key isolation between users - [x] Confirm sensitive credentials are not logged - [x] Verify SSO popup prevents XSS attacks #### For configuration changes: - [x] `.env.example` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) **Configuration Changes:** - Added `AYRSHARE_API_KEY` environment variable for Ayrshare API authentication - Added `AYRSHARE_JWT_KEY` environment variable for SSO token generation - No docker-compose.yml changes required (uses existing backend services) --------- Co-authored-by: Reinier van der Leer <pwuts@agpt.co> |
||
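The per-platform validation those blocks perform (e.g. character limits for Bluesky) can be sketched like this; the limits mapping is abbreviated and the function is illustrative, not the PR's block code (300 characters for Bluesky and 280 for a standard X post are the well-known platform limits):

```python
# Illustrative sketch of per-platform post validation; the limits table
# is abbreviated for demonstration, not the blocks' actual constraints.
CHAR_LIMITS = {"bluesky": 300, "x": 280}

def validate_post(platform: str, text: str) -> list:
    """Return a list of user-friendly validation errors (empty = valid)."""
    errors = []
    limit = CHAR_LIMITS.get(platform)
    if limit is not None and len(text) > limit:
        errors.append(f"{platform}: post exceeds {limit} characters")
    return errors
```

Running this kind of check before calling the API is what turns opaque platform rejections into the "clear error messages for validation failures" the PR describes.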
|
|
c955e9a4d7 |
feat(blocks): Add Airtable Integration (#10338)
## Overview This PR adds comprehensive Airtable integration to the AutoGPT platform, enabling users to seamlessly connect their Airtable bases with AutoGPT workflows for powerful no-code automation capabilities. ## Why Airtable Integration? Airtable is one of the most popular no-code databases used by teams for project management, CRMs, inventory tracking, and countless other use cases. This integration brings significant value: - **Data Automation**: Automate data entry, updates, and synchronization between Airtable and other services - **Workflow Triggers**: React to changes in Airtable bases with webhook-based triggers - **Schema Management**: Programmatically create and manage Airtable table structures - **Bulk Operations**: Efficiently process large amounts of data with batch create/update/delete operations ## Key Features ### 🔌 Webhook Trigger - **AirtableWebhookTriggerBlock**: Listens for changes in Airtable bases and triggers workflows - Supports filtering by table, view, and specific fields - Includes webhook signature validation for security ### 📊 Record Operations - **AirtableCreateRecordsBlock**: Create single or multiple records (up to 10 at once) - **AirtableUpdateRecordsBlock**: Update existing records with upsert support - **AirtableDeleteRecordsBlock**: Delete single or multiple records - **AirtableGetRecordBlock**: Retrieve specific record details - **AirtableListRecordsBlock**: Query records with filtering, sorting, and pagination ### 🏗️ Schema Management - **AirtableCreateTableBlock**: Create new tables with custom field definitions - **AirtableUpdateTableBlock**: Modify table properties - **AirtableAddFieldBlock**: Add new fields to existing tables - **AirtableUpdateFieldBlock**: Update field properties ## Technical Implementation Details ### Authentication - Supports both API Key and OAuth authentication methods - OAuth implementation includes proper token refresh handling - Credentials are securely managed through the platform's credential 
system ### Webhook Security - Added `credentials` parameter to WebhooksManager interface for proper signature validation - HMAC-SHA256 signature verification ensures webhook authenticity - Webhook cursor tracking prevents duplicate event processing ### API Integration - Comprehensive API client (`_api.py`) with full type safety - Proper error handling and response validation - Support for all Airtable field types and operations ## Changes 🏗️ ### Added Blocks: - AirtableWebhookTriggerBlock - AirtableCreateRecordsBlock - AirtableDeleteRecordsBlock - AirtableGetRecordBlock - AirtableListRecordsBlock - AirtableUpdateRecordsBlock - AirtableAddFieldBlock - AirtableCreateTableBlock - AirtableUpdateFieldBlock - AirtableUpdateTableBlock ### Modified Files: - Updated WebhooksManager interface to support credential-based validation - Modified all webhook handlers to support the new interface ## Test Plan 📋 ### Manual Testing Performed: 1. **Authentication Testing** - ✅ Verified API key authentication works correctly - ✅ Tested OAuth flow including token refresh - ✅ Confirmed credentials are properly encrypted and stored 2. **Webhook Testing** - ✅ Created webhook subscriptions for different table events - ✅ Verified signature validation prevents unauthorized requests - ✅ Tested cursor tracking to ensure no duplicate events - ✅ Confirmed webhook cleanup on block deletion 3. **Record Operations Testing** - ✅ Created single and batch records with various field types - ✅ Updated records with and without upsert functionality - ✅ Listed records with filtering, sorting, and pagination - ✅ Deleted single and multiple records - ✅ Retrieved individual record details 4. **Schema Management Testing** - ✅ Created tables with multiple field types - ✅ Added fields to existing tables - ✅ Updated table and field properties - ✅ Verified proper error handling for invalid field types 5. 
**Error Handling Testing** - ✅ Tested with invalid credentials - ✅ Verified proper error messages for API limits - ✅ Confirmed graceful handling of network errors ### Security Considerations 🔒 1. **API Key Management** - API keys are stored encrypted in the credential system - Keys are never logged or exposed in error messages - Credentials are passed securely through the execution context 2. **Webhook Security** - HMAC-SHA256 signature validation on all incoming webhooks - Webhook URLs use secure ingress endpoints - Proper cleanup of webhooks when blocks are deleted 3. **OAuth Security** - OAuth tokens are securely stored and refreshed - Scopes are limited to necessary permissions - Token refresh happens automatically before expiration ## Configuration Requirements No additional environment variables or configuration changes are required. The integration uses the existing credential management system. ## Checklist 📋 #### For code changes: - [x] I have read the [contributing instructions](https://github.com/Significant-Gravitas/AutoGPT/blob/master/.github/CONTRIBUTING.md) - [x] Confirmed that `make lint` passes - [x] Confirmed that `make test` passes - [x] Updated documentation where needed - [x] Added/updated tests for new functionality - [x] Manually tested all blocks with real Airtable bases - [x] Verified backwards compatibility of webhook interface changes #### Security: - [x] No hard-coded secrets or sensitive information - [x] Proper input validation on all user inputs - [x] Secure credential handling throughout |
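The HMAC-SHA256 webhook verification described under "Webhook Security" boils down to recomputing the keyed hash over the raw request body and comparing it in constant time. The header format and base64-encoded MAC secret here follow Airtable's webhook conventions but should be treated as assumptions; the essential parts are the keyed hash and `hmac.compare_digest`:

```python
# Sketch of HMAC-SHA256 webhook signature verification. Header format
# and secret encoding are assumptions based on Airtable's webhook docs;
# the constant-time comparison is the security-critical part.
import base64
import hashlib
import hmac

def verify_webhook(mac_secret_b64: str, body: bytes, mac_header: str) -> bool:
    key = base64.b64decode(mac_secret_b64)
    expected = "hmac-sha256=" + hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac_header)
```

Rejecting requests that fail this check (and tracking the webhook cursor to skip already-seen payloads) is what prevents forged or replayed webhook deliveries from triggering workflows.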