mirror of
https://github.com/Significant-Gravitas/AutoGPT.git
synced 2026-01-10 07:38:04 -05:00
9c6cc5b29d8eee21e6bdf1839c1978ca49989d79
7288 Commits
| Author | SHA1 | Message | Date | |
|---|---|---|---|---|
|
|
9c6cc5b29d | Merge branch 'dev' autogpt-platform-beta-v0.6.30 | ||
|
|
b34973ca47 |
feat: Add 'depth' parameter to DataForSEO Related Keywords block (#10983)
Fixes #10982

The DataForSEO Related Keywords block was missing the `depth` parameter, which controls the comprehensiveness of keyword research. The depth parameter determines the number of related keywords returned by the API, ranging from 1 keyword at depth 0 to approximately 4680 keywords at depth 4.

### Changes 🏗️
- Added `depth` parameter to the DataForSEO Related Keywords block as an integer input field (range 0-4)
- Added `depth` parameter to the `related_keywords` method signature in the API client
- Updated the API client to include the depth parameter in the request payload when provided
- Added documentation explaining the depth parameter's effect on the number of returned keywords
- Fixed a missing parameter in a function signature that was causing runtime errors

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified the depth parameter appears correctly in the block UI with appropriate range validation (0-4)
  - [x] Confirmed the parameter is passed correctly to the API client
  - [x] Tested that omitting the depth parameter doesn't break existing functionality (defaults to None)
  - [x] Verified the implementation follows the existing pattern for optional parameters in the DataForSEO blocks
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [ ] I have included a list of my configuration changes in the PR description (under **Changes**)

Note: No configuration changes were required for this feature addition. 
--------- Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com> Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com> |
||
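The optional-parameter handling this PR describes can be sketched as follows. This is a minimal, hypothetical illustration — the function name and payload shape are assumptions, not the actual block code — showing the pattern of including `depth` in the request payload only when provided:

```python
# Hypothetical sketch of the optional `depth` parameter pattern described
# in the PR above; not the actual DataForSEO client code.
from typing import Optional

def build_related_keywords_payload(
    keyword: str,
    location_code: int,
    depth: Optional[int] = None,  # 0-4; higher depth returns more keywords
) -> dict:
    if depth is not None and not 0 <= depth <= 4:
        raise ValueError("depth must be between 0 and 4")
    payload = {"keyword": keyword, "location_code": location_code}
    if depth is not None:  # omitting depth must not break existing callers
        payload["depth"] = depth
    return payload
```

Leaving `depth` out entirely preserves the pre-existing request shape, which is why the "defaults to None" test case in the checklist matters.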
|
|
2bc6a56877 |
fix(backend): Fix GCS timeout error in FileInput blocks (#10976)
## Summary - Fixed "Timeout context manager should be used inside a task" error occurring intermittently in FileInput blocks when downloading files from Google Cloud Storage - Implemented proper async session management for GCS client to ensure operations run within correct task context - Added comprehensive logging to help diagnose and monitor the issue in production ## Changes ### Core Fix - Modified `CloudStorageHandler._retrieve_file_gcs()` to create a fresh GCS client and session for each download operation - This ensures the aiohttp session is always created within the proper async task context, preventing the timeout error - The fix trades a small amount of efficiency for reliability, but only affects download operations ### Logging Enhancements - Added detailed logging in `store_media_file()` to track execution context and async task state - Enhanced `scan_content_safe()` to specifically catch and log timeout errors with CRITICAL level - Added context logging in virus scanner around `asyncio.create_task()` calls - Upgraded key debug logs to info level for visibility in production ### Code Quality - Fixed unbound variable issue where `async_client` could be referenced before initialization - Replaced bare `except:` clauses with proper exception handling - Fixed unused parameters warning in `__aexit__` method ## Testing - The timeout error was occurring intermittently in production when FileInput blocks processed GCS files - With these changes, the error should be eliminated as the session is always created in the correct context - Comprehensive logging allows monitoring of the fix effectiveness in production ## Context The root cause was that `gcloud-aio-storage` was creating its internal aiohttp session/timeout context outside of an async task context when called by the executor. This happened intermittently depending on how the executor scheduled block execution. 
## Related Issues - Addresses timeout errors reported in FileInput block execution - Improves reliability of file uploads from the platform ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Test a multiple file input agent and it works - [x] Test the agent that is causing the failure and it works 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com> --------- Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co> Co-authored-by: Claude <noreply@anthropic.com> Co-authored-by: Reinier van der Leer <pwuts@agpt.co> |
||
|
|
87c773d03a |
fix(backend): Fix GCS timeout error in FileInput blocks (#10976)
## Summary - Fixed "Timeout context manager should be used inside a task" error occurring intermittently in FileInput blocks when downloading files from Google Cloud Storage - Implemented proper async session management for GCS client to ensure operations run within correct task context - Added comprehensive logging to help diagnose and monitor the issue in production ## Changes ### Core Fix - Modified `CloudStorageHandler._retrieve_file_gcs()` to create a fresh GCS client and session for each download operation - This ensures the aiohttp session is always created within the proper async task context, preventing the timeout error - The fix trades a small amount of efficiency for reliability, but only affects download operations ### Logging Enhancements - Added detailed logging in `store_media_file()` to track execution context and async task state - Enhanced `scan_content_safe()` to specifically catch and log timeout errors with CRITICAL level - Added context logging in virus scanner around `asyncio.create_task()` calls - Upgraded key debug logs to info level for visibility in production ### Code Quality - Fixed unbound variable issue where `async_client` could be referenced before initialization - Replaced bare `except:` clauses with proper exception handling - Fixed unused parameters warning in `__aexit__` method ## Testing - The timeout error was occurring intermittently in production when FileInput blocks processed GCS files - With these changes, the error should be eliminated as the session is always created in the correct context - Comprehensive logging allows monitoring of the fix effectiveness in production ## Context The root cause was that `gcloud-aio-storage` was creating its internal aiohttp session/timeout context outside of an async task context when called by the executor. This happened intermittently depending on how the executor scheduled block execution. 
## Related Issues - Addresses timeout errors reported in FileInput block execution - Improves reliability of file uploads from the platform ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Test a multiple file input agent and it works - [x] Test the agent that is causing the failure and it works 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com> --------- Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co> Co-authored-by: Claude <noreply@anthropic.com> Co-authored-by: Reinier van der Leer <pwuts@agpt.co> |
||
|
|
ebeefc96e8 |
feat(backend): implement caching layer for store API endpoints (Part 1) (#10975)
## Summary
This PR introduces comprehensive caching for the Store API endpoints to
improve performance and reduce database load. This is **Part 1** in a
series of PRs to add comprehensive caching across our entire API.
### Key improvements:
- Implements caching layer using the existing `@cached` decorator from
`autogpt_libs.utils.cache`
- Reduces database queries by 80-90% for frequently accessed public data
- Built-in thundering herd protection prevents database overload during
cache expiry
- Selective cache invalidation ensures data freshness when mutations
occur
## Details
### Cached endpoints with TTLs:
- **Public data (2-10 min TTL):**
- `/agents` - Store agents list (2 min)
- `/agents/{username}/{agent_name}` - Agent details (5 min)
- `/graph/{store_listing_version_id}` - Agent graphs (10 min)
- `/agents/{store_listing_version_id}` - Agent by version (10 min)
- `/creators` - Creators list (5 min)
- `/creator/{username}` - Creator details (5 min)
- **User-specific data (1-5 min TTL):**
- `/profile` - User profiles (5 min)
- `/myagents` - User's own agents (1 min)
- `/submissions` - User's submissions (1 min)
### Cache invalidation strategy:
- Profile updates → clear user's profile cache
- New reviews → clear specific agent cache + agents list
- New submissions → clear agents list + user's caches
- Submission edits → clear related version caches
### Cache management endpoints:
- `GET /cache/info` - Monitor cache statistics
- `POST /cache/clear` - Clear all caches
- `POST /cache/clear/{cache_name}` - Clear specific cache
## Changes
<!-- REQUIRED: Bullet point summary of changes -->
- Added caching decorators to all suitable GET endpoints in store routes
- Implemented cache invalidation on data mutations (POST/PUT/DELETE)
- Added cache management endpoints for monitoring and manual clearing
- Created comprehensive test suite for cache_delete functionality
- Verified thundering herd protection works correctly
## Testing
<!-- How to test your changes -->
- ✅ Created comprehensive test suite (`test_cache_delete.py`)
validating:
- Selective cache deletion works correctly
- Cache entries are properly invalidated on mutations
- Other cache entries remain unaffected
- cache_info() accurately reflects state
- ✅ Tested thundering herd protection with concurrent requests
- ✅ Verified all endpoints return correct data with and without cache
## Checklist
<!-- REQUIRED: Be sure to check these off before marking the PR ready
for review. -->
- [x] I have self-reviewed this PR's diff, line by line
- [x] I have updated and tested the software architecture documentation
(if applicable)
- [x] I have run the agent to verify that it still works (if applicable)
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
|
||
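The shape of a TTL-based `@cached` decorator like the one this PR applies can be sketched as below. This is a hedged, simplified sketch — the real `autogpt_libs.utils.cache` implementation differs (it adds thundering-herd protection and async support, among other things):

```python
# Simplified sketch of a TTL cache decorator in the spirit of `@cached`;
# NOT the actual autogpt_libs implementation.
import time
from functools import wraps

def cached(ttl: float):
    def decorator(fn):
        store = {}  # args -> (value, stored_at)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[1] < ttl:
                return hit[0]  # fresh cache hit, no recompute
            value = fn(*args)
            store[args] = (value, now)
            return value
        wrapper.cache_clear = store.clear  # hook for invalidation on mutations
        return wrapper
    return decorator

@cached(ttl=120)  # e.g. the 2-minute TTL used for /agents
def get_store_agents(page: int) -> list:
    return [f"agent-page-{page}"]
```

The `cache_clear` hook mirrors the invalidation strategy above: a mutation (new review, new submission) clears the relevant cache so the next read repopulates it.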
|
|
83fe8d5b94 |
fix(backend): make preset migration not crash the system (#10966)
Block developers may have a block present in the database but absent from the currently checked-out code. > Create a block in one branch, test it, then switch to another branch the block is not in Previously, this migration would prevent startup in that case. ### Changes 🏗️ Adds a try/except around the migration ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Test that startup actually works --------- Co-authored-by: Reinier van der Leer <pwuts@agpt.co> |
||
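The try/except guard described above amounts to the following pattern (function names hypothetical — this is a sketch, not the actual migration code): a failed migration is logged and startup continues.

```python
# Sketch of wrapping a startup migration so a missing block class logs a
# warning instead of crashing boot. Names are hypothetical.
import logging

logger = logging.getLogger(__name__)

def run_migration_safely(migrate) -> bool:
    """Run a migration callable; never let it abort startup."""
    try:
        migrate()
        return True
    except Exception as exc:
        logger.warning("Preset migration failed, continuing startup: %s", exc)
        return False
```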
|
|
50689218ed | feat(backend): implement comprehensive load testing performance fixes + database health improvements (#10965) | ||
|
|
ddff09a8e4 |
feat(blocks): add NotionReadPage block (#10760)
Introduces a Notion Read Page block that fetches a page by ID via the Notion REST API. This is a first step toward Notion integration in the AutoGPT Platform. Motivation - Notion was not integrated yet. I'm starting with a small block to add capability incrementally. ### Notes - I referred to the Todoist block implementation as a reference since I'm a beginner. - This is my first PR here - The block passed `docker compose run --rm rest_server pytest -q` successfully ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: ### Test plan - [x] Ran `docker compose run --rm rest_server pytest -q backend/blocks/test/test_block.py -k notion` - [x] Confirmed tests passed (2 passed, 652 deselected, warnings only). - [x] Ran `poetry run format` to fix linters and tests --------- Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co> Co-authored-by: Nicholas Tindle <nicktindle@outlook.com> |
||
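Reading a Notion page by ID over the REST API, as this block does, boils down to a single authenticated GET. The sketch below is hypothetical and uses only the standard library — the actual block goes through the platform's credential system rather than a raw token:

```python
# Hypothetical sketch of fetching a Notion page by ID via the REST API;
# the real NotionReadPage block uses the platform's credentials handling.
import urllib.request

NOTION_VERSION = "2022-06-28"  # Notion API version header

def build_read_page_request(page_id: str, token: str) -> urllib.request.Request:
    """Build the GET request for Notion's retrieve-a-page endpoint."""
    return urllib.request.Request(
        f"https://api.notion.com/v1/pages/{page_id}",
        headers={
            "Authorization": f"Bearer {token}",
            "Notion-Version": NOTION_VERSION,
        },
    )

# Sending it (network call): urllib.request.urlopen(build_read_page_request(...))
```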
|
|
0c363a1cea |
fix(frontend): force dynamic rendering on marketplace (#10957)
## Changes 🏗️ When building on Vercel: ``` at Object.start (.next/server/chunks/2744.js:1:312830) { description: "Route /marketplace couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error", digest: 'DYNAMIC_SERVER_USAGE' } Failed to get server auth token: Error: Dynamic server usage: Route /marketplace couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error at r (.next/server/chunks/8450.js:22:7298) at n (.next/server/chunks/4735.js:1:37020) at g (.next/server/chunks/555.js:1:31925) at m (.next/server/chunks/555.js:1:87056) at h (.next/server/chunks/555.js:1:932) at k (.next/server/chunks/555.js:1:25195) at queryFn (.next/server/chunks/555.js:1:25590) at Object.f [as fn] (.next/server/chunks/2744.js:1:316625) at q (.next/server/chunks/2744.js:1:312288) at Object.start (.next/server/chunks/2744.js:1:312830) { description: "Route /marketplace couldn't be rendered statically because it used `cookies`. See more info here: https://nextjs.org/docs/messages/dynamic-server-error", digest: 'DYNAMIC_SERVER_USAGE' } ``` That's because the `/marketplace` page prefetches the store agents data on the server, and that query uses `cookies` for Auth. In theory, those endpoints can be called without auth, but if you are logged in, that can affect the results. The simplest fix for now is to tell Next.js not to statically render the page, and instead render it on the fly with caching. According to AI we shouldn't see much difference performance-wise: > Short answer: Usually no noticeable slowdown. You'll trade a small TTFB increase (server renders per request) for correct behavior with cookies. Overall interactivity stays the same since we still dehydrate React Query data. Why it's fine: Server already had to fetch marketplace data; doing it at request-time vs build-time is roughly the same cost for users. 
Hydration uses the prefetched data, avoiding extra client round-trips. If you want extra speed: If those endpoints don’t need auth, we can skip reading cookies during server prefetch and enable ISR (e.g., revalidate=60) for partial caching. Or move the cookie-dependent parts to the client and keep the page static. ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Run the app - [x] Page load marketplace is fine and not slow - [x] No build cookies errors ### For configuration changes: None |
||
|
|
e5d870a348 |
refactor(frontend): move old components to __legacy__ (#10953)
## Changes 🏗️ Moving non-design-system (old) components to a `components/__legacy__` folder 📁 so it is more obvious to developers that they should not import them or use them in new features. What is now top-level in `/components` is what is actively maintained. Document some existing components like `<Alert />`. More on this coming in follow-up PRs. ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Test and types pass on the CI - [x] Run app locally, click around, looks good ### For configuration changes: None |
||
|
|
3f19cba28f |
fix(frontend/builder): Fix moved blocks disappearing on save (#10951)
- Resolves #10926 - Fixes a bug introduced in #10779 ### Changes 🏗️ - Fix `.metadata.position` in graph save payload - Make node reconciliation after graph save more robust ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Moved nodes don't disappear on graph save autogpt-platform-beta-v0.6.29 |
||
|
|
a978e91271 |
fix(ci, backend): Update Redis image & amend config to work with it (#10952)
CI is currently broken because Bitnami has pulled all `bitnami/redis` images. The current official Redis image on Docker Hub is `redis`. ### Changes 🏗️ - Replace `bitnami/redis:6.2` with `redis:latest` in the Backend CI workflow file - Make `REDIS_PASSWORD` optional in the backend settings ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] CI no longer broken |
||
|
|
f283e6c514 |
refactor(frontend): cleanup of components folder (2/3) (#10942)
## Changes 🏗️ Following up my initial PR to tidy up the `components` folder https://github.com/Significant-Gravitas/AutoGPT/pull/10940. This is mostly moving files around and renaming some + documenting them on the design system as needed. Should be pretty safe as long as types on the CI pass. ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Run the app locally - [x] Click around, looks ok - [x] Test and types pass on the CI ### For configuration changes: None |
||
|
|
9fc2101e7e |
refactor(frontend): tidy up on components folder (#10940)
## Changes 🏗️ Re-organise the `components` folder, moving things which are not re-used across screens or part of the design system out of it. ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Run the app locally - [x] It works and test/types pass CI wise ### For configuration changes: None |
||
|
|
634f826d82 | Merge branch 'master' into dev autogpt-platform-beta-v0.6.28 | ||
|
|
6d6bf308fc |
fix(frontend): marketplace page load and caching (#10934)
## Changes 🏗️ ### **Server-Side:** - ✅ **ISR Cache**: Page cached for 60 seconds, served instantly - ✅ **Prefetch**: All API calls made on server, not client - ✅ **Static Generation**: HTML pre-rendered with data - ✅ **Streaming**: Loading states show immediately ### **Client-Side:** - ✅ **No API Calls**: Data hydrated from server cache - ✅ **Fast Hydration**: React Query uses prefetched data - ✅ **Smart Caching**: 60s stale time prevents unnecessary requests - ✅ **Progressive Loading**: Suspense boundaries for better UX ### **🔄 Caching Strategy:** 1. **Server**: ISR cache (60s) → API calls → Static HTML 2. **CDN**: Cached HTML served instantly 3. **Client**: Hydrated data from server → No additional API calls 4. **Background**: ISR regenerates stale pages automatically ### **🎯 Result:** - **First Visit**: Instant HTML + hydrated data (no client API calls) - **Subsequent Visits**: Instant cached page - **Background Updates**: Automatic revalidation every 60s - **Optimal Performance**: Server-side rendering + client-side caching ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Run the app locally - [x] Marketplace page loads are faster ### For configuration changes: None |
||
|
|
dd84fb5c66 |
feat(platform): Add public share links for agent run results (#10938)
<!-- Clearly explain the need for these changes: --> This PR adds the ability for users to share their agent run results publicly via shareable links. Users can generate a public link that allows anyone to view the outputs of a specific agent execution without requiring authentication. This feature enables users to share their agent results with clients, colleagues, or the community. https://github.com/user-attachments/assets/5508f430-07d0-4cd3-87bc-301b0b005cce ### Changes 🏗️ #### Backend Changes - **Database Schema**: Added share tracking fields to `AgentGraphExecution` model in Prisma schema: - `isShared`: Boolean flag to track if execution is shared - `shareToken`: Unique token for the share URL - `sharedAt`: Timestamp when sharing was enabled - **API Endpoints**: Added three new REST endpoints in `/backend/backend/server/routers/v1.py`: - `POST /graphs/{graph_id}/executions/{graph_exec_id}/share`: Enable sharing for an execution - `DELETE /graphs/{graph_id}/executions/{graph_exec_id}/share`: Disable sharing - `GET /share/{share_token}`: Retrieve shared execution data (public endpoint) - **Data Models**: - Created `SharedExecutionResponse` model for public-safe execution data - Added `ShareRequest` and `ShareResponse` Pydantic models for type-safe API responses - Updated `GraphExecutionMeta` to include share status fields - **Security**: - All share management endpoints verify user ownership before allowing changes - Public endpoint only exposes OUTPUT block data, no intermediate execution details - Share tokens are UUIDs for security #### Frontend Changes - **ShareButton Component** (`/frontend/src/components/ShareButton.tsx`): - Modal dialog for managing share settings - Copy-to-clipboard functionality for share links - Clear warnings about public accessibility - Uses Orval-generated API hooks for enable/disable operations - **Share Page** (`/frontend/src/app/(no-navbar)/share/[token]/page.tsx`): - Clean, navigation-free page for viewing shared executions - 
Reuses existing `RunOutputs` component for consistent output rendering - Proper error handling for invalid/disabled share links - Loading states during data fetch - **API Integration**: - Fixed custom mutator to properly set Content-Type headers for POST requests with empty bodies - Generated TypeScript types via Orval for type-safe API calls ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Enable sharing for an agent execution and verify share link is generated - [x] Copy share link and verify it copies to clipboard - [x] Open share link in incognito/private browser and verify outputs are displayed - [x] Disable sharing and verify share link returns 404 - [x] Try to enable/disable sharing for another user's execution (should fail with 404) - [x] Verify share page shows proper loading and error states - [x] Test that only OUTPUT blocks are shown in shared view, no intermediate data |
||
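The share-token lifecycle described above can be sketched as follows. Field names follow the Prisma schema in the PR (`isShared`, `shareToken`, `sharedAt`); the dicts are a stand-in for the database, so this is an illustration rather than the actual endpoint code:

```python
# Sketch of the share-link lifecycle from the PR above; in-memory dicts
# stand in for the AgentGraphExecution rows.
import uuid
from datetime import datetime, timezone

def enable_sharing(execution: dict) -> str:
    """Mark an execution shared and return its public token."""
    execution["isShared"] = True
    execution["shareToken"] = str(uuid.uuid4())  # UUID tokens are unguessable
    execution["sharedAt"] = datetime.now(timezone.utc)
    return execution["shareToken"]

def disable_sharing(execution: dict) -> None:
    execution["isShared"] = False
    execution["shareToken"] = None

def get_shared_outputs(executions: list, token: str) -> list:
    """Public lookup: only OUTPUT data, never intermediate details."""
    for ex in executions:
        if ex.get("isShared") and ex.get("shareToken") == token:
            return ex.get("outputs", [])
    raise LookupError("share link not found or disabled")  # -> HTTP 404
```

Note that disabling sharing invalidates the old token, which is why the test plan expects a 404 after disabling.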
|
|
33679f3ffe |
feat(platform): Add instructions field to agent submissions (#10931)
## Summary Added an optional "Instructions" field for agent submissions to help users understand how to run agents and what to expect. <img width="1000" alt="image" src="https://github.com/user-attachments/assets/015c4f0b-4bdd-48df-af30-9e52ad283e8b" /> <img width="1000" alt="image" src="https://github.com/user-attachments/assets/3242cee8-a4ad-4536-bc12-64b491a8ef68" /> <img width="1000" alt="image" src="https://github.com/user-attachments/assets/a9b63e1c-94c0-41a4-a44f-b9f98e446793" /> ### Changes Made **Backend:** - Added `instructions` field to `AgentGraph` and `StoreListingVersion` database models - Updated `StoreSubmission`, `LibraryAgent`, and related Pydantic models - Modified store submission API routes to handle instructions parameter - Updated all database functions to properly save/retrieve instructions field - Added graceful handling for cases where database doesn't yet have the field **Frontend:** - Added instructions field to agent submission flow (PublishAgentModal) - Positioned below "Recommended Schedule" section as specified - Added instructions display in library/run flow (RunAgentModal) - Positioned above credentials section with informative blue styling - Added proper form validation with 2000 character limit - Updated all TypeScript types and API client interfaces ### Key Features - ✅ Optional field - fully backward compatible - ✅ Proper positioning in both submission and run flows - ✅ Character limit validation (2000 chars) - ✅ User-friendly display with "How to use this agent" styling - ✅ Only shows when instructions are provided ### Testing - Verified Pydantic model validation works correctly - Confirmed schema validation enforces character limits - Tested graceful handling of missing database fields - Code formatting and linting completed ## Test plan - [ ] Test agent submission with instructions field - [ ] Test agent submission without instructions (backward compatibility) - [ ] Verify instructions display correctly in run modal - [ ] 
Test character limit validation - [ ] Verify database migrations work properly 🤖 Generated with [Claude Code](https://claude.ai/code) --------- Co-authored-by: Claude <noreply@anthropic.com> |
||
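The optional, length-limited `instructions` field described above can be sketched like this. The real models are Pydantic; a plain dataclass is used here to keep the sketch self-contained, and the class name is hypothetical:

```python
# Sketch of the optional instructions field with its 2000-character limit;
# the real StoreSubmission models use Pydantic validation instead.
from dataclasses import dataclass
from typing import Optional

MAX_INSTRUCTIONS_LEN = 2000

@dataclass
class StoreSubmissionSketch:
    name: str
    instructions: Optional[str] = None  # optional => backward compatible

    def __post_init__(self):
        if self.instructions and len(self.instructions) > MAX_INSTRUCTIONS_LEN:
            raise ValueError("instructions must be at most 2000 characters")
```

Making the field default to `None` is what keeps existing submissions (which have no instructions) working unchanged.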
|
|
fc8c5ccbb6 |
feat(backend): enhance agent retrieval logic in store agent page (#10933)
This PR enhances the agent retrieval logic in the store database to ensure accurate fetching of the latest approved agent versions. The changes address scenarios where agents may have multiple versions with different approval statuses. ## 🔧 Changes Made ### Enhanced Agent Retrieval Logic (`get_store_agent_details`) - **Active Version Priority**: Added logic to prioritize fetching agents based on the `activeVersionId` when available - **Fallback to Latest Approved**: When no active version is set, the system now falls back to the latest approved version (sorted by version number descending) - **Improved Accuracy**: Ensures users always see the most relevant agent version based on the current store listing state ### Improved Agent Filtering (`get_my_agents`) - **Enhanced Store Listing Filter**: Modified the filter to only include store listings that have at least one available version - **Nested Version Check**: Added nested filtering to check for `isAvailable: true` in the versions, preventing empty or unavailable listings from appearing ## ✅ Testing Checklist - [x] Test fetching agent details with an active version set - [x] Test fetching agent details without an active version (should fall back to latest approved) - [x] Test `get_my_agents` returns only agents with available store listing versions - [x] Verify no agents with only unavailable versions appear in results - [x] Test with agents having multiple versions with different approval statuses |
||
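The selection rule in `get_store_agent_details` — prefer the listing's active version, otherwise fall back to the latest approved version — can be sketched as below. The data shapes here are hypothetical stand-ins for the Prisma records:

```python
# Sketch of the version-selection rule described above; dict shapes are
# hypothetical, not the actual database records.
def select_agent_version(listing: dict):
    versions = listing.get("versions", [])
    active_id = listing.get("activeVersionId")
    if active_id:  # active version takes priority when set
        for v in versions:
            if v["id"] == active_id:
                return v
    # fallback: latest approved version, by version number descending
    approved = [v for v in versions if v.get("approved")]
    return max(approved, key=lambda v: v["version"], default=None)
```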
|
|
7d2ab61546 |
feat(platform): Disable Trigger Setup through Builder (#10418)
We want users to set up triggers through the Library rather than the Builder. - Resolves #10413 https://github.com/user-attachments/assets/515ed80d-6569-4e26-862f-2a663115218c ### Changes 🏗️ - Update node UI to push users to Library for trigger set-up and management - Add note redirecting to Library for trigger set-up - Remove webhook status indicator and webhook URL section - Add `libraryAgent: LibraryAgent` to `BuilderContext` for access inside `CustomNode` - Move library agent loader from `FlowEditor` to `useAgentGraph` - Implement `migrate_legacy_triggered_graphs` migrator function - Remove `on_node_activate` hook (which previously handled webhook setup) - Propagate `created_at` from DB to `GraphModel` and `LibraryAgentPreset` models ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Existing node triggers are converted to triggered presets (visible in the Library) - [x] Converted triggered presets work - [x] Trigger node inputs are disabled and handles are hidden - [x] Trigger node message links to the correct Library Agent when saved |
||
|
|
c2f11dbcfa |
fix(blocks): Fix feedback loops in AI Structured Response Generator (#10932)
Improve the overall reliability of the AI Structured Response Generator block from ~40% to ~100%. This block has been giving me a lot of hassle over the past week and this improvement is an easy win. - Resolves #10916 ### Changes 🏗️ - Improve reliability of AI Structured Response Generator block - Fix feedback loops (total success rate ~40% -> 100%) - Improve system prompt (one-shot success rate ~40% -> ~76%) ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] JSON decode errors are turned into a useful feedback message - [x] LLM effectively corrects itself based on the feedback message |
||
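The feedback-loop fix above follows a retry-with-feedback pattern: instead of failing on malformed output, the JSON decode error is fed back to the model as a corrective message. A minimal sketch (the `llm` callable and function name are hypothetical, not the block's actual API):

```python
# Sketch of the retry-with-feedback loop described above: JSON decode
# errors become corrective feedback instead of hard failures.
import json

def generate_structured(llm, prompt: str, max_attempts: int = 3) -> dict:
    feedback = ""
    for _ in range(max_attempts):
        reply = llm(prompt + feedback)
        try:
            return json.loads(reply)
        except json.JSONDecodeError as exc:
            # useful feedback message the model can act on next attempt
            feedback = (
                f"\nYour previous reply was not valid JSON ({exc})."
                " Respond with a single JSON object and nothing else."
            )
    raise ValueError("model failed to produce valid JSON")
```

With a capable model, the corrective message lets the second attempt succeed, which is how total success rate can approach 100% even when the one-shot rate is lower.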
|
|
f82adeb959 |
feat(library): Add agent favoriting functionality (#10828)
### Need 💡 This PR introduces the ability for users to "favorite" agents in the library view, enhancing agent discoverability and organization. Favorited agents will be visually marked with a heart icon and prioritized in the library list, appearing at the top. This feature is distinct from pinning specific agent runs. ### Changes 🏗️ * **Backend:** * Updated `LibraryAgent` model in `backend/server/v2/library/model.py` to include the `is_favorite` field when fetching from the database. * **Frontend:** * Updated `LibraryAgent` TypeScript type in `autogpt-server-api/types.ts` to include `is_favorite`. * Modified `LibraryAgentCard.tsx` to display a clickable heart icon, indicating the favorite status. * Implemented a click handler on the heart icon to toggle the `is_favorite` status via an API call, including loading states and toast notifications. * Updated `useLibraryAgentList.ts` to implement client-side sorting, ensuring favorited agents appear at the top of the list. * Updated `openapi.json` to include `is_favorite` in the `LibraryAgent` schema and regenerated frontend API types. * Installed `@orval/core` for API generation. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Verify that the heart icon is displayed correctly on `LibraryAgentCard` for both favorited (filled red) and unfavorited (outlined gray) agents. - [x] Click the heart icon on an unfavorited agent: - [x] Confirm the icon changes to filled red. - [x] Verify a "Added to favorites" toast notification appears. - [x] Confirm the agent moves to the top of the library list. - [x] Check that the agent card does not navigate to the agent details page. - [x] Click the heart icon on a favorited agent: - [x] Confirm the icon changes to outlined gray. - [x] Verify a "Removed from favorites" toast notification appears. 
- [x] Confirm the agent's position adjusts in the list (no longer at the very top unless other sorting criteria apply). - [x] Check that the agent card does not navigate to the agent details page. - [x] Test the loading state: rapidly click the heart icon and observe the `opacity-50 cursor-not-allowed` styling. - [x] Verify that the sorting correctly places all favorited agents at the top, maintaining their original relative order within the favorited group, and the same for unfavorited agents. #### For configuration changes: - [ ] `.env.default` is updated or already compatible with my changes - [ ] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) --------- Co-authored-by: Cursor Agent <cursoragent@cursor.com> Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com> Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com> Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com> Co-authored-by: Reinier van der Leer <pwuts@agpt.co> |
||
|
|
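The favorite-first ordering tested above hinges on a *stable* sort keyed on `is_favorite`, which is what preserves each group's original relative order. A minimal sketch of that logic (shown in Python for illustration; the real implementation is the client-side sort in `useLibraryAgentList.ts`):

```python
def sort_agents(agents: list[dict]) -> list[dict]:
    """Favorited agents first. Python's sort is stable, so agents keep their
    original relative order within the favorited and unfavorited groups."""
    return sorted(agents, key=lambda a: not a.get("is_favorite", False))


agents = [
    {"name": "alpha", "is_favorite": False},
    {"name": "beta", "is_favorite": True},
    {"name": "gamma", "is_favorite": False},
    {"name": "delta", "is_favorite": True},
]
# Favorites float to the top; "beta" stays ahead of "delta", "alpha" ahead of "gamma".
ordered = sort_agents(agents)
```

The single-key sort is deliberate: sorting by `not is_favorite` rather than partitioning manually gets the stability guarantee for free.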
6f08a1cca7 | fix: the api key credentials weren't registering correctly (#10936) | ||
|
|
1ddf92eed4 |
fix(frontend): new agent run page design refinements (#10924)
## Changes 🏗️ Implements the following changes: 1. Reduce the margins between the runs on the left-hand side to around `6px` 2. Make agent inputs full width 3. Display the "Schedule setup" section in a second modal 4. Hide the "Delete agent" button while an agent is running 5. Copy changes around the actions for agents/runs 6. Set large button height to `52px` 7. Fix margins between the "+ New Run" button and the runs & schedules menu 8. Make card borders white Also... - improve the naming of some components to better reflect their context/usage - show in the inputs section when an agent already uses API keys or credentials - fix runs/schedules not auto-selecting once created ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Run the app locally with the new agent runs page enabled - [x] Test the above ### For configuration changes: None |
||
|
|
4c0dd27157 |
dx(platform): Add manual dispatch to deploy workflows (#10918)
When deploying from the infra repo, migrations aren't run which can cause issues. We need to be able to manually dispatch deployment from this repo so that the migrations are run as well. ### Changes 🏗️ - add manual dispatch to deploy workflows ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Either it works or it doesn't but this PR won't break anything existing |
||
|
|
17fcf68f2e |
feat: Separate OpenAI key for smart agent execution summary and other internal AI calls (#10930)
### Changes 🏗️ Separate the API key for internal usage (smart agent execution summary) and block usage. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Manual test after deployment |
||
|
|
381558342a |
fix(frontend/builder): Fix moved blocks disappearing on no-op save (#10927)
- Resolves #10926 ### Changes 🏗️ - Fix save no-op if graph has no changes ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Saving a graph after only moving nodes doesn't make those nodes disappear |
||
|
|
1fdc02467b |
feat(backend): Add comprehensive Prometheus instrumentation for observability (#10923)
## Summary - Implement comprehensive Prometheus metrics instrumentation for all FastAPI services - Add custom business metrics for graph/block executions - Enable dual publishing to both Grafana Cloud and internal Prometheus ## Related Infrastructure PR - https://github.com/Significant-Gravitas/AutoGPT_cloud_infrastructure/pull/214 ## Changes ### 📊 Metrics Infrastructure - Added `prometheus-fastapi-instrumentator` dependency for automatic HTTP metrics - Created centralized `instrumentation.py` module for consistent metrics across services - Instrumented REST API, WebSocket, and External API services ### 📈 Automatic HTTP Metrics All FastAPI services now automatically collect: - **Request latency**: Histogram with custom buckets (10ms to 60s) - **Request/response size**: Track payload sizes - **Request counts**: By method, endpoint, and status code - **Active requests**: Real-time count of in-progress requests - **Error rates**: 4xx and 5xx responses ### 🎯 Custom Business Metrics Added domain-specific metrics: - **Graph executions**: Count by status (success/error/validation_error) - **Block executions**: Count and duration by block_type and status - **WebSocket connections**: Active connection gauge - **Database queries**: Duration histogram by operation and table - **RabbitMQ messages**: Count by queue and status - **Authentication**: Attempts by method and status - **API key usage**: By provider and block type - **Rate limiting**: Hit count by endpoint ### 🔌 Service Endpoints Each service exposes metrics at `/metrics`: - REST API (port 8006): `/metrics` - WebSocket (port 8001): `/metrics` - External API: `/external-api/metrics` - Executor (port 8002): Already had metrics, now enhanced ### 🏷️ Kubernetes Integration Updated Helm charts with pod annotations: ```yaml prometheus.io/scrape: "true" prometheus.io/port: "8006" # or appropriate port prometheus.io/path: "/metrics" ``` ## Testing - [x] Install dependencies: `poetry install` - [x] Run services: `poetry run serve` - [x] Check metrics endpoints are accessible - [x] Verify metrics are being collected - [x] Confirm Grafana Agent can scrape metrics - [x] Test graph/block execution tracking - [x] Verify WebSocket connection metrics ## Performance Impact - Minimal overhead (~1-2ms per request) - Metrics are collected asynchronously - Can be disabled via `ENABLE_METRICS=false` env var ## Next Steps 1. Deploy to dev environment 2. Configure Grafana Cloud dashboards 3. Set up alerting rules based on metrics 4. Add more custom business metrics as needed 🤖 Generated with [Claude Code](https://claude.ai/code) --------- Co-authored-by: Claude <noreply@anthropic.com> |
||
|
|
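The request-latency histogram with custom buckets described above is the core shape Prometheus scrapes. A stdlib-only sketch of how cumulative (`le`) buckets accumulate observations — the bucket bounds below are illustrative values spanning the PR's 10 ms–60 s range, not the actual configuration, and the real metrics come from `prometheus-fastapi-instrumentator`:

```python
import bisect

# Illustrative latency bucket upper bounds, in seconds (10 ms up to 60 s).
BUCKETS = [0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10, 30, 60]


class LatencyHistogram:
    """Minimal histogram: per-bucket counts plus sum/count, as Prometheus expects."""

    def __init__(self, buckets=BUCKETS):
        self.buckets = buckets
        self.counts = [0] * (len(buckets) + 1)  # final slot is the +Inf bucket
        self.total = 0.0
        self.n = 0

    def observe(self, seconds: float) -> None:
        # bisect_left finds the first bucket whose bound is >= the observation,
        # matching Prometheus's "less than or equal" (le) bucket semantics.
        self.counts[bisect.bisect_left(self.buckets, seconds)] += 1
        self.total += seconds
        self.n += 1

    def cumulative(self) -> list[int]:
        """Cumulative counts per bucket, ending with the +Inf total."""
        out, running = [], 0
        for c in self.counts:
            running += c
            out.append(running)
        return out
```

An observation of 120 s lands only in the `+Inf` bucket, which is how a scrape still accounts for requests slower than the largest configured bound.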
f262bb9307 |
fix(platform): add timezone awareness to scheduler (#10921)
### Changes 🏗️ This PR restores and improves timezone awareness in the scheduler service to correctly handle daylight savings time (DST) transitions. The changes ensure that scheduled agents run at the correct local time even when crossing DST boundaries. #### Backend Changes: - **Scheduler Service (`scheduler.py`):** - Added `user_timezone` parameter to `add_graph_execution_schedule()` method - CronTrigger now uses the user's timezone instead of hardcoded UTC - Added timezone field to `GraphExecutionJobInfo` for visibility - Falls back to UTC with a warning if no timezone is provided - Extracts and includes timezone information from job triggers - **API Router (`v1.py`):** - Added optional `timezone` field to `ScheduleCreationRequest` - Fetches user's saved timezone from profile if not provided in request - Passes timezone to scheduler client when creating schedules - Converts `next_run_time` back to user timezone for display #### Frontend Changes: - **Schedule Creation Modal:** - Now sends user's timezone with schedule creation requests - Uses browser's local timezone if user hasn't set one in their profile - **Schedule Display Components:** - Updated to show timezone information in schedule details - Improved formatting of schedule information in monitoring views - Fixed schedule table display to properly show timezone-aware times - **Cron Expression Utils:** - Removed UTC conversion logic from `formatTime()` function - Cron expressions are now stored in the schedule's timezone - Simplified humanization logic since no conversion is needed - **API Types & OpenAPI:** - Added `timezone` field to schedule-related types - Updated OpenAPI schema to include timezone parameter ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [ ] I have tested my changes according to the test plan: ### Test Plan 🧪 #### 1. Schedule Creation Tests - [ ] Create a new schedule and verify the timezone is correctly saved - [ ] Create a schedule without specifying timezone - should use user's profile timezone - [ ] Create a schedule when user has no profile timezone - should default to UTC with warning #### 2. Daylight Savings Time Tests - [ ] Create a schedule for a daily task at 2:00 PM in a DST timezone (e.g., America/New_York) - [ ] Verify the schedule runs at 2:00 PM local time before DST transition - [ ] Verify the schedule still runs at 2:00 PM local time after DST transition - [ ] Check that the next_run_time adjusts correctly across DST boundaries #### 3. Display and UI Tests - [ ] Verify timezone is displayed in schedule details view - [ ] Verify schedule times are shown in user's local timezone in monitoring page - [ ] Verify cron expression humanization shows correct local times - [ ] Check that schedule table shows timezone information #### 4. API Tests - [ ] Test schedule creation API with timezone parameter - [ ] Test schedule creation API without timezone parameter - [ ] Verify GET schedules endpoint returns timezone information - [ ] Verify next_run_time is converted to user timezone in responses #### 5. Edge Cases - [ ] Test with various timezones (UTC, EST, PST, Europe/London, Asia/Tokyo) - [ ] Test with invalid timezone strings - should handle gracefully - [ ] Test scheduling at DST transition times (2:00 AM during spring forward) - [ ] Verify existing schedules without timezone info default to UTC #### 6. Regression Tests - [ ] Verify existing schedules continue to work - [ ] Verify schedule deletion still works - [ ] Verify schedule listing endpoints work correctly - [ ] Check that scheduled graph executions trigger as expected --------- Co-authored-by: Claude <noreply@anthropic.com> |
||
|
|
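The DST behavior the PR tests for (a daily 2:00 PM schedule staying at 2:00 PM local time across a transition) falls out of computing the next run in the user's timezone rather than in UTC. A stdlib sketch — `next_daily_run` is an illustrative helper, not the scheduler's API, which delegates this to APScheduler's `CronTrigger`:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9; needs system tzdata


def next_daily_run(after: datetime, hour: int, tz: str) -> datetime:
    """Next `hour`:00 wall-clock time in `tz` strictly after `after`.

    Doing the arithmetic on a zone-aware datetime means the UTC offset is
    recomputed for the new date, so the run stays at the same local time
    even when a DST transition happens in between.
    """
    local = after.astimezone(ZoneInfo(tz))
    candidate = local.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= local:
        candidate += timedelta(days=1)  # same wall-clock time the next day
    return candidate
```

For example, computed on 2025-03-08 (EST, UTC-5), the next 2:00 PM run in `America/New_York` lands on 2025-03-09 — after the spring-forward — and correctly carries the EDT offset of UTC-4, i.e. it fires an hour earlier in UTC terms than a hardcoded-UTC trigger would.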
5a6978b07d |
feat(frontend): Add expandable view for block output (#10773)
### Need for these changes 💥 https://github.com/user-attachments/assets/5b9007a1-0c49-44c6-9e8b-52bf23eec72c Users currently cannot view the full output result from a block when inspecting the Output Data History panel or node previews, as the content is clipped. This makes debugging and analysis of complex outputs difficult, forcing users to copy data to external editors. This feature improves developer efficiency and user experience, especially for blocks with large or nested responses, and reintroduces a highly requested functionality that existed previously. ### Changes 🏗️ * **New `ExpandableOutputDialog` component:** Introduced a reusable modal dialog (`ExpandableOutputDialog.tsx`) designed to display complete, untruncated output data. * **`DataTable.tsx` enhancement:** Added an "Expand" button (Maximize2 icon) to each data entry in the Output Data History panel. This button appears on hover and opens the `ExpandableOutputDialog` for a full view of the data. * **`NodeOutputs.tsx` enhancement:** Integrated the "Expand" button into node output previews, allowing users to view full output data directly from the node details. * The `ExpandableOutputDialog` provides a large, scrollable content area, displaying individual items in organized cards, with options to copy individual items or all data, along with execution ID and pin name metadata. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Navigate to an agent session with executed blocks. - [x] Open the Output Data History panel. - [x] Hover over a data entry to reveal the "Expand" button. - [x] Click the "Expand" button and verify the `ExpandableOutputDialog` opens, displaying the full, untruncated content. - [x] Verify scrolling works for large outputs within the dialog. - [x] Test "Copy Item" and "Copy All" buttons within the dialog. 
- [x] Navigate to a custom node in the graph. - [x] Inspect a node's output (if applicable). - [x] Hover over the output data to reveal the "Expand" button. - [x] Click the "Expand" button and verify the `ExpandableOutputDialog` opens, displaying the full content. --- Linear Issue: [OPEN-2593](https://linear.app/autogpt/issue/OPEN-2593/add-expandable-view-for-full-block-output-preview) --------- Co-authored-by: Cursor Agent <cursoragent@cursor.com> Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com> Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com> |
||
|
|
339ec733cb |
fix(platform): add timezone awareness to scheduler (#10921)
### Changes 🏗️ This PR restores and improves timezone awareness in the scheduler service to correctly handle daylight savings time (DST) transitions. The changes ensure that scheduled agents run at the correct local time even when crossing DST boundaries. #### Backend Changes: - **Scheduler Service (`scheduler.py`):** - Added `user_timezone` parameter to `add_graph_execution_schedule()` method - CronTrigger now uses the user's timezone instead of hardcoded UTC - Added timezone field to `GraphExecutionJobInfo` for visibility - Falls back to UTC with a warning if no timezone is provided - Extracts and includes timezone information from job triggers - **API Router (`v1.py`):** - Added optional `timezone` field to `ScheduleCreationRequest` - Fetches user's saved timezone from profile if not provided in request - Passes timezone to scheduler client when creating schedules - Converts `next_run_time` back to user timezone for display #### Frontend Changes: - **Schedule Creation Modal:** - Now sends user's timezone with schedule creation requests - Uses browser's local timezone if user hasn't set one in their profile - **Schedule Display Components:** - Updated to show timezone information in schedule details - Improved formatting of schedule information in monitoring views - Fixed schedule table display to properly show timezone-aware times - **Cron Expression Utils:** - Removed UTC conversion logic from `formatTime()` function - Cron expressions are now stored in the schedule's timezone - Simplified humanization logic since no conversion is needed - **API Types & OpenAPI:** - Added `timezone` field to schedule-related types - Updated OpenAPI schema to include timezone parameter ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [ ] I have tested my changes according to the test plan: ### Test Plan 🧪 #### 1. Schedule Creation Tests - [ ] Create a new schedule and verify the timezone is correctly saved - [ ] Create a schedule without specifying timezone - should use user's profile timezone - [ ] Create a schedule when user has no profile timezone - should default to UTC with warning #### 2. Daylight Savings Time Tests - [ ] Create a schedule for a daily task at 2:00 PM in a DST timezone (e.g., America/New_York) - [ ] Verify the schedule runs at 2:00 PM local time before DST transition - [ ] Verify the schedule still runs at 2:00 PM local time after DST transition - [ ] Check that the next_run_time adjusts correctly across DST boundaries #### 3. Display and UI Tests - [ ] Verify timezone is displayed in schedule details view - [ ] Verify schedule times are shown in user's local timezone in monitoring page - [ ] Verify cron expression humanization shows correct local times - [ ] Check that schedule table shows timezone information #### 4. API Tests - [ ] Test schedule creation API with timezone parameter - [ ] Test schedule creation API without timezone parameter - [ ] Verify GET schedules endpoint returns timezone information - [ ] Verify next_run_time is converted to user timezone in responses #### 5. Edge Cases - [ ] Test with various timezones (UTC, EST, PST, Europe/London, Asia/Tokyo) - [ ] Test with invalid timezone strings - should handle gracefully - [ ] Test scheduling at DST transition times (2:00 AM during spring forward) - [ ] Verify existing schedules without timezone info default to UTC #### 6. Regression Tests - [ ] Verify existing schedules continue to work - [ ] Verify schedule deletion still works - [ ] Verify schedule listing endpoints work correctly - [ ] Check that scheduled graph executions trigger as expected --------- Co-authored-by: Claude <noreply@anthropic.com> |
||
|
|
6575b655f0 |
fix(frontend): improve agent runs page loading state (#10914)
## Changes 🏗️ https://github.com/user-attachments/assets/356e5364-45be-4f6e-bd1c-cc8e42bf294d And also tidy up some of the logic around hooks. I also added an `okData` helper to avoid having to type-cast (`as`) so much with the generated types (given the `response` is a union depending on `status: 200 | 400 | 401` ...) ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Run PR locally with the `new-agent-runs` flag enabled - [x] Check the nice loading state ### For configuration changes: None |
||
|
|
7c2df24d7c |
fix(frontend): delete actions behind dialogs in agent runs view (#10915)
## Changes 🏗️ <img width="800" height="630" alt="Screenshot 2025-09-12 at 17 38 34" src="https://github.com/user-attachments/assets/103d7e10-e924-4831-b0e7-b7df608a205f" /> <img width="800" height="524" alt="Screenshot 2025-09-12 at 17 38 30" src="https://github.com/user-attachments/assets/aeec2ac7-4bea-4ec9-be0c-4491104733cb" /> <img width="800" height="750" alt="Screenshot 2025-09-12 at 17 38 26" src="https://github.com/user-attachments/assets/e0b28097-8352-4431-ae4a-9dc3e3bcf9eb" /> - All the `Delete` actions on the new Agent Library Runs page should be behind confirmation dialogs - Re-arrange the file structure a bit 💆🏽 - Make the buttons min-width a bit more generous ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Run the app locally - [x] Test the above #### For configuration changes: None |
||
|
|
23eafa178c |
fix(backend/db): Unbreak store materialized views refresh job (#10906)
- Resolves #10898 ### Changes 🏗️ - Fix and re-create `refresh_store_materialized_views` DB function and its pg_cron job ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Migration applies without issues (locally) - [x] Refresh function can be run without issues (locally) |
||
|
|
27fccdbf31 |
fix(backend/executor): Make graph execution status transitions atomic and enforce state machine (#10863)
## Summary - Fixed race condition issues in `update_graph_execution_stats` function - Implemented atomic status transitions using database-level constraints - Added state machine enforcement to prevent invalid status transitions - Eliminated code duplication and improved error handling ## Problem The `update_graph_execution_stats` function had race condition vulnerabilities where concurrent status updates could cause invalid transitions like RUNNING → QUEUED. The function was not durable and could result in executions moving backwards in their lifecycle, causing confusion and potential system inconsistencies. ## Root Cause Analysis 1. **Race Conditions**: The function used a broad OR clause that allowed updates from multiple source statuses without validating the specific transition 2. **No Atomicity**: No atomic check to ensure the status hadn't changed between read and write operations 3. **Missing State Machine**: No enforcement of valid state transitions according to execution lifecycle rules ## Solution Implementation ### 1. Atomic Status Transitions - Use database-level atomicity by including the current allowed source statuses in the WHERE clause during updates - This ensures only valid transitions can occur at the database level ### 2. State Machine Enforcement Define valid transitions as a module constant `VALID_STATUS_TRANSITIONS`: - `INCOMPLETE` → `QUEUED`, `RUNNING`, `FAILED`, `TERMINATED` - `QUEUED` → `RUNNING`, `FAILED`, `TERMINATED` - `RUNNING` → `COMPLETED`, `TERMINATED`, `FAILED` - `TERMINATED` → `RUNNING` (for resuming halted execution) - `COMPLETED` and `FAILED` are terminal states with no allowed transitions ### 3. Improved Error Handling - Early validation with clear error messages for invalid parameters - Graceful handling when transitions fail - return current state instead of None - Proper logging of invalid transition attempts ### 4. Code Quality Improvements - Eliminated code duplication in fetch logic - Added proper type hints and casting - Made status transitions constant for better maintainability ## Benefits ✅ **Prevents Invalid Regressions**: No more RUNNING → QUEUED transitions ✅ **Atomic Operations**: Database-level consistency guarantees ✅ **Clear Error Messages**: Better debugging and monitoring ✅ **Maintainable Code**: Clean logic flow without duplication ✅ **Race Condition Safe**: Handles concurrent updates gracefully ## Test Plan - [x] Function imports and basic structure validation - [x] Code formatting and linting checks pass - [x] Type checking passes for modified files - [x] Pre-commit hooks validation ## Technical Details The key insight is using the database query itself to enforce valid transitions by filtering on allowed source statuses in the WHERE clause. This makes the operation truly atomic and eliminates the race condition window that existed in the previous implementation. 🤖 Generated with [Claude Code](https://claude.ai/code) --------- Co-authored-by: Claude <noreply@anthropic.com> Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co> |
||
|
|
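The transition table above maps directly to code: deriving the allowed *source* statuses for a given target is exactly what feeds the SQL `WHERE` clause, so the validity check and the write happen in one atomic `UPDATE`. A sketch (the `VALID_STATUS_TRANSITIONS` name mirrors the PR's module constant; the enum values and helper are illustrative):

```python
from enum import Enum


class ExecutionStatus(str, Enum):
    INCOMPLETE = "INCOMPLETE"
    QUEUED = "QUEUED"
    RUNNING = "RUNNING"
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"
    TERMINATED = "TERMINATED"


# Each status maps to the set of statuses it may legally transition to.
VALID_STATUS_TRANSITIONS: dict[ExecutionStatus, set[ExecutionStatus]] = {
    ExecutionStatus.INCOMPLETE: {
        ExecutionStatus.QUEUED,
        ExecutionStatus.RUNNING,
        ExecutionStatus.FAILED,
        ExecutionStatus.TERMINATED,
    },
    ExecutionStatus.QUEUED: {
        ExecutionStatus.RUNNING,
        ExecutionStatus.FAILED,
        ExecutionStatus.TERMINATED,
    },
    ExecutionStatus.RUNNING: {
        ExecutionStatus.COMPLETED,
        ExecutionStatus.TERMINATED,
        ExecutionStatus.FAILED,
    },
    ExecutionStatus.TERMINATED: {ExecutionStatus.RUNNING},  # resume halted execution
    ExecutionStatus.COMPLETED: set(),  # terminal
    ExecutionStatus.FAILED: set(),  # terminal
}


def allowed_sources(target: ExecutionStatus) -> set[ExecutionStatus]:
    """Invert the table: which current statuses may move to `target`?

    This set goes into the UPDATE's WHERE clause, so a row is only written
    if it is still in an allowed source state at commit time.
    """
    return {s for s, targets in VALID_STATUS_TRANSITIONS.items() if target in targets}
```

Because `RUNNING` never appears in `allowed_sources(QUEUED)`, a concurrent writer can no longer regress an execution from RUNNING back to QUEUED, which was the original bug.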
fb8fbc9d1f |
fix(backend/db): Keep CreditTransaction entries on User delete (#10917)
This is a non-critical improvement for bookkeeping purposes. - Change `CreditTransaction` <- `User` relation to `ON DELETE NO ACTION` so that `CreditTransactions` are not automatically deleted when we delete a user's data. - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Migration applies without problemsautogpt-platform-beta-v0.6.27 |
||
|
|
6a86e70fd6 |
fix(backend/db): Keep CreditTransaction entries on User delete (#10917)
This is a non-critical improvement for bookkeeping purposes. ### Changes 🏗️ - Change `CreditTransaction` <- `User` relation to `ON DELETE NO ACTION` so that `CreditTransactions` are not automatically deleted when we delete a user's data. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Migration applies without problems |
||
|
|
6a2d7e0fb0 |
fix(frontend): handle avatar missing images better (#10903)
## Changes 🏗️ This should make `next/image` more tolerant when optimising images from certain origins, according to Claude. ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Deploy preview to dev - [x] Verify avatar images load better ### For configuration changes: None |
||
|
|
3d6ea3088e |
fix(backend): Add Airtable record normalization and upsert features (#10908)
Introduces normalization of Airtable record outputs to include all fields with appropriate empty values and optional field metadata. Enhances record creation to support finding existing records by specified fields and updating them if found, enabling upsert-like behavior. Updates block schemas and logic for list, get, and create operations to support these new features. ### Changes 🏗️ - Allows normalization of the responses of the Airtable blocks - Allows using the create-base operation to find bases that were already made ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Test that it doesn't break existing agents - [x] Test that the results for checkboxes are returned |
||
|
|
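The upsert-like behavior described above reduces to matching on a caller-specified set of key fields: update the first record whose key fields all match, otherwise create a new one. An in-memory sketch of that find-or-create logic — this is not the Airtable API; the record shape and the `upsert_record` helper are hypothetical:

```python
def upsert_record(
    records: list[dict], key_fields: list[str], new_fields: dict
) -> tuple[dict, bool]:
    """Update the first record whose `key_fields` all match `new_fields`;
    append a new record if none matches. Returns (record, created)."""
    match = {f: new_fields[f] for f in key_fields}
    for rec in records:
        if all(rec["fields"].get(f) == v for f, v in match.items()):
            rec["fields"].update(new_fields)  # found: merge in the new values
            return rec, False
    rec = {"fields": dict(new_fields)}  # not found: create
    records.append(rec)
    return rec, True
```

The same match-on-key-fields idea is what lets "create base" find a base that was already made instead of creating a duplicate.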
64b4480b1e | Merge branch 'master' into dev | ||
|
|
f490b01abb |
feat(frontend): Add Vercel Analytics and Speed Insights (#10904)
## Summary - Added Vercel Analytics for tracking page views and user interactions - Added Vercel Speed Insights for monitoring Web Vitals and performance metrics - Fixed incorrect placement of SpeedInsights component (was between html and head tags) ## Changes - Import Analytics and SpeedInsights components from Vercel packages - Place both components correctly within the body tag - Ensure proper HTML structure and Next.js best practices ## Test plan - [x] Verify components are imported correctly - [x] Confirm no HTML validation errors - [x] Test that analytics work when deployed to Vercel - [x] Verify Speed Insights metrics are being collected |
||
|
|
e56a4a135d |
Revert "fix(backend): Add Airtable record normalization + find/create base (#10891)"
This reverts commit
|
||
|
|
e70c970ab6 |
feat(frontend): new <Avatar /> component using next/image (#10897)
## Changes 🏗️ <img width="800" height="648" alt="Screenshot 2025-09-10 at 22 00 01" src="https://github.com/user-attachments/assets/eb396d62-01f2-45e5-9150-4e01dfcb71d0" /><br /> Adds a new `<Avatar />` component and uses it across the app. It is a copy of [shadcn/avatar](https://duckduckgo.com/?q=shadcn+avatar&t=brave&ia=web) with the following modifications: - renders images with [`next/image`](https://duckduckgo.com/?q=next+image&t=brave&ia=web) by default - this ensures avatars rendered on the app are optimised and resized ✔️ - it will work as long as all the domains are white-listed in `nextjs.config.mjs` - allows bypassing this and using a normal `<img />` tag via an `as` prop if needed - sometimes we might need to render images from a dynamic cdn 🤷🏽♂️ ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] ... ### For configuration changes: None |
||
|
|
3bbce71678 |
feat(builder): Block menu redesign - part 3 (#10864)
### Changes 🏗️ #### Block Menu Redesign - Part 3 This PR continues the block menu redesign effort, implementing the new content sections and improving the overall user experience. The changes focus on better organization, pagination, error handling, and visual consistency. #### Key Features Implemented: **1. New Content Organization** - **All Blocks Content**: Complete listing of all available blocks with category-based organization and infinite scroll support (`AllBlocksContent/`) - **My Agents Content**: Display and manage user's own agents with pagination (`MyAgentsContent/`) - **Marketplace Agents Content**: Browse and add marketplace agents with improved loading states (`MarketplaceAgentsContent/`) - **Integration Blocks**: Dedicated view for integration-specific blocks with better filtering (`IntegrationBlocks/`) - **Suggestion Content**: Smart suggestions based on user context and search history (`SuggestionContent/`) - **Integrations Content**: Browse available integrations in a dedicated view (`IntegrationsContent/`) **2. Enhanced UI Components** - **Paginated Lists**: New pagination components for blocks and integrations (`PaginatedBlocksContent/`, `PaginatedIntegrationList/`) - **Block List**: Reusable block list component with consistent styling (`BlockList/`) - **Improved Error Handling**: Comprehensive error states with retry functionality across all content types - **Loading States**: Skeleton loaders for better perceived performance **3. Infrastructure Improvements** - **Centralized Styles**: New `style.ts` file for consistent styling across components - **Better State Management**: Enhanced context provider with improved menu state handling - **Mock Flag Support**: Added feature flags for testing new block features - **Default State Enum**: Refactored to use enums for menu default states **4. Visual Assets** - Added 50+ new integration icons/logos for better visual representation - Updated existing integration images for consistency **5. Code Quality** - Improved error handling with proper error cards and retry mechanisms - Consistent formatting and import organization - Enhanced TypeScript types and interfaces - Better separation of concerns with dedicated hooks for each content type #### Technical Details: - **Files Changed**: 96 files - **Additions**: 1,380 lines - **Deletions**: 162 lines - **New Components**: 10+ new React components with dedicated hooks - **Integration Icons**: 50+ new PNG images for various integrations #### Breaking Changes: None - All changes are backwards compatible --- ### Test Plan 📋 - [x] Create a new agent and verify all blocks are accessible - [x] Test infinite scroll in "All Blocks" view - [x] Verify pagination works correctly in marketplace agents view - [x] Test error states by simulating network failures - [x] Check that all new integration icons display correctly - [x] Test adding agents from marketplace view - [x] Ensure skeleton loaders appear during data fetching > Generated by claude |
||
|
|
34fbf4377f |
fix(frontend): allow lazy loading of images (#10895)
The `next/image` component has built-in lazy loading, but some
components bypass it with the `priority` flag, so this PR
reverts that.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Lazy loading is working perfectly locally.
|
||
|
|
f682ef885a |
chore(frontend/deps-dev): Bump 16 dev dependencies to newer minor versions (#10837)
Bumps the development-dependencies group with 16 updates in the /autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
| [@chromatic-com/storybook](https://github.com/chromaui/addon-visual-tests) | `4.1.0` | `4.1.1` |
| [@playwright/test](https://github.com/microsoft/playwright) | `1.54.2` | `1.55.0` |
| [@storybook/addon-a11y](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/a11y) | `9.1.2` | `9.1.4` |
| [@storybook/addon-docs](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs) | `9.1.2` | `9.1.4` |
| [@storybook/addon-links](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links) | `9.1.2` | `9.1.4` |
| [@storybook/addon-onboarding](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding) | `9.1.2` | `9.1.4` |
| [@storybook/nextjs](https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs) | `9.1.2` | `9.1.4` |
| [@tanstack/eslint-plugin-query](https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query) | `5.83.1` | `5.86.0` |
| [@tanstack/react-query-devtools](https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools) | `5.84.2` | `5.86.0` |
| [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) | `24.2.1` | `24.3.0` |
| [chromatic](https://github.com/chromaui/chromatic-cli) | `13.1.3` | `13.1.4` |
| [concurrently](https://github.com/open-cli-tools/concurrently) | `9.2.0` | `9.2.1` |
| [eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next) | `15.4.6` | `15.5.2` |
| [eslint-plugin-storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/lib/eslint-plugin) | `9.1.2` | `9.1.4` |
| [msw](https://github.com/mswjs/msw) | `2.10.4` | `2.11.1` |
| [storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/core) | `9.1.2` | `9.1.4` |

Updates `@chromatic-com/storybook` from 4.1.0 to 4.1.1. From [`@chromatic-com/storybook`'s release notes](https://github.com/chromaui/addon-visual-tests/releases) and [changelog](https://github.com/chromaui/addon-visual-tests/blob/v4.1.1/CHANGELOG.md), v4.1.1 (Wed Aug 20 2025):

- 🐛 Bug Fix: Broaden version-range for storybook peerDependency [#389](https://redirect.github.com/chromaui/addon-visual-tests/pull/389) ([@ndelangen](https://github.com/ndelangen))
- Authors: Norbert de Langen ([@ndelangen](https://github.com/ndelangen)) |
2ffd249aac |
fix(backend/external-api): Improve security & reliability of API key storage (#10796)
Our API key generation, storage, and verification system has a couple of issues that need to be ironed out before full-scale deployment.

### Changes 🏗️

- Move from unsalted SHA256 to salted Scrypt hashing for API keys
- Avoid false-negative API key validation due to prefix collision
- Refactor API key management code for clarity
  - [refactor(backend): Clean up API key DB & API code (#10797)](https://github.com/Significant-Gravitas/AutoGPT/pull/10797)
  - Rename models and properties in `backend.data.api_key` for clarity
  - Eliminate redundant/custom/boilerplate error handling/wrapping in the API key endpoint call stack
  - Remove redundant/inaccurate `response_model` declarations from API key endpoints

Dependencies for `autogpt_libs`:

- Add `cryptography` as a dependency
- Add `pyright` as a dev dependency

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Performing these actions through the UI (still) works:
    - [x] Creating an API key
    - [x] Listing owned API keys
    - [x] Deleting an owned API key
  - [x] Newly created API key can be used in Swagger UI
  - [x] Existing API key can be used in Swagger UI
  - [x] Existing API key is re-encrypted with salt on use |
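The move from unsalted SHA256 to salted Scrypt can be sketched as below. This is an illustrative example using Python's standard-library `hashlib.scrypt`, not AutoGPT's actual implementation: the key prefix, cost parameters, and helper names are assumptions.

```python
import hashlib
import hmac
import os
import secrets

# Example cost parameters; real deployments should tune these to their hardware.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1, dklen=32)

def generate_api_key(prefix: str = "agpt_") -> str:
    """Create a new API key with a recognizable prefix (prefix is hypothetical)."""
    return prefix + secrets.token_urlsafe(32)

def hash_api_key(api_key: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); store both in the DB, never the raw key."""
    salt = os.urandom(16)  # per-key random salt, unlike plain SHA256
    digest = hashlib.scrypt(api_key.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt, digest

def verify_api_key(api_key: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    digest = hashlib.scrypt(api_key.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(digest, stored)
```

Because the salt is random per key, identical keys produce different stored digests, and precomputed rainbow tables against a single global hash no longer work.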
986245ec43 |
feat(frontend): run agent page improvements (#10879)
## Changes 🏗️

- Add all the cron scheduling options (_yearly, monthly, weekly, custom, etc._) using the new Design System components
- Add missing agent/run actions: export agent + delete agent

## Checklist 📋

### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally with `new-agent-runs` enabled
  - [x] Test the above

### For configuration changes:

None

autogpt-platform-beta-v0.6.26 |
f89717153f | Merge branch 'master' into dev | ||
5da41e0753 |
fix(backend): Add Airtable record normalization + find/create base (#10891)
## Summary

Fixes a critical issue with the Airtable API where empty/false fields are completely omitted from responses, causing inconsistent data structures. Also improves the create base block to prevent duplicate bases.

The Airtable API has a problematic behavior where it omits fields with "empty" values from responses:

- Unchecked checkboxes are missing entirely instead of returning `false`
- Empty number fields are missing instead of returning `0`
- This makes it impossible to distinguish between "field doesn't exist" and "field is false/empty"
- Users were getting inconsistent record structures that broke their workflows

### Changes 🏗️

#### 1. **Added Record Normalization** (`backend/blocks/airtable/_api.py`)

- New `get_table_schema()` function to fetch table field definitions
- New `get_empty_value_for_field()` to determine appropriate empty values per field type
- New `normalize_records()` to fill in missing fields with proper defaults:
  - Checkbox → `false`
  - Number/Currency/Percent/Duration/Rating → `0`
  - Text fields → `""`
  - Multiple selects/attachments/collaborators → `[]`
  - Dates/Single selects → `null`
- New `get_base_tables()` to fetch tables for a base

#### 2. **Enhanced List and Get Record Blocks** (`backend/blocks/airtable/records.py`)

- Added `normalize_output` parameter (defaults to `true`) to ensure all fields are present
- Added `include_field_metadata` parameter to optionally include field type information
- When normalization is enabled, fetches the schema once and normalizes all records
- Can be disabled by setting `normalize_output=false` for the raw Airtable response

#### 3. **Simplified Create Records Block**

- Added `skip_normalization` parameter (default `false`); output is normalized by default
- Records now always include all fields with proper empty values

#### 4. **Enhanced Create Base Block** (`backend/blocks/airtable/bases.py`)

- Added `find_existing` parameter (defaults to `true`) to prevent duplicate bases
- When finding an existing base, now fetches and returns table information
- Added `was_created` output field to indicate whether the base was created or found

### Testing 📋

- ✅ All Airtable block tests pass
- ✅ Tested normalization with records containing missing checkbox fields
- ✅ Verified all field types get appropriate empty values
- ✅ Tested create base find-or-create functionality
- ✅ Ran `poetry run format` and `poetry run lint`

### Migration Guide

This update makes the blocks behave more predictably:

- **List/Get Records**: All fields are now included by default. Set `normalize_output: false` if you need the raw Airtable response
- **Create Records**: Simply creates records, no more upsert confusion
- **Create Base**: Prevents duplicate bases by default. Set `find_existing: false` to force creation

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)

No configuration changes were required; all changes are code-only. |
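The normalization idea described above can be sketched as follows. The field-type strings follow Airtable's schema API naming, and the helper names mirror the PR description, but the bodies here are illustrative rather than the actual block code:

```python
# Type-appropriate empty values for fields Airtable omits from responses.
EMPTY_VALUES = {
    "checkbox": False,
    "number": 0, "currency": 0, "percent": 0, "duration": 0, "rating": 0,
    "singleLineText": "", "multilineText": "", "email": "", "url": "",
    "multipleSelects": [], "multipleAttachments": [], "multipleCollaborators": [],
    "date": None, "dateTime": None, "singleSelect": None,
}

def get_empty_value_for_field(field_type: str):
    """Return the empty value for a field type; unknown types fall back to None."""
    value = EMPTY_VALUES.get(field_type)
    # Copy list defaults so records never share a mutable empty list.
    return list(value) if isinstance(value, list) else value

def normalize_records(records: list[dict], schema: list[dict]) -> list[dict]:
    """Ensure every record contains every field defined in the table schema."""
    normalized = []
    for record in records:
        fields = dict(record.get("fields", {}))
        for field in schema:
            if field["name"] not in fields:
                fields[field["name"]] = get_empty_value_for_field(field["type"])
        normalized.append({**record, "fields": fields})
    return normalized
```

With this in place, a record whose checkbox was never ticked comes back as `{"Done": False}` instead of simply lacking the `Done` key, so downstream blocks see a stable shape regardless of field contents.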