## Changes 🏗️
Setup for the new Agent Runs page:
<img width="900" height="521" alt="Screenshot 2025-08-15 at 14 36 34"
src="https://github.com/user-attachments/assets/460d6611-4b15-4878-92d3-b477dc4453a9"
/>
It is behind a LaunchDarkly feature flag, `new-agent-runs`, so we
can progressively enable it in staging and later in production.
### Other improvements
<img width="350" height="291" alt="Screenshot_2025-08-15_at_14 28 08"
src="https://github.com/user-attachments/assets/972d2a1a-a4cd-4e92-b6d7-2dcf7f57c2db"
/>
- Added a new `<ErrorCard />` component to gracefully display API errors
when fetching data
- Moved some sub-components of the old library page to a nested
`/components` folder 📁
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested with the feature flag ON and OFF
### For configuration changes:
None
---------
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
In this project, I’ve added all the reusable, non-reactive components
that will be used in the new block menu. I’ve also included a new
library called `react-timeago` that helps us display relative times (e.g. "3 minutes ago").
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Everything works perfectly locally
## Changes 🏗️
- Run the API query generation as part of the `dev` command
- Update the `README` to reflect this
- Add a CI job to generate queries and type-check, to make sure we don't go out of sync
  - The job runs on both Front-end and Back-end changes
- Generate the files via script to load the BE URL dynamically from the env
- Remove generated files from Git
- Rename the `type-check` command to `types`
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI passes
- [x] `README` updates make sense
#### For configuration changes:
None
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
## Changes 🏗️
We had two flaky end-to-end tests:
- Build page → user can add two blocks and connect them
  - This was failing intermittently because the `Run` button on the builder is unreliable; sometimes you need to click it twice for it to work.
- Agent dashboard → edit actions
  - Some flaky tests asserted that agent submissions were missing; pulled in Abhi's fixes from https://github.com/Significant-Gravitas/AutoGPT/pull/10545
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] E2E pass on the CI
- [x] Changes make sense
### For configuration changes:
None
## Changes 🏗️
Add the following caps to the **Agent Activity Dropdown**:
- display activity only from the last 72h
- display up to 1000 items
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Open the agent activity locally with a large number of entries
- [x] It displays up to 1,000 items and respects the 72h cap
### For configuration changes
None
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
When the FirecrawlExtractBlock receives an output_schema, we currently
declare the field as a `str`. Pydantic therefore serialises the
JSON-looking value into a string, and the Firecrawl API rejects the request with:
`400 Bad Request – Invalid JSON schema. path: ['schema']`
Direct curl requests work because the same structure is sent as a proper
JSON object.
### Changes 🏗️
- Changed `output_schema` to `dict` instead of `str`
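Below is a minimal sketch of the field change, assuming a Pydantic input model on the block; the class name and defaults are illustrative, not the actual block code.

```python
from pydantic import BaseModel


class FirecrawlExtractInput(BaseModel):  # hypothetical input model for illustration
    url: str
    # Was `str`: Pydantic then serialised the schema as a quoted JSON string,
    # which Firecrawl's `schema` field rejects. As a dict it is sent as a real
    # JSON object, matching what a direct curl request sends.
    output_schema: dict = {}
```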
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test firecrawl.extract(..., schema) works with a dict rather than a str
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
- Resolves #10444
Sometimes, the order of nodes and/or links isn't consistent between
frontend and backend, which currently can result in unnecessary
re-saving of the graph when the user tries to run it.
Also, `sub_graphs` was not included in the frontend `Graph` type, which
can cause unchecked code issues when the object is propagated using
spread operators.
### Changes 🏗️
- fix(frontend/builder): Make `graphsEquivalent` insensitive to link and
node order
- dx(frontend): Fix typing of `Graph.sub_graphs` (and its variants)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Import an agent and open it in the builder
- Run it without making any changes to the graph itself
- [x] -> graph shouldn't re-save
## 🚨 CRITICAL: Double Transaction Bug
**Critical Issue:** Top-up transactions were being applied TWICE to user
balances, causing severe accounting errors.
**Example:**
- User with $160 balance tops up $50
- Expected: $210 balance
- Actual: $260 balance (extra $50 incorrectly credited)
This compromises the financial integrity of our credit system and
requires immediate fix.
### Changes 🏗️
1. **Added double-checked locking pattern in `_enable_transaction`**
(backend/data/credit.py)
- Added transaction re-check INSIDE the locked transaction block (lines
294-298)
- Prevents race condition when concurrent requests try to activate the
same transaction
- Ensures transaction can only be activated once, even with webhook
retries
2. **Enhanced error messages in Stripe webhook handler**
(backend/server/routers/v1.py)
- Added detailed error messages for better debugging of webhook failures
- Helps identify issues with payload validation or signature
verification
### Root Cause Analysis 🔍
**TOCTOU (Time-of-Check to Time-of-Use) Race Condition:**
The original code checked `transaction.isActive` outside the database
lock. Between this check and acquiring the lock, another concurrent
request (webhook retry or duplicate) could enter, causing both to
proceed with activation.
**Sequence:**
1. Request A: Checks `isActive=False` ✅
2. Request B: Checks `isActive=False` ✅ (webhook retry)
3. Request A: Acquires lock, activates transaction, adds $50
4. Request B: Waits for lock, then ALSO adds $50 ❌
**Contributing Factors:**
- Stripe webhook retry mechanism
- `@func_retry` decorator (up to 5 attempts)
- No database-level unique constraint on active transactions
- Missing atomicity between check and update
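For illustration, here is a minimal sketch of the double-checked locking pattern described in change 1; the helper names (`get_transaction`, `locked_transaction`, `mark_transaction_active`, `add_balance`) are hypothetical stand-ins for the actual `credit.py` internals.

```python
async def enable_transaction(db, transaction_key: str, user_id: str, amount: int):
    # First check (cheap, outside the lock): skip already-active transactions.
    tx = await db.get_transaction(transaction_key)
    if tx.is_active:
        return

    async with db.locked_transaction(user_id):
        # Second check (inside the lock): a concurrent webhook retry may have
        # activated this transaction while we were waiting for the lock.
        tx = await db.get_transaction(transaction_key)
        if tx.is_active:
            return

        await db.mark_transaction_active(transaction_key)
        await db.add_balance(user_id, amount)  # balance credited exactly once
```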
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified the double-check prevents duplicate transaction
activation
- [x] Tested concurrent webhook calls - only one succeeds in activating
transaction
- [x] Confirmed balance is only incremented once per transaction
- [x] Verified idempotency - multiple calls with same transaction_key
are safe
- [x] All existing credit system tests pass
- [x] Tested webhook error handling with invalid payloads/signatures
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
*Note: No configuration changes required - this is a code-only fix*
I’ve added a new LaunchDarkly flag to toggle between the new and old
block menu in the builder.
### Changes 🏗️
- A new flag named `NEW_BLOCK_MENU` has been added.
- A new block menu component has been created (a normal component).
It will be expanded with more components in the future; currently, it’s
just a one-line component.
- A new control panel has been created, which improves state
localisation and has a new design according to the design files.
<img width="1512" height="981" alt="Screenshot 2025-08-18 at 2 49 54 PM"
src="https://github.com/user-attachments/assets/3deeefe3-9e42-4178-9cf9-77773ed7e172"
/>
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Everything works perfectly on local.
This should fix sub-agent execution issues with passed-in credentials after a crucial data path was removed in #10568.
Additionally, some of the changes are to ensure the `credentials_input_schema` gets refreshed correctly when saving a new version of a graph in the builder.
### Changes 🏗️
- Include `graph_credentials_inputs` in `nodes_input_masks` passed into sub-agent execution
- Fix credentials input schema in `update_graph` and `get_library_agent_by_graph_id` return
- Improve error message on sub-graph validation failure
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Import agent with sub-agent(s) with required credentials inputs & run it -> should work
Added language selection links to the README for easier access to
translated versions: German, Spanish, French, Japanese, Korean,
Portuguese, Russian, and Chinese.
### Changes 🏗️
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] ...
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
## Summary
- Fixes scheduler pod crashes during peak scheduling periods (e.g.,
03:00:00)
- Reduces APScheduler ThreadPoolExecutor max_workers from 10 to 3
(matching scheduler_db_pool_size)
- Prevents event loop saturation that blocks health checks and causes
pod restarts
## Root Cause Analysis
During peak scheduling periods, multiple jobs execute simultaneously and
compete for the shared event loop through `run_async()`. This creates a
resource bottleneck where:
1. **ThreadPoolExecutor** runs up to 10 jobs concurrently
2. Each job calls `run_async()` which submits to the **same event loop**
that FastAPI health check needs
3. **Health check blocks** waiting for event loop availability
4. **Liveness probe fails** after 5 consecutive timeouts (50s)
5. **Pod gets killed** with SIGKILL (exit code 137)
6. **Executions orphaned** - created in DB but never published to
RabbitMQ
## Solution
Match `max_workers` to `scheduler_db_pool_size` (3) to prevent more
concurrent jobs than the system can handle without blocking critical
health checks.
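As a rough sketch (the settings wiring and scheduler class are illustrative, not the exact `scheduler.py` code), the executor cap looks like this:

```python
from apscheduler.executors.pool import ThreadPoolExecutor
from apscheduler.schedulers.background import BackgroundScheduler

SCHEDULER_DB_POOL_SIZE = 3  # assumed value of scheduler_db_pool_size

scheduler = BackgroundScheduler(
    executors={
        # Cap concurrent jobs at the DB pool size so a burst of simultaneous
        # schedules cannot saturate the shared event loop and starve the
        # FastAPI health check.
        "default": ThreadPoolExecutor(max_workers=SCHEDULER_DB_POOL_SIZE),
    }
)
```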
## Evidence
- Pod restart at exactly 03:05:48 when executions
e47cd564-ed87-4a52-999b-40804c41537a and
eae69811-4c7c-4cd5-b084-41872293185b were created
- 7 scheduled jobs triggered simultaneously at 03:00:00
- Health check normally responds in 0.007s but times out during high
concurrency
- Exit code 137 indicates SIGKILL from liveness probe failure
## Test Plan
- [ ] Monitor scheduler pod stability during peak scheduling periods
- [ ] Verify no executions remain QUEUED without being published to
RabbitMQ
- [ ] Confirm health checks remain responsive under load
- [ ] Check that job execution still works correctly with reduced
concurrency
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
- Generate API client for orval v7.11.2
- Fix type error in `useAgentSelectStep.ts`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Platform works
- [x] Updated codepath in `useAgentSelectStep.ts` works
## Summary
**HOTFIX for production** - Fixes the executor being stuck in an infinite retry
loop when RabbitMQ channels are closed
- Ensures proper reconnection by checking channel state before
attempting to consume messages
- Prevents accumulation of thousands of retry attempts (was seeing 7000+
retries)
## Changes
The executor was stuck repeatedly failing with "Channel is closed"
errors because the `continuous_retry` decorator was attempting to reuse
closed channels instead of creating new ones.
Added channel state checks (`is_ready`) before connecting in both:
- `_consume_execution_run()`
- `_consume_execution_cancel()`
When a channel is not ready (closed), the code now:
1. Disconnects the client (safe operation, checks if already
disconnected)
2. Establishes a fresh connection with new channel
3. Proceeds with message consumption
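A minimal sketch of that guard; the client attribute names (`is_ready`, `disconnect`, `connect`, `consume`) mirror the pattern described above rather than the exact executor API.

```python
async def ensure_channel_then_consume(client, queue_name: str, on_message):
    # Channel state check before consuming; `is_ready` covers both the
    # connection and the channel.
    if not client.is_ready:
        await client.disconnect()  # safe even if already disconnected
        await client.connect()     # fresh connection with a new channel
    await client.consume(queue_name, on_message)
```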
## Test plan
- [x] Verified the disconnect() method is safe to call on already
disconnected clients
- [x] Confirmed is_ready property checks both connection and channel
state
- [ ] Deploy to environment and verify executors reconnect properly
after channel failures
- [ ] Monitor logs to ensure no more "Channel is closed" retry loops
## Related Issues
Fixes critical production issue where:
- Executor pods show repeated "Channel is closed" errors
- 757 messages stuck in `graph_execution_queue`
- 102,286 messages in `failed_notifications` queue
- RabbitMQ logs show connections being closed due to missed heartbeats
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixes three critical issues in the admin dashboard spending page
(SECRT-1438):
- Fixed user search not working (P1) - query parameters weren't being
passed to backend
- Fixed broken pagination (P1) - server-side GET requests missing query
parameters
- Added visual feedback for credit updates (P3) - toast notifications,
loading states, auto-dismiss modal
## Root Cause
Server-side API requests weren't appending query parameters for
GET/DELETE requests in the `makeAuthenticatedRequest` function in
`helpers.ts`.
## Changes
- Added missing `transaction_filter` parameter to API client's
`getUsersHistory` method
- Fixed server-side GET request query parameter handling by updating
`makeAuthenticatedRequest` to use `buildUrlWithQuery`
- Added Suspense key to force re-render on URL parameter changes
- Added toast notifications for success/error states when adding credits
- Modal now closes automatically after successful submission
- Added loading state with disabled buttons during credit submission
- Page refreshes automatically to show updated balances
- Added debug logging to help diagnose parameter passing issues
## Test Plan
- [x] Search for users by email in admin spending dashboard
- [x] Navigate through pagination (Next/Previous buttons)
- [x] Filter by transaction type (Grant, Usage, etc.)
- [x] Add credits to a user account
- [x] Verify toast notification appears
- [x] Verify modal closes after successful submission
- [x] Verify balance updates without manual refresh
## Linear Issue
Closes [SECRT-1438](https://linear.app/autogpt/issue/SECRT-1438)
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
Update the Front-end `README` to clarify how to run the Front-end and
Back-end separately or together via Docker.
You can [preview the README
here](8f607ca852/autogpt_platform/frontend/README.md).
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `README` makes sense and looks good formatting wise
### For configuration changes:
None
## Changes 🏗️
Not a helpful console log to land in production... We should disallow
console logs all together on the Front-end code, but that is a separate,
bigger PR...
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Go to the signup page
- [x] Play with the password inputs
- [x] Password is not printed in the console
#### For configuration changes:
None
## Summary
This PR adds support for v0 by Vercel's Model API to the AutoGPT
platform, enabling users to leverage v0's framework-aware AI models
optimized for React and Next.js code generation.
v0 provides OpenAI-compatible endpoints with models specifically trained
for frontend development, making them ideal for generating UI components
and web applications.
### Changes 🏗️
#### Backend Changes
- **Added v0 Provider**: Added `V0 = "v0"` to `ProviderName` enum in
`/backend/backend/integrations/providers.py`
- **Added v0 Models**: Added three v0 models to `LlmModel` enum in
`/backend/backend/blocks/llm.py`:
- `V0_1_5_MD = "v0-1.5-md"` - Everyday tasks and UI generation (128K
context, 64K output)
- `V0_1_5_LG = "v0-1.5-lg"` - Advanced reasoning (512K context, 64K
output)
- `V0_1_0_MD = "v0-1.0-md"` - Legacy model (128K context, 64K output)
- **Implemented v0 Provider**: Added v0 support in the `llm_call()` function
using an OpenAI-compatible client with base URL `https://api.v0.dev/v1` (see the sketch below)
- **Added Credentials Support**: Created `v0_credentials` in
`/backend/backend/integrations/credentials_store.py` with UUID
`c4e6d1a0-3b5f-4789-a8e2-9b123456789f`
- **Cost Configuration**: Added model costs in
`/backend/backend/data/block_cost_config.py`:
- v0-1.5-md: 1 credit
- v0-1.5-lg: 2 credits
- v0-1.0-md: 1 credit
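For context, the provider call is essentially an OpenAI-compatible request pointed at v0's base URL; this is a hedged sketch, not the exact `llm_call()` code.

```python
from openai import OpenAI


def call_v0(api_key: str, model: str, messages: list[dict]) -> str:
    # v0 exposes OpenAI-compatible endpoints, so the standard client works
    # once the base URL is overridden.
    client = OpenAI(api_key=api_key, base_url="https://api.v0.dev/v1")
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content or ""
```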
#### Configuration Changes
- **Settings**: Added `v0_api_key` field to `Secrets` class in
`/backend/backend/util/settings.py`
- **Environment Variables**: Added `V0_API_KEY=` to
`/backend/.env.default`
### Features
- ✅ Full OpenAI-compatible API support
- ✅ Tool/function calling support
- ✅ JSON response format support
- ✅ Framework-aware completions optimized for React/Next.js
- ✅ Large context windows (up to 512K tokens)
- ✅ Integrated with platform credit system
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run existing block tests to ensure no regressions: `poetry run
pytest backend/blocks/test/test_block.py`
- [x] Verify AITextGeneratorBlock works with v0 models
- [x] Confirm all model metadata is correctly configured
- [x] Validate cost configuration is properly set up
- [x] Check that v0_credentials has a valid UUID4
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- Added `V0_API_KEY=` to `/backend/.env.default`
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- No changes needed - uses existing environment variable patterns
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
### Configuration Requirements
Users need to:
1. Obtain a v0 API key from [v0.app](https://v0.app) (requires Premium
or Team plan)
2. Add `V0_API_KEY=your-api-key` to their `.env` file
### API Documentation
- v0 API Docs: https://v0.app/docs/api
- Model API Docs: https://v0.app/docs/api/model
### Testing
All existing tests pass with the new v0 integration:
```bash
poetry run pytest backend/blocks/test/test_block.py::test_available_blocks -k "AITextGeneratorBlock" -xvs
# Result: PASSED
```
We want to support ~~proxy curl~~ enrichlayer as an integration, and
this is a baseline way to get there
### Changes 🏗️
- Adds a subset of proxycurl blocks based on the API docs:
~~https://nubela.co/proxycurl/docs#people-api-person-profile-endpoint~~ https://enrichlayer.com/docs/pc/#people-api
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] manually test the blocks with an API key
- [x] make sure the automated tests pass
---------
Co-authored-by: SwiftyOS <craigswift13@gmail.com>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: majdyz <zamil@agpt.co>
Our weekly summary emails are currently broken, hard-coded, and so ugly.
### Changes 🏗️
- Update the email template to look better
- Update the way we queue messages to work after other changes have occurred
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test by sending an email to yourself with the cron job set to every
minute, so you can see what it would look like
---------
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
## Summary
Streamline Supabase stack from 13 services to 3 core services for faster
startup and lower resource usage while maintaining full API
compatibility.
## Changes Made
### Core Services (Always Running)
- **Kong**: API gateway providing standard `/auth/v1/` endpoints and API
key validation
- **Auth**: GoTrue authentication service for user management
- **Database**: PostgreSQL with pgvector support for data persistence
### Removed Services (9 services eliminated)
- `rest` (PostgREST API) - not needed for auth-only usage
- `realtime` (real-time subscriptions) - not used by platform
- `storage` (file storage) - platform uses separate file handling
- `imgproxy` (image processing) - not required for core functionality
- `meta` (database metadata) - not needed for runtime operations
- `functions` (edge functions) - not utilized
- `analytics` (Logflare) - monitoring overhead not needed locally
- `vector` (log collection) - not required for basic operation
- `supavisor` (connection pooler) - direct DB access sufficient for
local dev
### Studio (Development Only)
- Moved to `local` profile: `docker compose --profile local up`
- Available for database management during development
- Excluded from normal startup for cleaner production-like environment
## Benefits
- **80% faster startup**: 3 services vs 13 services
- **Lower resource usage**: Significant reduction in memory/CPU
consumption
- **Simpler debugging**: Fewer moving parts, cleaner logs, easier
troubleshooting
- **Maintained compatibility**: All auth functionality preserved through
Kong
## Backwards Compatibility
✅ **No breaking changes**
- All existing auth endpoints (`/auth/v1/*`) work unchanged
- API key authentication (`anon`/`service_role`) preserved
- CORS and security policies maintained via Kong
- No application code changes required
## Testing
- [x] Docker compose starts successfully with minimal services
- [x] Auth endpoints accessible via Kong at `/auth/v1/`
- [x] Database connectivity maintained
- [x] Studio accessible with `--profile local` flag
- [x] All existing environment variables preserved
## File Changes
- `autogpt_platform/docker-compose.yml`: Removed unnecessary Supabase
services, moved studio to local profile
- `autogpt_platform/db/docker/docker-compose.yml`: Cleaned up service
dependencies on analytics/vector
🤖 Generated with [Claude Code](https://claude.ai/code)
## Summary
This PR adds the frontend service to the Docker Compose configuration,
enabling `docker compose up` to run the complete stack, including the
frontend. It also implements comprehensive environment variable
improvements, unified .env file support, and fixes Docker networking
issues.
## Key Changes
### 🐳 Docker Compose Improvements
- **Added frontend service** to `docker-compose.yml` and
`docker-compose.platform.yml`
- **Production build**: Uses `pnpm build + serve` instead of dev server
for better stability and lower memory usage
- **Service dependencies**: Frontend now waits for backend services
(`rest_server`, `websocket_server`) to be ready
- **YAML anchors**: Implemented DRY configuration to avoid duplicating
environment values
### 📁 Unified .env File Support
- **Frontend .env loading**: Automatically loads `.env` file during
Docker build and runtime
- **Backend .env loading**: Optional `.env` file support with fallback
to sensible defaults in `settings.py`
- **Single source of truth**: All `NEXT_PUBLIC_*` and API keys can be
defined in respective `.env` files
- **Docker integration**: Updated `.dockerignore` to include `.env`
files in build context
- **Git tracking**: Frontend and backend `.env` files are now trackable
(removed from gitignore)
### 🔧 Environment Variable Architecture
- **Dual environment strategy**:
- Server-side code uses Docker service names
(`http://rest_server:8006/api`)
- Client-side code uses localhost URLs (`http://localhost:8006/api`)
- **Comprehensive config**: Added build args and runtime environment
variables
- **Network compatibility**: Fixes connection issues between frontend
and backend containers
- **Shared backend variables**: Common environment variables (service
hosts, auth settings) centralized using YAML anchors
### 🛠️ Code Improvements
- **Centralized env-config helper** (`/frontend/src/lib/env-config.ts`)
with server-side priority
- **Updated all frontend code** to use shared environment helpers
instead of direct `process.env` access
- **Consistent API**: All environment variable access now goes through
helper functions
- **Settings.py improvements**: Better defaults for CORS origins and
optional .env file loading
### 🔗 Files Changed
- `docker-compose.yml` & `docker-compose.platform.yml` - Added frontend
service and shared backend env vars
- `frontend/Dockerfile` - Simplified build process to use .env files
directly
- `backend/settings.py` - Optional .env loading and better defaults
- `frontend/src/lib/env-config.ts` - New centralized environment
configuration
- `.dockerignore` - Allow .env files in build context
- `.gitignore` - Updated to allow frontend/backend .env files
- Multiple frontend files - Updated to use env helpers
- Updates to both auto installer scripts to work with the latest setup!
## Benefits
- ✅ **Single command deployment**: `docker compose up` now runs
everything
- ✅ **Better reliability**: Production build reduces memory usage and
crashes
- ✅ **Network compatibility**: Proper container-to-container
communication
- ✅ **Maintainable config**: Centralized environment variable management
with .env files
- ✅ **Development friendly**: Works in both Docker and local development
- ✅ **API key management**: Easy configuration through .env files for
all services
- ✅ **No more manual env vars**: Frontend and backend automatically load
their respective .env files
## Testing
- ✅ Verified Docker service communication works correctly
- ✅ Frontend responds and serves content properly
- ✅ Environment variables are correctly resolved in both server and
client contexts
- ✅ No connection errors after implementing service dependencies
- ✅ .env file loading works correctly in both build and runtime phases
- ✅ Backend services work with and without .env files present
### Checklist 📋
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
## Problem
After applying the CloudLoggingHandler fix to use
BackgroundThreadTransport (#10634), scheduler pods entered a new
deadlock during startup when uvicorn reconfigures logging.
## Root Cause
When uvicorn starts with a log_config parameter, it calls
`logging.config.dictConfig()` which:
1. Calls `_clearExistingHandlers()`
2. Which calls `logging.shutdown()`
3. Which tries to `flush()` all handlers including CloudLoggingHandler
4. CloudLoggingHandler with BackgroundThreadTransport tries to flush its
queue
5. The background worker thread tries to acquire the logging module lock
to check log levels
6. **Deadlock**: shutdown holds lock waiting for flush to complete,
worker thread needs lock to continue
## Thread Dump Evidence
From py-spy analysis of the stuck pod:
- **Thread 21 (FastAPI)**: Stuck in `flush()` waiting for background
thread to drain queue
- **Thread 13 (google.cloud.logging.Worker)**: Waiting for logging lock
in `isEnabledFor()`
- **Thread 1 (MainThread)**: Waiting for logging lock in `getLogger()`
during SQLAlchemy import
- **Threads 30, 31 (Sentry)**: Also waiting for logging lock
## Solution
Set `log_config=None` for all uvicorn servers. This prevents uvicorn
from calling `dictConfig()` and avoids the deadlock entirely.
**Trade-off**: Uvicorn will use its default logging configuration which
may produce duplicate log entries (one from uvicorn, one from the app),
but the application will start successfully without deadlocks.
## Changes
- Set `log_config=None` in all uvicorn.Config() calls
- Remove unused `generate_uvicorn_config` imports
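A minimal sketch of the change (the app import string and port are placeholders, not the actual service wiring):

```python
import uvicorn

config = uvicorn.Config(
    "backend.server.rest_api:app",  # hypothetical app import string
    host="0.0.0.0",
    port=8000,
    # Skipping log_config means uvicorn never calls logging.config.dictConfig(),
    # so logging.shutdown() is not triggered at startup and the flush deadlock
    # with CloudLoggingHandler's background transport cannot occur.
    log_config=None,
)
uvicorn.Server(config).run()
```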
## Testing
- [x] Verified scheduler pods can start and become healthy
- [x] Health checks respond properly
- [x] No deadlocks during startup
- [x] Application logs still appear (though may be duplicated)
## Related Issues
- Fixes the startup deadlock introduced after #10634
### Summary
Added a new documentation page and images for integrating AI/ML API with
AutoGPT, including step-by-step instructions. Updated LLM block to send
additional headers for requests to aimlapi.com. Improved provider
listing in index.md and added the new guide to mkdocs navigation. Builds
on and extends the integration work from
https://github.com/Significant-Gravitas/AutoGPT/pull/9996
### Changes 🏗️
This PR introduces official support and documentation for using **AI/ML
API** with the **AutoGPT platform**:
* 📄 **Added a new documentation page** `platform/aimlapi.md` with a
detailed step-by-step integration guide.
* 🖼️ **Added 12+ reference images** to `docs/content/imgs/aimlapi/` for
clear visual walkthrough.
* 🧠 **Updated the LLM block** (`llm.py`) to send extra headers
(`X-Project`, `X-Title`, `Referer`) in requests to `aimlapi.com` for
analytics and source attribution (see the sketch below).
* 📚 **Improved provider listing** in `index.md` — added section about
AI/ML API models and benefits.
* 🧭 **Added the new guide to the mkdocs navigation** via `mkdocs.yml`.
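As a hedged illustration of the header wiring (the endpoint URL and header values are assumptions, not the exact `llm.py` code):

```python
from openai import OpenAI

client = OpenAI(
    api_key="<AIML_API_KEY>",
    base_url="https://api.aimlapi.com/v1",  # assumed OpenAI-compatible endpoint
    default_headers={
        # Extra headers sent only for aimlapi.com requests, for analytics
        # and source attribution.
        "X-Project": "AutoGPT",
        "X-Title": "AutoGPT Platform",
        "Referer": "https://github.com/Significant-Gravitas/AutoGPT",
    },
)
```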
---
### Checklist 📋
#### For code changes:
* [x] I have clearly listed my changes in the PR description
* [x] I have made a test plan
* [x] I have tested my changes according to the test plan:
* [x] Successfully authenticated against `api.aimlapi.com`
* [x] Verified requests use correct headers
* [x] Confirmed `AI Text Generator` block returns completions for all
supported models
* [x] End-to-end tested: created, saved, and ran agent with AI/ML API
successfully
* [x] Verified outputs render correctly in the Output panel
No breaking changes introduced. Let me know if you'd like this guide
cross-referenced from other onboarding pages. ✅
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Refactored Windows and Linux setup scripts to streamline prerequisite checks, repository detection, and service startup. Removed Sentry configuration and related prompts for a simpler setup experience. Updated user messaging and improved error handling for common Docker issues.
## Changes 🏗️
<img width="800" height="687" alt="Screenshot 2025-08-12 at 15 52 41"
src="https://github.com/user-attachments/assets/0d2d70b8-e727-428b-915e-d4c108ab7245"
/>
<img width="800" height="772" alt="Screenshot 2025-08-12 at 15 52 53"
src="https://github.com/user-attachments/assets/b9790616-3754-455e-b8f6-58cd7f6b5a18"
/>
Update the Account Settings (`profile/settings`) form so that:
- it uses the new Design System components
- it is split into 2 forms (update email & notifications)
- the change password inputs have been removed; instead we link to the
`/reset-password` page
- it uses a normal API route and client query to update the email
This might also fix an error we are seeing when updating email
preferences on dev. My guess is it was failing because previously it
used a server action + Supabase and didn't have access to the auth cookies 🍪
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Navigate to `/profile/settings`
- [x] Can update the email
- [x] Can change notification preferences
- [x] New E2E tests pass on the CI and make sense
### For configuration changes:
None
### Changes 🏗️
Calls to the moderation API now strip whitespace from the API key before
including it in the 'X-API-Key' header, preventing authentication issues
due to accidental leading or trailing spaces.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Setup and run the platform with moderation and test it works
## Changes 🏗️
Use a skeleton for the marketplace loading state that visually represents
how the page will look. It looks a bit more stylish than the previous
`Loading...` text.
### Before
<img width="800" height="774" alt="Screenshot 2025-08-12 at 16 01 22"
src="https://github.com/user-attachments/assets/29e44a1a-2089-468c-a253-3a6b763ada5a"
/>
### After
<img width="800" height="761" alt="Screenshot 2025-08-12 at 16 01 01"
src="https://github.com/user-attachments/assets/5ad362ae-df1d-4a1b-90ae-9349a81a4d75"
/>
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Marketplace loading state looks good across screen sizes
### For configuration changes:
None
## Summary
This PR refactors all Gmail blocks to share a common base class
(`GmailBase`) and adds several improvements to email handling, including
proper HTML content support, async API calls, and fixing the
78-character line wrapping issue for plain text emails.
## Changes
### Architecture Improvements
- **Unified base class**: Created `GmailBase` abstract class that
consolidates common functionality across all Gmail blocks
- **Async API calls**: Converted all Gmail API calls to use
`asyncio.to_thread` for better performance and non-blocking operations
- **Code deduplication**: Moved shared methods like `_build_service`,
`_get_email_body`, `_get_attachments`, and `_get_label_id` to the base
class
### Email Content Handling
- **Smart content type detection**: Added automatic detection of HTML vs
plain text content
- **Fix 78-char line wrapping**: Plain text emails now use a no-wrap
policy (`max_line_length=0`) to prevent Gmail's default 78-character
hard line wrapping
- **Content type parameter**: Added optional `content_type` field to
Send, Draft, Reply, and Forward blocks allowing manual override ("auto",
"plain", or "html")
- **Proper MIME handling**: Created `_make_mime_text` helper function to
properly configure MIME types and policies
### New Features
- **Gmail Forward Block**: Added new `GmailForwardBlock` for forwarding
emails with proper thread preservation
- **Reply improvements**: Reply block now properly reads the original
email content when replying
### Bug Fixes
- Fixed issue where reply block wasn't reading the email it was replying
to
- Fixed attachment handling in multipart messages
- Improved error handling for base64 decoding
## Technical Details
The refactoring introduces:
- `NO_WRAP_POLICY = SMTP.clone(max_line_length=0)` to prevent line
wrapping in plain text emails
- UTF-8 charset support for proper Unicode/emoji handling
- Consistent async patterns using `asyncio.to_thread` for all Gmail API
calls
- Proper HTML to text conversion using html2text library when available
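A minimal sketch of the no-wrap MIME construction, assuming the helper name from this description (`_make_mime_text`); the content-type detection is simplified for illustration.

```python
from email.mime.text import MIMEText
from email.policy import SMTP

# max_line_length=0 disables wrapping, avoiding Gmail's 78-character hard wrap.
NO_WRAP_POLICY = SMTP.clone(max_line_length=0)


def _make_mime_text(body: str, content_type: str = "auto") -> MIMEText:
    is_html = content_type == "html" or (content_type == "auto" and "<html" in body.lower())
    subtype = "html" if is_html else "plain"
    # UTF-8 charset keeps Unicode/emoji intact; the no-wrap policy matters for
    # plain text and is harmless for HTML.
    return MIMEText(body, subtype, _charset="utf-8", policy=NO_WRAP_POLICY)
```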
## Testing
All existing tests pass. The changes maintain backward compatibility
while adding new optional parameters.
## Breaking Changes
None - all changes are backward compatible. The new `content_type`
parameter is optional and defaults to "auto" detection.
---------
Co-authored-by: Claude <claude@users.noreply.github.com>
This PR consolidates LaunchDarkly feature flag management by moving it
from autogpt_libs to backend and fixing several issues with boolean
handling and configuration management.
### Changes 🏗️
**Code Structure:**
- Move LaunchDarkly client from `autogpt_libs/feature_flag` to
`backend/util/feature_flag.py`
- Delete redundant `config.py` file and merge LaunchDarkly settings into
`backend/util/settings.py`
- Update all imports throughout the codebase to use
`backend.util.feature_flag`
- Move test file to `backend/util/feature_flag_test.py`
**Bug Fixes:**
- Fix `is_feature_enabled` function to properly return boolean values
instead of arbitrary objects that were always evaluating to `True`
- Add proper async/await handling for all `is_feature_enabled` calls
- Add better error handling when LaunchDarkly client is not initialized
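A rough sketch of the boolean handling (the client and context calls follow the LaunchDarkly Python SDK; the function shape itself is an assumption, not the exact `feature_flag.py` code):

```python
import ldclient
from ldclient import Context


async def is_feature_enabled(flag_key: str, user_id: str, default: bool = False) -> bool:
    client = ldclient.get()
    if not client.is_initialized():
        return default  # fail closed instead of raising when LD is unavailable

    value = client.variation(flag_key, Context.builder(user_id).build(), default)
    if not isinstance(value, bool):
        # Previously arbitrary objects leaked out of this function and were
        # always truthy; coerce non-boolean flag values back to the default.
        return default
    return value
```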
**Performance & Architecture:**
- Load Settings at module level instead of creating new instances inside
functions
- Remove unnecessary `sdk_key` parameter from
`initialize_launchdarkly()` function
- Simplify initialization by using centralized settings management
**Configuration:**
- Add `launch_darkly_sdk_key` field to `Secrets` class in settings.py
with proper validation alias
- Remove environment variable fallback in favor of centralized settings
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All existing feature flag tests pass (6/6 tests passing)
- [x] LaunchDarkly initialization works correctly with settings
- [x] Boolean feature flags return correct values instead of objects
- [x] Non-boolean flag values are properly handled with warnings
- [x] Async/await calls work correctly in AutoMod and activity status
generator
- [x] Code formatting and imports are correct
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Configuration Changes:**
- LaunchDarkly SDK key is now managed through the centralized Settings
system instead of a separate config file
- Uses existing `LAUNCH_DARKLY_SDK_KEY` environment variable (no changes
needed to env files)
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
Fixed Smart Decision Maker's function signature generation to properly
handle dynamic fields (e.g., `values_#_*`, `items_$_*`) when connecting
to any block as a tool.
### Context
When Smart Decision Maker calls other blocks as tools, it needs to
generate OpenAI-compatible function signatures. Previously, when
connected to blocks via dynamic fields (which get merged by the executor
at runtime), the signature generation would fail because blocks don't
inherently know about these dynamic field patterns.
### Changes 🏗️
- **Modified
`SmartDecisionMakerBlock._create_block_function_signature()`** to detect
and handle dynamic fields:
- Detects fields containing `_#_` (dict merge), `_$_` (list merge), or
`_@_` (object merge)
- Provides generic string schema for dynamic fields (OpenAI API
compatible); see the sketch after this list
- Falls back gracefully for unknown fields
- **Added comprehensive tests** for dynamic field handling with both
dictionary and list patterns
- **No changes needed to individual blocks** - this solution works
universally
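A hedged sketch of the detection logic (the helper name and schema shapes are illustrative, not the actual `_create_block_function_signature()` code):

```python
DYNAMIC_FIELD_MARKERS = ("_#_", "_$_", "_@_")  # dict, list, and object merge fields


def _schema_for_field(field_name: str, block_input_schema: dict) -> dict:
    if any(marker in field_name for marker in DYNAMIC_FIELD_MARKERS):
        # Dynamic fields are merged by the executor at runtime, so the block's
        # own schema doesn't describe them; expose a generic string schema
        # that the OpenAI function-calling API accepts.
        return {"type": "string", "description": f"Dynamic value for {field_name}"}
    # Fall back gracefully to the block's declared schema, or a plain string.
    return block_input_schema.get("properties", {}).get(field_name, {"type": "string"})
```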
### Why This Approach
Instead of modifying every block to handle dynamic fields (original PR
approach), we handle it centrally in Smart Decision Maker where the
function signatures are generated. This is cleaner and more
maintainable.
### Test Plan 📋
- [x] Created test cases for Smart Decision Maker generating function
signatures with dynamic dict fields (`_#_`)
- [x] Created test cases for Smart Decision Maker generating function
signatures with dynamic list fields (`_$_`)
- [x] Verified Smart Decision Maker can successfully call blocks like
CreateDictionaryBlock via dynamic connections
- [x] All existing Smart Decision Maker tests pass
- [x] Linting and formatting pass
---------
Co-authored-by: Claude <claude@users.noreply.github.com>
We force this note onto the end of the SDM block's system prompt:
"Only provide EXACTLY one function call; multiple tool calls are strictly
prohibited." GPT-5 is interpreting this as "only call one tool per task,"
which results in many agent runs that use a tool only once
(i.e., useless, low-effort answers).
### Changes 🏗️
Remove parallel tool-call system prompting entirely.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] automated tests.
Add py-spy for production-safe Python profiling across all backend services:
- Add py-spy dependency to pyproject.toml
- Grant SYS_PTRACE capability to Docker services for profiling access
- Enable low-overhead performance monitoring in development and production
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
This PR adds end-to-end tests for the profile form page. These
tests cover:
- Redirecting to the login page when the user is not authenticated.
- Saving profile changes successfully.
- Cancelling profile changes (skipped because we need to fix the form
for this test).
### Changes 🏗️
- Added test IDs inside the `ProfileInfoForm`.
- Created a page object for the profile form page.
- Added a test for this page in `profile-form.spec.ts`.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All tests pass locally
Copy of [feat(backend/AM): Integrate AutoMod content moderation - By
Bentlybro - PR
#10490](https://github.com/Significant-Gravitas/AutoGPT/pull/10490)
because I messed up the original 🤦
Adds AutoMod input and output moderation to the execution flow.
Introduces a new AutoMod manager and models, updates settings for
moderation configuration, and modifies execution result handling to
support moderation-cleared data. Moderation failures now clear sensitive
data and mark executions as failed.
<img width="921" height="816" alt="image"
src="https://github.com/user-attachments/assets/65c0fee8-d652-42bc-9553-ff507bc067c5"
/>
### Changes 🏗️
I have made some small changes to
``autogpt_platform\backend\backend\executor\manager.py`` to send the
needed info to the AutoMod system, which collects the data, combines it,
and makes the API call to AM; based on its reply, the execution is allowed
to run or not.
I also had to make small changes to
``autogpt_platform\backend\backend\data\execution.py`` to add checks
that allow me to clear the content from the blocks if it was flagged.
I am working on finalizing the AM repo; it will then be made public.
To note: we will want to put this behind a LaunchDarkly flag first, for
testing with the team, before we roll it out any further.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Setup and run the platform with ``automod_enabled`` set to False
and it works normally
- [x] Setup and run the platform with ``automod_enabled`` set to True,
set the AM URL and API Key and test it runs safe blocks normally
- [x] Test AM with content that would trigger it to flag and watch it
stop and clear all the blocks outputs
Message @Bentlybro for the URL and an API key to AM for local testing!
## Changes made to Settings.py
I have added a few new options to `settings.py` for the AutoMod config:
```python
# AutoMod configuration
automod_enabled: bool = Field(
default=False,
description="Whether AutoMod content moderation is enabled",
)
automod_api_url: str = Field(
default="",
description="AutoMod API base URL - Make sure it ends in /api",
)
automod_timeout: int = Field(
default=30,
description="Timeout in seconds for AutoMod API requests",
)
automod_retry_attempts: int = Field(
default=3,
description="Number of retry attempts for AutoMod API requests",
)
automod_retry_delay: float = Field(
default=1.0,
description="Delay between retries for AutoMod API requests in seconds",
)
automod_fail_open: bool = Field(
default=False,
description="If True, allow execution to continue if AutoMod fails",
)
automod_moderate_inputs: bool = Field(
default=True,
description="Whether to moderate block inputs",
)
automod_moderate_outputs: bool = Field(
default=True,
description="Whether to moderate block outputs",
)
```
and
```python
automod_api_key: str = Field(default="", description="AutoMod API key")
```
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
## Summary
Fix critical deadlock issue where scheduler pods would freeze completely
and become unresponsive to health checks, causing pod restarts and stuck
QUEUED executions.
## Root Cause Analysis
The scheduler was using `BlockingScheduler` which blocked the main
thread, and when concurrent jobs deadlocked in the async event loop, the
entire process would freeze - unable to respond to health checks or
process any requests.
From crash analysis:
- At 01:18:00, two jobs started executing concurrently
- At 01:18:01.482, last successful health check
- Process completely froze - no more logs until pod was killed at
01:18:46
- Execution `8174c459-c975-4308-bc01-331ba67f26ab` was created in DB but
never published to RabbitMQ
## Changes Made
### Core Deadlock Fix
- **Switch from BlockingScheduler to BackgroundScheduler**: Prevents
main thread blocking, allows health checks to work even if scheduler
jobs deadlock
- **Make all health_check methods async**: Makes health checks
completely independent of thread pools and more resilient to blocking
operations
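A minimal sketch of the scheduler and health-check shape after this change (module paths and the health-check return shape are assumptions):

```python
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
scheduler.start()  # returns immediately; the main thread stays free for the API server


async def health_check() -> dict:
    # Async and independent of the scheduler's job threads, so a deadlocked
    # job can no longer make the liveness probe time out.
    return {"status": "healthy", "scheduler_running": scheduler.running}
```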
### Enhanced Monitoring & Debugging
- **Add execution timing**: Track and log how long each graph execution
takes to create and publish
- **Warn on slow operations**: Alert when operations take >10 seconds,
indicating resource contention
- **Enhanced error logging**: Include elapsed time and exception types
in error messages
- **Better APScheduler event listeners**: Add listeners for missed jobs
and max instances with actionable messages
### Files Modified
- `backend/executor/scheduler.py` - Switch to BackgroundScheduler, async
health_check, timing monitoring
- `backend/util/service.py` - Base async health_check method
- `backend/executor/database.py` - Async health_check override
- `backend/notifications/notifications.py` - Async health_check override
## Test Plan
- [x] All existing tests pass (914 passed, 1 failed unrelated connection
issue)
- [x] Scheduler starts correctly with BackgroundScheduler
- [x] Health checks respond properly under load
- [x] Enhanced logging provides visibility into execution timing
## Impact
- **Prevents pod freezes**: Scheduler remains responsive even when jobs
deadlock
- **Better observability**: Clear visibility into slow operations and
failures
- **No dropped executions**: Jobs won't get stuck in QUEUED state due to
process freezes
- **Faster incident response**: Health checks and logs provide
actionable debugging info
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <noreply@anthropic.com>
# Enhanced Discord Integration Blocks
Introduces new blocks for sending DMs, embeds, files, and replies in
Discord, as well as blocks for retrieving user and channel information.
Enhances existing message blocks with additional metadata fields and
server/channel identification. Improves test coverage and input/output
schemas for all Discord-related blocks.
Co-Authored-By: Claude <claude@users.noreply.github.com>
## Why These Changes Are Needed 🎯
The existing Discord integration was limited to basic message sending
and reading. Users needed more sophisticated Discord functionality to
build comprehensive automation workflows:
1. **Limited messaging options** - Could only send plain text to
channels, no DMs, embeds, or file attachments
2. **Poor graph connectivity** - Blocks didn't output IDs needed for
chaining operations (e.g., couldn't reply to a message after sending it)
3. **No user management** - Couldn't get user information or send direct
messages
4. **Type safety issues** - Discord.py's incomplete type hints caused
linting errors
5. **No channel resolution** - Had to manually find channel IDs instead
of using names
### Changes 🏗️
#### New Blocks Added
- **SendDiscordDMBlock** - Send direct messages to users via their
Discord ID
- **SendDiscordEmbedBlock** - Create rich embedded messages with images,
fields, and formatting
- **SendDiscordFileBlock** - Upload any file type (images, PDFs, videos,
etc.) using MediaFileType
- **ReplyToDiscordMessageBlock** - Reply to specific messages in threads
- **DiscordUserInfoBlock** - Retrieve user profile information
(username, avatar, creation date, etc.)
- **DiscordChannelInfoBlock** - Resolve channel names to IDs and get
channel metadata
#### Enhanced Existing Blocks
- **ReadDiscordMessagesBlock**:
- Now outputs: `message_id`, `channel_id`, `user_id` (previously missing
all IDs)
- Enables workflows like: read message → reply to it, or read message →
DM the author
- **SendDiscordMessageBlock**:
- Now outputs: `message_id`, `channel_id` (previously had no outputs
except status)
- Enables tracking sent messages and replying to them later
#### Technical Improvements
- **MediaFileType Support**: SendDiscordFileBlock accepts data URIs,
URLs, or local paths
- **Defensive Programming**: Added runtime type checks for Discord.py's
incomplete typing
- **ID Passthrough**: DiscordUserInfoBlock passes through user_id for
chaining
- **Better Error Messages**: Clear feedback when operations fail (e.g.,
"Channel cannot receive messages")
- **Channel Flexibility**: Blocks accept both channel names and IDs
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
#### Test Plan 🧪
- [x] **Import and initialization**: All 8 Discord blocks import and
initialize without errors
- [x] **Type checking**: `poetry run format` passes with no type errors
- [x] **Interface connectivity**: Verified blocks can chain together:
- [x] ReadDiscordMessages → ReplyToDiscordMessage (via message_id,
channel_id)
- [x] ReadDiscordMessages → SendDiscordDM (via user_id)
- [x] SendDiscordMessage → ReplyToDiscordMessage (via message_id,
channel_id)
- [x] DiscordUserInfo → SendDiscordDM (via user_id passthrough)
- [x] DiscordChannelInfo → SendDiscordEmbed/File (via channel_id)
- [x] **MediaFileType handling**: SendDiscordFileBlock correctly
processes:
- [x] Data URIs (base64 encoded files)
- [x] URLs (downloads from web)
- [x] Local paths (from other blocks)
- [x] **Defensive checks**: Verified error handling for:
- [x] Non-text channels (forums, categories)
- [x] Private/DM channels without guilds
- [x] Missing attributes on channel objects
- [x] **Mock test data**: All blocks have appropriate test
inputs/outputs defined
## Example Workflows Now Possible 🚀
1. **Auto-reply to mentions**: Read messages → Check if bot mentioned →
Reply in thread
2. **File distribution**: Generate report → Send as PDF to Discord
channel
3. **User notifications**: Get user info → Check if online → Send DM
with alert
4. **Cross-platform sync**: Receive email attachment → Forward to
Discord channel
5. **Rich notifications**: Create embed with thumbnail → Add fields →
Send to announcement channel
## Breaking Changes ⚠️
None - all changes are backward compatible. Existing workflows using
SendDiscordMessageBlock and ReadDiscordMessagesBlock will continue to
work; they just now have additional outputs available.
## Dependencies 📦
No new dependencies added. Uses existing:
- `discord.py` (already in project)
- `aiohttp` (already in project)
- Backend utilities: `MediaFileType`, `store_media_file` (already in
project)
---------
Co-authored-by: Claude <claude@users.noreply.github.com>
## Summary
Corrects the context window for GPT5_CHAT, fixes provider for
CLAUDE_4_1_OPUS from 'openai' to 'anthropic', and adds a 600s timeout to
the Anthropic client call in llm_call.
## Changes 🏗️
- Changed GPT-5's context limit to be smaller (16k)
- Changed Claude's provider from openai to anthropic
- Added a 600s timeout to the Anthropic client call
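For reference, a hedged sketch of the timeout addition (client construction is illustrative, not the exact `llm_call` code):

```python
from anthropic import AsyncAnthropic

# 600s request timeout so long generations don't hang the call indefinitely.
client = AsyncAnthropic(api_key="<ANTHROPIC_API_KEY>", timeout=600.0)
```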
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] test all models and they work
### Summary
Implemented 5 additional GitHub blocks on top of the existing GitHub
Integration to enhance CI/CD workflows and code review automation
capabilities.
[New Github
Blocks_v41.json](https://github.com/user-attachments/files/21684665/New.Github.Blocks_v41.json)
<img width="902" height="1073" alt="Screenshot 2025-08-08 at 15 09 40"
src="https://github.com/user-attachments/assets/ebb6d33b-f3cd-4a56-acc6-56ace5a01274"
/>
### Changes 🏗️
- Added **GitHub CI Results Block** (`github/ci.py`): Fetch and analyze
CI/CD check runs, workflow statuses, and logs
- Added **GitHub Review Blocks** (`github/reviews.py`):
- Create PR reviews with comments
- Approve/request changes on PRs
- Add review comments to specific lines
- Fetch existing reviews and comments
- Dismiss stale reviews
### Related Tickets
- SECRT-1423: GitHub CI Results Integration
- SECRT-1426: GitHub PR Review Creation
- SECRT-1425: GitHub Review Comments
- SECRT-1424: GitHub Review Approval/Changes
- SECRT-1427: GitHub Review Management
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created and tested CI results block with various repositories
- [x] Tested PR review creation with comments
- [x] Verified review approval and change request functionality
- [x] Tested adding line-specific review comments
- [x] Confirmed fetching and dismissing reviews works correctly
## Summary
This PR fixes and enhances the Exa Websets implementation to resolve
issues with the expand_items parameter and improve the overall block
functionality. The changes address UI limitations with nested response
objects while providing a more comprehensive and user-friendly interface
for creating and managing Exa websets.
[Websets_v14.json](https://github.com/user-attachments/files/21596313/Websets_v14.json)
<img width="1335" height="949" alt="Screenshot 2025-08-05 at 11 45 07"
src="https://github.com/user-attachments/assets/3a9b3da0-3950-4388-96b2-e5dfa9df9b67"
/>
**Why these changes are necessary:**
1. **UI Compatibility**: The current implementation returns deeply
nested objects that cause the UI to crash. This PR flattens the input
parameters and returns simplified response objects to work around these
UI limitations.
2. **Expand Items Issue**: The `expand_items` toggle in the GetWebset
block was causing failures. This parameter has been removed as it's not
essential for the basic functionality.
3. **Missing SDK Integration**: The previous implementation used raw
HTTP requests instead of the official Exa SDK, making it harder to
maintain and more prone to errors.
4. **Limited Functionality**: The original implementation lacked support
for many Exa API features like imports, enrichments, and scope
configuration.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
1. **Added Pydantic models** (`model.py`):
- Created comprehensive type definitions for all Exa webset objects
- Added proper enums for status values and types
- Structured models to match the Exa API response format
2. **Refactored websets.py**:
- Replaced raw HTTP requests with the official `exa-py` SDK
- Flattened nested input parameters to avoid UI issues with complex
objects
- Enhanced `ExaCreateWebsetBlock` with support for:
- Search configuration with entity types, criteria, exclude/scope
sources
- Import functionality from existing sources
- Enrichment configuration with multiple formats
- Removed problematic `expand_items` parameter from `ExaGetWebsetBlock`
- Updated response objects to use simplified `Webset` model that returns
dicts for nested objects
3. **Updated webhook_blocks.py**:
- Disabled the webhook block temporarily (`disabled=True`) as it needs
further testing
4. **Added exa-py dependency**:
- Added official Exa Python SDK to `pyproject.toml` and `poetry.lock`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Created a new webset using the ExaCreateWebsetBlock with basic
search parameters
- [x] Verified the webset was created successfully in the Exa dashboard
- [x] Listed websets using ExaListWebsetsBlock and confirmed pagination
works
- [x] Retrieved individual webset details using ExaGetWebsetBlock
without expand_items
- [x] Tested advanced features including entity types, criteria, and
exclude sources
- [x] Confirmed the UI no longer crashes when displaying webset
responses
- [x] Verified the Docker environment builds successfully with the new
exa-py dependency
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
- Added `exa-py` dependency to backend requirements
### Additional Notes
- The webhook functionality has been temporarily disabled pending
further testing and UI improvements
- The flattened parameter approach is a workaround for current UI
limitations with nested objects
- Future improvements could include re-enabling nested objects once the
UI supports them better
## Summary
- Enabled the TikTok posting block that was previously disabled
- The block provides comprehensive TikTok-specific posting options
## Changes 🏗️
- Removed `disabled=True` from the TikTok posting block to enable its
functionality
- Added full TikTok API integration with all supported options
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified the TikTok block is now available in the block list
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
This PR standardizes health check error handling across all services by
introducing and using a consistent `UnhealthyServiceError` exception
type. This improves monitoring, debugging, and service reliability by
providing uniform error reporting when services are unhealthy.
### Changes 🏗️
- **Added `UnhealthyServiceError` class** in `backend/util/service.py` (see the sketch after this list):
- Custom exception for unhealthy service states
- Includes service name in error message
- Added to `EXCEPTION_MAPPING` for proper serialization
- **Updated health checks across services** to use
`UnhealthyServiceError`:
- **Database service** (`backend/executor/database.py`): Replace
`RuntimeError` with `UnhealthyServiceError` for database connection
failures
- **Scheduler service** (`backend/executor/scheduler.py`): Replace
`RuntimeError` with `UnhealthyServiceError` for scheduler initialization
and running state checks
- **Notification service** (`backend/notifications/notifications.py`):
- Replace `RuntimeError` with `UnhealthyServiceError` for RabbitMQ
configuration issues
- Added new `health_check()` method to verify RabbitMQ readiness
- **REST API** (`backend/server/rest_api.py`): Replace `RuntimeError`
with `UnhealthyServiceError` for database health checks
- **Updated imports** across all affected files to include
`UnhealthyServiceError`
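For illustration, a minimal sketch of the pattern described above (simplified; the actual class in `backend/util/service.py` has more context):
```python
# Hedged sketch of a service-specific health check exception, simplified for illustration.
class UnhealthyServiceError(Exception):
    """Raised when a service's health check fails."""

    def __init__(self, service_name: str, reason: str = ""):
        detail = f": {reason}" if reason else ""
        super().__init__(f"Service '{service_name}' is unhealthy{detail}")


def health_check(db_connected: bool) -> None:
    # Replaces the old `raise RuntimeError(...)` call sites with a typed exception.
    if not db_connected:
        raise UnhealthyServiceError("database", "connection is not established")
```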
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified health check endpoints return appropriate errors when
services are unhealthy
- [x] Confirmed services start up properly and health checks pass when
healthy
- [x] Tested error serialization through API responses
- [x] Verified no breaking changes to existing functionality
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
No configuration changes were made in this PR - only code changes to
improve error handling consistency.
---------
Co-authored-by: Claude <noreply@anthropic.com>
I have added E2E tests for the agent dashboard page.
It includes tests such as:
- dashboard page loads successfully
- submit agent button works correctly
- agent table displays data correctly
- agent table actions work correctly
I’ve also updated the E2E test script to include some static agent
submissions, so I can test that they load on the frontend.
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All tests are working perfectly locally
<img width="469" height="177" alt="Screenshot 2025-08-08 at 12 13 42 PM"
src="https://github.com/user-attachments/assets/5e37afc3-c151-476a-84de-0a06f44a0722"
/>
## Summary
- Create dedicated notification service entry point
(backend.notification:main)
- Remove NotificationManager from scheduler service for better
separation of concerns
- Update docker-compose to run notification service on dedicated port
8007
- Configure all services to communicate with separate notification
service
This refactoring separates the notification service from the scheduler
service, allowing them to run as independent microservices instead of
two processes in the same pod.
## Changes Made
- **New notification service entry point**: Created
`backend/backend/notification.py` with dedicated main function
- **Updated pyproject.toml**: Added notification service entry point
registration
- **Modified scheduler service**: Removed NotificationManager from
`backend/backend/scheduler.py`
- **Docker Compose updates**: Added notification_server service on port
8007, updated NOTIFICATIONMANAGER_HOST references
## Test plan
- [x] Verify notification service starts correctly with new entry point
- [x] Confirm scheduler service runs without notification manager
- [x] Test docker-compose configuration with separate services
- [x] Validate service discovery between microservices
- [x] Run linting and type checking
🤖 Generated with [Claude Code](https://claude.ai/code)
## Summary
This PR resolves unclosed HTTP client session errors that were occurring
in the backend, particularly during file uploads and service-to-service
communication.
### Key Changes
- **Fixed GCS storage operations**: Convert
`gcloud.aio.storage.Storage()` to use async context managers in
`media.py` and `cloud_storage.py`
- **Enhanced service client cleanup**: Added proper cleanup methods to
`DynamicClient` class in `service.py` with `__del__` fallback and
context manager support
- **Application shutdown cleanup**: Added cloud storage handler cleanup
to FastAPI application lifespan
- **Updated test mocks**: Fixed test fixtures to properly mock async
context manager behavior
### Root Cause Analysis
The "Unclosed client session" and "Unclosed connector" errors were
caused by:
1. **GCS storage clients** not using context managers (agent image
uploads)
2. **Service HTTP clients** (`httpx.Client`/`AsyncClient`) not being
properly cleaned up in the `DynamicClient` class
### Technical Details
- All `gcloud.aio.storage.Storage()` instances now use `async with`
context managers
- `DynamicClient` class now has proper cleanup methods and context
manager support
- Application shutdown hook ensures cloud storage handlers are properly
closed
- Test fixtures updated to mock async context manager protocol
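For illustration, a hedged sketch of the async-context-manager pattern described above (bucket and blob names are made up):
```python
# Hedged sketch using gcloud-aio-storage; names are illustrative, not the project's.
from gcloud.aio.storage import Storage


async def upload_media(bucket: str, blob_name: str, data: bytes) -> None:
    # "async with" guarantees the underlying aiohttp session is closed,
    # avoiding "Unclosed client session" / "Unclosed connector" warnings.
    async with Storage() as client:
        await client.upload(bucket, blob_name, data)
```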
### Testing
- ✅ All media upload tests pass
- ✅ Service client tests pass
- ✅ Linting and formatting pass
## Test plan
- [ ] Deploy to staging environment
- [ ] Monitor logs for "Unclosed client session" errors (should be
eliminated)
- [ ] Verify file upload functionality works correctly
- [ ] Check service-to-service communication operates normally
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
This PR addresses the recurring job validation failures by adding graph
validation before scheduling jobs. Previously, validation errors only
occurred at runtime during job execution, making it difficult to
communicate errors to users for scheduled recurring jobs.
### Changes 🏗️
- **Extract validation logic**: Created
`validate_and_construct_node_execution_input` wrapper function that
centralizes graph fetching, credential mapping, and validation logic
- **Add pre-scheduling validation**: Modified
`add_graph_execution_schedule` to validate graphs before creating
scheduled jobs
- **Make construct function private**: Renamed
`construct_node_execution_input` to `_construct_node_execution_input` to
prevent direct usage and encourage use of the wrapper
- **Reduce code duplication**: Eliminated duplicate validation logic
between scheduler and execution paths
- **Improve scheduler lifecycle management**:
- Enhanced cleanup process with proper event loop shutdown sequence
- Added graceful event loop thread termination with timeout
- Fixed thread lifecycle management to prevent resource leaks
- **Add helper utilities**:
- Created `run_async` helper to reduce
`asyncio.run_coroutine_threadsafe` boilerplate
- Added `SCHEDULER_OPERATION_TIMEOUT_SECONDS` constant for consistent
timeout handling across all scheduler operations
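A minimal sketch of the `run_async` helper described above (the signature is assumed, not the exact one in the scheduler):
```python
# Hedged sketch; the real helper in the scheduler module may differ in details.
import asyncio
from typing import Any, Coroutine, TypeVar

T = TypeVar("T")

SCHEDULER_OPERATION_TIMEOUT_SECONDS = 300  # single source of truth for scheduler timeouts


def run_async(coro: Coroutine[Any, Any, T], loop: asyncio.AbstractEventLoop) -> T:
    """Run a coroutine on the scheduler's event loop from a worker thread and wait for it."""
    future = asyncio.run_coroutine_threadsafe(coro, loop)
    return future.result(timeout=SCHEDULER_OPERATION_TIMEOUT_SECONDS)
```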
### Technical Details
**Validation Flow:**
The validation now happens in `add_graph_execution_schedule` before
calling `scheduler.add_job()`, ensuring that:
1. Graph exists and is accessible to the user
2. All credentials are valid and available
3. Graph structure and node configurations are valid
4. Starting nodes are present and properly configured
This uses the same validation logic as runtime execution, guaranteeing
consistency.
**Scheduler Lifecycle Improvements:**
- **Proper cleanup sequence**: Event loop is stopped before thread
termination
- **Thread management**: Added global tracking of event loop thread for
proper cleanup
- **Timeout consistency**: All scheduler operations now use the same
300-second timeout
- **Resource management**: Prevents potential memory leaks from unclosed
event loops
**Code Quality Improvements:**
- **DRY principle**: `run_async` helper eliminates repeated
`asyncio.run_coroutine_threadsafe` patterns
- **Single source of truth**: All timeout values use
`SCHEDULER_OPERATION_TIMEOUT_SECONDS` constant
- **Cleaner abstractions**: Direct utility function calls instead of
unnecessary wrapper methods
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified imports work correctly for both scheduler and utils
modules
- [x] Confirmed code passes all linting and type checking
- [x] Validated that existing functionality remains intact
- [x] Tested that validation logic is properly extracted and reused
- [x] Verified scheduler cleanup process works correctly
- [x] Confirmed thread lifecycle management improvements
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
*Note: No configuration changes were required for this fix.*
## Impact
- **Prevents runtime failures**: Invalid graphs are caught before
scheduling instead of failing silently during execution
- **Better error communication**: Validation errors surface immediately
when scheduling
- **Improved resource management**: Proper event loop and thread cleanup
prevents memory leaks
- **Enhanced maintainability**: Single source of truth for validation
logic and consistent timeout handling
- **Reduced code duplication**: Eliminated ~30+ lines of duplicate code
across validation and async execution patterns
- **Better developer experience**: Cleaner code with helper functions
and consistent patterns
Resolves the TODO comment: "We need to communicate this error to the
user somehow" in scheduler.py:107
Co-authored-by: Claude <noreply@anthropic.com>
Currently, we’re only showing the top 20 agents, but we need to display
all of them until we add “see more” call-to-action buttons.
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All tests are working perfectly
- [x] It's working manually as well
The RabbitMQ connection is unreliable (fixing it is a separate issue)
and sometimes gets restarted. The scope of this PR is to avoid
operations breaking due to executing on a stale, broken connection.
### Changes 🏗️
Fix executor running RabbitMQ operations on closed/closing connection
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manually kill rabbitmq and see how it goes while executing an
agent
Introduces discriminated unions for time, date, and date-time format
selection, supporting both strftime and ISO 8601 (with timezone and
microsecond options). Updates schemas, test cases, and block logic to
handle the new format types, improving flexibility and standards
compliance for time and date outputs.
<!-- Clearly explain the need for these changes: -->
### Why these changes are needed
Users need to output timestamps in ISO 8601/RFC 3339 format for API
integrations and standardized data exchange. The previous implementation
only supported strftime formatting, which made it difficult to generate
properly formatted timestamps with timezone information. This change
enables:
- **Standards compliance**: ISO 8601 and RFC 3339 compliant timestamps
- **Timezone support**: 38 timezone options covering all UTC offsets
globally
- **API compatibility**: Many APIs require RFC 3339 timestamps (e.g.,
"2011-06-03T10:00:00-07:00")
- **Backward compatibility**: Existing workflows continue to work with
default strftime format
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- **Added discriminated union format types** for all time/date blocks (sketched after this list):
- `GetCurrentTimeBlock`: Now supports `TimeStrftimeFormat` and
`TimeISO8601Format`
- `GetCurrentDateBlock`: Now supports `DateStrftimeFormat` and
`DateISO8601Format`
- `GetCurrentDateAndTimeBlock`: Now supports `StrftimeFormat` and
`ISO8601Format`
- **Implemented shared timezone support**:
- Created `TimezoneLiteral` type with 38 timezone options (all UTC
offsets)
- Supports fractional offsets (e.g., India UTC+05:30, Nepal UTC+05:45)
- Deduplicated timezone lists across all format classes
- **Added ISO 8601 format features**:
- Timezone-aware timestamps with proper offset formatting
- Optional microseconds inclusion
- RFC 3339 compliance (subset of ISO 8601 with mandatory timezone)
- **Updated test cases** for all three blocks to verify:
- Default behavior unchanged (backward compatibility)
- Custom strftime formats still work
- ISO 8601 format produces correct output
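A hedged sketch of the discriminated-union idea above; the class and field names are illustrative, not the exact ones added to the blocks:
```python
# Hedged sketch: strftime vs. ISO 8601 output selected via a discriminated union.
from datetime import datetime, timedelta, timezone
from typing import Literal, Union

from pydantic import BaseModel, Field


class StrftimeFormat(BaseModel):
    format_type: Literal["strftime"] = "strftime"
    format: str = "%H:%M:%S"


class ISO8601Format(BaseModel):
    format_type: Literal["iso8601"] = "iso8601"
    utc_offset_hours: float = 0.0  # e.g. 5.5 for India (UTC+05:30)
    include_microseconds: bool = False


FormatChoice = Union[StrftimeFormat, ISO8601Format]


class TimeInput(BaseModel):
    # Pydantic picks the right model based on the `format_type` discriminator.
    format: FormatChoice = Field(default_factory=StrftimeFormat, discriminator="format_type")


def render(choice: FormatChoice) -> str:
    if isinstance(choice, StrftimeFormat):
        return datetime.now().strftime(choice.format)
    now = datetime.now(timezone(timedelta(hours=choice.utc_offset_hours)))
    if not choice.include_microseconds:
        now = now.replace(microsecond=0)
    return now.isoformat()  # RFC 3339-style, e.g. "2011-06-03T10:00:00-07:00"


print(render(ISO8601Format(utc_offset_hours=-7.0)))
```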
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Verified backward compatibility - default strftime format
unchanged
- [x] Tested ISO 8601 format with UTC timezone
- [x] Tested ISO 8601 format with various timezones (India, New York,
etc.)
- [x] Tested microseconds option for ISO formats
- [x] Verified all existing tests pass for GetCurrentTimeBlock
- [x] Verified all existing tests pass for GetCurrentDateBlock
- [x] Verified all existing tests pass for GetCurrentDateAndTimeBlock
- [x] Manually tested each block with different format configurations
- [x] Confirmed RFC 3339 compliance for timestamps with mandatory
timezone
---------
Co-authored-by: Claude <claude@users.noreply.github.com>
Reverts Significant-Gravitas/AutoGPT#10536 to bring platform back up due
to this error:
```
│ Error creating Supabase client Error: @supabase/ssr: Your project's URL and API key are required to create a Supabase client! │
│ │
│ Check your Supabase project's API settings to find these values │
│ │
│ https://supabase.com/dashboard/project/_/settings/api │
│ at <unknown> (https://supabase.com/dashboard/project/_/settings/api) │
│ at bX (.next/server/chunks/3873.js:6:90688) │
│ at <unknown> (.next/server/chunks/150.js:6:13460) │
│ at n (.next/server/chunks/150.js:6:13419) │
│ at o (.next/server/chunks/150.js:6:14187) │
│ ⨯ Error: Your project's URL and Key are required to create a Supabase client! │
│ │
│ Check your Supabase project's API settings to find these values │
│ │
│ https://supabase.com/dashboard/project/_/settings/api │
│ at <unknown> (https://supabase.com/dashboard/project/_/settings/api) │
│ at bY (.next/server/chunks/3006.js:10:486) │
│ at g (.next/server/app/(platform)/auth/callback/route.js:1:5890) │
│ at async e (.next/server/chunks/9836.js:1:101814) │
│ at async k (.next/server/chunks/9836.js:1:15611) │
│ at async l (.next/server/chunks/9836.js:1:15817) { │
│ digest: '424987633' │
│ } │
│ Error creating Supabase client Error: @supabase/ssr: Your project's URL and API key are required to create a Supabase client! │
│ │
│ Check your Supabase project's API settings to find these values │
│ │
│ https://supabase.com/dashboard/project/_/settings/api │
│ at <unknown> (https://supabase.com/dashboard/project/_/settings/api) │
│ at bX (.next/server/chunks/3873.js:6:90688) │
│ at <unknown> (.next/server/chunks/150.js:6:13460) │
│ at n (.next/server/chunks/150.js:6:13419) │
│ at j (.next/server/chunks/150.js:6:7482) │
│ Error creating Supabase client Error: @supabase/ssr: Your project's URL and API key are required to create a Supabase client! │
│ │
│ Check your Supabase project's API settings to find these values │
│ │
│ https://supabase.com/dashboard/project/_/settings/api │
│ at <unknown> (https://supabase.com/dashboard/project/_/settings/api) │
│ at bX (.next/server/chunks/3873.js:6:90688) │
│ at <unknown> (.next/server/chunks/150.js:6:13460) │
│ at n (.next/server/chunks/150.js:6:13419) │
│ at h (.next/server/chunks/150.js:6:10561) │
│ Error creating Supabase client Error: @supabase/ssr: Your project's URL and API key are required to create a Supabase client! │
│ │
│ Check your Supabase project's API settings to find these values │
│ │
│ https://supabase.com/dashboard/project/_/settings/api │
│ at <unknown> (https://supabase.com/dashboard/project/_/settings/api) │
│ at bX (.next/server/chunks/3873.js:6:90688) │
│ at <unknown> (.next/server/chunks/150.js:6:13460) │
│ at n (.next/server/chunks/150.js:6:13419)
```
This adds the latest Claude Opus 4.1 model to the platform.
It adds the following model:
- claude-opus-4-1-20250805
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Tested Claude Opus 4.1 to make sure it works
This adds the latest ChatGPT models (GPT-5) to the platform. This is
ahead of their release; the prices and context limits are still to be
set properly, but for now I set them to match GPT-4.1, with the price
set at 5 until we know more.
This adds the following models
- gpt-5
- gpt-5-mini
- gpt-5-nano
- gpt-5-chat
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test all of the models to make sure they work
Reworked both Windows (.bat) and Unix (.sh) setup scripts to improve error handling, logging, and user prompts. The scripts now check for prerequisites, handle Sentry enablement more clearly, ensure environment files are copied with error checks, and consolidate service startup into a single docker compose command with log output. Unused or redundant code was removed for maintainability.
The setup-autogpt.bat and setup-autogpt.sh scripts no longer automatically start the frontend development server after setup. Users are now instructed to manually stop services with 'docker compose down', and the scripts prompt for exit while keeping services running.
## Summary
This PR adds the frontend service to the Docker Compose configuration,
enabling `docker compose up` to run the complete stack including the
frontend. It also implements comprehensive environment variable
improvements and fixes Docker networking issues.
## Key Changes
### 🐳 Docker Compose Improvements
- **Added frontend service** to `docker-compose.yml` and
`docker-compose.platform.yml`
- **Production build**: Uses `pnpm build + serve` instead of dev server
for better stability and lower memory usage
- **Service dependencies**: Frontend now waits for backend services
(`rest_server`, `websocket_server`) to be ready
- **YAML anchors**: Implemented DRY configuration to avoid duplicating
environment values
### 🔧 Environment Variable Architecture
- **Dual environment strategy**:
- Server-side code uses Docker service names
(`http://rest_server:8006/api`)
- Client-side code uses localhost URLs (`http://localhost:8006/api`)
- **Comprehensive config**: Added build args and runtime environment
variables
- **Network compatibility**: Fixes connection issues between frontend
and backend containers
### 🛠️ Code Improvements
- **Centralized env-config helper** (`/frontend/src/lib/env-config.ts`)
with server-side priority
- **Updated all frontend code** to use shared environment helpers
instead of direct `process.env` access
- **Consistent API**: All environment variable access now goes through
helper functions
### 🔗 Files Changed
- `docker-compose.yml` & `docker-compose.platform.yml` - Added frontend
service
- `frontend/Dockerfile` - Added build args for environment variables
- `frontend/src/lib/env-config.ts` - New centralized environment
configuration
- Multiple frontend files - Updated to use env helpers
## Benefits
- ✅ **Single command deployment**: `docker compose up` now runs
everything
- ✅ **Better reliability**: Production build reduces memory usage and
crashes
- ✅ **Network compatibility**: Proper container-to-container
communication
- ✅ **Maintainable config**: Centralized environment variable management
- ✅ **Development friendly**: Works in both Docker and local development
## Testing
- ✅ Verified Docker service communication works correctly
- ✅ Frontend responds and serves content properly
- ✅ Environment variables are correctly resolved in both server and
client contexts
- ✅ No connection errors after implementing service dependencies
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Migrate execution manager from ProcessPoolExecutor to
ThreadPoolExecutor for improved performance and resource efficiency
- Rename `Executor` class to `ExecutionProcessor` for better clarity
- Convert classmethods to instance methods following proper OOP design
patterns
- Implement thread-local storage using `threading.local()` for
thread-safe execution
## Technical Changes
- **Executor Pattern**: Replace process-based execution with
thread-based execution using `ThreadPoolExecutor`
- **Thread-Local Storage**: Use `threading.local()` to bind
`ExecutionProcessor` instances to worker threads (sketched after this list)
- **Initialization**: Add `init_worker()` function called once per
thread via `initializer` parameter
- **Event Handling**: Replace `multiprocessing.Manager().Event()` with
`threading.Event()`
- **Tracking**: Update from PID to TID (`threading.get_ident()`) for
thread identification
- **Method Conversion**: Convert all classmethods to instance methods
(`cls` → `self`)
- **Signal Handling**: Remove signal handling code that doesn't work in
worker threads
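A hedged sketch of the thread-local pattern described above (the real `ExecutionProcessor` is far more involved):
```python
# Hedged sketch: one ExecutionProcessor per worker thread via threading.local().
import threading
from concurrent.futures import ThreadPoolExecutor

_thread_state = threading.local()


class ExecutionProcessor:
    def on_graph_execution(self, graph_exec_id: str) -> str:
        return f"ran {graph_exec_id} on thread {threading.get_ident()}"


def init_worker() -> None:
    # Called once per worker thread via ThreadPoolExecutor's `initializer` parameter.
    _thread_state.processor = ExecutionProcessor()


def run_graph(graph_exec_id: str) -> str:
    return _thread_state.processor.on_graph_execution(graph_exec_id)


with ThreadPoolExecutor(max_workers=4, initializer=init_worker) as pool:
    futures = [pool.submit(run_graph, f"graph-{i}") for i in range(8)]
    print([f.result() for f in futures])
```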
## Benefits
- **Performance**: Reduced overhead compared to process
creation/destruction
- **Resource Efficiency**: Lower memory footprint and faster startup
- **Simplicity**: Cleaner implementation using thread-local storage
pattern
- **Thread Safety**: Maintained through isolated ExecutionProcessor
instances per thread
## Test Plan
- [x] Code passes all linting and formatting
- [x] All executor tests pass (23/23)
- [x] Graph execution test passes successfully
- [x] Thread-local storage implementation verified
- [x] Signal handling compatibility fixed for worker threads
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- **Remove background_executor from NotificationManager** to eliminate
event loop conflicts that were causing RabbitMQ "Connection reset by
peer" errors
- **Convert all notification processing to fully async** using async
database clients
- **Optimize Settings instantiation** to prevent file descriptor leaks
by moving to module level
- **Fix scheduler event loop management** to use single shared loop
instead of thread-cached approach
## Changes 🏗️
### 1. Remove ProcessPoolExecutor from NotificationManager
- Eliminated `background_executor` entirely from notification service
- Converted `queue_weekly_summary()` and `process_existing_batches()`
from sync to async
- Fixed the root cause: `asyncio.run()` was creating new event loops,
conflicting with existing RabbitMQ connections
### 2. Full Async Conversion
- Updated `_consume_queue` to only accept async functions:
`Callable[[str], Awaitable[bool]]`
- Replaced sync `DatabaseManagerClient` with
`DatabaseManagerAsyncClient` throughout notification service
- Added missing async methods to `DatabaseManagerAsyncClient`:
- `get_active_user_ids_in_timerange`
- `get_user_email_by_id`
- `get_user_email_verification`
- `get_user_notification_preference`
- `create_or_add_to_user_notification_batch`
- `empty_user_notification_batch`
- `get_all_batches_by_type`
### 3. Settings Optimization
- Moved `Settings()` instantiation to module level in:
- `backend/util/metrics.py`
- `backend/blocks/google_calendar.py`
- `backend/blocks/gmail.py`
- `backend/blocks/slant3d.py`
- `backend/blocks/user.py`
- Prevents multiple file descriptor reads per process, reducing resource
usage
### 4. Scheduler Event Loop Fix
- **Simplified event loop initialization** in `Scheduler.run_service()`
to create single shared loop
- **Removed complex thread caching and locking** that could create
multiple connections
- **Fixed daemon thread lifecycle** by using non-daemon thread with
proper cleanup
- **Event loop runs in dedicated background thread** with graceful
shutdown handling
## Root Cause Analysis
The RabbitMQ "Connection reset by peer" errors were caused by:
1. **Event Loop Conflicts**: `asyncio.run()` in `queue_weekly_summary`
created new event loops, disrupting existing RabbitMQ heartbeat
connections
2. **Thread Resource Waste**: Thread-cached event loops in scheduler
created unnecessary connections
3. **File Descriptor Leaks**: Multiple Settings instantiations per
process increased resource pressure
## Why This Fixes the Issue
1. **Eliminates Event Loop Creation**: By using `asyncio.create_task()`
instead of `asyncio.run()`, we reuse the existing event loop
2. **Maintains Heartbeat Connections**: Async RabbitMQ connections
remain stable without event loop disruption
3. **Reduces Resource Pressure**: Settings optimization and simplified
scheduler reduce file descriptor usage
4. **Ensures Connection Stability**: Single shared event loop prevents
connection multiplexing issues
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified RabbitMQ connection stability by checking heartbeat logs
- [x] Confirmed async conversion maintains all notification
functionality
- [x] Tested scheduler job execution with simplified event loop
- [x] Validated Settings optimization reduces file descriptor usage
- [x] Ensured notification processing works end-to-end
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
Some non-node-execution errors and system failures (like credentials not
being found, or a database failure) are not logged or exposed to the user.
This makes the node execution look like it failed without an error
message:
<img width="804" height="1141" alt="image"
src="https://github.com/user-attachments/assets/e81314a0-b9af-4a95-bba7-8df576911e96"
/>
### Changes 🏗️
Make all non-interruption errors yielded as node execution error output.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] CI
This adds the latest open-source models from OpenAI to the platform; we
are using OpenRouter to provide API access to them.
I added:
- openai/gpt-oss-20b
- openai/gpt-oss-120b
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Tested both of the latest models from OpenAI (openai/gpt-oss-20b
and openai/gpt-oss-120b) and confirmed they work
In this PR, I’ve added library page tests.
### Changes
I’ve added 9 tests: 8 for normal flows and 1 for checking edge cases.
Test names are something like:
- Library navigation is accessible from the navbar.
- The library page loads successfully.
- Agents are visible, and cards work correctly.
- Pagination works correctly.
- Sorting works correctly.
- Searching works correctly.
- Pagination while searching works correctly.
- Uploading an agent works correctly.
- Edge case: Search edge cases and error handling behave correctly.
Other than that, I’ve added a new utility that uses the build page to
help us create users at the start, which we could use to test the
library page.
- All tests are passing locally
<img width="514" height="465" alt="Screenshot 2025-07-12 at 11 13 41 AM"
src="https://github.com/user-attachments/assets/7a46c437-7db5-458b-b99a-4fa0d479866f"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All library tests are working locally and on CI perfectly.
## Summary
- Created centralized service client helpers with thread caching in
`util/clients.py`
- Refactored service client management to eliminate health checks and
improve performance
- Enhanced logging in process cleanup to include error details
- Improved retry mechanisms and resource cleanup across the platform
- Updated multiple services to use new centralized client patterns
## Key Changes
### New Centralized Client Factory (`util/clients.py`)
- Added thread-cached factory functions for all major service clients:
- Database managers (sync and async)
- Scheduler client
- Notification manager
- Execution event bus (Redis-based)
- RabbitMQ execution queue (sync and async)
- Integration credentials store
- All clients use `@thread_cached` decorator for performance
optimization
### Service Client Improvements
- **Removed health checks**: Eliminated unnecessary health check calls
from `get_service_client()` to reduce startup overhead
- **Enhanced retry support**: Database manager clients now use request
retry by default
- **Better error handling**: Improved error propagation and logging
### Enhanced Logging and Cleanup
- **Process termination logs**: Added error details to termination
messages in `util/process.py`
- **Retry mechanism updates**: Improved retry logic with better error
handling in `util/retry.py`
- **Resource cleanup**: Better resource management across executors and
monitoring services
### Updated Service Usage
- Refactored 21+ files to use new centralized client patterns
- Updated all executor, monitoring, and notification services
- Maintained backward compatibility while improving performance
## Files Changed
- **Created**: `backend/util/clients.py` - Centralized client factory
with thread caching
- **Modified**: 21 files across blocks, executor, monitoring, and
utility modules
- **Key areas**: Service client initialization, resource cleanup, retry
mechanisms
## Test Plan
- [x] Verify all existing tests pass
- [x] Validate service startup and client initialization
- [x] Test resource cleanup on process termination
- [x] Confirm retry mechanisms work correctly
- [x] Validate thread caching performance improvements
- [x] Ensure no breaking changes to existing functionality
## Breaking Changes
None - all changes maintain backward compatibility.
## Additional Notes
This refactoring centralizes client management patterns that were
scattered across the codebase, making them more consistent and
performant through thread caching. The removal of health checks reduces
startup time while maintaining reliability through improved retry
mechanisms.
🤖 Generated with [Claude Code](https://claude.ai/code)
- Resolves #10553
### Changes 🏗️
- Remove frontend graph validation in `useAgentGraph:saveAndRun(..)`
- Remove now unused `ajv` dependency
- Implement graph validation error propagation (backend->frontend)
- Add `GraphValidationError` type in frontend and backend
- Add `GraphModel.validate_graph_get_errors(..)` method
- Fix error handling & propagation in frontend API request logic
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Saving & running a graph with missing required inputs gives a
node-specific error
- [x] Saving & running a graph with missing node credential inputs
succeeds with passed-in credentials
There is no 100% accurate way of retrying an agent that has been
terminated, and the safest way to avoid executing an agent incorrectly is
to minimize the chance of an agent execution being terminated. A whole
set of mechanisms to make sure the agent is retried on failure is still
in place and improved; this serves as our best-effort reliability
mechanism.
### Changes 🏗️
* Cap SIGINT & SIGTERM to be raised at most once, so the executor can
gracefully handle the stopping.
* SIGINT & SIGTERM will stop the execution request message consumption,
but not agent execution.
* Executor process will only stop if all the in-flight agent executions
are completed or terminated.
* Avoid retrying the agent stop command on AgentExecutorBlock on
timeout.
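A hedged sketch of the signal-capping idea above; `stop_consuming_execution_requests()` is a made-up placeholder for the real shutdown hook:
```python
# Hedged sketch: handle SIGINT/SIGTERM at most once and only stop taking new work.
import signal

_shutdown_requested = False


def stop_consuming_execution_requests() -> None:
    # Placeholder for the real hook that stops consuming execution request messages.
    print("stopping message consumption; in-flight executions keep running")


def _handle_stop(signum, frame) -> None:
    global _shutdown_requested
    if _shutdown_requested:
        return  # subsequent signals are ignored so running executions are not interrupted
    _shutdown_requested = True
    stop_consuming_execution_requests()


signal.signal(signal.SIGINT, _handle_stop)
signal.signal(signal.SIGTERM, _handle_stop)
```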
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run agent, send SIGTERM to the executor pod, execution should not
be interrupted.
- [x] Run agent, send SIGKILL to the executor pod, execution should be
transferred to another pod.
<!-- Clearly explain the need for these changes: -->
Toran hit an error caused by a snippet being read incorrectly.
### Changes 🏗️
Falls back to getting values from the dictionary when building email objects
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [ ] Deploy to dev and have Toran test against his inbox
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
### Changes 🏗️
- Added `_convert_bools()` function to recursively convert string
boolean values ("true"/"false") to actual Python booleans
- Applied boolean conversion to all Airtable API endpoints that send
JSON data to ensure proper type casting
- Fixed parameters that were incorrectly converted to strings (e.g.,
`typecast`, `returnFieldsByFieldId`) to maintain their boolean types
This fix addresses an issue where the Airtable API was not properly
handling boolean values passed as strings, which could cause API calls
to fail or behave unexpectedly.
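A minimal sketch of the recursive conversion described above (the real helper may differ in naming and edge cases):
```python
# Hedged sketch of a recursive "true"/"false" string-to-bool converter.
from typing import Any


def _convert_bools(value: Any) -> Any:
    if isinstance(value, str):
        lowered = value.strip().lower()
        if lowered == "true":
            return True
        if lowered == "false":
            return False
        return value
    if isinstance(value, dict):
        return {key: _convert_bools(item) for key, item in value.items()}
    if isinstance(value, list):
        return [_convert_bools(item) for item in value]
    return value  # ints, floats, None, and real booleans pass through unchanged


print(_convert_bools({"typecast": "true", "fields": [{"Done": "false", "Count": 3}]}))
# -> {'typecast': True, 'fields': [{'Done': False, 'Count': 3}]}
```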
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested boolean field updates with string values "true" and "false"
- [x] Verified that boolean parameters like `typecast` and
`returnFieldsByFieldId` are properly handled
- [x] Confirmed that nested boolean values in records are correctly
converted
- [x] Tested that non-boolean values remain unchanged
[Working Airtable
Example_v56.json](https://github.com/user-attachments/files/21594436/Working.Airtable.Example_v56.json)
Updated login and signup pages to display the Turnstile CAPTCHA and
require verification only when running in a cloud environment. This
prevents unnecessary CAPTCHA prompts in local or non-cloud deployments.
### Changes 🏗️
Locally, when you try to log in with the wrong password and then correct
it and log in again, you get a warning about the CAPTCHA, which is wrong.
This fix makes the CAPTCHA apply only when running in a cloud
environment.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Try to log in with the wrong password, get "Invalid login
credentials", and try to log in again; you should keep getting "Invalid
login credentials" and it should not mention the CAPTCHA
Graph evaluation should stop naturally once all the node executions are
stopped.
### Changes 🏗️
Avoid stopping agent node evaluation when stopping graph
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] CI
The node execution status can be set to done before the output is
persisted, causing the output to be persisted when the node execution
status is already completed.
### Changes 🏗️
* Re-order the node execution status & output persistence logic.
* Make agent.py avoid yielding the same node_exec_id twice (that can be
caused by the above issue).
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Existing CI
## Summary
- Removed unused metadata functions from user.py (get_user_metadata,
update_user_metadata)
- Removed unused execution and database functions from database.py and
related imports
- Added NodeExecutionStats validation in execution.py
- Updated CLAUDE.md with PR and commit conventions
## Changes Made
### `/backend/backend/data/user.py`
- Removed `get_user_metadata()` function (unused)
- Removed `update_user_metadata()` function (unused)
- Removed unused import `UserMetadataRaw`
### `/backend/backend/data/execution.py`
- Added `NodeExecutionStats` validation in `from_db()` method
### `/backend/backend/executor/database.py`
- Removed unused imports and function exposures
- Cleaned up DatabaseManagerClient to remove unused client methods
### `/CLAUDE.md`
- Added documentation for creating pull requests
- Added conventional commit types and scopes guide
## Testing
- Existing tests should pass as removed functions were not being used
- No new functionality added
## Checklist
- [x] Code follows the project's style guidelines
- [x] Self-review completed
- [x] Changes are backward compatible
- [x] No new warnings introduced
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Graph and node execution can fail for many reasons; sometimes this
messes up the stats tracking, giving inaccurate results. The scope of
this PR is to minimize such issues.
### Changes 🏗️
* Catch BaseException in the time_measured decorator so that
asyncio.CancelledError is also caught
* Make sure update node & graph stats are executed on cancellation &
exception.
* Protect graph execution stats update under the thread lock to avoid
race condition.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Existing automated tests.
---------
Co-authored-by: Claude <noreply@anthropic.com>
This PR refactors the Ayrshare integration to remove the centralized
`get_ayrshare_profile_key` function from the credentials store and
instead retrieve the profile key directly within each Ayrshare block.
This change improves code organization by keeping Ayrshare-specific
logic within the Ayrshare module.
### Changes 🏗️
- **Refactored Ayrshare profile key retrieval**: Moved profile key
fetching logic from the credentials store into the Ayrshare blocks
- **Added `get_profile_key` helper function** in
`autogpt_platform/backend/backend/blocks/ayrshare/_util.py` to fetch the
profile key from user integrations
- **Updated all 15 Ayrshare social media blocks** to use `user_id`
instead of `profile_key` parameter and fetch the profile key internally
- **Removed `get_ayrshare_profile_key` method** from
`autogpt_platform/backend/backend/integrations/credentials_store.py`
- **Removed Ayrshare-specific logic** from
`autogpt_platform/backend/backend/executor/manager.py` that was passing
profile keys to blocks
- **Updated router** in
`autogpt_platform/backend/backend/server/integrations/router.py` to
directly fetch user integrations instead of using the removed method
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test posting to X/Twitter to check credentials flow
- [x] Verify profile key retrieval works correctly for authenticated
users
- [x] Test Ayrshare SSO URL generation flow
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
This PR helps us bypass the proxy server in server-side requests,
allowing us to directly send requests to the backend and reduce latency.
### Changes 🏗️
- Introduced server-side detection to dynamically set the base URL for
API requests.
- Added error handling for server-side requests to log failures and
throw errors appropriately.
- Updated header management to include authentication tokens when
applicable.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All E2E tests are working.
- [x] I have manually checked the server-side and client-side
components, and both are working perfectly.
- Resolves -
https://github.com/Significant-Gravitas/AutoGPT/issues/10433
- Depends on -
https://github.com/Significant-Gravitas/AutoGPT/pull/10427
- Need to review this PR once this issue is fixed -
https://github.com/Significant-Gravitas/AutoGPT/issues/10404
I’ve created additional tests for the agents marketplace page.
Tests that I have added:
- Add to library button works and agent appears in library.
- Download button functionality works.
- Agent page details are visible.
- User can access agent page when logged in.
- User can access agent page when logged out
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have done all the tests and they are working perfectly
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Ubbe <hi@ubbe.dev>
We currently use infinite scroll pagination in multiple places, but our
strategies vary across these locations. This repetitive code writing is
not ideal, and our current methods are also complex. We’re not utilising
React Query’s useInfiniteQuery hooks effectively.
To address these issues, we’re introducing a new component called
`InfiniteScroll` that handles pagination independently.
### How to use it?
- Use React Query’s `useInfiniteQuery` hook, which returns multiple data points.
For pagination, we only need `fetchNextPage`, `hasNextPage`, and
`isFetchingNextPage`.
```ts
const {
data: agents,
fetchNextPage,
hasNextPage,
isFetchingNextPage,
isLoading: agentLoading,
} = useGetV2ListLibraryAgentsInfinite(
{
page: 1,
page_size: 8,
search_term: searchTerm || undefined,
sort_by: librarySort,
},
);
```
- Simply pass these three data points and the current data length to the
`InfiniteScroll` component. That's it
```tsx
<InfiniteScroll
dataLength={agents.length}
isFetchingNextPage={isFetchingNextPage}
fetchNextPage={fetchNextPage}
hasNextPage={hasNextPage}
loader={<LoadingSpinner />}
>
  ...
</InfiniteScroll>
```
### Changes
- Add the `InfiniteScroll.tsx` component for consistency and simplicity
in pagination across the frontend.
- Update the current library page to use the `InfiniteScroll` component.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I’ve tested everything locally, and it’s working perfectly fine.
## Summary
This PR adds a timeout guard to the `locked_transaction` function used
for credit transactions to prevent indefinite blocking and improve
reliability.
## Changes
- Modified `locked_transaction` in `/backend/backend/data/db.py` to add
proper timeout handling
- Set `lock_timeout` and `statement_timeout` to prevent indefinite
blocking
- Updated function signature to use default timeout parameter
- Added comprehensive docstring explaining the locking mechanism
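For illustration, a hedged sketch of the timeout-guarded advisory-lock pattern, shown with asyncpg rather than the project's actual DB layer:
```python
# Hedged sketch: Postgres advisory lock guarded by lock_timeout / statement_timeout.
import asyncpg


async def locked_transaction(conn: asyncpg.Connection, lock_key: int, timeout_seconds: int = 60) -> None:
    async with conn.transaction():
        # Fail fast instead of blocking indefinitely while waiting for the lock.
        await conn.execute(f"SET LOCAL lock_timeout = '{timeout_seconds}s'")
        await conn.execute(f"SET LOCAL statement_timeout = '{timeout_seconds}s'")
        # The advisory lock is released automatically on commit or rollback.
        await conn.execute("SELECT pg_advisory_xact_lock($1)", lock_key)
        # ... perform the credit transaction inside the lock ...
```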
## Motivation
The previous implementation could potentially block indefinitely if a
lock couldn't be acquired, which could cause issues in production
environments, especially for critical credit transactions.
## Testing
- Existing tests pass
- The timeout mechanism ensures transactions won't hang indefinitely
- Advisory locks are properly released on commit/rollback
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
This PR improves the reliability of the executor system by addressing
several race conditions and improving error handling throughout the
execution pipeline.
### Changes 🏗️
- **Consolidated exception handling**: Now using `BaseException` to
properly catch all types of interruptions including `CancelledError` and
`SystemExit`
- **Atomic stats updates**: Moved node execution stats updates to be
atomic with graph stats updates to prevent race conditions
- **Improved cleanup handling**: Added proper timeout handling (3600s)
for stuck executions during cleanup
- **Fixed concurrent update race conditions**: Node execution updates
are now properly synchronized with graph execution updates
- **Better error propagation**: Improved error type preservation and
status management throughout the execution chain
- **Graph resumption support**: Added proper handling for resuming
terminated and failed graph executions
- **Removed deprecated methods**: Removed `update_node_execution_stats`
in favor of atomic updates
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Execute a graph with multiple nodes and verify stats are updated
correctly
- [x] Cancel a running graph execution and verify proper cleanup
- [x] Simulate node failures and verify error propagation
- [x] Test graph resumption after termination/failure
- [x] Verify no race conditions in concurrent node execution updates
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
Sometimes we receive an error where the service is not connected to the
DB, but we have started receiving traffic, making the request fail.
### Changes 🏗️
Make the `/health_check` endpoint also check the database connection.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Existing CI, manual test
- Resolves -
https://github.com/Significant-Gravitas/AutoGPT/issues/10428
- Depends on -
https://github.com/Significant-Gravitas/AutoGPT/pull/10427
- Need to review this PR once this issue is fixed -
https://github.com/Significant-Gravitas/AutoGPT/issues/10404
I’ve created additional tests for the creators marketplace page.
Tests that I have added:
- User can access creator's page when logged out.
- User can access creator's page when logged in.
- Creator page details are visible.
- Agents in agent by sections navigation works.
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have done all the tests and they are working perfectly
With this PR, we’re changing the data fetching strategy on the
marketplace page. We’re now using autogenerated React queries.
### Changes
- Splits separate render logic and hook logic.
- Update the data fetching strategy.
- Currently, we’re seeing agents in the featured section and creators in
the featured creators section, even if they’re not set to “isFeatured”
true. I’ve fixed that also.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All marketplace E2E tests are working.
- [x] I’ve tested all the links and checked if everything renders
perfectly on the marketplace page.
Make agent graph execution durable by making it retriable. When the retry
fails, we should make the error visible in the UI.
<img width="900" height="495" alt="image"
src="https://github.com/user-attachments/assets/70e3e117-31e7-4704-8bdf-1802c6afc70b"
/>
<img width="900" height="407" alt="image"
src="https://github.com/user-attachments/assets/78ca6c28-6cc2-4aff-bfa9-9f94b7f89f77"
/>
### Changes 🏗️
* Make _on_graph_execution retriable
* Increase retry count for failing db-manager RPC
* Add test coverage for RPC failure retry
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Allow graph execution retry
## Summary
- Adds AI-generated activity status summaries for agent execution
results
- Provides users with conversational, non-technical summaries of what
their agents accomplished
- Includes comprehensive execution data analysis with honest failure
reporting
## Changes Made
- **Backend**: Added `ActivityStatusGenerator` module with async LLM
integration
- **Database**: Extended `GraphExecutionStats` and `Stats` models with
`activity_status` field
- **Frontend**: Added "Smart Agent Execution Summary" display with
disclaimer tooltip
- **Settings**: Added `execution_enable_ai_activity_status` toggle
(disabled by default)
- **Testing**: Comprehensive test suite with 12 test cases covering all
scenarios
## Key Features
- Collects execution data including graph structure, node relations,
errors, and I/O samples
- Generates user-friendly summaries from first-person perspective
- Honest reporting of failures and invalid inputs (no sugar-coating)
- Payload optimization for LLM context limits
- Full async implementation with proper error handling
## Test Plan
- [x] All existing tests pass
- [x] New comprehensive test suite covers success/failure scenarios
- [x] Feature toggle testing (enabled/disabled states)
- [x] Frontend integration displays correctly
- [x] Error handling and edge cases covered
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
This PR updates the existing E2E test data script to support the
creation of featured creators and featured agents. Previously, these
entities were not included, which limited our ability to fully test
certain flows during Playwright E2E testing.
### Changes
- Added logic to create featured creators
- Added logic to create featured agents
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All tests are passing locally after updating the data script.
- Resolves -
https://github.com/Significant-Gravitas/AutoGPT/issues/10426
- Need to review this PR once this issue is fixed -
https://github.com/Significant-Gravitas/AutoGPT/issues/10404
I’ve created additional tests for the main page, divided into two parts:
one for basic functionality and the other for edge cases.
**Basic functionality:**
- Users can access the marketplace page when logged out.
- Users can access the marketplace page when logged in.
- Featured agents, top agents, and featured creators are visible.
- Users can navigate and interact with marketplace elements.
- The complete search flow works correctly.
**Edge cases:**
- Searching for a non-existent item shows no results.
### Changes
- Introduced a new test suite for the marketplace, covering basic
functionality and edge cases.
- Implemented the MarketplacePage class to encapsulate interactions with
the marketplace page.
- Added utility functions for assertions, including visibility checks
and URL matching.
- Enhanced the LoginPage class with a goto method for navigation.
- Established a comprehensive search flow test to validate search
functionality.
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have done all the tests and they are working perfectly
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Ubbe <hi@ubbe.dev>
The notification service was running an inefficient polling loop that
constantly
checked each queue sequentially with 1-second timeouts, even when queues
were
empty. This caused:
- High CPU usage from continuous polling
- Sequential processing that blocked queues from being processed in
parallel
- Unnecessary delays from timeout-based polling instead of event-driven
consumption
- Poor throughput (500-2,000 messages/second) compared to potential
(8,000-12,000
messages/second)
## Changes 🏗️
- Replaced polling-based `_run_queue()` with event-driven `_consume_queue()` using async iterators
- Implemented concurrent queue consumption using `asyncio.gather()` instead of sequential processing
- Added QoS settings (`prefetch_count=10`) to control memory usage
- Improved error handling with the `message.process()` context manager for automatic ack/nack
- Added graceful shutdown that properly cancels all consumer tasks
- Removed unused `QueueEmpty` import
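A minimal sketch of the event-driven pattern described above, using `aio-pika` (queue names, the connection URL, and the handler are illustrative; this is not the service's actual code):

```python
import asyncio
import aio_pika


async def consume_queue(channel: aio_pika.abc.AbstractChannel, queue_name: str, handler) -> None:
    # QoS: limit unacked messages per consumer to control memory usage.
    await channel.set_qos(prefetch_count=10)
    queue = await channel.declare_queue(queue_name, durable=True)
    async with queue.iterator() as messages:
        async for message in messages:
            # process() acks on success and rejects/requeues if the handler raises.
            async with message.process(requeue=True):
                await handler(message.body)


async def handle(body: bytes) -> None:
    print(body.decode())


async def main() -> None:
    connection = await aio_pika.connect_robust("amqp://guest:guest@localhost/")
    async with connection:
        channel = await connection.channel()
        # Consume every queue concurrently instead of polling them one at a time.
        await asyncio.gather(
            *(consume_queue(channel, name, handle)
              for name in ("immediate", "admin", "batch", "summary"))
        )


if __name__ == "__main__":
    asyncio.run(main())
```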
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Deploy to test environment and monitor CPU usage
- [ ] Verify all queue types (immediate, admin, batch, summary) process
messages
correctly
- [ ] Test graceful shutdown with messages in flight
- [ ] Monitor that database management service remains stable
- [ ] Check logs for proper consumer startup messages
- [ ] Verify messages are properly acked/nacked on success/failure
---------
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Some failures in DB RPC can cause agent execution failures. This change makes sure the chance of such errors is minimized.
### Changes 🏗️
* Enable request retry
* Increase transaction timeout
* Use better typing on the DB query
* Gracefully handles insufficient balance
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual tests
## Changes 🏗️
- Moved API call from `usePublishAgentModal` to `useAgentInfoStep` for
better encapsulation
- overall cleaner state management + [state
colocation](https://kentcdodds.com/blog/state-colocation-will-make-your-react-app-faster)
- Added loading states with a spinner to the submit button during API
call
- Removed redundant validation: now relies entirely on zod schema
validation
- All thumbnails now use 16:9 (`aspect-video`) aspect ratio for
consistency
- Highlight selected thumbnails with blue border
- Table alignment fixes
- Rename `Edit` action to `View` to better reflect the content of the
modal that appears when clicked...
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] API calls work with loading states in Agent Info Step
- [x] Image aspect ratios are consistent across all components
- [x] Form validation works through zod schema only
### For configuration changes:
None
## Summary
This adds 10 new LLMs to the platform!
I have added the model names, metadata like max input and output limits, and the price for each model!
- GROK_4
- KIMI_K2
- QWEN3_235B_A22B_THINKING
- QWEN3_CODER
- GEMINI_2_5_FLASH
- GEMINI_2_0_FLASH
- GEMINI_2_5_FLASH_LITE_PREVIEW
- GEMINI_2_0_FLASH_LITE
- DEEPSEEK_R1_0528
- GPT41_MINI
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test and verify all the models work!
## Changes 🏗️
### Why these changes
We have a high-priority bug where the publish agent modal wouldn't open
when clicking `Edit` on the Dashboard Creator page table. The create
form was also buggy.
When looking into the code, I noticed it was pretty messy. I went ahead
and refactored it:
- [x] separation of concerns ( _split render / hook logic_ )
- [x] split into sub-components ( `PublishAgentModal/components` )
- [x] colocated state ( moved state to the modal steps rather than
having everything top-level )
- [x] used the new Design System components
Overall, we end up with a cleaner and stable experience ✨
### E2E tests
I also added E2E tests 🤖 to make sure we catch regressions in the future
in this modal. For now, it tests the first 2 steps. It does not do image
upload and publish as that wasn't working locally ( _might iterate on
that later_ )
### Step 1 – Select Agent
<img width="1161" height="859" alt="Screenshot 2025-07-29 at 16 12 46"
src="https://github.com/user-attachments/assets/a4949fb0-1a44-4926-a374-51eefadef063"
/>
### Step 2 – Agent Info Form
<img width="1061" height="804" alt="Screenshot 2025-07-29 at 16 03 11"
src="https://github.com/user-attachments/assets/b9a45bda-18ea-4844-b52c-db499f45193e"
/>
### Step 3 – Agent Review
<img width="1480" height="867" alt="Screenshot 2025-07-29 at 16 11 07"
src="https://github.com/user-attachments/assets/248bdf58-886d-43f3-a37a-35fd1a83e566"
/>
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] open the modal through the Account menu → ( `Publish Agent` )
- [x] complete the form and check validation errors
- [x] add images and generate image
- [x] publish the agent
- [x] the agent shows up on the table
- [x] Open an agent under review in the table ( _click `Edit` on the
actions_ )
- [x] it opens the modal on the 3rd step ( _review step_ )
### For configuration changes:
None
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
I have gone through and tested all 59 LLMs on the platform, and found 5 that were deprecated or are no longer available, so I removed them.
I made an agent with 59 LLM call blocks, set each LLM, and ran it; several replies came back saying the models were deprecated, so I removed those models.
<img width="1804" height="887" alt="image"
src="https://github.com/user-attachments/assets/907776e1-b491-465d-8219-e86c98559e41"
/>
Models removed:
- O1_PREVIEW
- MIXTRAL_8X7B
- EVA_QWEN_2_5_32B
- PERPLEXITY_LLAMA_3_1_SONAR_LARGE_128K_ONLINE
- QWEN_QWQ_32B_PREVIEW
### Changes 🏗️
This PR adds Firecrawl integration to AutoGPT, providing powerful web
scraping and data extraction capabilities:
**New Blocks Added:**
⚠️ All these blocks are synchronous, so they take a while to finish; this allows for a simpler agent workflow
- **Firecrawl Scrape Block**: Scrapes single web pages with various
output formats (Markdown, HTML, JSON, screenshots)
- **Firecrawl Crawl Block**: Crawls entire websites following links with
customizable depth and filters
- **Firecrawl Extract Block**: Extracts structured data from web pages
using AI-powered prompts
- **Firecrawl Map Block**: Maps website structure and returns a list of
all discovered URLs
- **Firecrawl Search Block**: Searches Google and scrapes the results
**Key Features:**
- Advanced anti-blocking technology to bypass scraping protections
- Multiple output formats including Markdown, HTML, JSON, and
screenshots
- AI-powered data extraction with custom prompts and schemas
- Configurable crawling depth and URL filtering
- Built-in caching and rate limiting
- Google search integration for discovering relevant content
**Use Cases:**
- Web data extraction for research and analysis
- Content monitoring and change tracking
- Competitive intelligence gathering
- SEO analysis and website mapping
- Automated data collection workflows
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Verified all Firecrawl blocks appear in the UI
- [x] Tested scraping various websites with different formats
- [x] Tested crawling with depth limits and URL filters
- [x] Tested data extraction with custom prompts
- [x] Verified error handling for invalid URLs and API failures
- [x] Tested authentication with Firecrawl API key
- [x] Confirmed proper rate limiting and caching behavior
<img width="1025" height="1027" alt="Screenshot 2025-07-30 at 15 20 28"
src="https://github.com/user-attachments/assets/7b94d3cf-7a0e-4d09-a9c5-24c4e8a3b660"
/>
# Example Agent
[FC
Testing_v12.json](https://github.com/user-attachments/files/21510608/FC.Testing_v12.json)
## Summary
- Enabled the Instagram posting block that was previously disabled
- The block provides comprehensive Instagram-specific posting options
including stories, reels, posts, user tagging, and location tagging
- Improved parameter types and validation for better user experience
## Changes 🏗️
- Removed `disabled=True` from Instagram posting block to enable
functionality
- Updated parameter types from required to optional with proper None
defaults for better flexibility
- Added validation for Instagram reel options to ensure all required
fields are provided together
- Improved user tag validation with better error messages
- Added support for:
- Instagram Stories (24-hour expiration)
- Instagram Reels with audio, thumbnails, and feed sharing options
- Alt text for accessibility
- Location tagging via Facebook Page ID
- User tagging with coordinate support
- Collaborator tagging
- Auto-resize functionality
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified Instagram block is now available in the block list
## Summary
- Enabled the YouTube posting block that was previously disabled
- The block provides comprehensive YouTube-specific posting options
including titles, visibility settings, thumbnails, playlists, tags, and
more
## Changes 🏗️
- Removed `disabled=True` from YouTube posting block to enable
functionality
- Added full YouTube API integration with all supported options:
- Video title and description
- Visibility settings (private/public/unlisted)
- Thumbnail support
- Playlist management
- Video tags and categories
- YouTube Shorts support
- Subtitle/caption support
- Country-based targeting
- Synthetic media disclosure
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified YouTube block is now available in the block list
https://github.com/user-attachments/assets/d4459f15-fe57-47bf-8459-f06f1af45ad6
<img width="374" height="593" alt="Screenshot 2025-07-31 at 11 26 29"
src="https://github.com/user-attachments/assets/4dcf30dd-439c-4a44-b56a-640832d6c550"
/>
## Changes 🏗️
My previous PR,
https://github.com/Significant-Gravitas/AutoGPT/pull/10480, didn't fully
resolve the issue of broken links sometimes appearing for some runs in
the Agent Activity dropdown.
- Fixed the logic ( verified with a deployment in dev... )
- Simplified logic, making less API calls
- If we have an execution without a clear agent ID, we display it but
don't link to it
- Re-generated API types ( _had to update call in dashboard agents
because of it_ )
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run agents
- [x] Runs appear correctly in the activity dropdown without broken
links
### For configuration changes:
None
- Resolves #10489
### Changes 🏗️
- Fix Google OAuth token revocation
- Fix credentials object conversion in `GoogleOAuthHandler`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Google OAuth flow still works
- [x] Deleting Google OAuth credentials works, token revocation doesn't
error
### Changes 🏗️
This PR adds a new Wolfram Alpha block that integrates with Wolfram's
LLM API endpoint:
- **Ask Wolfram Block**: Allows users to ask questions to Wolfram Alpha
and get structured answers
- **API Integration**: Implements the Wolfram LLM API endpoint
(`/api/v1/llm-api`) for natural language queries
- **Simple Authentication**: Uses App ID based authentication via API
key credentials
- **Error Handling**: Proper error handling for API failures with
descriptive error messages
The block enables users to leverage Wolfram Alpha's computational
knowledge engine for:
- Mathematical calculations and explanations
- Scientific data and facts
- Unit conversions
- Historical information
- And many other knowledge-based queries
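A minimal sketch of a call against that endpoint (the base URL and parameter names follow Wolfram's public LLM API docs as I understand them, so treat them as assumptions):

```python
import requests

WOLFRAM_LLM_API_URL = "https://www.wolframalpha.com/api/v1/llm-api"


def ask_wolfram(question: str, app_id: str) -> str:
    """Send a natural-language query and return the plain-text answer."""
    response = requests.get(
        WOLFRAM_LLM_API_URL,
        params={"input": question, "appid": app_id},
        timeout=30,
    )
    response.raise_for_status()
    return response.text


# e.g. ask_wolfram("What is the integral of x^2 from 0 to 3?", app_id="YOUR_APP_ID")
```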
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Verified the block appears in the UI and can be added to workflows
- [x] Tested API authentication with valid Wolfram App ID
- [x] Tested various query types (math, science, general knowledge)
- [x] Verified error handling for invalid credentials
- [x] Confirmed proper response formatting
**Configuration changes:**
- Users need to add `WOLFRAM_APP_ID` to their environment variables or
provide it through the UI credentials field
### Changes 🏗️
This PR adds Airtable integration to AutoGPT with the following blocks:
- **List Bases Block**: Lists all Airtable bases accessible to the
authenticated user
- **Create Base Block**: Creates new Airtable bases with specified
workspace and name
<img width="1294" height="879" alt="Screenshot 2025-07-30 at 11 03 43"
src="https://github.com/user-attachments/assets/0729e2e8-b254-4ed6-9481-1c87a09fb1c8"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] Tested create base block
- [x] Tested list base block
Need: The Gmail integration had several parsing issues that were causing
data loss and
workflow incompatibilities:
1. Email recipient parsing only captured the first recipient, losing
CC/BCC and multiple TO
recipients
2. Email body parsing was inconsistent between blocks, sometimes showing
"This email does
not contain a readable body" for valid emails
3. Type mismatches between blocks caused serialization issues when
connecting them in
workflows (lists being converted to string representations like
"[\"email@example.com\"]")
### Changes 🏗️
1. Enhanced Email Model:
   - Added `cc` and `bcc` fields to capture all recipients
   - Changed the `to` field from string to list for consistency
   - Now captures all recipients instead of just the first one
2. Improved Email Parsing:
   - Updated `GmailReadBlock` and `GmailGetThreadBlock` to parse all recipients using `getaddresses()`
   - Unified email body parsing logic across blocks with robust multipart handling
   - Added support for HTML to plain text conversion
   - Fixed handling of emails with attachments as body content
3. Fixed Block Compatibility:
   - Updated `GmailSendBlock` and `GmailCreateDraftBlock` to accept lists for recipient fields
   - Added validation to ensure at least one recipient is provided
   - All blocks now consistently use lists for recipient fields, preventing serialization issues
4. Updated Test Data:
   - Modified all test inputs/outputs to use the new list format for recipients
   - Ensures tests reflect the new data structure
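A minimal sketch of the recipient-parsing approach with `getaddresses()` (the function name and return shape are illustrative, not the blocks' exact code):

```python
from email.message import Message
from email.utils import getaddresses


def parse_recipients(msg: Message) -> dict[str, list[str]]:
    """Collect every address from the To/Cc/Bcc headers instead of only the first one."""
    recipients: dict[str, list[str]] = {}
    for field in ("to", "cc", "bcc"):
        # getaddresses() handles comma-separated lists and "Name <addr>" forms.
        headers = msg.get_all(field) or []
        recipients[field] = [addr for _name, addr in getaddresses(headers) if addr]
    return recipients
```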
### Checklist 📋
#### For code changes:
- I have clearly listed my changes in the PR description
- I have made a test plan
- I have tested my changes according to the test plan:
- Run existing Gmail block unit tests with poetry run test
- Create a workflow that reads emails with multiple recipients and
verify all TO, CC, BCC
recipients are captured
- Test email body parsing with plain text, HTML, and multipart emails
- Connect GmailReadBlock → GmailSendBlock in a workflow and verify
recipient data flows
correctly
- Connect GmailReplyBlock → GmailSendBlock and verify no serialization
errors occur
- Test sending emails with multiple recipients via GmailSendBlock
- Test creating drafts with multiple recipients via
GmailCreateDraftBlock
- Verify backwards compatibility by testing with single recipient
strings (should now
require lists)
- Create from scratch and execute an agent with at least 3 blocks
- Import an agent from file upload, and confirm it executes correctly
- Upload agent to marketplace
- Import an agent from marketplace and confirm it executes correctly
- Edit an agent from monitor, and confirm it executes correctly
**Breaking Change Note**: The `to` field in `GmailSendBlock` and `GmailCreateDraftBlock` now requires a list instead of accepting both string and list. Existing workflows using strings will need to be updated to use lists (e.g., `["email@example.com"]` instead of `"email@example.com"`).
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
## Changes 🏗️
Fix the issue where sometimes the agent activity would show a link to
agent runs that are not available in the library. So only show runs that
can be verified in the library. Improve the display of the agent name as
well 🤔
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run agents
- [x] There are no empty links on the activity dropdown
- [x] Agent names look good 💅🏽
### For configuration changes:
None
### Changes 🏗️
This PR fixes an issue where LLM blocks (particularly
AITextSummarizerBlock) were not properly tracking `llm_call_count` in
their execution statistics, despite correctly tracking token counts.
**Root Cause**: The `finally` block in
`AIStructuredResponseGeneratorBlock.run()` that sets `llm_call_count`
was executing after the generator returned, meaning the stats weren't
available when `merge_llm_stats()` was called by dependent blocks.
**Changes made**:
- **Fixed stats tracking timing**: Moved `llm_call_count` and
`llm_retry_count` tracking to execute before successful return
statements in `AIStructuredResponseGeneratorBlock.run()`
- **Removed problematic finally block**: Eliminated the finally block
that was setting stats after function return
- **Added comprehensive tests**: Created extensive test suite for LLM
stats tracking across all AI blocks
- **Added SmartDecisionMaker stats tracking**: Fixed missing LLM stats
tracking in SmartDecisionMakerBlock
- **Fixed type errors**: Added appropriate type ignore comments for test
mock objects
**Files affected**:
- `backend/blocks/llm.py`: Fixed stats tracking timing in
AIStructuredResponseGeneratorBlock
- `backend/blocks/smart_decision_maker.py`: Added missing LLM stats
tracking
- `backend/blocks/test/test_llm.py`: Added comprehensive LLM stats
tracking tests
- `backend/blocks/test/test_smart_decision_maker.py`: Added LLM stats
tracking test and fixed circular imports
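A simplified illustration of the timing bug and the fix (the real blocks are more involved; only the generator/`finally` interaction is shown here):

```python
from typing import AsyncIterator


class Stats:
    llm_call_count: int = 0


async def run_buggy(stats: Stats) -> AsyncIterator[str]:
    try:
        yield "llm response"           # the caller consumes this and merges stats right away...
    finally:
        stats.llm_call_count += 1      # ...but this only runs when the generator is closed: too late.


async def run_fixed(stats: Stats) -> AsyncIterator[str]:
    stats.llm_call_count += 1          # record the call before handing back the result
    yield "llm response"
```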
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created comprehensive unit tests for all LLM blocks stats tracking
- [x] Verified AITextSummarizerBlock now correctly tracks llm_call_count
(was 0, now shows actual call count)
- [x] Verified AIStructuredResponseGeneratorBlock properly tracks stats
with retries
- [x] Verified SmartDecisionMakerBlock now tracks LLM usage stats
- [x] Verified all existing tests still pass
- [x] Ran `poetry run format` to ensure code formatting
- [x] All 11 LLM and SmartDecisionMaker tests pass
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Note**: No configuration changes were needed for this fix.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
HTTP requests can fail when DNS resolution is broken. Sometimes this kind of issue requires a client reset.
### Changes 🏗️
Introduce HTTP client refresh on repeated error
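A minimal sketch of the idea using `httpx` (the error threshold and class name are illustrative assumptions):

```python
import httpx


class RefreshingClient:
    """Wraps an AsyncClient and rebuilds it after repeated failures (e.g. stale DNS)."""

    def __init__(self, max_consecutive_errors: int = 3):
        self._client = httpx.AsyncClient()
        self._errors = 0
        self._max_errors = max_consecutive_errors

    async def get(self, url: str) -> httpx.Response:
        try:
            response = await self._client.get(url)
            self._errors = 0
            return response
        except httpx.TransportError:
            self._errors += 1
            if self._errors >= self._max_errors:
                # Recreate the client so new connections re-resolve DNS.
                await self._client.aclose()
                self._client = httpx.AsyncClient()
                self._errors = 0
            raise
```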
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual run, added tests
To allow for a simpler dev experience, the new SDK now auto-discovers providers and registers them. However, the OAuth system was still requiring these credentials to be hardcoded in the settings object.
This PR changes that: the OAuth env vars are verified to be present during registration, and the OAuth system is then allowed to load them from the env.
### Changes 🏗️
- **OAuth Registration**: Modified `ProviderBuilder.with_oauth(..)` to
check OAuth env vars exist during registration
- **OAuth Loading**: Updated OAuth system to load credentials from env
vars if not using secrets
- **Block Filtering**: Added `is_block_auth_configured()` function to
check if a block has valid authorization options configured at runtime
- **Test Updates**: Fixed failing SDK registry tests to properly mock
environment variables for OAuth registration
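A rough sketch of the two checks (the `is_block_auth_configured()` name comes from the description above; its signature and the helper around it are assumptions):

```python
import os


def oauth_env_configured(client_id_env: str, client_secret_env: str) -> bool:
    """Registration-time check: are both OAuth env vars for this provider present?"""
    return bool(os.getenv(client_id_env)) and bool(os.getenv(client_secret_env))


def is_block_auth_configured(required_env_vars: list[str]) -> bool:
    """Runtime check used to filter out blocks whose auth providers aren't configured."""
    return all(os.getenv(var) for var in required_env_vars)


# e.g. oauth_env_configured("GITHUB_CLIENT_ID", "GITHUB_CLIENT_SECRET")
```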
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified OAuth system checks that env vars exist during provider
registration
- [x] Confirmed OAuth system can use env vars directly without requiring
hardcoded secrets
- [x] Tested that blocks with unconfigured OAuth providers are filtered
out
- [x] All SDK registry tests pass with proper env var mocking
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
- OAuth providers now require their client ID and secret env vars to be
set for registration
- No changes required to `.env.example` or `docker-compose.yml`
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Changes 🏗️
There is a bug where the agent activity dropdown bubble only shows up to `6` even if there are `50` running agents. We only display the last 6 runs in the dropdown, but the bubble badge count should reflect the total number of running agents.
On top of that we added the option to search runs by agent name when you
have more than 6 recent runs:
https://github.com/user-attachments/assets/931e3db7-5715-48d1-b4df-22490fae9de0
- Also make the dropdown items a link ( `a` ) so that you can command
click them to open runs in new tabs.
- Keep up to `400` executions in the state ( worst-case load test )
- Each execution object is relatively small (ID, status, timestamps,
agent info)
- 400 objects × ~`1KB` each ≈ `400KB`, a negligible memory footprint
- Always display running agents at the top
- Only display runs from the last week on the dropdown
- the agent library page contains the historical runs, this is just to
show the recent ones
### Code changes
- **Added count tracking**
- the `NotificationState` interface now includes separate count fields
(`activeCount`, `recentCompletionsCount`, `recentFailuresCount`) to
track the actual numbers independent of display limits.
- **Dual array system:**
- the `categorizeExecutions` function now creates:
- unlimited arrays for counting all executions
- limited arrays (sliced to 6 items) for dropdown display
- Updated all helper functions to properly maintain both the display
arrays and the count fields.
- Component uses actual counts
- `<AgentActivityDropdown />` component now uses `activeCount` for the
badge and hover hint instead of `activeExecutions.length`
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login and navigate to library or build
- [x] Start running agents like there is no tomorrow
- [x] The badge shows the correct agent execution count ( i.e. 10 )
- [x] The dropdown only displays the 6 most recent
- [x] You can command click on the runs and they open in new tabs
### For configuration changes
None
The heartbeat mechanism doesn't seem to work at the moment.
### Changes 🏗️
Revert the RabbitMQ consumer heartbeat mechanism
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run agents
This PR adds WordPress integration to AutoGPT platform, enabling users
to create posts on WordPress.com and Jetpack-enabled sites.
### Changes 🏗️
**OAuth Implementation:**
- Added WordPress OAuth2 handler (`_oauth.py`) supporting both single
blog and global access tokens
- Implemented OAuth flow without PKCE (as WordPress doesn't require it)
- Added token validation endpoint support
- Server-side tokens don't expire, eliminating the need for refresh in
most cases
**API Integration:**
- Created WordPress API client (`_api.py`) with Pydantic models for type
safety
- Implemented `create_post` function with full support for WordPress
post features
- Added helper functions for token validation and generic API requests
- Fixed response models to handle WordPress API's mixed data types
**WordPress Block:**
- Created `WordPressCreatePostBlock` in `blog.py` with minimal
user-facing options
- Exposed fields: site, title, content, excerpt, slug, author,
categories, tags, featured_image, media_urls
- Posts are published immediately by default
- Integrated with platform's OAuth credential system
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] OAuth URL generation works correctly for single blog and global
access
- [x] Token exchange and validation functions handle WordPress API
responses
- [x] Create post block properly transforms input data to API format
- [x] Response models handle mixed data types from WordPress API
The WordPress OAuth provider needs to be configured with client ID and
secret from WordPress.com application settings.
<!-- Clearly explain the need for these changes: -->
I'm working with these blocks and found some much-needed improvements
### Changes 🏗️
- Outputs labels from emails
- Types the outputs
- Reorders the yielding of the Smart Decision Maker block
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Build a test agent with labels, sending, reading emails + smart
decision maker
When we run multiple instances of the executor, some of the executors
can oversubscribe the messages and end up queuing the agent execution
request instead of letting another executor handle the job. This change
solves the problem.
### Changes 🏗️
* Reject execution request when the executor is full.
* Improve `active_graph_runs` tracking for better horizontal scaling
heuristics.
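A rough sketch of the back-pressure idea, assuming execution requests arrive over RabbitMQ via `aio-pika` (the capacity constant, tracking set, and function names are illustrative):

```python
import aio_pika

MAX_CONCURRENT_GRAPH_RUNS = 10
active_graph_runs: set[str] = set()


async def run_graph(graph_exec_id: str) -> None:
    """Placeholder for the actual graph execution entry point."""
    ...


async def on_execution_request(message: aio_pika.abc.AbstractIncomingMessage) -> None:
    graph_exec_id = message.body.decode()
    if len(active_graph_runs) >= MAX_CONCURRENT_GRAPH_RUNS:
        # Executor is full: put the message back so another executor can pick it up
        # instead of queuing the run locally.
        await message.nack(requeue=True)
        return
    active_graph_runs.add(graph_exec_id)
    try:
        await run_graph(graph_exec_id)
        await message.ack()
    finally:
        active_graph_runs.discard(graph_exec_id)
```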
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual graph execution & CI
### Changes 🏗️
1. JSON columns have to be JSON-serializable, but sometimes the data is not, so `SafeJson` is introduced to make sure the data can be serialized to a string and back before being persisted into the database.
2. Locks & transactions seem to be used in cases where they're not needed, which reduces database & Redis performance.
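A minimal sketch of what a `SafeJson` round-trip guard could look like (the real helper may behave differently; this is just the idea):

```python
import json
from typing import Any


def safe_json(data: Any) -> Any:
    """Force the value through a JSON round-trip, stringifying anything that isn't
    serializable, so whatever lands in a Json column can be written and read back."""
    return json.loads(json.dumps(data, default=str))


# e.g. values like datetime objects or sets are stringified
# instead of raising TypeError at persist time.
```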
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] CI tests
We want scrolling for the agent dialog list
- Based on #9833
### Changes 🏗️
- adds backend support for paginating this content
- adds frontend support for scrolling pagination
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] test UI for this
---------
Co-authored-by: Venkat Sai Kedari Nath Gandham <154089422+Kedarinath1502@users.noreply.github.com>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
- Resolves #10458
### Changes 🏗️
Improve logic in `useAgentGraph`:
- Correctly handle unset `flowVersion` in checks in hooks
- Prevent unnecessary WebSocket re-connects
- Remove redundant WebSocket connection management logic
- Untangle hooks for initial load and set-up
- Simplify block filtering logic
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Edit an agent in the builder
- [x] WebSocket doesn't re-connect unnecessarily
- [x] Graph doesn't reset on WebSocket re-connect
- [x] Graph doesn't reset on LaunchDarkly re-connect
Bumps [redis](https://github.com/redis/redis-py) from 5.2.1 to 6.2.0,
for both `autogpt_libs` and `backend`.
Also, additional fixes in `autogpt_libs/pyproject.toml`:
- Move `redis` from dev dependencies to prod dependencies
- Fix author info
- Sort dependencies
> [!NOTE]
> Of course dependabot wouldn't do this on its own; this PR has been
taken over and augmented by @Pwuts
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/redis/redis-py/releases">redis's
releases</a>.</em></p>
<blockquote>
<h2>6.2.0</h2>
<h1>Changes</h1>
<h2>🚀 New Features</h2>
<ul>
<li>Add <code>dynamic_startup_nodes</code> parameter to async
RedisCluster (<a
href="https://redirect.github.com/redis/redis-py/issues/3646">#3646</a>)</li>
<li>Support RESP3 with <code>hiredis-py</code> parser (<a
href="https://redirect.github.com/redis/redis-py/issues/3648">#3648</a>)</li>
<li>[Async] Support for transactions in async <code>RedisCluster</code>
client (<a
href="https://redirect.github.com/redis/redis-py/issues/3649">#3649</a>)</li>
</ul>
<h2>🐛 Bug Fixes</h2>
<ul>
<li>Revert wrongly changed default value for <code>check_hostname</code>
when instantiating <code>RedisSSLContext</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3655">#3655</a>)</li>
<li>Fixed potential deadlock from unexpected <code>__del__</code> call
(<a
href="https://redirect.github.com/redis/redis-py/issues/3654">#3654</a>)</li>
</ul>
<h2>🧰 Maintenance</h2>
<ul>
<li>Update <code>search_json_examples.ipynb</code>: Fix the old import
<code>indexDefinition</code> -> <code>index_definition</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3652">#3652</a>)</li>
<li>Remove mandatory update of the CHANGES file for new PRs. Changes
file will be kept for history for versions < 4.0.0 (<a
href="https://redirect.github.com/redis/redis-py/issues/3645">#3645</a>)</li>
<li>Dropping <code>Python 3.8</code> support as it has reached end of
life (<a
href="https://redirect.github.com/redis/redis-py/issues/3657">#3657</a>)</li>
<li>fix(doc): update Python print output in json doctests (<a
href="https://redirect.github.com/redis/redis-py/issues/3658">#3658</a>)</li>
<li>Update redis-entraid dependency (<a
href="https://redirect.github.com/redis/redis-py/issues/3661">#3661</a>)</li>
</ul>
<h2></h2>
<p>We'd like to thank all the contributors who worked on this release!
<a href="https://github.com/JCornat"><code>@JCornat</code></a> <a
href="https://github.com/ShubhamKaudewar"><code>@ShubhamKaudewar</code></a>
<a href="https://github.com/uglide"><code>@uglide</code></a> <a
href="https://github.com/petyaslavova"><code>@petyaslavova</code></a>
<a
href="https://github.com/vladvildanov"><code>@vladvildanov</code></a></p>
<h2>v6.1.1</h2>
<h1>Changes</h1>
<h2>🐛 Bug Fixes</h2>
<ul>
<li>Revert wrongly changed default value for <code>check_hostname</code>
when instantiating <code>RedisSSLContext</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3655">#3655</a>)</li>
<li>Fixed potential deadlock from unexpected <code>__del__</code> call
(<a
href="https://redirect.github.com/redis/redis-py/issues/3654">#3654</a>)</li>
</ul>
<h2></h2>
<p>We'd like to thank all the contributors who worked on this release!
<a
href="https://github.com/vladvildanov"><code>@vladvildanov</code></a>
<a
href="https://github.com/petyaslavova"><code>@petyaslavova</code></a></p>
<h2>6.1.0</h2>
<h1>Changes</h1>
<h2>🚀 New Features</h2>
<ul>
<li>Support for transactions in <code>RedisCluster</code> client (<a
href="https://redirect.github.com/redis/redis-py/issues/3611">#3611</a>)</li>
<li>Add equality and hashability to <code>Retry</code> and backoff
classes (<a
href="https://redirect.github.com/redis/redis-py/issues/3628">#3628</a>)</li>
</ul>
<h2>🐛 Bug Fixes</h2>
<ul>
<li>Fix RedisCluster <code>ssl_check_hostname</code> not set to
connections. For SSL verification with
<code>ssl_cert_reqs="none"</code>, check_hostname is set to
<code>False</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3637">#3637</a>)
<strong>Important</strong>: The default value for the
<code>check_hostname</code> field of <code>RedisSSLContext</code> has
been changed as part of this PR - this is a breaking change and should
not be introduced in minor versions - unfortunately, it is part of the
current release.
The breaking change is reverted in the next release to fix the behavior
--> 6.2.0</li>
<li>Prevent RuntimeError while reinitializing clusters - sync and async
(<a
href="https://redirect.github.com/redis/redis-py/issues/3633">#3633</a>)</li>
<li>Add equality and hashability to <code>Retry</code> and backoff
classes (<a
href="https://redirect.github.com/redis/redis-py/issues/3628">#3628</a>)
- fixes integration with Django RQ</li>
<li>Fix <code>AttributeError</code> on <code>ClusterPipeline</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3634">#3634</a>)</li>
</ul>
<h2>🧰 Maintenance</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1a59471870"><code>1a59471</code></a>
Adding small change in code to trigger pipeline for the branch.</li>
<li><a
href="83cf781be6"><code>83cf781</code></a>
Adding small change in README to trigger pipeline for the branch.</li>
<li><a
href="f5cd264c40"><code>f5cd264</code></a>
maintenance: Preparation for release 6.2.0 - updating lib version. (<a
href="https://redirect.github.com/redis/redis-py/issues/3662">#3662</a>)</li>
<li><a
href="793cdc63ac"><code>793cdc6</code></a>
maintenance: Update redis-entraid dependency (<a
href="https://redirect.github.com/redis/redis-py/issues/3661">#3661</a>)</li>
<li><a
href="34c40ff82d"><code>34c40ff</code></a>
fix(doc) : update Python print output in json doctests (<a
href="https://redirect.github.com/redis/redis-py/issues/3658">#3658</a>)</li>
<li><a
href="e5756daafa"><code>e5756da</code></a>
Dropping Python 3.8 support as it has reached end of life (<a
href="https://redirect.github.com/redis/redis-py/issues/3657">#3657</a>)</li>
<li><a
href="bc7de608b8"><code>bc7de60</code></a>
[Async] Support for transactions in async RedisCluster client (<a
href="https://redirect.github.com/redis/redis-py/issues/3649">#3649</a>)</li>
<li><a
href="e226ad2b4c"><code>e226ad2</code></a>
Removing connection_pool field from the CommandProtocol definition (<a
href="https://redirect.github.com/redis/redis-py/issues/3656">#3656</a>)</li>
<li><a
href="14a6fc39bd"><code>14a6fc3</code></a>
fix: Fixed potential deadlock from unexpected <strong>del</strong> call
(<a
href="https://redirect.github.com/redis/redis-py/issues/3654">#3654</a>)</li>
<li><a
href="3ebfd5b569"><code>3ebfd5b</code></a>
fix: Revert wrongly changed default value for check_hostname when
instantiati...</li>
<li>Additional commits viewable in <a
href="https://github.com/redis/redis-py/compare/v5.2.1...v6.2.0">compare
view</a></li>
</ul>
</details>
<br />
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Changes 🏗️
- Close websocket connections gracefully during logout ( _whether from
another tab or not_ )
- Also fixed an error on `HeroSection` that shows when the onboarding is
disabled locally ( `null` )
- Uncomment legit tests about connecting/saving agents
- Centralise local storage usage through a single service with typed
keys
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login in 3 tabs ( 1 builder, 1 marketplace, 1 agent run view )
- [x] Logout from the marketplace tab
- [x] The other tabs show logout state gracefully without toasts or
errors
- [x] Websocket connections are closed ( _devtools console shows that_ )
### For configuration changes:
None
This PR implements a comprehensive Ayrshare social media integration for
AutoGPT Platform, enabling users to post content across multiple social
media platforms through a unified interface. Ayrshare provides a single
API to manage posts across Facebook, Twitter/X, LinkedIn, Instagram,
YouTube, TikTok, Pinterest, Reddit, Telegram, Google My Business,
Bluesky, Snapchat, and Threads.
The integration addresses the need for social media automation and
content distribution workflows within AutoGPT agents, allowing users to:
- Connect their social media accounts via SSO
- Post content with platform-specific options and constraints
- Schedule posts across multiple platforms simultaneously
- Handle platform-specific media requirements and validation
⚠️ To simplify the review process, all blocks except the Twitter post block have been commented out; future PRs will uncomment the other platforms so we can test them in isolation.
### Changes 🏗️
#### Backend Integration (`backend/integrations/ayrshare.py`)
- **AyrshareClient**: Complete API client implementation with post
creation, profile management, and JWT generation
- **SocialPlatform enum**: Comprehensive platform definitions for all
supported social networks
- **Response models**: PostResponse, ProfileResponse, JWTResponse for
type-safe API interactions
- **Error handling**: Custom AyrshareAPIException with proper HTTP
status code handling
#### Social Media Posting Blocks (`backend/blocks/ayrshare/post.py`)
- **BaseAyrshareInput**: Shared input schema with common fields (post
text, media URLs, scheduling, etc.)
- **Platform-specific blocks**: 13 dedicated posting blocks, each with
platform-specific validation and options:
- PostToFacebookBlock: Carousel, Reels, Stories, targeting, alt text
- PostToXBlock: Threads, polls, long posts, premium features, subtitles
- PostToLinkedInBlock: Document support, visibility controls, audience
targeting
- PostToInstagramBlock: Stories, Reels, user tags, collaborators
- PostToYouTubeBlock: Video uploads, playlists, visibility, country
targeting
- PostToPinterestBlock: Pins, carousels, board management
- PostToTikTokBlock: Video/image posts, AI labeling, brand content
- PostToRedditBlock: Basic posting functionality
- PostToTelegramBlock: GIF handling, mentions
- PostToGMBBlock: Event/offer posts, call-to-action buttons
- PostToBlueskyBlock: Character limit validation, alt text
- PostToSnapchatBlock: Story types, video thumbnails
- PostToThreadsBlock: Hashtag restrictions, carousel support
#### Helper Models
- **CarouselItem**: Facebook carousel configuration
- **CallToAction, EventDetails, OfferDetails**: Google My Business post
types
- **InstagramUserTag**: Instagram user tagging with coordinates
- **LinkedInTargeting**: LinkedIn audience targeting options
- **PinterestCarouselOption**: Pinterest carousel image options
- **YouTubeTargeting**: YouTube country blocking/allowing
#### Authentication & SSO (`backend/server/integrations/router.py`)
- **SSO endpoint**: `/integrations/ayrshare/sso_url` for account linking
- **Profile management**: Automatic profile creation and key management
- **JWT generation**: Secure token generation for social media account
linking
- **Platform allowlist**: Configured access to all supported social
platforms
#### Frontend Integration (`frontend/src/components/CustomNode.tsx`)
- **AYRSHARE block type**: New BlockUIType.AYRSHARE for
Ayrshare-specific nodes
- **SSO button**: "Connect Social Media Accounts" with loading states
- **Handle generation**: Special handling for Ayrshare blocks with SSO
integration
#### Configuration
- **Environment variables**: Added AYRSHARE_API_KEY and AYRSHARE_JWT_KEY
to .env.example
- **Block registration**: All Ayrshare blocks registered in
AYRSHARE_NODE_IDS array
#### Type Safety & Error Handling
- **Modern typing**: Updated to use `list`, `dict`, `Any` instead of
legacy typing
- **Comprehensive validation**: Platform-specific constraints (character
limits, media counts, file types)
- **User-friendly errors**: Clear error messages for validation failures
and API errors
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
**Test Plan:**
**Backend API Testing:**
- [x] Verify AyrshareClient initializes correctly with API key
- [x] Test JWT generation for SSO authentication
- [x] Test profile creation and management
- [x] Verify all 13 posting blocks are properly registered
- [x] Test platform-specific validation rules for each block
- [x] Verify error handling for missing credentials and API failures
**Frontend Integration Testing:**
- [x] Verify AYRSHARE block type renders correctly in flow editor
- [x] Test SSO button functionality and popup window behavior
- [x] Confirm loading states work properly during authentication
- [x] Verify input handles generate correctly for Ayrshare blocks
- [x] Test platform-specific input fields and validation
**End-to-End Workflow Testing:**
- [x] Create agent with Ayrshare posting blocks
- [x] Test SSO flow: click "Connect Social Media Accounts" button
- [x] Verify popup opens with Ayrshare authentication page
- [x] Test social media account linking process
- [x] Create posts with various platform-specific options:
- [x] X (Twitter): tested basic posting with an image
- [ ] Test scheduling functionality across platforms
- [x] Verify media upload constraints and validation
- [ ] Test error handling for invalid inputs and failed posts
**Error Case Testing:**
- [ ] Test behavior with missing `AYRSHARE_API_KEY` configuration
- [ ] Test invalid social media credentials handling
- [ ] Test network failure scenarios
- [ ] Verify platform-specific validation error messages
- [ ] Test character limit enforcement per platform
- [ ] Test media file type and size restrictions
**Security Testing:**
- [x] Verify JWT tokens are properly generated and validated
- [x] Test profile key isolation between users
- [x] Confirm sensitive credentials are not logged
- [x] Verify SSO popup prevents XSS attacks
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Configuration Changes:**
- Added `AYRSHARE_API_KEY` environment variable for Ayrshare API
authentication
- Added `AYRSHARE_JWT_KEY` environment variable for SSO token generation
- No docker-compose.yml changes required (uses existing backend
services)
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Overview
This PR adds comprehensive Airtable integration to the AutoGPT platform,
enabling users to seamlessly connect their Airtable bases with AutoGPT
workflows for powerful no-code automation capabilities.
## Why Airtable Integration?
Airtable is one of the most popular no-code databases used by teams for
project management, CRMs, inventory tracking, and countless other use
cases. This integration brings significant value:
- **Data Automation**: Automate data entry, updates, and synchronization
between Airtable and other services
- **Workflow Triggers**: React to changes in Airtable bases with
webhook-based triggers
- **Schema Management**: Programmatically create and manage Airtable
table structures
- **Bulk Operations**: Efficiently process large amounts of data with
batch create/update/delete operations
## Key Features
### 🔌 Webhook Trigger
- **AirtableWebhookTriggerBlock**: Listens for changes in Airtable bases
and triggers workflows
- Supports filtering by table, view, and specific fields
- Includes webhook signature validation for security
### 📊 Record Operations
- **AirtableCreateRecordsBlock**: Create single or multiple records (up
to 10 at once)
- **AirtableUpdateRecordsBlock**: Update existing records with upsert
support
- **AirtableDeleteRecordsBlock**: Delete single or multiple records
- **AirtableGetRecordBlock**: Retrieve specific record details
- **AirtableListRecordsBlock**: Query records with filtering, sorting,
and pagination
### 🏗️ Schema Management
- **AirtableCreateTableBlock**: Create new tables with custom field
definitions
- **AirtableUpdateTableBlock**: Modify table properties
- **AirtableAddFieldBlock**: Add new fields to existing tables
- **AirtableUpdateFieldBlock**: Update field properties
## Technical Implementation Details
### Authentication
- Supports both API Key and OAuth authentication methods
- OAuth implementation includes proper token refresh handling
- Credentials are securely managed through the platform's credential
system
### Webhook Security
- Added `credentials` parameter to WebhooksManager interface for proper
signature validation
- HMAC-SHA256 signature verification ensures webhook authenticity
- Webhook cursor tracking prevents duplicate event processing
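A minimal sketch of the signature check (header handling and hex encoding are assumptions; Airtable's actual scheme may differ in detail):

```python
import hashlib
import hmac


def verify_webhook_signature(payload: bytes, received_signature: str, mac_secret: bytes) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare in constant time."""
    expected = hmac.new(mac_secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_signature)
```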
### API Integration
- Comprehensive API client (`_api.py`) with full type safety
- Proper error handling and response validation
- Support for all Airtable field types and operations
## Changes 🏗️
### Added Blocks:
- AirtableWebhookTriggerBlock
- AirtableCreateRecordsBlock
- AirtableDeleteRecordsBlock
- AirtableGetRecordBlock
- AirtableListRecordsBlock
- AirtableUpdateRecordsBlock
- AirtableAddFieldBlock
- AirtableCreateTableBlock
- AirtableUpdateFieldBlock
- AirtableUpdateTableBlock
### Modified Files:
- Updated WebhooksManager interface to support credential-based
validation
- Modified all webhook handlers to support the new interface
## Test Plan 📋
### Manual Testing Performed:
1. **Authentication Testing**
- ✅ Verified API key authentication works correctly
- ✅ Tested OAuth flow including token refresh
- ✅ Confirmed credentials are properly encrypted and stored
2. **Webhook Testing**
- ✅ Created webhook subscriptions for different table events
- ✅ Verified signature validation prevents unauthorized requests
- ✅ Tested cursor tracking to ensure no duplicate events
- ✅ Confirmed webhook cleanup on block deletion
3. **Record Operations Testing**
- ✅ Created single and batch records with various field types
- ✅ Updated records with and without upsert functionality
- ✅ Listed records with filtering, sorting, and pagination
- ✅ Deleted single and multiple records
- ✅ Retrieved individual record details
4. **Schema Management Testing**
- ✅ Created tables with multiple field types
- ✅ Added fields to existing tables
- ✅ Updated table and field properties
- ✅ Verified proper error handling for invalid field types
5. **Error Handling Testing**
- ✅ Tested with invalid credentials
- ✅ Verified proper error messages for API limits
- ✅ Confirmed graceful handling of network errors
### Security Considerations 🔒
1. **API Key Management**
- API keys are stored encrypted in the credential system
- Keys are never logged or exposed in error messages
- Credentials are passed securely through the execution context
2. **Webhook Security**
- HMAC-SHA256 signature validation on all incoming webhooks
- Webhook URLs use secure ingress endpoints
- Proper cleanup of webhooks when blocks are deleted
3. **OAuth Security**
- OAuth tokens are securely stored and refreshed
- Scopes are limited to necessary permissions
- Token refresh happens automatically before expiration
## Configuration Requirements
No additional environment variables or configuration changes are
required. The integration uses the existing credential management
system.
## Checklist 📋
#### For code changes:
- [x] I have read the [contributing
instructions](https://github.com/Significant-Gravitas/AutoGPT/blob/master/.github/CONTRIBUTING.md)
- [x] Confirmed that `make lint` passes
- [x] Confirmed that `make test` passes
- [x] Updated documentation where needed
- [x] Added/updated tests for new functionality
- [x] Manually tested all blocks with real Airtable bases
- [x] Verified backwards compatibility of webhook interface changes
#### Security:
- [x] No hard-coded secrets or sensitive information
- [x] Proper input validation on all user inputs
- [x] Secure credential handling throughout
## Changes 🏗️
- Make docker + deps cache actually work on the FE CI
- Run the E2E test data script before Playwright
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI is faster in repeated runs ( _uses cache_ )
- [x] Test data script runs successfully
### For configuration changes:
None
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
This PR contains code changes that resolve the cursor jump issue in the **Note** block (#9252). The changes affect the `NodeTextBoxInput` component so that it keeps the text in local state instead of mutating the `value` prop directly.
It also includes a change to use the store admin; this came from a rebase issue but was approved by maintainer @ntindle.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested that the Note block allows editing text normally
https://github.com/user-attachments/assets/f2800bf1-9867-4627-ac9d-44718627b263
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Ubbe <hi@ubbe.dev>
https://github.com/Significant-Gravitas/AutoGPT/pull/10437 migrated
DatabaseManager away from RestApi, but this change is not yet reflected
in Docker Compose, which causes the self-hosted Docker mode to break.
### Changes 🏗️
Add DatabaseManager process in docker.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] `docker compose up`
## Changes 🏗️
Fixed WebSocket connection errors during multi-tab logout 💆🏽
<img width="1193" height="273" alt="Screenshot 2025-07-23 at 22 23 35"
src="https://github.com/user-attachments/assets/bf6f964d-bcb0-4a2a-adff-1194defe1e61"
/>
Previously, when users logged out in one browser tab, WebSocket
connections in other open tabs would continue trying to reconnect with
invalid authentication tokens, causing console errors and potential
runtime errors.
### What was fixed
- WebSocket connections are now properly disconnected when logout occurs
in any tab
- cross-tab logout mechanism now includes WebSocket cleanup to prevent
reconnection errors
- added logic to prevent automatic reconnection after intentional
disconnection
- added E2E tests to cover this case
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual testing: Login in multiple tabs, navigate to builder
(establishes WebSocket), logout from one tab, verify no console errors
- [x] E2E test: Added automated test covering multi-tab logout with
WebSocket cleanup verification
- [x] Verified cross-tab logout still works correctly
- [x] Confirmed WebSocket connections reconnect properly after fresh
login
### For configuration changes:
None
We've been reporting agents that are stuck in the `QUEUED` status. This
change also covers agents in the `RUNNING` status that have been stuck for more
than 24 hours.
### Changes 🏗️
Report agents that have been in the `RUNNING` status for more than 24 hours.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual test
It's hard to debug two processes running in the same pod. We need to
decouple the two processes into two services.
### Changes 🏗️
Move the DatabaseManager away from the RestAPI into a standalone service.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual test
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
## Changes 🏗️
https://github.com/user-attachments/assets/42e1c896-5f3b-447c-aee9-4f5963c217d9
There is now a 🔔 icon on the Navigation bar that shows previous agent
runs and displays real-time agent running status.
If you run an agent, the bell will show on a badge how many agents are
running. If you hover over it, a hint appears. If you click on it, it
opens a dropdown and displays the executions with their status ( _which
should match what we have in library functionality, not design-wise_ ).
I leveraged the existing APIs for this purpose. Most of the run logic is
[encapsulated on this
hook](https://github.com/Significant-Gravitas/AutoGPT/compare/dev...feat/agent-notifications?expand=1#diff-a9e7f2904d6283b094aca19b64c7168e8c66be1d5e0bb454be8978cb98526617)
and is also an independent `<AgentActivityDropdown />` component.
Clicking on an agent run opens that run in the library page.
This new functionality is covered by E2E tests 💆🏽✔️
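For orientation, a rough sketch of the badge behaviour ( _the hook and run shape are illustrative; the real logic lives in the linked hook and the `<AgentActivityDropdown />` component_ ):
```tsx
type AgentRun = { id: string; status: "QUEUED" | "RUNNING" | "COMPLETED" | "FAILED" };

// Illustrative hook; see the linked hook for the actual implementation.
declare function useAgentRuns(): AgentRun[];

export function AgentActivityBell() {
  const runs = useAgentRuns();
  const running = runs.filter((run) => run.status === "RUNNING").length;

  return (
    <button title={running > 0 ? `${running} agent(s) running` : "No agents running"}>
      🔔
      {running > 0 && <span className="badge">{running}</span>}
    </button>
  );
}
```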
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The navigation bar layout looks good when logged out
- [x] The navigation bar layout looks good when logged in
- [x] Open an agent in the library and click `Run`
- [x] See the real-time activity of the agent running on the navigation
bar bell icon
### For configuration changes:
_No configuration changes needed._
Some AgentInput blocks can store an empty string as the default value, which
causes an error for non-string inputs.
Error:
```
2025-07-22 23:14:10,424 WARNING Invalid <class 'backend.blocks.io.AgentToggleInputBlock.Input'>: {'name': 'Enriched info (including email), will double the search cost', 'value': '', 'advanced': False, 'placeholder_values': []}, 1 validation error for Input
value
Input should be a valid boolean, unable to interpret input [type=bool_parsing, input_value='', input_type=str]
For further information visit https://errors.pydantic.dev/2.11/v/bool_parsing
2025-07-22 23:14:10,424 WARNING Invalid <class 'backend.blocks.io.AgentNumberInputBlock.Input'>: {'name': 'Expected New Leads Count', 'value': '', 'advanced': False, 'placeholder_values': []}, 1 validation error for Input
value
Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='', input_type=str]
For further information visit https://errors.pydantic.dev/2.11/v/int_parsing
```
### Changes 🏗️
Ignore invalid fields when constructing the agent input schema.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Use AgentNumberInput and AgentToggleInput with an empty string value.
<!-- Clearly explain the need for these changes: -->
We set up LaunchDarkly, so now we can dynamically control which blocks are
available in the UI.
### Changes 🏗️
Enables the Google blocks and fixes the LaunchDarkly `.env.example` for the local env.
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Deploy to a test environment and verify the blocks show correctly vs.
are hidden when toggling LaunchDarkly rules
### Changes 🏗️
The previous implementation of the `LaunchDarklyProvider` had a race
condition where it would only initialize after the user's authentication
state was fully resolved. This caused two primary issues:
1. A delay in evaluating any feature flags, leading to a "flash of
un-styled/un-flagged content" until the user session was loaded.
2. An unreliable transition from an un-flagged state to a flagged state,
which could cause UI flicker or incorrect flag evaluations upon login.
This pull request refactors the provider to follow a more robust,
industry-standard pattern. It now initializes immediately with an
`anonymous` context, ensuring flags are available from the very start of
the application lifecycle. When the user logs in and their session
becomes available, the provider seamlessly transitions to an
authenticated context, guaranteeing that the correct flags are evaluated
consistently.
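A hedged sketch of the resulting provider ( _the Supabase session hook and exact prop wiring are assumptions; `<LDProvider>`, the `key` re-mount, and the `localStorage` bootstrap are as described in the summary below_ ):
```tsx
import { LDProvider } from "launchdarkly-react-client-sdk";
import type { ReactNode } from "react";

// Assumed session hook shape; the real app reads the Supabase session elsewhere.
declare function useSupabaseUser(): { user: { id: string; email?: string } | null };

export function LaunchDarklyProvider({ children }: { children: ReactNode }) {
  const { user } = useSupabaseUser(); // may still be resolving on first render

  // Start anonymous so flags evaluate from the very first render,
  // then switch to the authenticated context once the session is known.
  const context = user
    ? { kind: "user", key: user.id, email: user.email }
    : { kind: "user", key: "anonymous", anonymous: true };

  return (
    <LDProvider
      key={user?.id ?? "anonymous"} // force a clean SDK re-init when identity changes
      clientSideID={process.env.NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID ?? ""}
      context={context}
      options={{ bootstrap: "localStorage" }} // cache flags for faster subsequent loads
    >
      {children}
    </LDProvider>
  );
}
```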
### Checklist
#### For code changes:
- I have clearly listed my changes in the PR description
- I have made a test plan
- I have tested my changes according to the test plan:
**Test Plan:**
- [x] **Anonymous User:** Load the application in an incognito window
without logging in. Verify that feature flags are evaluated correctly
for an anonymous user. Check the browser console for the
`[LaunchDarklyProvider] Using anonymous context` message.
- [x] **Login Flow:** While on the site, log in. Verify that the UI
updates with the correct feature flags for the authenticated user. Check
the console for the `[LaunchDarklyProvider] Using authenticated context`
message and confirm the LaunchDarkly client re-initializes.
- [x] **Authenticated User (Page Refresh):** As a logged-in user,
refresh the page. Verify that the application loads directly with the
authenticated user's flags, leveraging the cached session and
bootstrapped flags from `localStorage`.
- [x] **Logout Flow:** While logged in, log out. Verify that the UI
reverts to the anonymous user's state and flags. The provider `key`
should change back to "anonymous", triggering another re-mount.
<details><summary>Summary of Code Changes</summary>
- Refactored `LaunchDarklyProvider` to handle user authentication state
changes gracefully.
- The provider now initializes immediately with an `anonymous` user
context while the Supabase user session is loading.
- Once the user is authenticated, the provider's context is updated to
reflect the logged-in user's details.
- Added a `key` prop to the `<LDProvider>` component, using the user's
ID (or "anonymous"). This forces React to re-mount the provider when the
user's identity changes, ensuring a clean re-initialization of the
LaunchDarkly SDK.
- Enabled `localStorage` bootstrapping (`options={{ bootstrap:
"localStorage" }}`) to cache flags and improve performance on subsequent
page loads.
- Added `console.debug` statements for improved observability into the
provider's state (anonymous vs. authenticated).
</details>
#### For configuration changes:
- `.env.example` is updated or already compatible with my changes
- `docker-compose.yml` is updated or already compatible with my changes
- I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Configuration Changes</summary>
- No configuration changes were made. This PR relies on existing
environment variables (`NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID` and
`NEXT_PUBLIC_LAUNCHDARKLY_ENABLED`).
</details>
---------
Co-authored-by: Lluis Agusti <hi@llu.lu>
## Summary
- Add thread safety to NodeExecutionProgress class to prevent race
conditions between graph executor and node executor threads
- Fixes potential data corruption and lost outputs during concurrent
access to shared output lists
- Uses single global lock per node for minimal performance impact
- Instead of blocking on a node evaluation before adding another one, we
move on to the next node, in case another node completes it.
## Changes
- Added `threading.Lock` to NodeExecutionProgress class
- Protected `add_output()` calls from node executor thread with lock
- Protected `pop_output()` calls from graph executor thread with lock
- Protected `_pop_done_task()` output checks with lock
## Problem Solved
The `NodeExecutionProgress.output` dictionary was being accessed
concurrently:
- `add_output()` called from node executor thread (asyncio thread)
- `pop_output()` called from graph executor thread (main thread)
- Python lists are not thread-safe for concurrent append/pop operations
- This could cause data corruption, index errors, and lost outputs
## Test Plan
- [x] Existing executor tests pass
- [x] No performance regression (operations are microsecond-level)
- [x] Thread safety verified through code analysis
## Technical Details
- Single `threading.Lock()` per NodeExecutionProgress instance (~64
bytes)
- Lock acquisition time (~100-200ns) is minimal compared to list
operations
- Maintains order guarantees for same node_execution_id processing
- No GIL contention issues as operations are very brief
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
The Rest-Api service is a vital service, so we should minimize the amount
of external resources impacting it.
The NotificationManager service runs as a singleton (similar to the
scheduler service), so it makes sense to put it in a single pod.
### Changes 🏗️
Move the NotificationManager service from rest-api pod to the scheduler
pod
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual test
## Changes 🏗️
Launch Darkly is not being initialised in production, despite all env
variables appearing to be correctly set on paper 🧐
I tried doing production builds locally, and I noticed this provider
needs to be initialised on the client because Launch Darkly flags are
designed to work on the client side, where they can access user context
and browser environment.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Did production build locally and run it
- [x] See LD initialised on the console after the `use client` directive
was added
### For configuration changes:
None
Currently, we only create a library entry for the top-most graph when
importing a graph from an exported file.
This can cause complications, as there is no way to remove the library
entry for it.
### Changes 🏗️
Create the library entry for all the subgraphs during the import
process.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Export an agent with subgraphs and import it back.
### Changes 🏗️
This PR adds a new utility block to the basic blocks collection. This
block provides a simple way to reverse the order of elements in any
list.
**New Features:**
- Added class in
- Block accepts any list as input and returns the same list with
elements in reversed order
- Preserves the original list (creates a copy before reversing)
- Works with lists containing any type of elements
**Technical Details:**
- Block ID:
- Category:
- Input: - The list to reverse (accepts )
- Output: - The list with elements in reversed order
- Includes test input/output for validation
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created an agent with the ReverseListOrderBlock
- [x] Tested with various list types (numbers, strings, mixed types)
- [x] Verified the block preserves the original list
- [x] Confirmed the block correctly reverses the order of elements
- [x] Tested with empty lists and single-element lists
- [x] Verified the block integrates properly with other blocks in a
workflow
#### For configuration changes:
- [x] is updated or already compatible with my changes
- [x] is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Note:** No configuration changes required - this is a pure code
addition that uses the existing block framework.
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
## Summary
- Introduced correct health check API for AppService
- Fixed service health check mechanism to properly handle health status
monitoring
## Changes Made
- Updated health check implementation in AppService.
- Make the rest service health check also check the health of the
DatabaseManager.
## Test plan
- [x] Verify health check endpoint responds correctly
- [x] Test health check mechanism under various service states
- [x] Validate monitoring and alerting integration
🤖 Generated with [Claude Code](https://claude.ai/code)
### Changes 🏗️
This PR optimizes database performance by adding missing foreign key
indexes, removing unused/redundant indexes, cleaning up all legacy
untracked indexes, and adding performance indexes for materialized views
to achieve 100% optimized database indexing.
**Foreign Key Indexes Added:**
- `AgentGraph`: `[forkedFromId, forkedFromVersion]` - For fork
relationship queries
- `AgentGraphExecution`: `[agentPresetId]` - For preset-based execution
filtering
- `AgentNodeExecution`: `[agentNodeId]` - For node execution lookups
- `AgentNodeExecutionInputOutput`: `[agentPresetId]` - For preset
input/output queries
- `AgentPreset`: `[agentGraphId, agentGraphVersion]` & `[webhookId]` -
For graph and webhook lookups
- `LibraryAgent`: `[agentGraphId, agentGraphVersion]` & `[creatorId]` -
For agent and creator queries
- `StoreListing`: `[agentGraphId, agentGraphVersion]` - For marketplace
agent lookups
- `StoreListingReview`: `[reviewByUserId]` - For user review queries
**Unused/Redundant Indexes Removed:**
- `User.email` - Unused index identified by linter
- `AnalyticsMetrics.userId` - Unused index causing write overhead
- `AnalyticsDetails.type` - Redundant (covered by composite `[userId,
type]`)
- `APIKey.key`, `APIKey.status` - Unused indexes
- Named index `"analyticsDetails"` - Converted to standard composite
index
**All Legacy Untracked Indexes Removed:**
- `idx_store_listing_version_status` - Redundant with Prisma composite
index
- `idx_slv_agent` - Redundant with Prisma-managed `[agentGraphId,
agentGraphVersion]`
- `idx_store_listing_version_approved_listing` - Redundant with unique
constraint
- `StoreListing_agentId_owningUserId_idx` - Legacy index superseded by
current strategy
- `StoreListing_isDeleted_isApproved_idx` - Replaced by optimized
composite index
- `StoreListing_isDeleted_idx` - Redundant with composite index
- `StoreListingVersion_agentId_agentVersion_isDeleted_idx` - Legacy
index replaced
- `idx_store_listing_approved` - Redundant with existing `[owningUserId,
slug]` unique constraint and `[isDeleted, hasApprovedVersion]` index
- `idx_slv_categories_gin` - Specialized array search index removed (can
be re-added if category filtering is implemented)
- `idx_profile_user` - Duplicate of Prisma-managed `Profile_userId_idx`
**Materialized View Performance Indexes Added:**
- `idx_mv_review_stats_rating` on `mv_review_stats(avg_rating DESC)` -
Optimizes sorting agents by rating
- `idx_mv_review_stats_count` on `mv_review_stats(review_count DESC)` -
Optimizes sorting agents by review count
**Result: 100% Optimized Database Indexing**
- All database indexes are now defined and managed through Prisma schema
- No more untracked indexes requiring manual SQL maintenance
- Added performance indexes for materialized views used by marketplace
views
- Improved query performance for agent sorting and filtering
- Enhanced maintainability and consistency across environments
**Schema Comments Updated:**
- Removed all references to dropped untracked indexes
- Simplified documentation to reflect Prisma-only approach for regular
tables
- Added comprehensive documentation for materialized view indexes and
their purposes
- Maintained documentation for materialized view refresh strategy
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Schema changes compile successfully with Prisma
- [x] Migration adds required FK indexes and materialized view
performance indexes
- [x] Migration drops all legacy indexes and redundant untracked indexes
- [x] All pre-commit hooks pass (linting, formatting, type checking)
- [x] No breaking changes to existing foreign key relationships
- [x] Verified existing Prisma indexes cover all query patterns
- [x] Schema comments comprehensively document all indexing strategy
- [x] Materialized view performance indexes optimize marketplace sorting
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Claude <noreply@anthropic.com>
This PR adds a database index to improve query performance based on
Supabase query performance insights and index recommendations. These
indexes target frequently queried columns and relationships to reduce
query execution time.
### Changes 🏗️
- Added index on `AgentNodeExecutionInputOutput.agentPresetId`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified schema changes are valid Prisma syntax
- [x] Confirmed indexes target frequently queried columns based on
Supabase recommendations
- [x] Ensured no duplicate indexes are created
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
## Changes 🏗️
Only initialise Launch Darkly if we have a user and the config is set.
### Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Once merged login into app and see if Launch Darkly initialises
### For configuration changes:
No
This PR reduces the API layer's dependency on the DatabaseManager service
by avoiding the DatabaseManager for credential fetches when a direct query
is possible from the API layer.
### Changes 🏗️
* If Prisma is available, use the direct query.
* Otherwise, utilize the database manager.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run blocks with credentials like AiTextGeneratorBlock
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
<img width="563" height="400" alt="Screenshot 2025-07-19 at 00 50 25"
src="https://github.com/user-attachments/assets/ce9b2fe3-a244-40ed-a7a9-27b983ec90f2"
/>
Introduced an error in my previous PR:
https://github.com/Significant-Gravitas/AutoGPT/pull/10397 (
`shouldRender` should be taken into account... )
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login / logout as normal completing challenge
#### For configuration changes:
None
## Changes 🏗️
Do not render the Cloudflare CAPTCHA if the user has already passed
verification.
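A minimal sketch of the conditional render, assuming the auth form tracks a `verified` flag ( _the widget component name is illustrative_ ):
```tsx
// Illustrative CAPTCHA widget; the real Turnstile wrapper differs.
declare function CloudflareTurnstile(props: { onSuccess: () => void }): JSX.Element;

export function CaptchaGate(props: { verified: boolean; onVerify: () => void }) {
  // Skip the CAPTCHA widget entirely once the user has already passed verification.
  if (props.verified) return null;
  return <CloudflareTurnstile onSuccess={props.onVerify} />;
}
```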
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Try with Cloudflare enabled
- [x] Shows the CAPTCHA when not verified
- [x] CAPTCHA is hidden when verified
### For configuration changes
None
## Changes 🏗️
https://github.com/user-attachments/assets/dd635fa1-d8ea-4e5b-b719-2c7df8e57832
Using [LaunchDarkly](https://launchdarkly.com/), introduce the concept
of "beta" blocks: blocks that will be disabled in production
unless enabled via a feature flag. This allows us to safely hide and
test certain blocks in production.
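A hedged sketch of how the gating can look on the frontend ( _the `beta-blocks` flag name comes from the test plan below; the flag's value shape and the filtering hook are assumptions_ ):
```ts
import { useFlags } from "launchdarkly-react-client-sdk";

type Block = { id: string; name: string; beta?: boolean };

// Hide blocks marked as beta unless the `beta-blocks` flag is enabled.
// Note: the React SDK camel-cases flag keys by default, hence `betaBlocks`.
export function useVisibleBlocks(allBlocks: Block[]): Block[] {
  const { betaBlocks } = useFlags(); // assumed boolean flag value
  return allBlocks.filter((block) => !block.beta || Boolean(betaBlocks));
}
```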
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Checkout and run FE locally
- [x] With the `beta-blocks` flag `disabled` in LD
- [x] Go to the builder and see **you can't** add the blocks specified
on the flag
- [x] With the `beta-blocks` flag `enabled` in LD
- [x] Go to the builder and see **you can** add the blocks specified on
the flag
### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
🚧 We need to add the `NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID` to the dev and
prod environments.
This PR adds two new blocks for running Replicate models in AutoGPT:
- **ReplicateModelBlock**: Synchronous execution of any Replicate model with
custom inputs

**Key Features:**
- Support for any public Replicate model via model name (e.g.,
"stability-ai/stable-diffusion")
- Custom input parameters via dictionary (e.g., `{"prompt": "a beautiful
landscape"}`)
- Optional model version specification for reproducible results
- Proper credentials handling using `ProviderName.REPLICATE`
- Comprehensive test suite with 12 test cases covering success, error, and
edge cases
- Type-safe implementation with full pyright compliance
- Mock methods for testing and development
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Unit tests for ReplicateModelBlock (6 test cases)
- Test block initialization and configuration
- Test mock methods for development
- Test error handling and edge cases
- Verify type safety with pyright (0 errors)
- Verify code formatting with Black and isort
- Verify linting with Ruff (0 errors)
- Test credentials handling with ProviderName.REPLICATE
- Test model version specification functionality
- Test both synchronous and asynchronous execution paths
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
## Summary
This PR introduces a complete cloud storage infrastructure and file
upload system that agents can use instead of passing base64 data
directly in inputs, while maintaining backward compatibility for the
builder's node inputs.
### Problem Statement
Currently, when agents need to process files, they pass base64-encoded
data directly in the input, which has several limitations:
1. **Size limitations**: Base64 encoding increases file size by ~33%,
making large files impractical
2. **Memory usage**: Large base64 strings consume significant memory
during processing
3. **Network overhead**: Base64 data is sent repeatedly in API requests
4. **Performance impact**: Encoding/decoding base64 adds processing
overhead
### Solution
This PR introduces a complete cloud storage infrastructure and new file
upload workflow:
1. **New cloud storage system**: Complete `CloudStorageHandler` with
async GCS operations
2. **New upload endpoint**: Agents upload files via `/files/upload` and
receive a `file_uri`
3. **GCS storage**: Files are stored in Google Cloud Storage with
user-scoped paths
4. **URI references**: Agents pass the `file_uri` instead of base64 data
5. **Block processing**: File blocks can retrieve actual file content
using the URI
### Changes Made
#### New Files Introduced:
- **`backend/util/cloud_storage.py`** - Complete cloud storage
infrastructure (545 lines)
- **`backend/util/cloud_storage_test.py`** - Comprehensive test suite
(471 lines)
#### Backend Changes:
- **New cloud storage infrastructure** in
`backend/util/cloud_storage.py`:
- Complete `CloudStorageHandler` class with async GCS operations
- Support for multiple cloud providers (GCS implemented, S3/Azure
prepared)
- User-scoped and execution-scoped file storage with proper
authorization
- Automatic file expiration with metadata-based cleanup
- Path traversal protection and comprehensive security validation
- Async file operations with proper error handling and logging
- **New `UploadFileResponse` model** in `backend/server/model.py`:
- Returns `file_uri` (GCS path like
`gcs://bucket/users/{user_id}/file.txt`)
- Includes `file_name`, `size`, `content_type`, `expires_in_hours`
- Proper Pydantic schema instead of dictionary response
- **New `upload_file` endpoint** in `backend/server/routers/v1.py`:
- Complete new endpoint for file upload with cloud storage integration
- Returns GCS path URI directly as `file_uri`
- Supports user-scoped file storage for proper isolation
- Maintains fallback to base64 data URI when GCS not configured
- File size validation, virus scanning, and comprehensive error handling
#### Frontend Changes:
- **Updated API client** in
`frontend/src/lib/autogpt-server-api/client.ts`:
- Modified return type to expect `file_uri` instead of `signed_url`
- Supports the new upload workflow
- **Enhanced file input component** in
`frontend/src/components/type-based-input.tsx`:
- **Builder nodes**: Still use base64 for immediate data retention
without expiration
- **Agent inputs**: Use the new upload endpoint and pass `file_uri`
references
- Maintains backward compatibility for existing workflows
#### Test Updates:
- **New comprehensive test suite** in
`backend/util/cloud_storage_test.py`:
- 27 test cases covering all cloud storage functionality
- Tests for file storage, retrieval, authorization, and cleanup
- Tests for path validation, security, and error handling
- Coverage for user-scoped, execution-scoped, and system storage
- **New upload endpoint tests** in `backend/server/routers/v1_test.py`:
- Tests for GCS path URI format (`gcs://bucket/path`)
- Tests for base64 fallback when GCS not configured
- Validates file upload, virus scanning, and size limits
- Tests user-scoped file storage and access control
### Benefits
1. **New Infrastructure**: Complete cloud storage system with
enterprise-grade features
2. **Scalability**: Supports larger files without base64 size penalties
3. **Performance**: Reduces memory usage and network overhead with async
operations
4. **Security**: User-scoped file storage with comprehensive access
control and path validation
5. **Flexibility**: Maintains base64 support for builder nodes while
providing URI-based approach for agents
6. **Extensibility**: Designed for multiple cloud providers (GCS, S3,
Azure)
7. **Reliability**: Automatic file expiration, cleanup, and robust error
handling
8. **Backward compatibility**: Existing builder workflows continue to
work unchanged
### Usage
**For Agent Inputs:**
```typescript
// 1. Upload file
const response = await api.uploadFile(file);
// 2. Pass file_uri to agent
const agentInput = { file_input: response.file_uri };
```
**For Builder Nodes (unchanged):**
```typescript
// Still uses base64 for immediate data retention
const nodeInput = { file_input: "data:image/jpeg;base64,..." };
```
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All new cloud storage tests pass (27/27)
- [x] All upload file tests pass (7/7)
- [x] Full v1 router test suite passes (21/21)
- [x] All server tests pass (126/126)
- [x] Backend formatting and linting pass
- [x] Frontend TypeScript compilation succeeds
- [x] Verified GCS path URI format (`gcs://bucket/path`)
- [x] Tested fallback to base64 data URI when GCS not configured
- [x] Confirmed file upload functionality works in UI
- [x] Validated response schema matches Pydantic model
- [x] Tested agent workflow with file_uri references
- [x] Verified builder nodes still work with base64 data
- [x] Tested user-scoped file access control
- [x] Verified file expiration and cleanup functionality
- [x] Tested security validation and path traversal protection
#### For configuration changes:
- [x] No new configuration changes required
- [x] `.env.example` remains compatible
- [x] `docker-compose.yml` remains compatible
- [x] Uses existing GCS configuration from media storage
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude AI <claude@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
<!-- Clearly explain the need for these changes: -->
The docs are failing to build due to changes made while updating the E2E
testing to remove the monitor page.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
fixes the missing start and end tags that prevent docs from building.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] run the build and check for no errors indicating missing tags
- [x] deploy to netlify
This PR adds two new Gmail integration blocks—**Gmail Get Thread** and
**Gmail Reply**—to the platform, enhancing threaded email workflows. Key
changes include:
- **GmailGetThreadBlock**:
- New block that retrieves an entire Gmail thread by `threadId`, with an
option to include or exclude messages from Spam and Trash.
- Supports use cases like fetching all messages in a conversation to
check for responses.
- **GmailReplyBlock**:
- New block that sends a reply within an existing Gmail thread,
maintaining the thread context.
- Accepts detailed input fields including recipients, CC, BCC, subject,
body, and attachments.
- Ensures replies are properly associated with their parent message and
thread.
- **Enhancements to existing Gmail blocks**:
- The `Email` model and related outputs now include a `threadId` field.
- Updated test data and mock data to support threaded operations.
- Expanded OAuth scopes for actions requiring thread metadata.
- **Documentation updates**:
- Added documentation for the new Gmail blocks in both the general block
listing and the detailed Gmail block docs.
- Clarified that the `Email` output now includes the `threadId`.
These updates enable more advanced and context-aware Gmail automations,
such as fetching full conversations and replying inline, supporting
richer communication workflows with Gmail.
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Try all the gmail blocks
- [x] Send an email reply based on a thread from the get thread block
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
### Changes 🏗️
This PR introduces several key improvements to message handling, block
functionality, and execution reliability:
- **Renamed CSV block to Spreadsheet block** with enhanced CSV/Excel
processing capabilities
- **Added message size limiting and truncation** for Redis communication
to prevent connection issues
- **Optimized FileReadBlock** to yield content chunks instead of
duplicated outputs for better performance
- **Improved execution termination handling** with better timeout
management and event publishing
- **Enhanced continuous retry decorator** with async function support
- **Implemented payload truncation** to prevent Redis connection issues
from oversized messages
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified backend starts without errors
- [x] Confirmed message truncation works for large payloads
- [x] Tested spreadsheet block functionality with CSV and Excel files
- [x] Validated execution termination improvements
- [x] Checked FileReadBlock chunk processing
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
### Changes
- Introduced a new script to generate test data for end-to-end (E2E)
tests using API functions, ensuring compatibility with future model
changes.
- The script creates test users, agent blocks, graphs, profiles, library
agents, presets, API keys, and store submissions.
- Utilizes external services for image and video URLs, and includes
error handling for data creation processes.
- Provides a summary of created data upon completion, enhancing the
testing framework for the AutoGPT platform.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test scripts are working perfectly and not breaking anything. Data
is also correctly visible in the database.
## Changes 🏗️
<img width="1843" height="321" alt="Screenshot 2025-07-17 at 15 48 01"
src="https://github.com/user-attachments/assets/63f528f7-1dc3-4587-a5af-d02b2c858191"
/>
In this recent PR
https://github.com/Significant-Gravitas/AutoGPT/pull/10394/ the
navigation bar disappeared when logged out. A change was introduced
where the navigation bar does not show up if we don't have profile data
( _which we won't have when logged out_ ). This solves it + adds tests
covering the navigation bar in the logged out state.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run this locally
- [x] See the navbar appearing
- [x] E2E tests pass on the CI
### For configuration changes:
None
## Changes 🏗️
### User creation tests
Now, all tests use the users created via the platform signup in
`global-setup.ts`. Their login details are stored in a `.auth/user-pool.json`
file. I have deleted the logic that created test users via Supabase
directly.
### Build tests speed
I have refactored the builder tests so that, instead of adding 100s of
blocks under a single test user session, a new test user logs in and adds
the blocks for each letter:
```
Test user 1
- logins and adds blocks starting with "a"
Test user 2
- logins and adds blocks starting with "b"
```
Given that we know the builder becomes slow once we have 30 or more
blocks, this way a test user never adds more than 10 blocks in a
given test ( _without losing coverage_ ), so we don't need timeouts or
artificial waits due to the UI being slow.
### Test readability changes
Refactor existing tests, using short-hand utilities, to be:
- easier to write
- clearer to read
- easier to debug
```ts
// Selectors
getId("id") // --> page.getByTestId("id")
getText("foo") // --> page.getByText("id")
getButton("Run") // --> page.getByRole("button", {name: "Run"}
...
// Assetions
const btn = getButton("Save")
isVisible(btn) // --> expect(btn).toBeVisible()
```
These utilities live under `selectors.ts` and `assertions.ts`. Their
usage is optional but encouraged.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Refactored tests code looks good
- [x] E2E tests are 🟢 on the CI
### For configuration changes:
No config changes
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Bumps [pytest-asyncio](https://github.com/pytest-dev/pytest-asyncio)
from 0.26.0 to 1.0.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pytest-dev/pytest-asyncio/releases">pytest-asyncio's
releases</a>.</em></p>
<blockquote>
<h2>pytest-asyncio 1.0.0</h2>
<h1><a
href="https://github.com/pytest-dev/pytest-asyncio/tree/1.0.0">1.0.0</a>
- 2025-05-26</h1>
<h2>Removed</h2>
<ul>
<li>The deprecated <em>event_loop</em> fixture.
(<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1106">#1106</a>)</li>
</ul>
<h2>Added</h2>
<ul>
<li>Prelimiary support for Python 3.14
(<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1025">#1025</a>)</li>
</ul>
<h2>Changed</h2>
<ul>
<li>Scoped event loops (e.g. module-scoped loops) are created once
rather
than per scope (e.g. per module). This reduces the number of fixtures
and speeds up collection time, especially for large test suites.
(<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1107">#1107</a>)</li>
<li>The <em>loop_scope</em> argument to <code>pytest.mark.asyncio</code>
no longer forces
that a pytest Collector exists at the level of the specified scope.
For example, a test function marked with
<code>pytest.mark.asyncio(loop_scope="class")</code> no longer
requires a class
surrounding the test. This is consistent with the behavior of the
<em>scope</em> argument to <code>pytest_asyncio.fixture</code>.
(<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1112">#1112</a>)</li>
</ul>
<h2>Fixed</h2>
<ul>
<li>An error caused when using pytest's [--setup-plan]{.title-ref}
option.
(<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/630">#630</a>)</li>
<li>Unsuppressed import errors with pytest option
<code>--doctest-ignore-import-errors</code>
(<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/797">#797</a>)</li>
<li>A "fixture not found" error in connection with
package-scoped loops
(<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1052">#1052</a>)</li>
</ul>
<h2>Notes for Downstream Packagers</h2>
<ul>
<li>Removed a test that had an ordering dependency on other tests.
(<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1114">#1114</a>)</li>
</ul>
<h2>pytest-asyncio 1.0.0a1</h2>
<h1><a
href="https://github.com/pytest-dev/pytest-asyncio/tree/1.0.0a1">1.0.0a1</a>
- 2025-05-09</h1>
<h2>Removed</h2>
<ul>
<li>The deprecated <em>event_loop</em> fixture.
(<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1106">#1106</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="5ef97bd60a"><code>5ef97bd</code></a>
chore: Prepare release of v1.0.0.</li>
<li><a
href="f212e24ec5"><code>f212e24</code></a>
docs: Mention fix of <a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/797">#797</a>.</li>
<li><a
href="32c1d10e87"><code>32c1d10</code></a>
test: Removed obsolete test for async_gen_fixtures.</li>
<li><a
href="627ce9265e"><code>627ce92</code></a>
[pre-commit.ci] pre-commit autoupdate</li>
<li><a
href="a55ff36f2c"><code>a55ff36</code></a>
Build(deps): Bump pluggy from 1.5.0 to 1.6.0 in
/dependencies/default</li>
<li><a
href="633389f302"><code>633389f</code></a>
Build(deps): Bump hypothesis in /dependencies/default</li>
<li><a
href="0c99466a6c"><code>0c99466</code></a>
docs: Fixed an error that reported a missing event_loop fixture when
using pa...</li>
<li><a
href="0688d17581"><code>0688d17</code></a>
ci: Replace Github template expansion with env variable expansion.</li>
<li><a
href="2adcf52664"><code>2adcf52</code></a>
ci: Quote Github variable expansion.</li>
<li><a
href="dd0fac96cd"><code>dd0fac9</code></a>
ci: Fixed a bug that prevented release notes from being extracted from a
Git ...</li>
<li>Additional commits viewable in <a
href="https://github.com/pytest-dev/pytest-asyncio/compare/v0.26.0...v1.0.0">compare
view</a></li>
</ul>
</details>
<br />
You can trigger a rebase of this PR by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions
</details>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
This update enhances the performance and user experience by allowing
data to be prefetched, reducing loading times on the frontend.
### Changes
- Introduced `usePrefetch` in Orval configuration to support
prefetching.
- Added prefetch queries for user profiles, admin listings history,
notification preferences, and execution schedules.
- Updated OpenAPI specifications to include descriptions for provider
names and adjusted required fields in request models.
- Enhanced the Navbar component to utilize the new prefetch
functionality for user profile data.
- Improved type definitions for various models to ensure better
integration with the API.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I’ve checked everything manually, and everything is working fine.
This PR adds Excel file support to CSV processing and enhances text file
reading capabilities.
### Changes 🏗️
**ReadSpreadsheetBlock (formerly ReadCsvBlock):**
- Renamed `ReadCsvBlock` to `ReadSpreadsheetBlock` for better clarity
- Added Excel file support (.xlsx, .xls) with automatic conversion to
CSV using pandas
- Renamed the `file_in` parameter to `file_input` for consistency
- Excel files are automatically detected by extension and converted to
CSV format
- Maintains all existing CSV processing functionality (delimiters,
headers, etc.)
- Graceful error handling when pandas library is not available
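As an illustration, the detection-and-convert step boils down to something like the sketch below ( _names are illustrative; the real block wires this into its input schema and `store_media_file` handling_ ):
```python
import io
from pathlib import Path


def spreadsheet_to_csv_text(file_path: str) -> str:
    """Return CSV text, converting Excel files detected by extension (sketch only)."""
    if Path(file_path).suffix.lower() in (".xlsx", ".xls"):
        try:
            import pandas as pd  # optional dependency, handled gracefully if missing
        except ImportError as e:
            raise RuntimeError("Excel support requires the pandas library") from e
        buffer = io.StringIO()
        pd.read_excel(file_path).to_csv(buffer, index=False)  # normalize to CSV
        return buffer.getvalue()
    return Path(file_path).read_text(encoding="utf-8")
```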
**FileReadBlock:**
- Enhanced text file reading with advanced chunking capabilities
- Added parameters: `skip_size`, `skip_rows`, `row_limit`, `size_limit`,
`delimiter`
- Supports both character-based and row-based processing
- Chunked output for large files based on size limits
- Proper file handling with UTF-8 and latin-1 encoding fallbacks
- Uses `store_media_file` for secure file processing (URLs, data URIs,
local paths)
- Fixed test input to use data URI instead of non-existent file
**General Improvements:**
- Consistent parameter naming across blocks (`file_input`)
- Enhanced error handling and validation
- Comprehensive test coverage
- All existing functionality preserved
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Both ReadSpreadsheetBlock and FileReadBlock instantiate correctly
- [x] ReadSpreadsheetBlock processes CSV data with existing
functionality
- [x] FileReadBlock reads text files with data URI input
- [x] All block tests pass (457 passed, 83 skipped)
- [x] No linting errors in modified files
- [x] Excel support gracefully handles missing pandas dependency
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
*Note: No configuration changes required for this PR.*
## Changes 🏗️
<img width="800" height="265" alt="Screenshot 2025-07-16 at 13 10 57"
src="https://github.com/user-attachments/assets/01164bde-0523-4284-bf74-d1a735b77d5c"
/>
When redoing the navigation bar, I moved the profile query to be
executed on the server using the new
[react-query](https://tanstack.com/query/latest) generated hooks.
The README [states the new hooks can be called on the
server](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/frontend/README.md#server-side-prefetching),
but when looking deeply into the implementation of
[`custom-mutator.ts`](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/frontend/src/app/api/mutators/custom-mutator.ts),
it turns out they cannot ( yet ), as `custom-mutator` calls the proxy
API ( _which can't be called from an RSC_ 😅 ).
### Solution
For now, I changed the call to be made through the old `BackendAPI`,
which can be called client and server side ✅ ( _I did that as part of
the server 🍪 migration_ ) and added an E2E test to catch it if this
ever breaks again.
Next, I will open a separate PR refactoring `custom-mutator` so that it
can be called on the server.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app
- [x] Login
- [x] You see your account name and email when opening the account menu
The `GmailReadBlock._get_email_body()` method was only inspecting the
top-level payload and a single `text/plain` part, causing it to return
the fallback string "This email does not contain a text body." for most
Gmail messages. This occurred because Gmail messages are typically
wrapped in `multipart/alternative` or other multipart containers, which
the original implementation couldn't handle.
This critical issue made the Gmail integration unusable for reading
email body content, as virtually every real Gmail message uses multipart
MIME structures.
### Changes
#### Core Implementation:
- **Replaced simple `_get_email_body()` with recursive multipart
parser** that can walk through nested MIME structures
- **Added `_walk_for_body()` method** for recursive traversal of email
parts with depth limiting (max 10 levels)
- **Implemented safe base64 decoding** with automatic padding correction
in `_decode_base64()`
- **Added attachment body support** via `_download_attachment_body()`
for emails where body content is stored as attachments
#### Email Format Support:
- **HTML to text conversion** using `html2text` library for HTML-only
emails
- **Multipart/alternative handling** with preference for `text/plain`
over `text/html`
- **Nested multipart structure support** (e.g., `multipart/mixed`
containing `multipart/alternative`)
- **Single-part email support** (maintains backward compatibility)
#### Dependencies & Testing:
- **Added `html2text = "^2024.2.26"`** to `pyproject.toml` for HTML
conversion
- **Created comprehensive unit tests** in `test/blocks/test_gmail.py`
covering all email types and edge cases
- **Added error handling and graceful fallbacks** for malformed data and
missing dependencies
#### Security & Performance:
- **Recursion depth limiting** prevents infinite loops on malformed
email structures
- **Exception handling** ensures graceful degradation when API calls
fail
- **Efficient tree traversal** with early returns for better performance
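Roughly, the traversal described above looks like this sketch ( _payload shapes and helper names are assumptions; the real implementation also falls back to downloading attachment bodies via the Gmail API_ ):
```python
import base64

import html2text


def _decode_base64(data: str) -> str:
    # Gmail uses URL-safe base64 and sometimes strips the trailing padding
    padded = data + "=" * (-len(data) % 4)
    return base64.urlsafe_b64decode(padded).decode("utf-8", errors="replace")


def _walk_for_body(part: dict, depth: int = 0, max_depth: int = 10) -> str | None:
    """Depth-limited walk over a Gmail payload, preferring text/plain over text/html."""
    if depth > max_depth:
        return None
    mime, data = part.get("mimeType", ""), part.get("body", {}).get("data")
    if mime == "text/plain" and data:
        return _decode_base64(data)
    if mime == "text/html" and data:
        return html2text.html2text(_decode_base64(data))
    # Multipart containers: visit text/plain children first, then everything else
    children = sorted(part.get("parts", []), key=lambda p: p.get("mimeType") != "text/plain")
    for child in children:
        body = _walk_for_body(child, depth + 1, max_depth)
        if body:
            return body
    return None
```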
### Checklist
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<details>
<summary>Test Plan</summary>
- **Single-part text/plain emails** - Verified correct extraction of
plain text content
- **Multipart/alternative emails** - Tested preference for plain text
over HTML when both available
- **HTML-only emails** - Confirmed HTML to text conversion works
correctly
- **Nested multipart structures** - Tested deeply nested
`multipart/mixed` containing `multipart/alternative`
- **Attachment-based body content** - Verified downloading and decoding
of body stored as attachments
- **Base64 padding edge cases** - Tested malformed base64 data with
missing padding
- **Recursion depth limits** - Confirmed protection against infinite
recursion
- **Error handling scenarios** - Tested graceful fallbacks for API
failures and missing dependencies
- **Backward compatibility** - Ensured existing functionality remains
unchanged for edge cases
- **Integration testing** - Ran standalone verification script with 100%
test pass rate
</details>
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Configuration Changes</summary>
- Added `html2text` dependency to `pyproject.toml` - no environment or
infrastructure changes required
- No changes to ports, services, secrets, or databases
- Fully backward compatible with existing Gmail API configuration
</details>
---------
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Updates to the readme + docs to add info about the newly added auto
setup script.
Changes to ``new_blocks.md`` and ``installer.md`` are there to keep the
Netlify CI check passing.
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
## Changes 🏗️
- Put `Continue with Google` button below the other button on the forms
( _to confirm with design_ )
- Ensure some vertical spacing so the forms don't end up touching the
header on small screens
- Apply style adjustments asked by design on navbar links
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Check the above
### For configuration changes:
None
## Changes 🏗️
Disable the Cloudflare check:
<img width="600" height="861" alt="Screenshot 2025-07-11 at 18 51 46"
src="https://github.com/user-attachments/assets/792ecca0-967e-4cef-a562-789125452d2f"
/>
On Vercel previews, so we can use previews for testing Front-end only
changes.
Vercel previews have dynamically generated URLs:
```
https://{branch}-{commit}-significant-gravitas.vercel.app/login
```
So, if Cloudflare does not support URL wildcards, we will need to do this
🙇🏽 ( _as an experiment_ )
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] You can login on the preview
### For configuration changes:
None
Currently, the "My agents" count only reflects the agents in the initial
library load, and then grows as more agents are loaded through pagination.
### Changes 🏗️
- I’ve used `total_items` from the pagination response and shown the
correct result.
### Demo
https://github.com/user-attachments/assets/b9a2cf18-c9fc-42f8-b0d4-3f8a7ad3cbc5
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manually test everything, and it works fine.
### Motivation 💡
The previous Discord badge in the README used `dcbadge.vercel.app`,
which often fails to render correctly and displays an invalid or broken
badge.
### Changes 🛠️
- Replaced the broken badge with a `shields.io` Discord badge that is
visually consistent with the Twitter badge
- Ensures clearer visual guidance and a more professional appearance
### Notes ✏️
This PR only updates the `README.md`; no frontend, backend, or
configuration files are touched. This change improves the aesthetics and
onboarding experience for new contributors.
Screenshot of the issue:
<img width="405" height="47" alt="Screenshot 2025-07-12 175316"
src="https://github.com/user-attachments/assets/41f7355c-f795-4163-855f-3d01f2478dd7"
/>
---------
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Bently <Github@bentlybro.com>
Co-authored-by: Bently <tomnoon9@gmail.com>
## Summary
This PR enhances the node execution stats tracking system to properly
handle nested graph executions and additional cost/step metrics:
- **Add extra_cost and extra_steps fields** to `NodeExecutionStats`
model for tracking additional metrics from sub-graphs
- **Update AgentExecutorBlock** to merge nested execution stats from
sub-graphs into the parent execution
- **Fix stats update mechanism** in `execute_node` to use in-place
updates instead of `model_copy` for better performance
- **Add proper tracking** of extra costs and steps in graph execution
stats aggregation
## Changes Made
- Modified `backend/backend/data/model.py` to add `extra_cost` and
`extra_steps` fields
- Updated `backend/backend/blocks/agent.py` to merge stats from nested
graph executions
- Fixed `backend/backend/executor/manager.py` to properly update
execution stats and aggregate extra metrics
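A simplified sketch of the merge step ( _field names other than `extra_cost`/`extra_steps` are assumptions; the real `NodeExecutionStats` model has more fields_ ):
```python
from pydantic import BaseModel


class NodeExecutionStats(BaseModel):
    cost: int = 0
    steps: int = 0
    extra_cost: int = 0   # cost accumulated by nested (sub-graph) executions
    extra_steps: int = 0  # steps accumulated by nested (sub-graph) executions


def merge_nested_stats(parent: NodeExecutionStats, nested: NodeExecutionStats) -> None:
    """Fold a sub-graph's totals into the parent's extra_* counters, in place."""
    parent.extra_cost += nested.cost + nested.extra_cost
    parent.extra_steps += nested.steps + nested.extra_steps
```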
## Test Plan
- [x] Verify that nested graph executions properly propagate their stats
to parent graphs
- [x] Test that extra costs and steps are correctly tracked and
aggregated
- [x] Ensure debug logging provides useful information for monitoring
- [x] Run existing tests to ensure no regressions
- [x] Test with multi-level nested agent graphs
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
- Resolves #10313
- Resolves #10333
Before:
https://github.com/user-attachments/assets/a105b2b0-a90b-4bc6-89da-bef3f5a5fa1f
- No credentials input
- Stuttery experience when panning or zooming the viewport
After:
https://github.com/user-attachments/assets/f58d7864-055f-4e1c-a221-57154467c3aa
- Pretty much the same UX as in the Library, with fully-fledged
credentials input support
- Much smoother when moving around the canvas
### Changes 🏗️
Frontend:
- Add credentials input support to Run UX in Builder
- Pass run inputs instead of storing them on the input nodes
- Re-implement `RunnerInputUI` using `AgentRunDetailsView`; rename to
`RunnerInputDialog`
- Make `AgentRunDraftView` more flexible
- Remove `RunnerInputList`, `RunnerInputBlock`
- Make moving around in the Builder *smooooth* by reducing unnecessary
re-renders
- Clean up and partially re-write bead management logic
- Replace `request*` fire-and-forget methods in `useAgentGraph` with
direct action async callbacks
- Clean up run input UI components
- Simplify `RunnerUIWrapper`
- Add `isEmpty` utility function in `@/lib/utils` (expanding on
`_.isEmpty`)
- Fix default value handling in `TypeBasedInput` (**Note:** after all
the changes I've made I'm not sure this is still necessary)
- Improve & clean up Builder test implementations
Backend + API:
- Fix front-end `Node`, `GraphMeta`, and `Block` types
- Small refactor of `Graph` to match naming of some `LibraryAgent`
attributes
- Fix typing of `list_graphs`,
`get_graph_meta_by_store_listing_version_id` endpoints
- Add `GraphMeta` model and `GraphModel.meta()` shortcut
- Move `POST /library/agents/{library_agent_id}/setup-trigger` to `POST
/library/presets/setup-trigger`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Test the new functionality in the Builder:
- [x] Running an agent with (credentials) inputs from the builder
- [x] Beads behave correctly
- [x] Running an agent without any inputs from the builder
- [x] Scheduling an agent from the builder
- [x] Adding and searching blocks in the block menu
- [x] Test that all existing `AgentRunDraftView` functionality in the
Library still works the same
- [x] Run an agent
- [x] Schedule an agent
- [x] View past runs
- [x] Run an agent with inputs, then edit the agent's inputs and view
the agent in the Library (should be fine)
## Summary
This PR adds a simple block error rate monitoring system that runs every
24 hours (configurable) and sends Discord alerts when blocks exceed the
error rate threshold.
## Changes Made
**Modified Files:**
- `backend/executor/scheduler.py` - Added `report_block_error_rates`
function and scheduled job
- `backend/util/settings.py` - Added configuration options
- `backend/.env.example` - Added environment variable examples
- Refactored the scheduled job logic in `scheduler.py` into separate files
## Configuration
```bash
# Block Error Rate Monitoring
BLOCK_ERROR_RATE_THRESHOLD=0.5 # 50% error rate threshold
BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS=86400 # 24 hours
```
## How It Works
1. **Scheduled Job**: Runs every 24 hours (configurable via
`BLOCK_ERROR_RATE_CHECK_INTERVAL_SECS`)
2. **Error Rate Calculation**: Queries last 24 hours of node executions
and calculates error rates per block
3. **Threshold Check**: Alerts on blocks with ≥50% error rate
(configurable via `BLOCK_ERROR_RATE_THRESHOLD`)
4. **Discord Alert**: Sends alert to Discord using existing
`discord_system_alert` function
5. **Manual Execution**: Available via
`execute_report_block_error_rates()` scheduler client method
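Steps 2–3 amount to something like the following sketch ( _data shapes are assumed; the actual implementation queries node executions from the database_ ):
```python
MIN_EXECUTIONS = 10  # skip low-traffic blocks to avoid noisy alerts


def blocks_over_threshold(
    stats_by_block: dict[str, tuple[int, int]],  # block name -> (failed, total) in last 24h
    threshold: float = 0.5,
) -> list[str]:
    """Build alert lines for blocks whose error rate meets the threshold."""
    alerts = []
    for block_name, (failed, total) in stats_by_block.items():
        if total < MIN_EXECUTIONS:
            continue
        error_rate = failed / total
        if error_rate >= threshold:
            alerts.append(
                f"🚨 Block '{block_name}' has {error_rate:.1%} error rate "
                f"({failed}/{total}) in the last 24 hours"
            )
    return alerts
```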
## Alert Format
```
Block Error Rate Alert:
🚨 Block 'DeprecatedGPT3Block' has 75.0% error rate (75/100) in the last 24 hours
🚨 Block 'BrokenImageBlock' has 60.0% error rate (30/50) in the last 24 hours
```
## Testing
Can be tested manually via:
```python
from backend.executor.scheduler import SchedulerClient
client = SchedulerClient()
result = client.execute_report_block_error_rates()
```
## Implementation Notes
- Follows the same pattern as `report_late_executions` function
- Only checks blocks with ≥10 executions to avoid noise
- Uses existing Discord notification infrastructure
- Configurable threshold and check interval
- Proper error handling and logging
## Test plan
- [x] Verify configuration loads correctly
- [x] Test error rate calculation with existing database
- [x] Confirm Discord integration works
- [x] Test manual execution via scheduler client
- [x] Verify scheduled job runs correctly
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude AI <claude@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
<img width="800" alt="Screenshot 2025-07-07 at 13 16 44"
src="https://github.com/user-attachments/assets/0d404958-d4c9-454d-b71a-9dd677fe0fdc"
/>
<img width="800" alt="Screenshot 2025-07-07 at 13 17 08"
src="https://github.com/user-attachments/assets/1142f6d5-a6af-485d-b42e-98afd26de3ed"
/>
Update the UI of the logged-out pages ( _login, signup,
reset-password..._ ) using the new Design System components, so the app
starts to look a bit more cohesive 💆🏽
Some notes:
- I refactored the `<AuthCard />` components a bit to be easier to use
- I split the render from the hook logic on login/signup
- I added a couple of modals to improve the UX when logging in with
Google or using non-whitelisted emails
- _see below my comments for more context_
- When there are API errors, they are shown in a toast to prevent the
layout of the form from jumping
- When using the components in the UI, there is an issue with
border-radius; see comments for an explanation
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Logout on the platform
- [x] Check the updated Login/Signup/Reset password pages
- [x] The UI looks good and is consistent
- [x] The forms work as expected
## Changes 🏗️
<img width="900" height="327" alt="Screenshot 2025-07-10 at 20 12 38"
src="https://github.com/user-attachments/assets/044f00ed-7e05-46b7-a821-ce1cb0ee9298"
/>
<br /><br />
Navbar updated to look pretty, following the new designs:
- the logo is now centred instead of on the left
- menu items have been updated to a smaller font size and less border-radius
- icons have been updated
I also generated the API files ( _sorry for the noise_ ). I had to do
some border-radius and button updates on the atoms/tokens for it to look
good.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login/logout
- [x] The new navbar looks good across screens
### For configuration changes
No config changes
## Changes 🏗️
Style refinements on Toasts.
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Check Storybook toast stories
- [x] They match Figma
#### For configuration changes:
None
### Summary
Performance optimization for the platform's store and creator
functionality by adding targeted database indexes and implementing
materialized views to reduce query execution time.
### Changes 🏗️
**Database Performance Optimizations:**
- Added strategic database indexes for `StoreListing`,
`StoreListingVersion`, `StoreListingReview`, `AgentGraphExecution`, and
`Profile` tables
- Implemented materialized views (`mv_agent_run_counts`,
`mv_review_stats`) to cache expensive aggregation queries
- Optimized `StoreAgent` and `Creator` views to use materialized views
and improved query patterns
- Added automated refresh function with 15-minute scheduling for
materialized views (when pg_cron extension is available)
**Key Performance Improvements:**
- Filtered indexes on approved store listings to speed up marketplace
queries
- GIN index on categories for faster category-based searches
- Composite indexes for common query patterns (e.g., listing + version
lookups)
- Pre-computed agent run counts and review statistics to eliminate
expensive aggregations
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified migration runs successfully without errors
- [x] Confirmed materialized views are created and populated correctly
- [x] Tested StoreAgent and Creator view queries return expected results
- [x] Validated automatic refresh function works properly
- [x] Confirmed rollback migration successfully removes all changes
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Note:** No configuration changes were required as this is purely a
database schema optimization.
## Block Development SDK - Simplifying Block Creation
### Problem
Currently, creating a new block requires manual updates to **5+ files**
scattered across the codebase:
- `backend/data/block_cost_config.py` - Manually add block costs
- `backend/integrations/credentials_store.py` - Add default credentials
- `backend/integrations/providers.py` - Register new providers
- `backend/integrations/oauth/__init__.py` - Register OAuth handlers
- `backend/integrations/webhooks/__init__.py` - Register webhook
managers
This creates significant friction for developers, increases the chance
of configuration errors, and makes the platform difficult to scale.
### Solution
This PR introduces a **Block Development SDK** that provides:
- Single import for all block development needs: `from backend.sdk
import *`
- Automatic registration of all block configurations
- Zero external file modifications required
- Provider-based configuration with inheritance
### Changes 🏗️
#### 1. **New SDK Module** (`backend/sdk/`)
- **`__init__.py`**: Unified exports of 68+ block development components
- **`registry.py`**: Central auto-registration system for all block
configurations
- **`builder.py`**: `ProviderBuilder` class for fluent provider
configuration
- **`provider.py`**: Provider configuration management
- **`cost_integration.py`**: Automatic cost application system
#### 2. **Provider Builder Pattern**
```python
# Configure once, use everywhere
my_provider = (
    ProviderBuilder("my-service")
    .with_api_key("MY_SERVICE_API_KEY", "My Service API Key")
    .with_base_cost(5, BlockCostType.RUN)
    .build()
)
```
#### 3. **Automatic Cost System**
- Provider base costs automatically applied to all blocks using that
provider
- Override with `@cost` decorator for block-specific pricing
- Tiered pricing support with cost filters
#### 4. **Dynamic Provider Support**
- Modified `ProviderName` enum to accept any string via `_missing_`
method
- No more manual enum updates for new providers
#### 5. **Application Integration**
- Added `sync_all_provider_costs()` to `initialize_blocks()` for
automatic cost registration
- Maintains full backward compatibility with existing blocks
#### 6. **Comprehensive Examples** (`backend/blocks/examples/`)
- `simple_example_block.py` - Basic block structure
- `example_sdk_block.py` - Provider with credentials
- `cost_example_block.py` - Various cost patterns
- `advanced_provider_example.py` - Custom API clients
- `example_webhook_sdk_block.py` - Webhook configuration
#### 7. **Extensive Testing**
- 6 new test modules with 30+ test cases
- Integration tests for all SDK features
- Cost calculation verification
- Provider registration tests
### Before vs After
**Before SDK:**
```python
# 1. Multiple complex imports
from backend.data.block import Block, BlockCategory, BlockOutput
from backend.data.model import SchemaField, CredentialsField
# ... many more imports
# 2. Update block_cost_config.py
BLOCK_COSTS[MyBlock] = [BlockCost(...)]
# 3. Update credentials_store.py
DEFAULT_CREDENTIALS.append(...)
# 4. Update providers.py enum
# 5. Update oauth/__init__.py
# 6. Update webhooks/__init__.py
```
**After SDK:**
```python
from backend.sdk import *
# Everything configured in one place
my_provider = (
    ProviderBuilder("my-service")
    .with_api_key("MY_API_KEY", "My API Key")
    .with_base_cost(10, BlockCostType.RUN)
    .build()
)


class MyBlock(Block):
    class Input(BlockSchema):
        credentials: CredentialsMetaInput = my_provider.credentials_field()
        data: String = SchemaField(description="Input data")

    class Output(BlockSchema):
        result: String = SchemaField(description="Result")

# That's it! No external files to modify
```
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created new blocks using SDK pattern with provider configuration
- [x] Verified automatic cost registration for provider-based blocks
- [x] Tested cost override with @cost decorator
- [x] Confirmed custom providers work without enum modifications
- [x] Verified all example blocks execute correctly
- [x] Tested backward compatibility with existing blocks
- [x] Ran all SDK tests (30+ tests, all passing)
- [x] Created blocks with credentials and verified authentication
- [x] Tested webhook block configuration
- [x] Verified application startup with auto-registration
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
(no changes needed)
- [x] `docker-compose.yml` is updated or already compatible with my
changes (no changes needed)
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
### Impact
- **Developer Experience**: Block creation time reduced from hours to
minutes
- **Maintainability**: All block configuration in one place
- **Scalability**: Support hundreds of blocks without enum updates
- **Type Safety**: Full IDE support with proper type hints
- **Testing**: Easier to test blocks in isolation
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
- Follow-up fix for #10301
The condition that determines whether
`LibraryAgent.credentials_input_schema` is set incorrectly handles empty
lists of sub-graphs.
### Changes 🏗️
- Check if `sub_graphs is not None` rather than using the boolean
interpretation of `sub_graphs`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Trivial change, no test needed.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
## Changes 🏗️
### The Issue
- Backend returns: `"https://storage.googleapis.com/..."` (valid JSON
string)
- Frontend was calling `response.text()` which gave:
`"\"https://storage.googleapis.com/...\""`
- This resulted in a URL with extra quotes that couldn't be loaded
### The Fix
I changed both file upload methods to use `response.json()` instead of
`response.text()`:
1. **Client-side uploads** (`_makeClientFileUpload`): Changed `return
await response.text();` to `return await response.json();`
2. **Server-side uploads** (`makeAuthenticatedFileUpload`): Changed
`return await response.text();` to `return await response.json();`
Now when the backend returns a JSON string like
`"https://example.com/file.png"`, the frontend will properly parse it as
JSON and extract just the URL without the quotes.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login
- [x] Upload an image on your profile
- [x] It works
### For configuration changes:
No configuration changes
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
We flew too close to the sun
### Changes 🏗️
Adds Perplexity back. It needs to be removed only after it has already
been migrated, not before; otherwise the system automatically migrates it
to a different model so that it maps to one that exists.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] tested locally; no impact since we are simply re-enabling it
Adds the latest Perplexity Sonar models from OpenRouter and removes the
decommissioned Sonar Large model.
### Changes 🏗️
- Added constants for `perplexity/sonar`, `perplexity/sonar-pro`, and
`perplexity/sonar-deep-research` in the `LlmModel` enum
- Included metadata entries for the new models
- Mapped the new models in the cost configuration with their respective
pricing tiers
- Removed the outdated Sonar Large model
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run format`
- [x] `poetry run test`
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
This PR reorganises the library pages and uses React Query for
data fetching on these pages.
### Changes
- Restructure the component position.
- Divide the component into two parts: one for rendering and the other
for hooks.
- Change data fetching from the normal fetch to an autogenerated React
query.
- Everything is shifted to the client side.
### Important Notes
- I haven’t changed any UI in this. I’ve divided it into sub-parts
because my main focus is on data fetching.
- Edit is not working, so I need to fix it in a follow-up PR. I
haven’t broken it; it was already broken.
- I need to fix prop drilling in further PRs.
- I need to fix loading states.
> I haven’t changed the credit page or integration because I’m getting
errors while setting up Stripe for testing. My card is constantly
declined, and the integration page is attached to the builder page. I’ll
add changes to it when I’m working with the builder.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested manually and everything is working perfectly
- [x] Verified agent listing loads correctly
- [x] Confirmed delete functionality works
Auto type conversion doesn't work on optional types.
To reproduce:
<img width="981" alt="image"
src="https://github.com/user-attachments/assets/92198d32-bce9-44fd-a9b0-b7b431aec3ba"
/>
Use the AgentNumberInput block and try to pass a string value to the
sub-agent that uses it.
### Changes 🏗️
Added auto-conversion support for optional types.
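In essence, the conversion now unwraps `Optional[T]` before converting ( _a minimal sketch; `convert_value` stands in for the platform's actual conversion helper_ ):
```python
from typing import Any, Optional, Union, get_args, get_origin


def convert_value(value: Any, target_type: Any) -> Any:
    """Unwrap Optional[T] / Union[T, None] so auto type conversion can reach T."""
    if get_origin(target_type) is Union:
        non_none = [t for t in get_args(target_type) if t is not type(None)]
        if value is None:
            return None
        if len(non_none) == 1:
            target_type = non_none[0]
    return target_type(value)  # stand-in for the real conversion logic


assert convert_value("42", Optional[int]) == 42
```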
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Try to convert string to optional[int]
## Changes 🏗️
`pbkdf2` should be on `3.1.3` to address [this
alert](https://github.com/Significant-Gravitas/AutoGPT/security/dependabot/343).
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] pnpm installs work
### For configuration changes:
None
## Changes 🏗️
If the proxied API call fails with an error that is not JSON-like,
still expose it to the client so it can be shown.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login
- [x] Try to top up credits
- [x] You see a better failure on the error toast when redirected back
to the app
---------
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
## Changes 🏗️
Fixes the layout getting very wide if the output of an agent is very
long:
https://github.com/user-attachments/assets/e032f425-ed9a-4a13-925f-1bb444f84ef1
It also makes the library agent code a bit more defensive; I was getting
full-page errors on certain agents in the library:
<img width="800" alt="Screenshot 2025-07-04 at 17 35 46"
src="https://github.com/user-attachments/assets/ff8ae461-3792-4e94-941e-9fdd2ead1c87"
/>
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login and go to agent in library
- [x] It loads without errors
- [x] When the execution output is long, it doesn't make the page wider
This pull request introduces extensive updates to the frontend testing
infrastructure, focusing on Playwright-based testing for user
authentication flows. Key changes include the addition of a global setup
for creating test users, new utilities for managing test user pools, and
expanded test coverage for signup and authentication scenarios.
### Testing Infrastructure Enhancements:
* **Global Setup for Tests**:
- Added `globalSetup` in `playwright.config.ts` to create test users
before all tests run. This ensures consistent test data across test
suites. (`autogpt_platform/frontend/playwright.config.ts`,
[autogpt_platform/frontend/playwright.config.tsR16-R17](diffhunk://#diff-27484f7f20f2eb1aeb289730a440f3a126fa825a7b3fae1f9ed19e217c4f2e40R16-R17))
- Implemented `global-setup.ts` to handle test user creation and save
user pools to the file system. Includes fallback for reusing existing
user pools if available.
(`autogpt_platform/frontend/src/tests/global-setup.ts`,
[autogpt_platform/frontend/src/tests/global-setup.tsR1-R43](diffhunk://#diff-3a8141beba2a6117e0eb721c35b39acc168a8f913ee625ce056c6fab5ac3b192R1-R43))
* **Test User Management Utilities**:
- Added functions in `auth.ts` to create, save, load, and clean up test
users. Supports batch creation and file-based persistence for user
pools. (`autogpt_platform/frontend/src/tests/utils/auth.ts`,
[autogpt_platform/frontend/src/tests/utils/auth.tsR1-R190](diffhunk://#diff-198b5d07aa72d50c153a70ecdfdc4bacc408c2d638c90d858f40d0183549973bR1-R190))
- Enhanced `user-generator.ts` to generate individual or multiple test
users with customizable options.
(`autogpt_platform/frontend/src/tests/utils/user-generator.ts`,
[autogpt_platform/frontend/src/tests/utils/user-generator.tsR2-R41](diffhunk://#diff-a7cb4f403a4cf3605ed1046b0263412205e56e51b16052a9da1e8db9bf34b940R2-R41))
### Expanded Test Coverage:
* **Signup Flow Tests**:
- Added comprehensive tests for signup functionality, including
successful signup, form validation, custom credentials, and duplicate
email handling. (`autogpt_platform/frontend/src/tests/signup.spec.ts`,
[autogpt_platform/frontend/src/tests/signup.spec.tsR1-R113](diffhunk://#diff-d1baa54deff7f3b1eedefd6cec5619ae8edd872d361ef57b6c32998ed22d6661R1-R113))
- Developed `signup.ts` utility functions to automate signup processes
and validate form behavior.
(`autogpt_platform/frontend/src/tests/utils/signup.ts`,
[autogpt_platform/frontend/src/tests/utils/signup.tsR1-R184](diffhunk://#diff-cb05d73a6bd7a129759b0b3382825e90cde561a42fc85b6a25777f6bd2f84511R1-R184))
* **Authentication Utilities**:
- Introduced `SigninUtils` in `signin.ts` for login, logout, and
authentication cycle testing. Provides reusable methods for verifying
user states. (`autogpt_platform/frontend/src/tests/utils/signin.ts`,
[autogpt_platform/frontend/src/tests/utils/signin.tsR1-R94](diffhunk://#diff-7cfec955c159d69f51ba9fcca7d979be090acd6fe246b125551d60192d697d98R1-R94))
### Minor Updates:
* Added environment variable `BROWSER_TYPE` to CI workflow for
browser-specific Playwright tests.
(`.github/workflows/platform-frontend-ci.yml`,
[.github/workflows/platform-frontend-ci.ymlR215-R216](diffhunk://#diff-29396f5dccefac146b71bed295fdbb790b17fda6a5ce2e9f4f8abe80eb14a527R215-R216))
These changes collectively improve the robustness and maintainability of
the frontend testing framework, enabling more reliable and scalable
testing of user authentication features.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Validated all authentication tests, and they are working
- Resolves #10307
- Follow-up fix to #10167
### Changes 🏗️
- Update check for empty/missing inputs in `AgentRunDraftView` to
correctly handle number values
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] In the library, run an agent that requires a number input
## Changes 🏗️
We created a proxy route ( `/api/proxy/...` ) to handle API calls made
in the browser from the legacy `BackendAPI`, ensuring security and
compatibility with server cookies 💆🏽🍪
However, the code on the proxy was written optimistically, expecting the
payload to be present in the JSON requests... even though many requests,
such as `POST` or `PATCH`, can sometimes be fired without a body.
This fixes the issue we saw where stopping a running agent wasn't
working, because stopping it fires a `PATCH` without a payload.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Checkout and run this locally
- [x] Login
- [x] Go to Library
- [x] Run agent
- [x] Stop it
- [x] It works without errors
## Changes 🏗️
### Root Cause
With httpOnly cookies, the Supabase client can't automatically exchange
password reset codes for sessions client-side because it can't access
the secure cookies 🍪 ( _which is a good thing_ ).
Previously, when users clicked email reset links, the Supabase client on
the browser would automatically handle the code exchange, but
with `httpOnly`, this is not possible because the Supabase browser client
does not have access to session info, so it fails silently 🥵
### Solution
Moved password reset code exchange to server-side middleware that can
access `httpOnly` cookies and properly create authenticated sessions.
### Code Changes
**`middleware.ts`**
- intercepts `/reset-password` URLs containing `code` parameter
- uses helper function to exchange code for session server-side
- redirects with error parameters if exchange fails
- moved `getUser()` call to avoid middleware timing issues
**`reset-password/page.tsx`**
- added toast notifications for password reset errors
- checks URL parameters for error messages on page load
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Password reset emails send successfully
- [x] Valid reset codes exchange for sessions server-side
- [x] Invalid/expired codes show error messages via toast
- [x] Successfully authenticated users can change passwords
- [x] URL parameters are cleaned up after error display
- [x] Middleware doesn't break normal authentication flows
### For configuration changes:
For this to work we need to configure Supabase with the new
password-reset redirect URL.
```
/api/auth/callback/reset-password
```
- [x] Already added in Supabase dev
- [ ] We need to add it on Supabase prod
- Resolves #10300
- Follow-up fix to #10167
### Changes 🏗️
- Include sub-graphs in `get_library_agent` endpoint
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Executing agent with sub-graphs that require credentials works
During the data manipulation refactoring, the `CreateListBlock` lost its
important batching functionality with the `max_size` and `max_tokens`
parameters. This restores the original implementation, which can yield
lists in chunks based on size or token limits.
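A rough sketch of the restored behaviour ( _the token estimate and function shape are assumptions; the real block uses its own tokenizer and yields chunks through its output_ ):
```python
from typing import Any, Iterator


def batch_list(
    items: list[Any],
    max_size: int | None = None,
    max_tokens: int | None = None,
) -> Iterator[list[Any]]:
    """Yield the list in chunks, starting a new chunk when either limit would be exceeded."""
    chunk: list[Any] = []
    chunk_tokens = 0
    for item in items:
        item_tokens = len(str(item)) // 4  # crude token estimate (assumption)
        over_size = max_size is not None and len(chunk) >= max_size
        over_tokens = max_tokens is not None and len(chunk) > 0 and chunk_tokens + item_tokens > max_tokens
        if over_size or over_tokens:
            yield chunk
            chunk, chunk_tokens = [], 0
        chunk.append(item)
        chunk_tokens += item_tokens
    if chunk:
        yield chunk
```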
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
## Summary
This PR adds missing plural output versions to blocks that yield
individual items in loops but don't provide the complete collection,
enabling both individual item access (for iteration) and complete
collection access (for aggregate operations).
## Changes
### GitHub Blocks (existing)
- **GithubListPullRequestsBlock**: Added `pull_requests` output
alongside existing `pull_request`
- **GithubListPRReviewersBlock**: Added `reviewers` output alongside
existing `reviewer`
### Additional Blocks (added in this PR)
- **GetRedditPostsBlock**: Added `posts` output for complete list of
Reddit posts
- **ReadRSSFeedBlock**: Added `entries` output for complete list of RSS
entries
- **AddMemoryBlock**: Added `results` output for complete list of memory
operation results
## Pattern Applied
The pattern ensures blocks provide both:
```python
# Complete collection first
yield "plural_output", all_items
# Then individual items for iteration
for item in all_items:
    yield "singular_output", item
```
## Testing
- Updated test outputs to include plural versions
- All blocks maintain backward compatibility with existing singular
outputs
- `poetry run format` - ✅ Passed
- `poetry run test` - ✅ Blocks validated
## Benefits
- **Iteration**: Users can still iterate over individual items as before
- **Aggregation**: Users can now access complete collections for
operations like counting, filtering, or batch processing
- **Compatibility**: Existing workflows continue to work unchanged
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
#### New List Operation Blocks
- Implement `GetListItemBlock` for retrieving an element at a specific
index, with negative index support
- Introduce `RemoveFromListBlock` to remove or pop items and optionally
return the removed value
- Add `ReplaceListItemBlock` to overwrite an item at a given index and
return the old value
- Provide `ListIsEmptyBlock` for quickly checking if a list has no
elements
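For instance, the negative-index handling in `GetListItemBlock` reduces to something like this ( _illustrative only; the real block wraps it in input/output schemas and error outputs_ ):
```python
from typing import Any


def get_list_item(items: list[Any], index: int) -> Any:
    """Fetch an item by index, supporting negative indices like Python lists."""
    if not -len(items) <= index < len(items):
        raise IndexError(f"Index {index} out of range for list of length {len(items)}")
    return items[index]
```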
#### New Dictionary Operation Blocks (for consistency with list
operations)
- Add `RemoveFromDictionaryBlock` to remove key-value pairs and
optionally return the removed value
- Implement `ReplaceDictionaryValueBlock` to replace values for a
specified key and return the old value
- Provide `DictionaryIsEmptyBlock` for checking if a dictionary has no
elements
#### Code Organization & Refactoring
- **Created `data_manipulation.py`**: Moved all dictionary and list
manipulation blocks to a dedicated file to prevent `basic.py` from
becoming too large
- **Refactored `basic.py`**: Now contains only core utility blocks
(FileStore, StoreValue, PrintToConsole, Note, UniversalTypeConverter)
- **Ensured consistency**: Dictionary and list blocks now have
equivalent functionality and follow the same patterns
- **Removed redundancy**: Eliminated duplicate `GetDictionaryValueBlock`
since `FindInDictionaryBlock` already provides comprehensive lookup
functionality
- **Preserved UUIDs**: All existing block UUIDs maintained to ensure no
breaking changes
#### Block Organization Summary
**`basic.py` (core utilities):**
- `FileStoreBlock`, `StoreValueBlock`, `PrintToConsoleBlock`,
`NoteBlock`, `UniversalTypeConverterBlock`
**`data_manipulation.py` (dictionary & list operations):**
- **Dictionary blocks:** Create, Add, Find, Remove, Replace, IsEmpty
- **List blocks:** Create, Add, Find, Get, Remove, Replace, IsEmpty
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run format`
- [x] `poetry run test`
- [x] `pnpm format`
#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
The block library was missing two key capabilities that keep coming up
in real-world agent flows:
1. **Granular Mem0 filtering.** Agents often work side-by-side for the
same user, so memories must be scoped to a specific run or agent to
avoid crosstalk.
2. **First-class Google Sheets support.** Many community projects (e.g.,
data-collection, lightweight dashboards, no-code workflows) rely on
Sheets, but we only had a brittle REST call block.
This PR adds fine-grained filters to every Mem0 retrieval block and
introduces a complete, OAuth-ready Google Sheets suite so agents can
create, read, write, format, and manage spreadsheets safely.
---
### Changes 🏗️
#### 📚 Mem0 block enhancements
* Added `categories_filter`, `metadata_filter`, `limit_memory_to_run`,
and `limit_memory_to_agent` inputs to **SearchMemoryBlock**,
**GetAllMemoriesBlock**, and **GetLatestMemoryBlock**.
* Added identical scoping logic to **AddMemoryBlock** so newly-created
memories can be tied to run/agent IDs.
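Illustratively, those inputs combine into a single retrieval filter along these lines ( _field names and filter shape are assumptions about the Mem0 filter format, not code from the blocks_ ):
```python
def build_memory_filters(
    user_id: str,
    run_id: str | None = None,
    agent_id: str | None = None,
    categories_filter: list[str] | None = None,
    metadata_filter: dict | None = None,
    limit_memory_to_run: bool = False,
    limit_memory_to_agent: bool = False,
) -> dict:
    """Combine the new block inputs into one AND filter for retrieval calls (sketch)."""
    conditions: list[dict] = [{"user_id": user_id}]
    if limit_memory_to_run and run_id:
        conditions.append({"run_id": run_id})
    if limit_memory_to_agent and agent_id:
        conditions.append({"agent_id": agent_id})
    if categories_filter:
        conditions.append({"categories": {"in": categories_filter}})
    if metadata_filter:
        conditions.append({"metadata": metadata_filter})
    return {"AND": conditions}
```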
#### 📊 New Google Sheets blocks (`backend/blocks/google/sheets.py`)
| Block | Purpose |
|-------|---------|
| `GoogleSheetsReadBlock` | Read a range |
| `GoogleSheetsWriteBlock` | Overwrite a range |
| `GoogleSheetsAppendBlock` | Append rows |
| `GoogleSheetsClearBlock` | Clear a range |
| `GoogleSheetsMetadataBlock` | Fetch spreadsheet + sheet info |
| `GoogleSheetsManageSheetBlock` | Create / delete / copy a sheet |
| `GoogleSheetsBatchOperationsBlock` | Batch update / clear |
| `GoogleSheetsFindReplaceBlock` | Find & replace text |
| `GoogleSheetsFormatBlock` | Cell formatting (bg/text colour, bold,
italic, font size) |
| `GoogleSheetsCreateSpreadsheetBlock` | Spin up a new spreadsheet |
* Each block has typed input/output schemas, built-in test mocks, and is
disabled in prod unless Google OAuth is configured.
* Added helper enums (`SheetOperation`, `BatchOperationType`) and
updated **CLAUDE.md** contributor guide with a UUID-generation step.
---
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual E2E run: agent writes chat summary to new spreadsheet,
reads it back, searches memory with run-scoped filter
- [x] Live Google API smoke-tests (read/write/append/clear/format) using
a disposable spreadsheet
## Changes 🏗️
We noticed that on some pages (mainly `/build`), where a lot of API calls are fired in parallel through the old `BackendAPI` (e.g. when running many agent executions), performance got noticeably worse. That is because the `BackendAPI` was proxied via [server actions](https://nextjs.org/docs/14/app/building-your-application/data-fetching/server-actions-and-mutations) (to make calls to our Backend on the Next.js server).
It turns out server actions don't run in parallel, and their performance is also subpar given that we are not hosted on Vercel (so we don't benefit from the edge middleware).
These changes cause all `BackendAPI` calls to be proxied via the Next.js
`/api/` route when executed on the browser; when executed on the server,
they bypass the proxy and directly access the API. Hopefully we gain:
- 🚀 Better Performance - API routes are faster than server actions for
this use case
- 🔧 Less Magic - Direct fetch calls instead of hidden server action
complexity
- ♻️ Code Reuse - Leveraging the existing proxy infrastructure used by
react-query
- 🎯 Cleaner Architecture - Single proxy pattern for all API calls
- 🔒 Same Security - Still uses server-side authentication with httpOnly
cookies
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] E2E tests pass
- [x] Click through the app, there are no issues
- [x] Agent executions are fast again in the builder
- [x] Test file uploads
This PR introduces key-value storage blocks.
### Changes 🏗️
- **Database Schema**: Add AgentNodeExecutionKeyValueData table with
composite primary key (userId, key)
- **Persistence Blocks**: Create PersistInformationBlock and
RetrieveInformationBlock in persistence.py
- **Scope-based Storage**: Support for within_agent (per agent) vs
across_agents (global user) persistence
- **Key Structure**: Use a `#` delimiter for storage keys:
`agent#{graph_id}#{key}` and `global#{key}` (see the sketch below)
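A minimal sketch of how the composite keys could be composed; the helper name and signature are illustrative, not the actual implementation:

```python
def build_storage_key(key: str, scope: str, graph_id: str | None = None) -> str:
    """Compose the scope-based storage key using the '#' delimiter."""
    if scope == "within_agent":
        if graph_id is None:
            raise ValueError("within_agent scope requires a graph_id")
        return f"agent#{graph_id}#{key}"
    # across_agents: shared across all of the user's agents
    return f"global#{key}"

# e.g. build_storage_key("last_seen", "within_agent", "graph-123")
#  -> "agent#graph-123#last_seen"
```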
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run all 244 block tests - all passing ✅
- [x] Test PersistInformationBlock with mock data storage
- [x] Test RetrieveInformationBlock with mock data retrieval
- [x] Verify scope-based key generation (within_agent vs across_agents)
- [x] Verify database function integration through all manager classes
- [x] Run lint and type checking - all passing ✅
- [x] Verify database migration is included and valid
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
Note: This change adds database schema and new blocks but doesn't
require environment or docker-compose changes as it uses existing
database infrastructure.
---------
Co-authored-by: Claude <noreply@anthropic.com>
- Resolves #10298
- Follow-up to #10270
### Changes 🏗️
Amend two changes from #10270:
- Add fallback for `NEXT_PUBLIC_FRONTEND_BASE_URL` in custom-mutator.ts
- Revert rename of `FRONTEND_BASE_URL` to
`NEXT_PUBLIC_FRONTEND_BASE_URL` in `backend/.env.example`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Don't set `NEXT_PUBLIC_FRONTEND_BASE_URL`
- Run the platform locally
- [x] -> `/library` loads normally
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
## Summary
- Enhanced graph execution cancellation and cleanup mechanisms
- Improved error handling and logging for graph execution lifecycle
- Added timeout handling for graph termination with proper status
updates
- Exposed a new API for stopping graph executions based only on graph_id or user_id
- Refactored logging metadata structure for better error tracking
## Key Changes
### Backend
- **Graph Execution Management**: Enhanced `stop_graph_execution` with
timeout-based waiting and proper status transitions
- **Execution Cleanup**: Added proper cancellation waiting with timeout
handling in executor manager
- **Logging Improvements**: Centralized `LogMetadata` class and improved
error logging consistency
- **API Enhancements**: Added bulk graph execution stopping
functionality
- **Error Handling**: Better exception handling and status management
for failed/cancelled executions
### Frontend
- **Status Safety**: Added null safety checks for status chips to
prevent runtime errors
- **Execution Control**: Simplified stop execution request handling
## Test Plan
- [x] Verify graph execution can be properly stopped and reaches
terminal state
- [x] Test timeout scenarios for stuck executions
- [x] Validate proper cleanup of running node executions when graph is
cancelled
- [x] Check frontend status chips handle undefined statuses gracefully
- [x] Test bulk execution stopping functionality
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
Requests to the Backend now happen on the server, given we moved to
server-side cookies 🍪 ... however the client proxy is not exposing
API errors to the client correctly. This aims to fix that.
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login to the platform
- [x] Run agents until you encounter an error
- [x] The error is shown on the toast
## Changes 🏗️
<img width="800" alt="Screenshot 2025-07-02 at 16 43 08"
src="https://github.com/user-attachments/assets/d7cd0dd7-e671-4c5d-8ed9-6d8f56371ff5"
/>
During logout, the user state gets cleared but the onboarding provider
keeps running and tries to access `onboarding.completedSteps` on a null
object, causing a runtime error 😬
This mostly happens because onboarding is broken on local and dev (the
onboarding agents don't work), so I manually skip it after creating an
account by navigating to `/marketplace`. That makes me think the
onboarding provider still believes I need to onboard, hence the error.
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new user
- [x] Instead of completing onboarding, navigate to `/marketplace` via
browser URL
- [x] Log out, then log in and out a few more times; no runtime errors appear
# Change details
- **Before**: Each job separately installs dependencies (~4 redundant
installations)
### Massive Redundancy in Setup Steps
Each job repeats these SAME 4 steps:
- Checkout repository
- Set up Node.js (version 21)
- Enable corepack
- Install dependencies (pnpm install --frozen-lockfile)
This happens 4+ times across different jobs:
- lint job
- type-check job
- chromatic job
- test job (runs 2x due to matrix strategy)
### No Dependency Caching
No caching strategy - downloads packages fresh every time
- Every workflow run downloads all packages from scratch
- No benefit from previous runs, even with identical pnpm-lock.yaml
- **After**: Dependencies installed once in setup job, cached and reused
This optimization maintains all existing CI functionality while
significantly improving pipeline efficiency.
An example workflow run was dispatched here:
https://github.com/souhailaS/AutoGPT/actions/workflows/platform-frontend-ci.yml
## Additional Context
We are a team of researchers from University of Zurich
(https://www.ifi.uzh.ch/en/zest.html) currently **working on automating
energy optimizations in GitHub Actions workflows**. This optimization
maintains full functionality while significantly reducing computational
overhead and energy consumption.
souhaila.serbout@uzh.ch
Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10284
### Changes 🏗️
- Allows passing an `aiohttp.BasicAuth` object directly to the `auth`
parameter of the `make_request` function.
- Converts tuple-based auth credentials to `aiohttp.BasicAuth` objects
before making the request.
Fixes
[AUTOGPT-SERVER-4AX](https://sentry.io/organizations/significant-gravitas/issues/6709824432/).
The issue was that aiohttp's `ClientSession.request` received a plain
tuple for `auth` instead of an `aiohttp.BasicAuth` object, causing the
OAuth2 token exchange to fail.
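As a rough sketch of the fix described above (the function signature is illustrative; the point is only the tuple-to-`BasicAuth` normalisation):

```python
import aiohttp

async def make_request(method: str, url: str, *, auth=None, **kwargs) -> str:
    """Sketch only: normalise tuple credentials before calling aiohttp."""
    if isinstance(auth, tuple):
        # ClientSession.request() rejects a plain (username, password) tuple
        auth = aiohttp.BasicAuth(*auth)
    async with aiohttp.ClientSession() as session:
        async with session.request(method, url, auth=auth, **kwargs) as resp:
            return await resp.text()
```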
This fix was generated by Seer in Sentry, triggered by Bently. 👁️ Run
ID: 185767
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6709824432/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
This PR adds two setup scripts that set up AutoGPT fully: a Windows
`.bat` script and a Linux/macOS `.sh` script. For now they are placed in
a new folder called "Installer".
### Note: the installers are meant to be run outside of the AutoGPT repo
folder (e.g. on the Desktop or in a new empty folder), because they
clone the repo into the current directory.
I had to add `cross-env` via `pnpm add cross-env`, as on Windows the env
is set differently in the `package.json` build section:
``"build": "cross-env pnpm run generate:api-client && SKIP_STORYBOOK_TESTS=true next build"``
Once fully set up, I plan to make it possible to run the installers with
the following commands
Linux/Mac
```bash
curl -fsSL https://setup.agpt.co/install.sh -o install.sh && bash install.sh
```
Windows cmd/powershell
```powershell
powershell -c "iwr https://setup.agpt.co/install.bat -o install.bat; ./install.bat"
```
Currently the commands above don't work, but they will later on!
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have tested the Linux ``install.sh`` on an Ubuntu system and it
set up the platform fully.
- [x] I have tested the Windows ``install.bat`` on my Windows system and
it set up the platform fully.
- [x] I have tested on both OSes with missing prerequisites to confirm
the scripts surface the errors, and they do
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
## Changes 🏗️
We want to make running the AutoGPT Front-end as easy as possible. For
that, you should be able to run it with the least amount of commands.
We recently added generated queries and types on the Front-end from the
Back-end OpenAPI schema, to make development easier and catch bugs
earlier. However, with the current setup, developers are forced to run
`pnpm generate:api-all` with the Back-end running, which is annoying.
After this PR, the Front-end can be run with just `pnpm i && pnpm dev`.
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the Front-end with just `pnpm dev` and it works
---------
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
### Changes 🏗️
A new undocumented env var, `NEXT_PUBLIC_AGPT_SERVER_BASE_URL`, was
added to the proxy route for it to work with the new `react-query`
mutator.
I removed it and used the existing `NEXT_PUBLIC_AGPT_SERVER_URL`, so we
have fewer environment variables to manage ( _and this one is already
added to all environments_ ).
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the server locally
- [x] All pages ( library, marketplace, builder, settings ) work
Graph executions that fail due to an interruption or an unknown error
should be put back on the queue. However, swallowing the error means the
execution never gets marked as a failure.
### Changes 🏗️
* Avoid repeatedly updating the graph execution status on each node
execution step.
* Added a guard rail to avoid completing graph execution on
non-completed execution status.
* Avoid acknowledging messages from the queue if the graph execution is
not yet completed.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run graph execution, kill the process, re-run the process
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
Currently, we are using PyClamd to run a file anti-virus scan for all
the files uploaded into the platform. We split the file into small
chunks and serially check the chunks for the virus scan. The socket is
not thread-safe, and we need to create multiple sockets across many
threads to leverage concurrency. To make this step concurrent and keep
it fully async, we need to migrate PyClamd to aioclamd.
### Changes 🏗️
Convert pyclamd to aioclamd and scan chunks in parallel, with a
semaphore limiting the concurrency.
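A minimal sketch of the concurrency pattern, assuming aioclamd's `ClamdAsyncClient`/`instream` interface; the chunk size, concurrency cap, and result handling are illustrative, not the real implementation:

```python
import asyncio
from io import BytesIO

from aioclamd import ClamdAsyncClient  # async ClamAV client replacing pyclamd

CHUNK_SIZE = 1024 * 1024       # split uploads into 1 MiB chunks (illustrative)
MAX_CONCURRENT_SCANS = 5       # cap on parallel ClamAV scans (illustrative)

async def scan_file(data: bytes, host: str = "localhost", port: int = 3310) -> bool:
    """Sketch: scan chunks concurrently, bounded by a semaphore."""
    clamd = ClamdAsyncClient(host, port)
    semaphore = asyncio.Semaphore(MAX_CONCURRENT_SCANS)

    async def scan_chunk(chunk: bytes):
        async with semaphore:  # limit how many scans run at once
            return await clamd.instream(BytesIO(chunk))

    chunks = [data[i : i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    results = await asyncio.gather(*(scan_chunk(c) for c in chunks))
    # A chunk is treated as clean when the scanner reports no finding
    # (the exact result shape is an assumption; consult aioclamd's docs).
    return all(not r for r in results)
```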
#### Side Note
Shout-out to @tedyu for raising this improvement idea.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Execute file upload into the platform
## Changes 🏗️
We need to rename `FRONTEND_BASE_URL` to `NEXT_PUBLIC_FRONTEND_BASE_URL`,
as it is needed by the new API client on the Front-end to make requests.
The `NEXT_PUBLIC` prefix is important so that the variable is available
on the client.
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app locally
- [x] The library and other pages work
This pull request includes updates to the environment configuration and
API mutator logic in the `autogpt_platform/frontend` directory. The
changes aim to improve flexibility by introducing dynamic base URLs
through environment variables.
Environment configuration updates:
* `autogpt_platform/frontend/.env.example`: Added
`NEXT_PUBLIC_FRONTEND_BASE_URL` to define the base URL for the frontend
dynamically.
API mutator logic updates:
* `autogpt_platform/frontend/src/app/api/mutators/custom-mutator.ts`:
Updated `BASE_URL` to use the `NEXT_PUBLIC_FRONTEND_BASE_URL`
environment variable, enabling dynamic configuration of the API proxy
URL.
### Checklist
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested manually and everything is working perfectly
This PR demonstrates how the new data fetching strategy works, so other
developers don’t get confused.
### Changes
- Updated `README.md` to explain the new data fetching strategy.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Nothing is breaking, everything works great
### Changes
- Restructure library components.
- Divide the component into two parts: one for rendering and one for
hooks.
- Add a `useInfiniteParams` inside the `orval` config to use `page` as
the pagination parameter.
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manually tested everything and everything works fine
- Resolves #10217
https://github.com/user-attachments/assets/26a402f5-6f43-453b-8c83-481380bde853
### Changes 🏗️
Frontend:
- Show message instead of action buttons ("Run" etc) when graph has
webhook node(s)
- Fix check for webhook nodes used in `BlocksControl` and `FlowEditor`
- Clean up `PrimaryActionBar` implementation
- Add `accent` variant to `ui/button:Button`
API:
- Add `GET /library/agents/by-graph/{graph_id}` endpoint
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to Builder
- Add a trigger block
- [x] -> action buttons disappear; message shows in their place
- Save the graph
- Click the "Agent Library" link in the message
- [x] -> app navigates to `/library/agents/[id]` for the newly created
agent
This PR routes all React Query requests through a Next.js server proxy.
It works like this: when a user triggers a request, our custom mutator
sends it to the proxy server, which adds the auth token to the headers
and forwards it to the backend. 🌐
Users can't send client-side requests directly to the backend because
the browser does not have access to the auth tokens, so requests need to
go via the Next.js server. 🚀
### Changes 🏗️
- Change the position of the generated client, mutator, and transfer
inside `/src/app/api`
- Update the mutator to send the request to the proxy server
- Add a proxy server at `/api/proxy`, which handles the request using
`makeAuthenticatedRequest` and `makeAuthenticatedFileUpload` helpers and
sends the request to the backend
- Remove `getSupabaseClient`, because we do not have access to the auth
token on client side, hence no need 🔑
- Update Orval configs to generate the client at the new position
- Added new backend updates to the auto-generated client.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The setting page is using React Query and is working fine.
- [x] The mutator is sending requests to the proxy server correctly.
- [x] The proxy server is handling requests correctly.
- [x] The response handling is correct in both the proxy server and the
custom mutator.
## Changes 🏗️
### Overview
Introduces a new responsive `<Dialog />` component that automatically
adapts to screen size, providing optimal UX across devices.
<img width="800" alt="Screenshot 2025-06-27 at 16 00 01"
src="https://github.com/user-attachments/assets/d0c53b30-488f-4102-8100-c9318168d65b"
/>
<img width="300" alt="Screenshot 2025-06-27 at 16 00 12"
src="https://github.com/user-attachments/assets/f2105708-97d9-4a94-8e26-3c2d582ea8cd"
/>
### Key Features
#### 📱 **Responsive Behavior**
- **Desktop**: Modal dialog with overlay
- **Mobile**: Bottom drawer [Vaul](https://vaul.emilkowal.ski/) with
**swipe-to-dismiss** functionality
#### 🎯 **Multiple Interaction Methods**
- `ESC` key to close (both desktop & mobile)
- Click outside to dismiss
- Swipe down to dismiss (mobile drawer)
- Close button (X)
#### ❓ Why I did not use `shadcn/dialog` in this case as a base
While we already use the raw `shadcn/dialog` on the platform, it's
designed as a desktop-only solution and is not really
responsive-friendly. It lacks 📱 mobile-optimisation patterns like
_bottom drawers_, _swipe-to-dismiss gestures_ ( the new implementation
has it via [Vaul](https://vaul.emilkowal.ski/) ), and automatic
breakpoint adaptation according to screen size.
#### 🧩 **Compound Component Pattern**
```tsx
<Dialog title="Example">
<Dialog.Trigger>
<Button>Open Dialog</Button>
</Dialog.Trigger>
<Dialog.Content>
Content goes here
</Dialog.Content>
</Dialog>
```
#### ⚙️ **Flexible Control**
- **Uncontrolled**: Self-managed state via triggers
- **Controlled**: External state management
- **Force open**: rare but might be needed
- **Custom styling**: if needed
## Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] **Desktop Modal**: Opens/closes via trigger, ESC key, click
outside, close button
- [x] **Mobile Drawer**: Automatically switches at `lg` breakpoint,
swipe-to-dismiss works
- [x] **Controlled Mode**: External state management functions correctly
- [x] **Force Open**: Dialog stays open for preview purposes
- [x] **Custom Styling**: CSS-in-JS overrides work as expected
- [x] **Footer Component**: Action buttons render and function properly
- [x] **No Title Mode**: Dialog works without title prop
- [x] **Accessibility**: Tab navigation, screen reader announcements,
ARIA compliance
- [x] **Responsive Breakpoints**: Component switches modes at correct
screen sizes
- [x] **Storybook**: All stories render and function correctly
---------
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
CreateListBlock can only batch lists based on a size limit, but
sometimes the batch size needs to be adjusted dynamically based on token
count.
### Changes 🏗️
Improve CreateListBlock to support batching based on token count
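A minimal sketch of what token-count-based batching could look like; the tokenizer stand-in and budget handling are illustrative placeholders, not the block's actual implementation:

```python
def count_tokens(text: str) -> int:
    # Rough approximation (~4 characters per token), not the real tokenizer
    return max(1, len(text) // 4)

def batch_by_tokens(items: list[str], max_tokens: int) -> list[list[str]]:
    """Group items so each batch stays under a token budget."""
    batches: list[list[str]] = []
    current: list[str] = []
    used = 0
    for item in items:
        cost = count_tokens(item)
        if current and used + cost > max_tokens:
            batches.append(current)        # close the current batch
            current, used = [], 0
        current.append(item)               # an oversized item still gets its own batch
        used += cost
    if current:
        batches.append(current)
    return batches
```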
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test CreateListBlock
Complete the implementation of the Agent Run Scheduling UX in the
Library.
Demo:
https://github.com/user-attachments/assets/701adc63-452c-4d37-aeea-51788b2774f2
### Changes 🏗️
Frontend:
- Add "Schedule" button + dialog + logic to `AgentRunDraftView`
- Update corresponding logic on `AgentRunsPage`
- Add schedule name field to `CronSchedulerDialog`
- Amend Builder components `useAgentGraph`, `FlowEditor`,
`RunnerUIWrapper` to also handle schedule name input
- Split `CronScheduler` into `CronScheduler`+`CronSchedulerDialog`
- Make `AgentScheduleDetailsView` more fully functional
- Add schedule description to info box
- Add "Delete schedule" button
- Update schedule create/select/delete logic in `AgentRunsPage`
- Improve schedule UX in `AgentRunsSelectorList`
- Switch tabs automatically when a run or schedule is selected
- Remove now-redundant schedule filters
- Refactor `@/lib/monitor/cronExpressionManager` into
`@/lib/cron-expression-utils`
Backend + API:
- Add name and credentials to graph execution schedule job params
- Update schedule API
- `POST /schedules` -> `POST /graphs/{graph_id}/schedules`
- Add `GET /graphs/{graph_id}/schedules`
- Add not found error handling to `DELETE /schedules/{schedule_id}`
- Minor refactoring
Backend:
- Fix "`GraphModel`->`NodeModel` is not fully defined" error in
scheduler
- Add support for all exceptions defined in `backend.util.exceptions` to
RPC logic in `backend.util.service`
- Fix inconsistent log prefixing in `backend.executor.scheduler`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create a simple agent with inputs and blocks that require credentials;
go to this agent in the Library
- Fill out the inputs and click "Schedule"; make it run every minute
(for testing purposes)
- [x] -> newly created schedule appears in the list
- [x] -> scheduled runs are successful
- Click "Delete schedule"
- [x] -> schedule no longer in list
- [x] -> on deleting the last schedule, view switches back to the Runs
list
- [x] -> no new runs occur from the deleted schedule
Calling the LLM through the current block can sometimes break due to
exceeding the context window.
A prompt compaction algorithm is applied (enabled by default) to make
sure the sent prompt is within a context window limit.
### Changes 🏗️
````
Heuristics
--------
* Prefer shrinking the content rather than truncating the conversation.
* If the conversation content is compacted and it's still not enough, then reduce the conversation list.
* The rest of the implementation is adjusted to minimize the LLM call breaking.
Strategy
--------
1. **Token-aware truncation** – progressively halve a per-message cap
(`start_cap`, `start_cap/2`, … `floor_cap`) and apply it to the
*content* of every message except the first and last. Tool shells
are included: we keep the envelope but shorten huge payloads.
2. **Middle-out deletion** – if still over the limit, delete the whole
messages working outward from the centre, **skipping** any message
that contains ``tool_calls`` or has ``role == "tool"``.
3. **Last-chance trim** – if still too big, truncate the *first* and
*last* message bodies down to `floor_cap` tokens.
4. If the prompt is *still* too large:
• raise ``ValueError`` when ``lossy_ok == False`` (default)
• return the partially-trimmed prompt when ``lossy_ok == True``
````
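For illustration, here is a minimal sketch of the first step (token-aware truncation with a progressively halved per-message cap); `count_tokens` is a rough stand-in for the real tokenizer, and the message shapes are assumptions:

```python
def count_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough approximation, not the real tokenizer

def compact_messages(messages: list[dict], limit: int,
                     start_cap: int = 2048, floor_cap: int = 128) -> list[dict]:
    def total(msgs):
        return sum(count_tokens(m.get("content") or "") for m in msgs)

    msgs = [dict(m) for m in messages]  # work on copies
    cap = start_cap
    while total(msgs) > limit and cap >= floor_cap:
        # Shrink the *content* of every message except the first and last.
        for m in msgs[1:-1]:
            content = m.get("content") or ""
            if count_tokens(content) > cap:
                m["content"] = content[: cap * 4]  # keep the envelope, trim the payload
        cap //= 2  # progressively halve the per-message cap
    return msgs
```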
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run an SDM block in a loop until it hits 200,000 tokens using the
OpenAI o3 model.
- Follow-up fix to #10138
AI erased a bit of functionality from the `GithubReadPullRequestBlock`
in #10138. This PR puts it back and improves on the old format.
### Changes 🏗️
- Include full diff in `changes` output of `GithubReadPullRequestBlock`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- Use the `GithubReadPullRequestBlock` with `include_pr_changes` enabled
- [ ] -> block runs successfully
- [ ] -> full diff included in `changes` output
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
### Why these changes are needed 🧐
Revid.ai offers several specialised, undocumented rendering flows
beyond the basic “text-to-video” endpoint our platform already
supported. These new blocks make it possible to:
1. **Generate ads** from copy plus product images (30-second vertical
spots).
2. **Turn a single creative prompt** into a fully AI-generated video (no
multi-line script).
3. **Transform a screenshot into a narrated, avatar-driven clip**, ideal
for product-led demos.
Without first-class blocks for these flows, users were forced to drop
to raw HTTP nodes, losing schema validation, test mocks and credential
management.
### Changes 🏗️
Added a new category to ``BlockCategory`` in ``block.py``: ``MARKETING
= "Block that helps with marketing"``.
| Area | Change | Notes |
| -- | -- | -- |
| ai_shortform_video_block.py | Refactored out a shared `_RevidMixin` (webhook + polling helpers). | Keeps DRY across new blocks. |
| | Added `AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS` and `Voice.EVA` enum members. | Required by Revid sample payloads. |
| `AIAdMakerVideoCreatorBlock` | Implements the ai-ad-generator flow; supports optional `input_media_urls`, `target_duration`, `use_only_provided_media`. | |
| `AIPromptToVideoCreatorBlock` | Implements the prompt-to-video flow with `prompt_target_duration`. | |
| `AIScreenshotToVideoAdBlock` | Implements the screenshot-to-video-ad flow (avatar narration, BG removal). | |
| | Added full pydantic schemas, test stubs & mock hooks for each new block. | Ensures unit tests pass and blocks appear in the UI. |
No existing functionality was removed; the current
`AIShortformVideoCreatorBlock` is untouched apart from enum imports.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Use the ``AI ShortForm Video Creator`` block to generate a video;
it works
- [x] Same with the ``AI Ad Maker Video Creator`` block; it works
- [x] Test the ``AI Screenshot to Video Ad`` block; it works
---------
Co-authored-by: Bently <Github@bentlybro.com>
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
* Add an enriching email feature toggle for SearchPeopleBlock
* Introduce GetPersonDetailBlock
* Adjust the cost of both blocks
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Execute SearchPeopleBlock & GetPersonDetailBlock
Currently, we don't have a secure way to pass Authorization headers when
calling the `SendWebRequestBlock`.
This hinders the integration of third-party applications that do not yet
have native block support.
### Changes 🏗️
Add Host-scoped credentials support for the newly introduced
SendAuthenticatedWebRequestBlock.
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/0d3d577a-2b9b-4f0f-9377-0e00a069ba37"
/>
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/a59b9f16-c89c-453d-a628-1df0dfd60fb5"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Uses `https://api.openai.com/v1/images/edits` through
SendWebRequestBlock by passing the api-key through host-scoped
credentials.
### Why are these changes needed?
<!-- Clearly explain the need for these changes: -->
These changes document the OAuth integration flow for CASA lvl 2
compliance, specifically addressing the requirement to "Verify
documentation and justification of all the application's trust
boundaries, components, and significant data flows." The documentation
clarifies the two distinct OAuth implementations in AutoGPT: user
authentication via Supabase SSO and API integration credentials for
third-party services.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Created comprehensive OAuth integration flow documentation at
`/docs/content/platform/contributing/oauth-integration-flow.md`
- Documented trust boundaries between frontend (untrusted), backend API
(trusted), and external providers (semi-trusted)
- Added detailed component architecture for both frontend and backend
OAuth implementations
- Included mermaid diagrams illustrating:
- OAuth flow sequences (initiation, authorization, token refresh)
- System architecture showing SSO vs API integration OAuth
- Data flow diagram
- Security architecture layers
- Credential lifecycle state diagram
- Documented security measures including CSRF protection, PKCE
implementation, and token management
- Clarified the distinction between Supabase SSO for user login and
custom OAuth for API integrations
- Added references to source files for up-to-date provider lists rather
than hard-coding all providers
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Created documentation file with proper markdown formatting
- [x] Verified all file paths referenced in documentation exist
- [x] Confirmed mermaid diagrams render correctly
- [x] Validated that the documentation accurately reflects the codebase
implementation
---------
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
- We have implemented some backend changes, so I have added a new,
updated OpenAPI specification.
- We have updated the settings and API keys page to enable us to use
React Query for fetching data.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Settings and api keys page is working correctly
### Changes 🏗️
Implemented `httpOnly` cookies 🍪 for secure session management 💆🏽
- 🙏🏽 **Moved all API requests to server-side execution** for maximum XSS
protection
- All authentication now happens server-side with `httpOnly` cookies (no
JWT tokens exposed to client)
- Created `proxyApiRequest()` and `proxyFileUpload()` server actions to
handle all communication with API
- Updated `BackendAPI._request()` to always use proxy approach for
consistent security
- 🚧 **Exception: WebSocket authentication** requires client-side token
exposure
- Added `getWebSocketToken()` server action to securely provide tokens
only for WebSocket connections
- Maintains secure architecture while we keep the real-time features
- 🧹 **Abstracted implementation details** into reusable helper functions
- Reduced proxy actions from 157 lines to 48 lines (70% reduction)
- Added flexible content-type support ( _JSON, form-urlencoded, custom_
)
- Enhanced error handling for graceful logout scenarios
- 📙 **Renamed `/reset_password` page to `/reset-password`**
- couldn't resist sorry... snake case URLs get me
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Verify all API requests work through server-side proxy
- [x] Confirm httpOnly cookies prevent client-side JWT access
- [x] Test WebSocket connections work with server-provided tokens
- [x] Verify logout scenarios don't throw authentication errors
- [x] Check file uploads work securely through proxy
- [x] Validate zero breaking changes for existing BackendAPI calls
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
## Changes 🏗️
<img width="800" alt="Screenshot 2025-06-25 at 20 34 38"
src="https://github.com/user-attachments/assets/bfc90504-85b6-4178-9ace-2aa4d14f16b0"
/>
<br /><br />
- To match what is in the AutoGPT design system
- Unit tests are commented out because they depend on:
https://github.com/Significant-Gravitas/AutoGPT/pull/10243
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run Storybook locally, Badge stories look good
An anti-virus file scan step is added to each file upload step on the
platform before the file is sent to cloud storage or local machine
storage.
### Changes 🏗️
* Added ClamAV service
* Added AV file scan on each upload step
* Added tests & documentation
* Make the step mandatory even on local development
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Tried using FileUploadBlock & AgentFileInputBlock
## Changes 🏗️
<img width="1580" alt="Screenshot 2025-06-25 at 18 11 36"
src="https://github.com/user-attachments/assets/c8b136b6-5897-41fa-a03b-010582c4b879"
/>
<br /><br />
Add a new `<Link />` component that will be the standard when rendering
links on the platform.
It is a wrapper around `next/link` with an `isExternal` prop; when
supplied, `target="_blank"` and `rel="noopener noreferrer"` are added.
It comes with the styles agreed on in the AutoGPT design system.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run Storybook locally
- [x] Tests pass and the component looks good
## Changes 🏗️
<img width="800" alt="Screenshot 2025-06-25 at 17 52 38"
src="https://github.com/user-attachments/assets/18f859cf-5008-4915-925c-1912ab9cf176"
/>
- Depends on #10235 so that we can test the new Chromatic workflow with
this
- Documents our Skeleton atom which is directly
[shadcn/skeleton](https://ui.shadcn.com/docs/components/skeleton)
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run storybook locally
- [x] The Skeleton stories look good
## Changes 🏗️
<img width="800" alt="Screenshot 2025-06-25 at 13 43 06"
src="https://github.com/user-attachments/assets/13ffd32e-ffa1-482e-91a6-8363ad6b67df"
/>
<br /><br />
- Setup Chromatic ( install + `package.json` command )
- Make it run on the CI
- Remove a lot of old components from Storybook which were broken or
need design review
- for now we only keep in Storybook what has been ✅ by design
- Remove `test-storybook:ci` commands
- I plan to run tests via Chromatic, but I want to look at that setup on
a separate PR and in a clean state
## 📋 Checklist
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The `chromatic` job succeeds on the CI and the changes appear on
Chromatic's dashboard
## Changes 🏗️
The test data script is not working locally; this should fix it 🤞🏽
- Fixed `agentId` → `agentGraphId` field references in preset matching
logic
- Fixed `agentId` → `agentGraphId` field references in store listing
graph lookup
- Added graph uniqueness logic to prevent duplicate library agents per
user
- Improved data consistency by ensuring proper foreign key relationships
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified script runs without database schema errors
- [x] Confirmed foreign key relationships are properly maintained
- [x] Tested that library agents use unique graphs per user
- [x] Validated preset matching uses correct field references
This PR makes several improvements to the `update_library_agent`
endpoint.
- Resolves #10216
### Changes 🏗️
- Add `DELETE /library/agents/{id}` endpoint
- Fix `PUT /library/agents/{id}` endpoint
- Return updated library agent
- Remove `is_deleted` parameter
- Change method from `PUT` to `PATCH`
Also, a small DX improvement:
- Expose `BackendAPI` globally through `window.api` for local dev
purposes
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Deleting library agents works
- Follow-up fix to #10167
- Resolves #10228
### Changes 🏗️
- Don't assume `block.input_schema.jsonschema()["required"]` exists
- Unbreak handling of `webhook_type` in
`BaseWebhooksManager.get_manual_webhook(..)`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create an agent with a Generic Webhook Trigger block; go to it in the
Library
- [x] -> `/library/agents/[id]` loads normally
- Follow-up fix to #9862
- Resolves #10097
In #9862, the `AgentExecutorBlock`'s nested input field was renamed from
`data` to `input`, but apparently the frontend also had a reference to
this field and was now broken.
### Changes 🏗️
- Update `getInputPropKey` in `CustomNode` to use `inputs.{key}` instead
of `data.{key}`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create an agent with at least one input
- Use the agent with at least one input inside another agent
- Set a default value on the input on the agent block
- Save the graph
- [x] -> default input value is saved
AIImageEditorBlock was not able to accept an image from AgentFileInput
or FileStore block.
### Changes 🏗️
* Add support for image loading for the image editor block:
<img width="1081" alt="Screenshot 2025-06-23 at 10 28 45 AM"
src="https://github.com/user-attachments/assets/ac3fea91-9503-4894-bbe3-2dc3c5649a39"
/>
* Avoid rendering a relative path image file.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run AiImageEditor block using AgentFileInput or FileStore block.
This pull request adds support for setting up (webhook-)triggered agents
in the Library. It contains changes throughout the entire stack to make
everything work in the various phases of a triggered agent's lifecycle:
setup, execution, updates, deletion.
Setting up agents with webhook triggers was previously only possible in
the Builder, limiting their use to the agent's creator. To make this
work in the Library, this change uses the previously introduced
`AgentPreset` to store the trigger information on, instead of on the
graph's nodes, to which only the graph's creator has access.
- Initial ticket: #10111
- Builds on #9786


### Changes 🏗️
Frontend:
- Amend the Library's `AgentRunDraftView` to handle creating and editing
Presets
- Add `hideIfSingleCredentialAvailable` parameter to `CredentialsInput`
- Add multi-select support to `TypeBasedInput`
- Add Presets section to `AgentRunsSelectorList`
- Amend `AgentRunSummaryCard` for use for Presets
- Add `AgentStatusChip` to display general agent status (for now: Active
/ Inactive / Error)
- Add Preset loading logic and create/update/delete handlers logic to
`AgentRunsPage`
- Rename `IconClose` to `IconCross`
API:
- Add `LibraryAgent` properties `has_external_trigger`,
`trigger_setup_info`, `credentials_input_schema`
- Add `POST /library/agents/{library_agent_id}/setup_trigger` endpoint
- Remove redundant parameters from `POST
/library/presets/{preset_id}/execute` endpoint
Backend:
- Add `POST /library/agents/{library_agent_id}/setup_trigger` endpoint
- Extract non-node-related logic from `on_node_activate` into
`setup_webhook_for_block`
- Add webhook-related logic to `update_preset` and `delete_preset`
endpoints
- Amend webhook infrastructure to work with AgentPresets
- Add preset trigger support to webhook ingress endpoint
- Amend executor stack to work with passed-in node input
(`nodes_input_masks`, generalized from `node_credentials_input_map`)
- Amend graph validation to work with passed-in node input
- Add `AgentPreset`->`IntegrationWebhook` relation
- Add `WebhookWithRelations` model
- Change behavior of `BaseWebhooksManager.get_manual_webhook(..)` to
avoid unnecessary changes of the webhook URL: ignore `events` to find
matching webhook, and update `events` if necessary.
- Fix & improve `AgentPreset` API, models, and DB logic
- Add `isDeleted` filter to get/list queries
- Add `user_id` attribute to `LibraryAgentPreset` model
- Add separate `credentials` property to `LibraryAgentPreset` model
- Fix `library_db.update_preset(..)` replacement of existing
`InputPresets`
- Make `library_db.update_preset(..)` more usage-friendly with separate
parameters for updateable properties
- Add `user_id` checks to various DB functions
- Fix error handling in various endpoints
- Fix cache race condition on `load_webhook_managers()`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Test existing functionality
- [x] Auto-setup and -teardown of webhooks on save in the builder still
works
- [x] Running an agent normally from the Library still works
- Test new functionality
- [x] Setting up a trigger in the Library
- [x] Updating a trigger in the Library
- [x] Disabling and re-enabling a trigger in the Library
- [x] Deleting a trigger in the Library
- [x] Triggers set up in the Library result in a new run when the
webhook receives a payload
This pull request sets up and configures Orval for API client
generation. It automates the process of creating TypeScript clients from
the backend's OpenAPI specification, improving development efficiency
and reducing manual code maintenance.
### Changes 🏗️
- Configures Orval with a new configuration file (`orval.config.ts`).
- Adds scripts to `package.json` for fetching the OpenAPI spec and
generating the API client.
- Implements a custom mutator for handling authentication.
- Adds API client generation as a step in the CI workflow.
- Adds `.gitignore` entry for generated API client files.
- Adds a security middleware to prevent caching of sensitive data.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that the API client is generated correctly.
- [x] Confirmed that the custom mutator is functioning as expected for
authentication.
- [x] Ensured that the new CI workflow step for API client generation is
successful.
- [x] Tested generated API calls
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
Since auto conversion is applied before the nested input is merged in
the block, the auto conversion breaks for nested inputs.
### Changes 🏗️
* Enabling auto-type conversion on block input schema mismatch for
nested input
* Add batching feature for `CreateListBlock`
* Increase default max_token size for LLM call
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run `AIStructuredResponseGeneratorBlock` with non-string prompt
value (should be auto-converted).
### Changes 🏗️
Add cost calculation for Apollo integration.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run Apollo block Search People & Organizations Block.
### Why? 🤔
<!-- Clearly explain the need for these changes: -->
We need to prevent sensitive data (authentication tokens, API
keys, user credentials, personal information) from being cached by
browsers and proxies. Following the principle of "secure by
default", we're switching from a deny list to an allow list
approach for cache control.
### Changes 🛠️
<!-- Concisely describe all of the changes made in this pull
request: -->
- **Refactored cache control middleware from deny list to allow
list approach**
- By default, ALL endpoints now have `Cache-Control: no-store,
no-cache, must-revalidate, private` headers
- Only explicitly allowed paths (static assets, health checks,
public store pages) can be cached
- This ensures new endpoints are automatically protected without
developers having to remember to add them to a list
- **Updated `SecurityHeadersMiddleware` in
`/backend/backend/server/middleware/security.py`**
- Renamed `SENSITIVE_PATHS` to `CACHEABLE_PATHS`
- Inverted the logic in the `is_cacheable_path()` method
- Cache control headers are now applied to all paths NOT in the
allow list (see the sketch after this list)
- **Updated test suite to match new behavior**
- Tests now verify that most endpoints have cache control
headers by default
- Tests verify that only allowed paths (static assets, health
endpoints, etc.) can be cached
- **Updated documentation in `CLAUDE.md`**
- Documented the new allow list approach
- Added instructions for developers on how to allow caching for
new endpoints
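A minimal sketch of the allow-list logic; the paths, class internals, and header value shown here are illustrative rather than the exact middleware implementation in `/backend/backend/server/middleware/security.py`:

```python
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

CACHEABLE_PATHS = ("/static/", "/health", "/store/")  # example allow list

class SecurityHeadersMiddleware(BaseHTTPMiddleware):
    def is_cacheable_path(self, path: str) -> bool:
        return any(path.startswith(prefix) for prefix in CACHEABLE_PATHS)

    async def dispatch(self, request: Request, call_next):
        response = await call_next(request)
        if not self.is_cacheable_path(request.url.path):
            # Secure by default: anything not explicitly allowed is uncacheable.
            response.headers["Cache-Control"] = (
                "no-store, no-cache, must-revalidate, private"
            )
        return response
```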
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test modified endpoints work still
- [x] Test modified endpoints correctly have no cache rules
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
Main issues:
* `AIStructuredResponseGeneratorBlock` is not able to produce a list of
objects.
* `SmartDecisionBlock` is not able to call tools with some optional
inputs.
### Changes 🏗️
* Allow persisting `null` / `None` value as execution output.
* Provide `multiple_tool_calls` option for `SmartDecisionBlock`.
* Provide `list_result` option for `AIStructuredResponseGeneratorBlock`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run `SmartDecisionBlock` & `AIStructuredResponseGeneratorBlock`
This PR introduces a custom function for generating unique operation IDs
for OpenAPI specifications to improve auto-generated client code
quality.
## Why This Change?
**Better Auto-Generated Clients**: Default FastAPI operation IDs create
unclear method names in generated clients. Our custom generator produces
clean, readable operation IDs that translate to intuitive method names.
- **Before**: `get_items_api_v1_items_get` → unclear generated methods
- **After**: `get_users_list` → clean, descriptive method names
## Changes
- ✨ **Added**: `custom_generate_unique_id` utility function
- Generates IDs using pattern: `{method}_{tag}_{summary}`
- Ensures uniqueness and readability
- 🔧 **Updated**: FastAPI app configuration to use the custom generator (see the sketch below)
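A hedged sketch of the idea, using FastAPI's documented `generate_unique_id_function` hook; the exact pattern and helper body in this PR may differ:

```python
from fastapi import FastAPI
from fastapi.routing import APIRoute

def custom_generate_unique_id(route: APIRoute) -> str:
    # Pattern roughly: {method}_{tag}_{summary-or-name}, underscore-joined
    method = next(iter(route.methods), "get").lower()
    tag = str(route.tags[0]).lower() if route.tags else "default"
    name = (route.summary or route.name).lower().replace(" ", "_")
    return f"{method}_{tag}_{name}"

app = FastAPI(generate_unique_id_function=custom_generate_unique_id)

@app.get("/users", tags=["users"], summary="List users")
def list_users():
    return []
# This sketch would produce the operation id "get_users_list_users".
```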
## Checklist
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] OpenAPI docs reflect new operation ID format
- [x] Tested various HTTP methods, tags, and summaries
- [x] Verified app startup functionality
- [x] Validated improved client generation output
The current Apollo blocks only work with keywords; the many list-type
filter fields don't work because the wrong GET parameter is passed
(missing `[]`).
### Changes 🏗️
Change the GET request to a POST request for Apollo.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run SearchPeopleBlock with title filter
This PR integrates React Query DevTools and ESLint rules to improve the
development workflow and enforce best practices for data fetching.
### Changes:
- **React Query DevTools:**
- Added the `@tanstack/react-query-devtools` package.
- DevTools are enabled by default in the development environment.
- They can be disabled by setting
`NEXT_PUBLIC_REACT_QUERY_DEVTOOL=false` in your environment variables.
- **ESLint Rules:**
- Integrated `@tanstack/eslint-plugin-query` to enforce best practices
and catch common errors in React Query usage.
- **Configuration:**
- Added the `NEXT_PUBLIC_REACT_QUERY_DEVTOOL` variable to the
`.env.example` file so other developers are aware of this option.
- **Documentation:**
- Updated the `README.md` with instructions on how to toggle the
DevTools using the environment variable.
### Configuration Changes Checklist
- `.env.example` has been updated with the new environment variable.
### Checklist
For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app in development with `pnpm dev`.
- [x] Verified DevTools toggle with environment variables
- [x] Run `pnpm lint` in the frontend directory.
- [x] Confirm that linting passes on the current codebase.
### Screenshot
<img width="1512" alt="Screenshot 2025-06-19 at 6 32 22 PM"
src="https://github.com/user-attachments/assets/a3defd23-2c3d-4d20-b152-037d85e04503"
/>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
All portions of this repository are under one of two licenses. The majority of the AutoGPT repository is under the MIT License below. The autogpt_platform folder is under the
Polyform Shield License.
All portions of this repository are under one of two licenses.
- Everything inside the autogpt_platform folder is under the Polyform Shield License.
- Everything outside the autogpt_platform folder is under the MIT License.
More info:
**Polyform Shield License:**
Code and content within the `autogpt_platform` folder is licensed under the Polyform Shield License. This new project is our in-development platform for building, deploying and managing agents.
Read more about this effort here: https://agpt.co/blog/introducing-the-autogpt-platform
**MIT License:**
All other portions of the AutoGPT repository (i.e., everything outside the `autogpt_platform` folder) are licensed under the MIT License. This includes:
We also publish additional work under the MIT License in other repositories, such as GravitasML (https://github.com/Significant-Gravitas/gravitasml), which is developed for and used in the AutoGPT Platform, and our [Code Ability](https://github.com/Significant-Gravitas/AutoGPT-Code-Ability) project.
This will install dependencies, configure Docker, and launch your local instance — all in one go.
### 🧱 AutoGPT Frontend
The AutoGPT frontend is where users interact with our powerful AI automation platform. It offers multiple ways to engage with and leverage our AI agents. This is the interface where you'll bring your AI automation ideas to life:
@@ -96,7 +123,17 @@ Here are two examples of what you can do with AutoGPT:
These examples show just a glimpse of what you can achieve with AutoGPT! You can create customized workflows to build agents for any use case.
---
### Mission and Licensing
### **License Overview:**
🛡️ **Polyform Shield License:**
All code and content within the `autogpt_platform` folder is licensed under the Polyform Shield License. This new project is our in-development platform for building, deploying and managing agents.</br>_[Read more about this effort](https://agpt.co/blog/introducing-the-autogpt-platform)_
🦉 **MIT License:**
All other portions of the AutoGPT repository (i.e., everything outside the `autogpt_platform` folder) are licensed under the MIT License. This includes the original stand-alone AutoGPT Agent, along with projects such as [Forge](https://github.com/Significant-Gravitas/AutoGPT/tree/master/classic/forge), [agbenchmark](https://github.com/Significant-Gravitas/AutoGPT/tree/master/classic/benchmark) and the [AutoGPT Classic GUI](https://github.com/Significant-Gravitas/AutoGPT/tree/master/classic/frontend).</br>We also publish additional work under the MIT License in other repositories, such as [GravitasML](https://github.com/Significant-Gravitas/gravitasml) which is developed for and used in the AutoGPT Platform. See also our MIT Licensed [Code Ability](https://github.com/Significant-Gravitas/AutoGPT-Code-Ability) project.
---
### Mission
Our mission is to provide the tools, so that you can focus on what matters:
- 🏗️ **Building** - Lay the foundation for something amazing.
@@ -109,14 +146,6 @@ Be part of the revolution! **AutoGPT** is here to stay, at the forefront of AI i
 | 
**🚀 [Contributing](CONTRIBUTING.md)**
**Licensing:**
MIT License: The majority of the AutoGPT repository is under the MIT License.
Polyform Shield License: This license applies to the autogpt_platform folder.
For more information, see https://agpt.co/blog/introducing-the-autogpt-platform
---
## 🤖 AutoGPT Classic
> Below is information about the classic version of AutoGPT.
1. `.env.default` files provide base configuration (tracked in git)
2. `.env` files provide user-specific overrides (gitignored)
3. Docker Compose `environment:` sections provide service-specific overrides
4. Shell environment variables have highest precedence
#### Key Points
- All services use hardcoded defaults in docker-compose files (no `${VARIABLE}` substitutions)
- The `env_file` directive loads variables INTO containers at runtime
- Backend/Frontend services use YAML anchors for consistent configuration
- Supabase services (`db/docker/docker-compose.yml`) follow the same pattern
### Common Development Tasks
**Adding a new block:**
1. Create new file in `/backend/backend/blocks/`
2. Inherit from `Block` base class
3. Define input/output schemas
4. Implement `run` method
5. Register in block registry
6. Generate the block uuid using `uuid.uuid4()`
Note: when creating many new blocks, analyze the interfaces of each block and consider whether they would work well together in a graph-based editor or would struggle to connect productively. For example: do the inputs and outputs tie together well? A minimal block sketch follows.
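Here is a minimal sketch of what such a block can look like, following the patterns used by existing blocks in this repository; the import paths, category value, and example UUID are assumptions for illustration:

```python
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema  # paths assumed
from backend.data.model import SchemaField


class WordCountBlock(Block):
    class Input(BlockSchema):
        text: str = SchemaField(description="Text to count words in")

    class Output(BlockSchema):
        count: int = SchemaField(description="Number of words in the text")
        error: str = SchemaField(description="Error message if counting failed")

    def __init__(self):
        super().__init__(
            # Generate once with uuid.uuid4() and paste the literal value here
            id="3e7b2c9a-1f4d-4b8e-9c6a-2d5f8e1a7b3c",
            description="Counts the words in a piece of text",
            categories={BlockCategory.TEXT},  # category value assumed
            input_schema=WordCountBlock.Input,
            output_schema=WordCountBlock.Output,
        )

    async def run(self, input_data: "WordCountBlock.Input", **kwargs) -> BlockOutput:
        yield "count", len(input_data.text.split())
```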
**Modifying the API:**
1. Update route in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside the route file
4. Run `poetry run test` to verify
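For illustration, a hedged sketch of the route-plus-model pattern described above (the router, model, and fields are hypothetical):

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()


class AgentSummary(BaseModel):
    id: str
    name: str


@router.get("/agents", tags=["agents"], summary="List agents")
async def list_agents() -> list[AgentSummary]:
    # Real routes would query the database layer instead of returning a stub
    return [AgentSummary(id="example-id", name="Example Agent")]
```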
**Frontend feature development:**
1. Components go in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for new components
4. Test with Playwright if user-facing
### Security Implementation
**Cache Protection Middleware:**
- Located in `/backend/backend/server/middleware/security.py`
- Default behavior: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses an allow list approach - only explicitly permitted paths can be cached
- Cacheable paths include: static assets (`/static/*`, `/_next/static/*`), health checks, public store pages, documentation
- Prevents sensitive data (auth tokens, API keys, user data) from being cached by browsers/proxies
- To allow caching for a new endpoint, add it to `CACHEABLE_PATHS` in the middleware
- Applied to both main API server and external API applications
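A rough sketch of that allow-list idea (the real middleware in `/backend/backend/server/middleware/security.py` may differ in structure, and the `CACHEABLE_PATHS` entries below are illustrative):

```python
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

CACHEABLE_PATHS = ("/static/", "/_next/static/", "/health", "/docs")  # illustrative


def is_cacheable_path(path: str) -> bool:
    return any(path.startswith(prefix) for prefix in CACHEABLE_PATHS)


class CacheControlMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        response = await call_next(request)
        if not is_cacheable_path(request.url.path):
            # Everything outside the allow list must not be cached
            response.headers["Cache-Control"] = (
                "no-store, no-cache, must-revalidate, private"
            )
        return response


# Registered on the FastAPI app with: app.add_middleware(CacheControlMiddleware)
```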
### Creating Pull Requests
- Create the PR against the `dev` branch of the repository.
- Ensure the branch name is descriptive (e.g., `feature/add-new-block`).
- Use conventional commit messages (see below).
- Fill out the `.github/PULL_REQUEST_TEMPLATE.md` template as the PR description.
- Run the pre-commit hooks to ensure code quality.
### Reviewing/Revising Pull Requests
- When the user runs /pr-comments or tries to fetch them, also run `gh api /repos/Significant-Gravitas/AutoGPT/pulls/[issuenum]/reviews` to get the reviews
- Use `gh api /repos/Significant-Gravitas/AutoGPT/pulls/[issuenum]/reviews/[review_id]/comments` to get the review contents
- Use `gh api /repos/Significant-Gravitas/AutoGPT/issues/[issuenum]/comments` to get the PR-specific comments
### Conventional Commits
Use this format for commit messages and Pull Request titles:
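A title follows the Conventional Commits pattern `type(scope): description`, for example `feat(backend/blocks): add word count block` or `fix(frontend/builder): correct node connection validation` (both examples are illustrative, not taken from the repository history).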
**Conventional Commit Types:**
- `feat`: Introduces a new feature to the codebase
- `fix`: Patches a bug in the codebase
- `refactor`: Code change that neither fixes a bug nor adds a feature; also applies to removing features
- `ci`: Changes to CI configuration
- `docs`: Documentation-only changes
- `dx`: Improvements to the developer experience
**Recommended Base Scopes:**
- `platform`: Changes affecting both frontend and backend
- `frontend`
- `backend`
- `infra`
- `blocks`: Modifications/additions of individual blocks
**Subscope Examples:**
- `backend/executor`
- `backend/db`
- `frontend/builder` (includes changes to the block UI component)
- `infra/prod`
Use these scopes and subscopes for clarity and consistency in commit messages.
@@ -8,7 +8,6 @@ Welcome to the AutoGPT Platform - a powerful system for creating and running AI
- Docker
- Docker Compose V2 (comes with Docker Desktop, or can be installed separately)
- Node.js & NPM (for running the frontend application)
### Running the System
@@ -24,10 +23,10 @@ To run the AutoGPT Platform, follow these steps:
2. Run the following command:
```
cp .env.example .env
cp .env.default .env
```
This command will copy the `.env.example` file to `.env`. You can modify the `.env` file to add your own environment variables.
This command will copy the `.env.default` file to `.env`. You can modify the `.env` file to add your own environment variables.
3. Run the following command:
@@ -37,38 +36,7 @@ To run the AutoGPT Platform, follow these steps:
This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.
4. Navigate to `frontend` within the `autogpt_platform` directory:
```
cd frontend
```
You will need to run your frontend application separately on your local machine.
5. Run the following command:
```
cp .env.example .env.local
```
This command will copy the `.env.example` file to `.env.local` in the `frontend` directory. You can modify the `.env.local` within this folder to add your own environment variables for the frontend application.
6. Run the following command:
Enable corepack and install dependencies by running:
```
corepack enable
pnpm i
```
Then start the frontend application in development mode:
```
pnpm dev
```
7. Open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
4. After all the services are in ready state, open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
### Docker Compose Commands
@@ -164,3 +132,28 @@ To persist data for PostgreSQL and Redis, you can modify the `docker-compose.yml
3. Save the file and run `docker compose up -d` to apply the changes.
This configuration will create named volumes for PostgreSQL and Redis, ensuring that your data persists across container restarts.
### API Client Generation
The platform includes scripts for generating and managing the API client:
- `pnpm fetch:openapi`: Fetches the OpenAPI specification from the backend service (requires backend to be running on port 8006)
- `pnpm generate:api-client`: Generates the TypeScript API client from the OpenAPI specification using Orval
- `pnpm generate:api`: Runs both fetch and generate commands in sequence
#### Manual API Client Updates
If you need to update the API client after making changes to the backend API:
1. Ensure the backend services are running:
```
docker compose up -d
```
2. Generate the updated API client:
```
pnpm generate:api
```
This will fetch the latest OpenAPI specification and regenerate the TypeScript client code.
This directory contains scripts for creating and updating test data in the AutoGPT Platform database, specifically designed to test the materialized views for the store functionality.
## Scripts
### test_data_creator.py
Creates a comprehensive set of test data including:
- Users with profiles
- Agent graphs, nodes, and executions
- Store listings with multiple versions
- Reviews and ratings
- Library agents
- Integration webhooks
- Onboarding data
- Credit transactions
**Image/Video Domains Used:**
- Images: `picsum.photos` (for all image URLs)
- Videos: `youtube.com` (for store listing videos)
### test_data_updater.py
Updates existing test data to simulate real-world changes:
- Adds new agent graph executions
- Creates new store listing reviews
- Updates store listing versions
- Adds credit transactions
- Refreshes materialized views
### check_db.py
Tests and verifies materialized views functionality:
- Checks pg_cron job status (for automatic refresh)
- Displays current materialized view counts
- Adds test data (executions and reviews)
- Creates store listings if none exist
- Manually refreshes materialized views
- Compares before/after counts to verify updates
- Provides a summary of test results
## Materialized Views
The scripts test three key database views:
1. **mv_agent_run_counts**: Tracks execution counts by agent
2. **mv_review_stats**: Tracks review statistics (count, average rating) by store listing
3. **StoreAgent**: A view that combines store listing data with execution counts and ratings for display
The materialized views (mv_agent_run_counts and mv_review_stats) are automatically refreshed every 15 minutes via pg_cron, or can be manually refreshed using the `refresh_store_materialized_views()` function.
## Usage
### Prerequisites
1. Ensure the database is running:
```bash
docker compose up -d
# or for test database:
docker compose -f docker-compose.test.yaml --env-file ../.env up -d
```
2. Run database migrations:
```bash
poetry run prisma migrate deploy
```
### Running the Scripts
#### Option 1: Use the helper script (from backend directory)
```bash
poetry run python run_test_data.py
```
#### Option 2: Run individually
```bash
# From backend/test directory:
# Create initial test data
poetry run python test_data_creator.py
# Update data to test materialized view changes
poetry run python test_data_updater.py
# From backend directory:
# Test materialized views functionality
poetry run python check_db.py
# Check store data status
poetry run python check_store_data.py
```
#### Option 3: Use the shell script (from backend directory)
```bash
./run_test_data_scripts.sh
```
### Manual Materialized View Refresh
To manually refresh the materialized views:
```sql
SELECT refresh_store_materialized_views();
```
## Configuration
The scripts use the database configuration from your `.env` file:
- `DATABASE_URL`: PostgreSQL connection string
- Database should have the platform schema
## Data Generation Limits
Configured in `test_data_creator.py`:
- 100 users
- 100 agent blocks
- 1-5 graphs per user
- 2-5 nodes per graph
- 1-5 presets per user
- 1-10 library agents per user
- 1-20 executions per graph
- 1-5 reviews per store listing version
## Notes
- All image URLs use `picsum.photos` for consistency with Next.js image configuration
- The scripts create realistic relationships between entities
- Materialized views are refreshed at the end of each script
- Data is designed to test both happy paths and edge cases
## Troubleshooting
### Reviews and StoreAgent view showing 0
If `check_db.py` shows that reviews remain at 0 and StoreAgent view shows 0 store agents:
1. **No store listings exist**: The script will automatically create test store listings if none exist
2. **No approved versions**: Store listings need approved versions to appear in the StoreAgent view
3. **Check with `check_store_data.py`**: This script provides detailed information about:
- Total store listings
- Store listing versions by status
- Existing reviews
- StoreAgent view contents
- Agent graph executions
### pg_cron not installed
The warning "pg_cron extension is not installed" is normal in local development environments. The materialized views can still be refreshed manually using the `refresh_store_materialized_views()` function, which all scripts do automatically.
### Common Issues
- **Type errors with None values**: Fixed in the latest version of check_db.py by using `or 0` for nullable numeric fields
- **Missing relations**: Ensure you're using the correct field names (e.g., `StoreListing` not `storeListing` in includes)
- **Column name mismatches**: The database uses camelCase for column names (e.g., `agentGraphId` not `agent_graph_id`)
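As a concrete illustration of the `or 0` guard mentioned above (the field name here is hypothetical):

```python
# Nullable numeric columns come back as None until data exists; coalesce before arithmetic
total_runs = (agent.runCount or 0) + new_runs
```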
description="The workspace ID where the base will be created"
)
name: str = SchemaField(description="The name of the new base")
tables: list[dict] = SchemaField(
description="At least one table and field must be specified. Array of table objects to create in the base. Each table should have 'name' and 'fields' properties",
default=[
{
"description": "Default table",
"name": "Default table",
"fields": [
{
"name": "ID",
"type": "number",
"description": "Auto-incrementing ID field",
"options": {"precision": 0},
}
],
}
],
)
class Output(BlockSchema):
base_id: str = SchemaField(description="The ID of the created base")
tables: list[dict] = SchemaField(description="Array of table objects")
table: dict = SchemaField(description="A single table object")
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
description="""The location of the company headquarters. You can search across cities, US states, and countries.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appear in your search results, even if they match other parameters.
@@ -375,28 +387,30 @@ To exclude companies based on location, use the organization_not_locations param
description="""Exclude companies from search results based on the location of the company headquarters. You can use cities, US states, and countries as locations to exclude.
This parameter is useful for ensuring you do not prospect in an undesirable territory. For example, if you use ireland as a value, no Ireland-based companies will appear in your search results.
description="""Filter search results based on keywords associated with companies. For example, you can enter mining as a value to return only companies that have an association with the mining industry."""
description="""Filter search results based on keywords associated with companies. For example, you can enter mining as a value to return only companies that have an association with the mining industry.""",
default_factory=list,
)
q_organization_name: str = SchemaField(
q_organization_name: Optional[str] = SchemaField(
description="""Filter search results to include a specific company name.
If the value you enter for this parameter does not match with a company's name, the company will not appear in search results, even if it matches other parameters. Partial matches are accepted. For example, if you filter by the value marketing, a company called NY Marketing Unlimited would still be eligible as a search result, but NY Market Analysis would not be eligible."""
If the value you enter for this parameter does not match with a company's name, the company will not appear in search results, even if it matches other parameters. Partial matches are accepted. For example, if you filter by the value marketing, a company called NY Marketing Unlimited would still be eligible as a search result, but NY Market Analysis would not be eligible.""",
default="",
)
organization_ids: list[str] = SchemaField(
organization_ids: Optional[list[str]] = SchemaField(
description="""The Apollo IDs for the companies you want to include in your search results. Each company in the Apollo database is assigned a unique ID.
To find IDs, identify the values for organization_id when you call this endpoint.""",
default_factory=list,
)
max_results: int = SchemaField(
max_results: Optional[int] = SchemaField(
description="""The maximum number of results to return. If you don't specify this parameter, the default is 100.""",
default=100,
ge=1,
@@ -421,11 +435,11 @@ Use the page parameter to search the different pages of data.""",
class SearchOrganizationsResponse(BaseModel):
"""Response from Apollo's search organizations API"""
breadcrumbs: list[Breadcrumb] = []
partial_results_only: bool = True
has_join: bool = True
disable_eu_prospecting: bool = True
partial_results_limit: int = 0
breadcrumbs: Optional[list[Breadcrumb]] = []
partial_results_only: Optional[bool] = True
has_join: Optional[bool] = True
disable_eu_prospecting: Optional[bool] = True
partial_results_limit: Optional[int] = 0
pagination: Pagination = Pagination(
page=0, per_page=0, total_entries=0, total_pages=0
)
@@ -433,14 +447,14 @@ class SearchOrganizationsResponse(BaseModel):
accounts: list[Any] = []
organizations: list[Organization] = []
models_ids: list[str] = []
num_fetch_result: Optional[str] = "N/A"
derived_params: Optional[str] = "N/A"
num_fetch_result: Optional[str] = ""
derived_params: Optional[str] = ""
class SearchPeopleRequest(BaseModel):
"""Request for Apollo's search people API"""
person_titles: list[str] = SchemaField(
person_titles: Optional[list[str]] = SchemaField(
description="""Job titles held by the people you want to find. For a person to be included in search results, they only need to match 1 of the job titles you add. Adding more job titles expands your search results.
Results also include job titles with the same terms, even if they are not exact matches. For example, searching for marketing manager might return people with the job title content marketing manager.
@@ -450,13 +464,13 @@ Use this parameter in combination with the person_seniorities[] parameter to fin
default_factory=list,
placeholder="marketing manager",
)
person_locations: list[str] = SchemaField(
person_locations: Optional[list[str]] = SchemaField(
description="""The location where people live. You can search across cities, US states, and countries.
To find people based on the headquarters locations of their current employer, use the organization_locations parameter.""",
description="""The job seniority that people hold within their current employer. This enables you to find people that currently hold positions at certain reporting levels, such as Director level or senior IC level.
For a person to be included in search results, they only need to match 1 of the seniorities you add. Adding more seniorities expands your search results.
@@ -466,7 +480,7 @@ Searches only return results based on their current job title, so searching for
Use this parameter in combination with the person_titles[] parameter to find people based on specific job functions and seniority levels.""",
description="""The location of the company headquarters for a person's current employer. You can search across cities, US states, and countries.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, people that work for the Boston-based company will not appear in your results, even if they match other parameters.
@@ -474,7 +488,7 @@ If a company has several office locations, results are still based on the headqu
To find people based on their personal location, use the person_locations parameter.""",
description="""The domain name for the person's employer. This can be the current employer or a previous employer. Do not include www., the @ symbol, or similar.
You can add multiple domains to search across companies.
@@ -482,23 +496,23 @@ You can add multiple domains to search across companies.
description="""The email statuses for the people you want to find. You can add multiple statuses to expand your search.""",
default_factory=list,
)
organization_ids: list[str] = SchemaField(
organization_ids: Optional[list[str]] = SchemaField(
description="""The Apollo IDs for the companies (employers) you want to include in your search results. Each company in the Apollo database is assigned a unique ID.
To find IDs, call the Organization Search endpoint and identify the values for organization_id.""",
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
default_factory=list,
)
q_keywords: str = SchemaField(
q_keywords: Optional[str] = SchemaField(
description="""A string of words over which we want to filter the results""",
default="",
)
@@ -514,7 +528,7 @@ Use this parameter in combination with the per_page parameter to make search res
Use the page parameter to search the different pages of data.""",
default=100,
)
max_results: int = SchemaField(
max_results: Optional[int] = SchemaField(
description="""The maximum number of results to return. If you don't specify this parameter, the default is 100.""",
default=100,
ge=1,
@@ -533,16 +547,61 @@ class SearchPeopleResponse(BaseModel):
populate_by_name=True,
)
breadcrumbs: list[Breadcrumb] = []
partial_results_only: bool = True
has_join: bool = True
disable_eu_prospecting: bool = True
partial_results_limit: int = 0
breadcrumbs: Optional[list[Breadcrumb]] = []
partial_results_only: Optional[bool] = True
has_join: Optional[bool] = True
disable_eu_prospecting: Optional[bool] = True
partial_results_limit: Optional[int] = 0
pagination: Pagination = Pagination(
page=0, per_page=0, total_entries=0, total_pages=0
)
contacts: list[Contact] = []
people: list[Contact] = []
model_ids: list[str] = []
num_fetch_result: Optional[str] = "N/A"
derived_params: Optional[str] = "N/A"
num_fetch_result: Optional[str] = ""
derived_params: Optional[str] = ""
class EnrichPersonRequest(BaseModel):
"""Request for Apollo's person enrichment API"""
person_id: Optional[str] = SchemaField(
description="Apollo person ID to enrich (most accurate method)",
default="",
)
first_name: Optional[str] = SchemaField(
description="First name of the person to enrich",
default="",
)
last_name: Optional[str] = SchemaField(
description="Last name of the person to enrich",
default="",
)
name: Optional[str] = SchemaField(
description="Full name of the person to enrich",
default="",
)
email: Optional[str] = SchemaField(
description="Email address of the person to enrich",
default="",
)
domain: Optional[str] = SchemaField(
description="Company domain of the person to enrich",
default="",
)
company: Optional[str] = SchemaField(
description="Company name of the person to enrich",
default="",
)
linkedin_url: Optional[str] = SchemaField(
description="LinkedIn URL of the person to enrich",
default="",
)
organization_id: Optional[str] = SchemaField(
description="Apollo organization ID of the person's company",
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
@@ -65,7 +65,7 @@ To find IDs, identify the values for organization_id when you call this endpoint
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
@@ -90,14 +93,19 @@ class SearchPeopleBlock(Block):
advanced=False,
)
max_results: int = SchemaField(
description="""The maximum number of results to return. If you don't specify this parameter, the default is 100.""",
default=100,
description="""The maximum number of results to return. If you don't specify this parameter, the default is 25. Limited to 500 to prevent overspending.""",
default=25,
ge=1,
le=50000,
le=500,
advanced=True,
)
enrich_info: bool = SchemaField(
description="""Whether to enrich contacts with detailed information including real email addresses. This will double the search cost.""",
"""Block for posting to Instagram with Instagram-specific options."""
classInput(BaseAyrshareInput):
"""Input schema for Instagram posts."""
# Override post field to include Instagram-specific information
post:str=SchemaField(
description="The post text (max 2,200 chars, up to 30 hashtags, 3 @mentions)",
default="",
advanced=False,
)
# Override media_urls to include Instagram-specific constraints
media_urls:list[str]=SchemaField(
description="Optional list of media URLs. Instagram supports up to 10 images/videos in a carousel.",
default_factory=list,
advanced=False,
)
# Instagram-specific options
is_story:bool|None=SchemaField(
description="Whether to post as Instagram Story (24-hour expiration)",
default=None,
advanced=True,
)
# ------- REELS OPTIONS -------
share_reels_feed:bool|None=SchemaField(
description="Whether Reel should appear in both Feed and Reels tabs",
default=None,
advanced=True,
)
audio_name:str|None=SchemaField(
description="Audio name for Reels (e.g., 'The Weeknd - Blinding Lights')",
default=None,
advanced=True,
)
thumbnail:str|None=SchemaField(
description="Thumbnail URL for Reel video",default=None,advanced=True
)
thumbnail_offset:int|None=SchemaField(
description="Thumbnail frame offset in milliseconds (default: 0)",
default=0,
advanced=True,
)
# ------- POST OPTIONS -------
alt_text:list[str]=SchemaField(
description="Alt text for each media item (up to 1,000 chars each, accessibility feature), each item in the list corresponds to a media item in the media_urls list",
default_factory=list,
advanced=True,
)
location_id:str|None=SchemaField(
description="Facebook Page ID or name for location tagging (e.g., '7640348500' or '@guggenheimmuseum')",
default=None,
advanced=True,
)
user_tags:list[dict[str,Any]]=SchemaField(
description="List of users to tag with coordinates for images",
default_factory=list,
advanced=True,
)
collaborators:list[str]=SchemaField(
description="Instagram usernames to invite as collaborators (max 3, public accounts only)",
default_factory=list,
advanced=True,
)
auto_resize:bool|None=SchemaField(
description="Auto-resize images to 1080x1080px for Instagram",
default=None,
advanced=True,
)
class Output(BlockSchema):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")
def __init__(self):
super().__init__(
id="89b02b96-a7cb-46f4-9900-c48b32fe1552",
description="Post to Instagram using Ayrshare. Requires a Business or Creator Instagram Account connected with a Facebook Page",
categories={BlockCategory.SOCIAL},
block_type=BlockType.AYRSHARE,
input_schema=PostToInstagramBlock.Input,
output_schema=PostToInstagramBlock.Output,
)
async def run(
self,
input_data: "PostToInstagramBlock.Input",
*,
user_id: str,
**kwargs,
) -> BlockOutput:
"""Post to Instagram with Instagram-specific options."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
return
# Validate Instagram constraints
if len(input_data.post) > 2200:
yield "error", f"Instagram post text exceeds 2,200 character limit ({len(input_data.post)} characters)"
return
if len(input_data.media_urls) > 10:
yield "error", "Instagram supports a maximum of 10 images/videos in a carousel"
return
if len(input_data.collaborators) > 3:
yield "error", "Instagram supports a maximum of 3 collaborators"
return
# Validate that if any reel option is set, all required reel options are set
reel_options = [
input_data.share_reels_feed,
input_data.audio_name,
input_data.thumbnail,
]
if any(reel_options) and not all(reel_options):
yield "error", "When posting a reel, all reel options must be set: share_reels_feed, audio_name, and either thumbnail or thumbnail_offset"
return
# Count hashtags and mentions
hashtag_count = input_data.post.count("#")
mention_count = input_data.post.count("@")
if hashtag_count > 30:
yield "error", f"Instagram allows maximum 30 hashtags ({hashtag_count} found)"
return
if mention_count > 3:
yield "error", f"Instagram allows maximum 3 @mentions ({mention_count} found)"
"""Block for posting to Telegram with Telegram-specific options."""
classInput(BaseAyrshareInput):
"""Input schema for Telegram posts."""
# Override post field to include Telegram-specific information
post:str=SchemaField(
description="The post text (empty string allowed). Use @handle to mention other Telegram users.",
default="",
advanced=False,
)
# Override media_urls to include Telegram-specific constraints
media_urls:list[str]=SchemaField(
description="Optional list of media URLs. For animated GIFs, only one URL is allowed. Telegram will auto-preview links unless image/video is included.",
default_factory=list,
advanced=False,
)
# Override is_video to include GIF-specific information
is_video:bool=SchemaField(
description="Whether the media is a video. Set to true for animated GIFs that don't end in .gif/.GIF extension.",
default=False,
advanced=True,
)
class Output(BlockSchema):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")
def __init__(self):
super().__init__(
disabled=True,
id="47bc74eb-4af2-452c-b933-af377c7287df",
description="Post to Telegram using Ayrshare",
categories={BlockCategory.SOCIAL},
block_type=BlockType.AYRSHARE,
input_schema=PostToTelegramBlock.Input,
output_schema=PostToTelegramBlock.Output,
)
async def run(
self,
input_data: "PostToTelegramBlock.Input",
*,
user_id: str,
**kwargs,
) -> BlockOutput:
"""Post to Telegram with Telegram-specific validation."""
profile_key = await get_profile_key(user_id)
if not profile_key:
yield "error", "Please link a social account via Ayrshare"
return
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
return
# Validate Telegram constraints
# Check for animated GIFs - only one URL allowed
gif_extensions = [".gif", ".GIF"]
has_gif = any(
any(url.endswith(ext) for ext in gif_extensions)
for url in input_data.media_urls
)
if has_gif and len(input_data.media_urls) > 1:
yield "error", "Telegram animated GIFs support only one URL per post"
return
# Auto-detect if we need to set is_video for GIFs without proper extension
"""Block for posting to TikTok with TikTok-specific options."""
classInput(BaseAyrshareInput):
"""Input schema for TikTok posts."""
# Override post field to include TikTok-specific information
post:str=SchemaField(
description="The post text (max 2,200 chars, empty string allowed). Use @handle to mention users. Line breaks will be ignored.",
advanced=False,
)
# Override media_urls to include TikTok-specific constraints
media_urls:list[str]=SchemaField(
description="Required media URLs. Either 1 video OR up to 35 images (JPG/JPEG/WEBP only). Cannot mix video and images.",
default_factory=list,
advanced=False,
)
# TikTok-specific options
auto_add_music:bool=SchemaField(
description="Whether to automatically add recommended music to the post. If you set this field to true, you can change the music later in the TikTok app.",
default=False,
advanced=True,
)
disable_comments:bool=SchemaField(
description="Disable comments on the published post",
default=False,
advanced=True,
)
disable_duet:bool=SchemaField(
description="Disable duets on published video (video only)",
default=False,
advanced=True,
)
disable_stitch:bool=SchemaField(
description="Disable stitch on published video (video only)",
default=False,
advanced=True,
)
is_ai_generated:bool=SchemaField(
description="If you enable the toggle, your video will be labeled as “Creator labeled as AI-generated” once posted and can’t be changed. The “Creator labeled as AI-generated” label indicates that the content was completely AI-generated or significantly edited with AI.",
default=False,
advanced=True,
)
is_branded_content:bool=SchemaField(
description="Whether to enable the Branded Content toggle. If this field is set to true, the video will be labeled as Branded Content, indicating you are in a paid partnership with a brand. A “Paid partnership” label will be attached to the video.",
default=False,
advanced=True,
)
is_brand_organic:bool=SchemaField(
description="Whether to enable the Brand Organic Content toggle. If this field is set to true, the video will be labeled as Brand Organic Content, indicating you are promoting yourself or your own business. A “Promotional content” label will be attached to the video.",
default=False,
advanced=True,
)
image_cover_index:int=SchemaField(
description="Index of image to use as cover (0-based, image posts only)",
default=0,
advanced=True,
)
title:str=SchemaField(
description="Title for image posts",default="",advanced=True
)
thumbnail_offset:int=SchemaField(
description="Video thumbnail frame offset in milliseconds (video only)",
default=0,
advanced=True,
)
visibility:TikTokVisibility=SchemaField(
description="Post visibility: 'public', 'private', 'followers', or 'friends'",
default=TikTokVisibility.PUBLIC,
advanced=True,
)
draft:bool=SchemaField(
description="Create as draft post (video only)",
default=False,
advanced=True,
)
class Output(BlockSchema):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")
description="A list of values to be combined into a new list.",
placeholder="e.g., ['Alice', 25, True]",
)
max_size: int | None = SchemaField(
default=None,
description="Maximum size of the list. If provided, the list will be yielded in chunks of this size.",
advanced=True,
)
max_tokens: int | None = SchemaField(
default=None,
description="Maximum tokens for the list. If provided, the list will be yielded in chunks that fit within this token limit.",
advanced=True,
)
class Output(BlockSchema):
list: List[Any] = SchemaField(
description="The created list containing the specified values."
)
error: str = SchemaField(description="Error message if list creation failed.")
def __init__(self):
super().__init__(
id="a912d5c7-6e00-4542-b2a9-8034136930e4",
description="Creates a list with the specified values. Use this when you know all the values you want to add upfront. This block can also yield the list in batches based on a maximum size or token limit.",
description="The Exa integration requires an API Key."
)
# Search parameters (flattened)
search_query:str=SchemaField(
description="Your search query. Use this to describe what you are looking for. Any URL provided will be crawled and used as context for the search.",
placeholder="Marketing agencies based in the US, that focus on consumer products",
)
search_count:Optional[int]=SchemaField(
default=10,
description="Number of items the search will attempt to find. The actual number of items found may be less than this number depending on the search complexity.",
ge=1,
le=1000,
)
search_entity_type:SearchEntityType=SchemaField(
default=SearchEntityType.AUTO,
description="Entity type: 'company', 'person', 'article', 'research_paper', or 'custom'. If not provided, we automatically detect the entity from the query.",
description="Description for custom entity type (required when search_entity_type is 'custom')",
advanced=True,
)
# Search criteria (flattened)
search_criteria:list[str]=SchemaField(
default_factory=list,
description="List of criteria descriptions that every item will be evaluated against. If not provided, we automatically detect the criteria from the query.",
advanced=True,
)
# Search exclude sources (flattened)
search_exclude_sources:list[str]=SchemaField(
default_factory=list,
description="List of source IDs (imports or websets) to exclude from search results",
description="List of formats for enrichment responses ('text', 'date', 'number', 'options', 'email', 'phone'). If not specified, we automatically select the best format.",
advanced=True,
)
enrichment_options:list[list[str]]=SchemaField(
default_factory=list,
description="List of option lists for enrichments with 'options' format. Each inner list contains the option labels.",
advanced=True,
)
enrichment_metadata:list[dict]=SchemaField(
default_factory=list,
description="List of metadata dictionaries for enrichments",
advanced=True,
)
# Webset metadata
external_id:Optional[str]=SchemaField(
default=None,
description="External identifier for the webset. You can use this to reference the webset by your own internal identifiers.",
placeholder="my-webset-123",
advanced=True,
)
metadata:Optional[dict]=SchemaField(
default_factory=dict,
description="Key-value pairs to associate with this webset",
advanced=True,
)
class Output(BlockSchema):
webset: Webset = SchemaField(
description="The unique identifier for the created webset"
)
def __init__(self):
super().__init__(
id="0cda29ff-c549-4a19-8805-c982b7d4ec34",
description="Create a new Exa Webset for persistent web search collections",
description="Optional inline comments to add to specific files/lines. Note: Only path, body, and position are supported. Position is line number in diff from first @@ hunk.",
default=None,
advanced=True,
)
class Output(BlockSchema):
review_id: int = SchemaField(description="ID of the created review")
state: str = SchemaField(
description="State of the review (e.g., PENDING, COMMENTED, APPROVED, CHANGES_REQUESTED)"
)
html_url: str = SchemaField(description="URL of the created review")
error: str = SchemaField(
description="Error message if the review creation failed"
)
def __init__(self):
super().__init__(
id="84754b30-97d2-4c37-a3b8-eb39f268275b",
description="This block creates a review on a GitHub pull request with optional inline comments. You can create it as a draft or post immediately. Note: For inline comments, 'position' should be the line number in the diff (starting from the first @@ hunk header).",
description="Position in the diff (line number from first @@ hunk). Use this OR line.",
placeholder="6",
default=None,
advanced=True,
)
line: Optional[int] = SchemaField(
description="Line number in the file (will be used as position if position not provided)",
placeholder="42",
default=None,
advanced=True,
)
side: Optional[str] = SchemaField(
description="Side of the diff to comment on (NOTE: Only for standalone comments, not review comments)",
default="RIGHT",
advanced=True,
)
start_line: Optional[int] = SchemaField(
description="Start line for multi-line comments (NOTE: Only for standalone comments, not review comments)",
default=None,
advanced=True,
)
start_side: Optional[str] = SchemaField(
description="Side for the start of multi-line comments (NOTE: Only for standalone comments, not review comments)",
default=None,
advanced=True,
)
class Output(BlockSchema):
comment_object: dict = SchemaField(
description="The comment object formatted for GitHub API"
)
def __init__(self):
super().__init__(
id="b7d5e4f2-8c3a-4e6b-9f1d-7a8b9c5e4d3f",
description="Creates a comment object for use with GitHub blocks. Note: For review comments, only path, body, and position are used. Side fields are only for standalone PR comments.",