Mirror of https://github.com/Significant-Gravitas/AutoGPT.git, synced 2026-02-08 22:05:08 -05:00
Latest commit: 881e7cacefd2787e41e00f0dff14c8e0cb739014
7121 commits
881e7cacef | Merge branch 'dev' into lluisagusti/secrt-1488-password-is-logged-by-consolelog-2

af58b316a2 | chore(frontend/deps-dev): Bump the development-dependencies group across 1 directory with 16 updates (#10548)

Bumps the development-dependencies group with 16 updates in the /autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
| [@chromatic-com/storybook](https://github.com/chromaui/addon-visual-tests) | `4.0.1` | `4.1.0` |
| [@playwright/test](https://github.com/microsoft/playwright) | `1.54.1` | `1.54.2` |
| [@storybook/addon-a11y](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/a11y) | `9.0.17` | `9.1.1` |
| [@storybook/addon-docs](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/docs) | `9.0.17` | `9.1.1` |
| [@storybook/addon-links](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links) | `9.0.17` | `9.1.1` |
| [@storybook/addon-onboarding](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/onboarding) | `9.0.17` | `9.1.1` |
| [@storybook/nextjs](https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/nextjs) | `9.0.17` | `9.1.1` |
| [@tanstack/eslint-plugin-query](https://github.com/TanStack/query/tree/HEAD/packages/eslint-plugin-query) | `5.81.2` | `5.83.1` |
| [@tanstack/react-query-devtools](https://github.com/TanStack/query/tree/HEAD/packages/react-query-devtools) | `5.83.0` | `5.84.1` |
| [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) | `24.0.15` | `24.2.0` |
| [chromatic](https://github.com/chromaui/chromatic-cli) | `13.1.2` | `13.1.3` |
| [eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next) | `15.4.2` | `15.4.5` |
| [eslint-plugin-storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/lib/eslint-plugin) | `9.0.17` | `9.1.1` |
| [orval](https://github.com/orval-labs/orval) | `7.10.0` | `7.11.2` |
| [storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/core) | `9.0.17` | `9.1.1` |
| [typescript](https://github.com/microsoft/TypeScript) | `5.8.3` | `5.9.2` |

Updates `@chromatic-com/storybook` from 4.0.1 to 4.1.0. From the v4.1.0 release notes: supports disabling ChannelFetch using the `--debug` flag (#378), fixes package.json (#385), adds support for Storybook 9.2 (#384), and updates the GraphQL schema to handle the `ComparisonResult.SKIPPED` value (#379).

8bab923477 | chore: fix all the console logs...

2d0c2166c8 | chore: wip

03e3e2ea9a | fix(frontend): remove console.log (#10649)

## Changes 🏗️

Not a helpful console log to land in production. We should disallow console logs altogether in the frontend code, but that is a separate, bigger PR.

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Go to the signup page
  - [x] Play with the password inputs
  - [x] Password is not printed in the console

#### For configuration changes: None

9be32e15b1 | chore: disallow console errors

6bb6a081a2 | feat(backend): add support for v0 by Vercel models and credentials (#10641)

## Summary This PR adds support for v0 by Vercel's Model API to the AutoGPT platform, enabling users to leverage v0's framework-aware AI models optimized for React and Next.js code generation. v0 provides OpenAI-compatible endpoints with models specifically trained for frontend development, making them ideal for generating UI components and web applications. ### Changes 🏗️ #### Backend Changes - **Added v0 Provider**: Added `V0 = "v0"` to `ProviderName` enum in `/backend/backend/integrations/providers.py` - **Added v0 Models**: Added three v0 models to `LlmModel` enum in `/backend/backend/blocks/llm.py`: - `V0_1_5_MD = "v0-1.5-md"` - Everyday tasks and UI generation (128K context, 64K output) - `V0_1_5_LG = "v0-1.5-lg"` - Advanced reasoning (512K context, 64K output) - `V0_1_0_MD = "v0-1.0-md"` - Legacy model (128K context, 64K output) - **Implemented v0 Provider**: Added v0 support in `llm_call()` function using OpenAI-compatible client with base URL `https://api.v0.dev/v1` - **Added Credentials Support**: Created `v0_credentials` in `/backend/backend/integrations/credentials_store.py` with UUID `c4e6d1a0-3b5f-4789-a8e2-9b123456789f` - **Cost Configuration**: Added model costs in `/backend/backend/data/block_cost_config.py`: - v0-1.5-md: 1 credit - v0-1.5-lg: 2 credits - v0-1.0-md: 1 credit #### Configuration Changes - **Settings**: Added `v0_api_key` field to `Secrets` class in `/backend/backend/util/settings.py` - **Environment Variables**: Added `V0_API_KEY=` to `/backend/.env.default` ### Features - ✅ Full OpenAI-compatible API support - ✅ Tool/function calling support - ✅ JSON response format support - ✅ Framework-aware completions optimized for React/Next.js - ✅ Large context windows (up to 512K tokens) - ✅ Integrated with platform credit system ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Run existing block tests to ensure no regressions: `poetry run pytest backend/blocks/test/test_block.py` - [x] Verify AITextGeneratorBlock works with v0 models - [x] Confirm all model metadata is correctly configured - [x] Validate cost configuration is properly set up - [x] Check that v0_credentials has a valid UUID4 #### For configuration changes: - [x] `.env.example` is updated or already compatible with my changes - Added `V0_API_KEY=` to `/backend/.env.default` - [x] `docker-compose.yml` is updated or already compatible with my changes - No changes needed - uses existing environment variable patterns - [x] I have included a list of my configuration changes in the PR description (under **Changes**) ### Configuration Requirements Users need to: 1. Obtain a v0 API key from [v0.app](https://v0.app) (requires Premium or Team plan) 2. Add `V0_API_KEY=your-api-key` to their `.env` file ### API Documentation - v0 API Docs: https://v0.app/docs/api - Model API Docs: https://v0.app/docs/api/model ### Testing All existing tests pass with the new v0 integration: ```bash poetry run pytest backend/blocks/test/test_block.py::test_available_blocks -k "AITextGeneratorBlock" -xvs # Result: PASSED ``` |
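
Because v0 exposes an OpenAI-compatible API, the new provider can be exercised with the standard OpenAI Python client pointed at the v0 base URL. A minimal sketch, assuming the `openai` package and the `V0_API_KEY` variable from `.env.default`; this is not the platform's `llm_call` code:

```python
# Query a v0 model through its OpenAI-compatible endpoint.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["V0_API_KEY"],   # key from v0.app (Premium or Team plan)
    base_url="https://api.v0.dev/v1",   # v0's OpenAI-compatible base URL
)

response = client.chat.completions.create(
    model="v0-1.5-md",                  # everyday tasks and UI generation
    messages=[{"role": "user", "content": "Generate a Next.js button component."}],
)
print(response.choices[0].message.content)
```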

df20b70f44 | feat(blocks): Enrichlayer integration (#9924)

<!-- Clearly explain the need for these changes: --> We want to support ~~proxy curl~~ enrichlayer as an integration, and this is a baseline way to get there ### Changes 🏗️ - Adds some subset of proxycurl blocks based on the API docs: ~~https://nubela.co/proxycurl/docs#people-api-person-profile-endpoint~~ https://enrichlayer.com/docs/pc/#people-api <!-- Concisely describe all of the changes made in this pull request: --> ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] manually test the blocks with an API key - [x] make sure the automated tests pass --------- Co-authored-by: SwiftyOS <craigswift13@gmail.com> Co-authored-by: Claude <claude@users.noreply.github.com> Co-authored-by: majdyz <zamil@agpt.co> |

21faf1b677 | fix(backend): update and fix weekly summary email (#10343)

<!-- Clearly explain the need for these changes: --> Our weekly summary emails are currently broken, hard-coded, and so ugly. ### Changes 🏗️ Update the email template to look better Update the way we queue messages to work after other changes have occurred <!-- Concisely describe all of the changes made in this pull request: --> ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Test by sending a self email with the cron job set to every minute, so you can see what it would look like --------- Co-authored-by: Claude <claude@users.noreply.github.com> Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co> |

b53c373a59 | feat(docker): streamline Supabase to minimal essential services (#10639)

## Summary Streamline Supabase stack from 13 services to 3 core services for faster startup and lower resource usage while maintaining full API compatibility. ## Changes Made ### Core Services (Always Running) - **Kong**: API gateway providing standard `/auth/v1/` endpoints and API key validation - **Auth**: GoTrue authentication service for user management - **Database**: PostgreSQL with pgvector support for data persistence ### Removed Services (9 services eliminated) - `rest` (PostgREST API) - not needed for auth-only usage - `realtime` (real-time subscriptions) - not used by platform - `storage` (file storage) - platform uses separate file handling - `imgproxy` (image processing) - not required for core functionality - `meta` (database metadata) - not needed for runtime operations - `functions` (edge functions) - not utilized - `analytics` (Logflare) - monitoring overhead not needed locally - `vector` (log collection) - not required for basic operation - `supavisor` (connection pooler) - direct DB access sufficient for local dev ### Studio (Development Only) - Moved to `local` profile: `docker compose --profile local up` - Available for database management during development - Excluded from normal startup for cleaner production-like environment ## Benefits - **80% faster startup**: 3 services vs 13 services - **Lower resource usage**: Significant reduction in memory/CPU consumption - **Simpler debugging**: Fewer moving parts, cleaner logs, easier troubleshooting - **Maintained compatibility**: All auth functionality preserved through Kong ## Backwards Compatibility ✅ **No breaking changes** - All existing auth endpoints (`/auth/v1/*`) work unchanged - API key authentication (`anon`/`service_role`) preserved - CORS and security policies maintained via Kong - No application code changes required ## Testing - [x] Docker compose starts successfully with minimal services - [x] Auth endpoints accessible via Kong at `/auth/v1/` - [x] Database connectivity maintained - [x] Studio accessible with `--profile local` flag - [x] All existing environment variables preserved ## File Changes - `autogpt_platform/docker-compose.yml`: Removed unnecessary Supabase services, moved studio to local profile - `autogpt_platform/db/docker/docker-compose.yml`: Cleaned up service dependencies on analytics/vector 🤖 Generated with [Claude Code](https://claude.ai/code) |

4bfeddc03d | feat(platform/docker): add frontend service to docker-compose with env config improvements (#10615)

## Summary This PR adds the frontend service to the Docker Compose configuration, enabling `docker compose up` to run the complete stack, including the frontend. It also implements comprehensive environment variable improvements, unified .env file support, and fixes Docker networking issues. ## Key Changes ### 🐳 Docker Compose Improvements - **Added frontend service** to `docker-compose.yml` and `docker-compose.platform.yml` - **Production build**: Uses `pnpm build + serve` instead of dev server for better stability and lower memory usage - **Service dependencies**: Frontend now waits for backend services (`rest_server`, `websocket_server`) to be ready - **YAML anchors**: Implemented DRY configuration to avoid duplicating environment values ### 📁 Unified .env File Support - **Frontend .env loading**: Automatically loads `.env` file during Docker build and runtime - **Backend .env loading**: Optional `.env` file support with fallback to sensible defaults in `settings.py` - **Single source of truth**: All `NEXT_PUBLIC_*` and API keys can be defined in respective `.env` files - **Docker integration**: Updated `.dockerignore` to include `.env` files in build context - **Git tracking**: Frontend and backend `.env` files are now trackable (removed from gitignore) ### 🔧 Environment Variable Architecture - **Dual environment strategy**: - Server-side code uses Docker service names (`http://rest_server:8006/api`) - Client-side code uses localhost URLs (`http://localhost:8006/api`) - **Comprehensive config**: Added build args and runtime environment variables - **Network compatibility**: Fixes connection issues between frontend and backend containers - **Shared backend variables**: Common environment variables (service hosts, auth settings) centralized using YAML anchors ### 🛠️ Code Improvements - **Centralized env-config helper** (`/frontend/src/lib/env-config.ts`) with server-side priority - **Updated all frontend code** to use shared environment helpers instead of direct `process.env` access - **Consistent API**: All environment variable access now goes through helper functions - **Settings.py improvements**: Better defaults for CORS origins and optional .env file loading ### 🔗 Files Changed - `docker-compose.yml` & `docker-compose.platform.yml` - Added frontend service and shared backend env vars - `frontend/Dockerfile` - Simplified build process to use .env files directly - `backend/settings.py` - Optional .env loading and better defaults - `frontend/src/lib/env-config.ts` - New centralized environment configuration - `.dockerignore` - Allow .env files in build context - `.gitignore` - Updated to allow frontend/backend .env files - Multiple frontend files - Updated to use env helpers - Updates to both auto installer scripts to work with the latest setup! 
## Benefits - ✅ **Single command deployment**: `docker compose up` now runs everything - ✅ **Better reliability**: Production build reduces memory usage and crashes - ✅ **Network compatibility**: Proper container-to-container communication - ✅ **Maintainable config**: Centralized environment variable management with .env files - ✅ **Development friendly**: Works in both Docker and local development - ✅ **API key management**: Easy configuration through .env files for all services - ✅ **No more manual env vars**: Frontend and backend automatically load their respective .env files ## Testing - ✅ Verified Docker service communication works correctly - ✅ Frontend responds and serves content properly - ✅ Environment variables are correctly resolved in both server and client contexts - ✅ No connection errors after implementing service dependencies - ✅ .env file loading works correctly in both build and runtime phases - ✅ Backend services work with and without .env files present ### Checklist 📋 #### For configuration changes: - [x] `.env.default` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) 🤖 Generated with [Claude Code](https://claude.ai/code) --------- Co-authored-by: Lluis Agusti <hi@llu.lu> Co-authored-by: Claude <noreply@anthropic.com> Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co> Co-authored-by: Claude <claude@users.noreply.github.com> Co-authored-by: Bentlybro <Github@bentlybro.com> |

af7d56612d | fix(logging): remove uvicorn log config to prevent startup deadlock (#10638)

## Problem After applying the CloudLoggingHandler fix to use BackgroundThreadTransport (#10634), scheduler pods entered a new deadlock during startup when uvicorn reconfigures logging. ## Root Cause When uvicorn starts with a log_config parameter, it calls `logging.config.dictConfig()` which: 1. Calls `_clearExistingHandlers()` 2. Which calls `logging.shutdown()` 3. Which tries to `flush()` all handlers including CloudLoggingHandler 4. CloudLoggingHandler with BackgroundThreadTransport tries to flush its queue 5. The background worker thread tries to acquire the logging module lock to check log levels 6. **Deadlock**: shutdown holds lock waiting for flush to complete, worker thread needs lock to continue ## Thread Dump Evidence From py-spy analysis of the stuck pod: - **Thread 21 (FastAPI)**: Stuck in `flush()` waiting for background thread to drain queue - **Thread 13 (google.cloud.logging.Worker)**: Waiting for logging lock in `isEnabledFor()` - **Thread 1 (MainThread)**: Waiting for logging lock in `getLogger()` during SQLAlchemy import - **Threads 30, 31 (Sentry)**: Also waiting for logging lock ## Solution Set `log_config=None` for all uvicorn servers. This prevents uvicorn from calling `dictConfig()` and avoids the deadlock entirely. **Trade-off**: Uvicorn will use its default logging configuration which may produce duplicate log entries (one from uvicorn, one from the app), but the application will start successfully without deadlocks. ## Changes - Set `log_config=None` in all uvicorn.Config() calls - Remove unused `generate_uvicorn_config` imports ## Testing - [x] Verified scheduler pods can start and become healthy - [x] Health checks respond properly - [x] No deadlocks during startup - [x] Application logs still appear (though may be duplicated) ## Related Issues - Fixes the startup deadlock introduced after #10634 |
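
A minimal sketch of the change described above, assuming a plain FastAPI app; the host, port, and app wiring are illustrative rather than the platform's actual service setup:

```python
# Passing log_config=None keeps uvicorn from calling logging.config.dictConfig()
# at startup, so logging.shutdown()/flush() is never triggered and the
# CloudLoggingHandler deadlock described above cannot occur.
import uvicorn
from fastapi import FastAPI

app = FastAPI()

config = uvicorn.Config(app, host="0.0.0.0", port=8006, log_config=None)
server = uvicorn.Server(config)

if __name__ == "__main__":
    server.run()
```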

0dd30e275c | docs(blocks): Add AI/ML API integration guide and update LLM headers (#10402)

### Summary Added a new documentation page and images for integrating AI/ML API with AutoGPT, including step-by-step instructions. Updated LLM block to send additional headers for requests to aimlapi.com. Improved provider listing in index.md and added the new guide to mkdocs navigation. Builds on and extends the integration work from https://github.com/Significant-Gravitas/AutoGPT/pull/9996 ### Changes 🏗️ This PR introduces official support and documentation for using **AI/ML API** with the **AutoGPT platform**: * 📄 **Added a new documentation page** `platform/aimlapi.md` with a detailed step-by-step integration guide. * 🖼️ **Added 12+ reference images** to `docs/content/imgs/aimlapi/` for clear visual walkthrough. * 🧠 **Updated the LLM block** (`llm.py`) to send extra headers (`X-Project`, `X-Title`, `Referer`) in requests to `aimlapi.com` for analytics and source attribution. * 📚 **Improved provider listing** in `index.md` — added section about AI/ML API models and benefits. * 🧭 **Added the new guide to the mkdocs navigation** via `mkdocs.yml`. --- ### Checklist 📋 #### For code changes: * [x] I have clearly listed my changes in the PR description * [x] I have made a test plan * [x] I have tested my changes according to the test plan: * [x] Successfully authenticated against `api.aimlapi.com` * [x] Verified requests use correct headers * [x] Confirmed `AI Text Generator` block returns completions for all supported models * [x] End-to-end tested: created, saved, and ran agent with AI/ML API successfully * [x] Verified outputs render correctly in the Output panel No breaking changes introduced. Let me know if you'd like this guide cross-referenced from other onboarding pages. ✅ --------- Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co> |
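
A hedged sketch of attaching the extra headers to requests against aimlapi.com using an OpenAI-compatible client; the header names come from the PR, while the header values, environment variable name, base URL path, and model are assumptions:

```python
# Send the attribution headers (X-Project, X-Title, Referer) with every request.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["AIML_API_KEY"],      # hypothetical env var name
    base_url="https://api.aimlapi.com/v1",   # assumed OpenAI-compatible path
    default_headers={
        "X-Project": "AutoGPT",              # illustrative values
        "X-Title": "AutoGPT Platform",
        "Referer": "https://agpt.co",
    },
)

response = client.chat.completions.create(
    model="gpt-4o",                          # any model exposed by AI/ML API
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```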

a135f09336 | feat(frontend): update settings form (#10628)

## Changes 🏗️ <img width="800" height="687" alt="Screenshot 2025-08-12 at 15 52 41" src="https://github.com/user-attachments/assets/0d2d70b8-e727-428b-915e-d4c108ab7245" /> <img width="800" height="772" alt="Screenshot 2025-08-12 at 15 52 53" src="https://github.com/user-attachments/assets/b9790616-3754-455e-b8f6-58cd7f6b5a18" /> Update the Account Settings ( `profile/settings` ) form so that: - it uses the new Design System components - it is split into 2 forms ( update email & notifications ) - the change password inputs have been removed instead we link to the `/reset-password` page - uses a normal API route and client query to update the email This might fix as well an error we are seeing when updating email preferences on dev. My guess is it is failing because previously it was using a server action + supabase and it didn't have access to the cookies auth 🍪 ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Navigate to `/profile/settings` - [x] Can update the email - [x] Can change notification preferences - [x] New E2E tests pass on the CI and make sense ### For configuration changes: None |

2d436caa84 | fix(backend/AM): Fix AutoMod api key issue (#10635)

### Changes 🏗️

Calls to the moderation API now strip whitespace from the API key before including it in the 'X-API-Key' header, preventing authentication issues due to accidental leading or trailing spaces.

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Set up and run the platform with moderation enabled and confirm it works

Tag: autogpt-platform-beta-v0.6.22
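
A minimal sketch of the fix, assuming a `requests`-based call; the `/moderate` path and function shape are illustrative and not the AutoMod client's actual code:

```python
# Strip the configured key before putting it in the X-API-Key header, so
# accidental leading/trailing spaces or newlines do not break authentication.
import requests


def moderate(api_url: str, api_key: str, payload: dict, timeout: int = 30) -> dict:
    headers = {"X-API-Key": api_key.strip()}
    response = requests.post(
        f"{api_url}/moderate", json=payload, headers=headers, timeout=timeout
    )
    response.raise_for_status()
    return response.json()
```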

34dd218a91 | fix(backend): resolve CloudLoggingHandler deadlock causing scheduler hangs (#10634)

## 🚨 Critical Deadlock Fix: Scheduler Pod Stuck for 3+ Hours This PR resolves a critical production deadlock where scheduler pods become completely unresponsive due to a CloudLoggingHandler locking issue. ## 📋 Incident Summary **Affected Pod**: `autogpt-scheduler-server-6d7b89c4f9-mqp59` - **Duration**: Stuck for 3+ hours (still ongoing) - **Symptoms**: Health checks failing, appears completely dead - **Impact**: No new job executions, system appears down - **Root Cause**: CloudLoggingHandler deadlock with gRPC timeout failure ## 🔍 Detailed Incident Analysis ### The Deadlock Chain 1. **Thread 58 (APScheduler Worker)**: - Completed job successfully - Called `logger.info("Job executed successfully")` - CloudLoggingHandler acquired lock at `logging/__init__.py:976` - Made gRPC call to Google Cloud Logging - **Got stuck in TCP black hole for 3+ hours** 2. **Thread 26 (FastAPI Health Check)**: - Tried to log health check response - **Blocked at `logging/__init__.py:927` waiting for same lock** - Health check never completes → Kubernetes thinks pod is dead 3. **All Other Threads**: Similarly blocked on any logging attempt ### Why gRPC Timeout Failed The gRPC call had a 60-second timeout but has been stuck for 10,775+ seconds because: - **TCP Black Hole**: Network packets silently dropped (firewall/load balancer timeout) - **No Socket Timeout**: Python default is `None` (infinite wait) - **TCP Keepalive Disabled**: Dead connections hang forever - **Kernel-Level Block**: gRPC timeout can't interrupt `socket.recv()` syscall ### Evidence from Thread Dump ```python Thread 58: "ThreadPoolExecutor-0_1" _blocking (grpc/_channel.py:1162) timeout: 60 # ← Should have timed out deadline: 1755061203 # ← Expired 3 hours ago\! emit (logging_v2/handlers/handlers.py:225) # ← HOLDING LOCK handle (logging/__init__.py:978) # ← After acquire() Thread 26: "Thread-4 (__start_fastapi)" acquire (logging/__init__.py:927) # ← BLOCKED waiting for lock self: <CloudLoggingHandler at 0x7a657280d550> # ← Same instance\! ``` ## 🔧 The Fix ### Primary Solution Replace **blocking** `SyncTransport` with **non-blocking** `BackgroundThreadTransport`: ```python # BEFORE (Dangerous - blocks while holding lock) transport=SyncTransport, # AFTER (Safe - queues and returns immediately) transport=BackgroundThreadTransport, ``` ### Why BackgroundThreadTransport Solves It 1. **Non-blocking**: `emit()` returns immediately after queuing 2. **Lock Released**: No network I/O while holding the logging lock 3. **Isolated Failures**: Background thread hangs don't affect main app 4. 
**Better Performance**: Built-in batching and retry logic ### Additional Hardening - **Socket Timeout**: 30-second global timeout prevents infinite hangs - **gRPC Keepalive**: Detects and closes dead connections faster - **Comprehensive Logging**: Comments explain the deadlock prevention ## 🧪 Technical Validation ### Before (SyncTransport) ``` log.info("message") ↓ acquire_lock() ✅ ↓ gRPC_call() ❌ HANGS FOR HOURS ↓ [DEADLOCK - lock never released] ``` ### After (BackgroundThreadTransport) ``` log.info("message") ↓ acquire_lock() ✅ ↓ queue_message() ✅ Instant ↓ release_lock() ✅ Immediate ↓ [Background thread handles gRPC separately] ``` ## 🚀 Impact & Benefits **Immediate Impact**: - ✅ Prevents CloudLoggingHandler deadlocks - ✅ Health checks respond normally - ✅ System remains observable during network issues - ✅ Scheduler can continue processing jobs **Long-term Benefits**: - 📈 Better logging performance (batching + async) - 🛡️ Resilient to network partitions and timeouts - 🔍 Maintained observability during failures - ⚡ No blocking I/O on critical application threads ## 📊 Files Changed - `autogpt_libs/autogpt_libs/logging/config.py`: Transport change + socket hardening ## 🧪 Test Plan - [x] Validate BackgroundThreadTransport import works - [x] Confirm socket timeout configuration applies - [x] Verify gRPC keepalive environment variables set - [ ] Deploy to staging and verify no deadlocks under load - [ ] Monitor Cloud Logging delivery remains reliable ## 🔍 Monitoring After Deploy - Watch for any logging delivery delays (expected: minimal) - Confirm health checks respond consistently - Verify no more scheduler "hanging" incidents - Monitor gRPC connection patterns in Cloud Logging metrics ## 🎯 Risk Assessment - **Risk**: Very Low - BackgroundThreadTransport is the recommended approach - **Rollback**: Simple revert if any issues observed - **Testing**: Extensively used in production Google Cloud services --- **This fixes a critical production stability issue affecting scheduler reliability and system observability.** 🤖 Generated with [Claude Code](https://claude.ai/code) --------- Co-authored-by: Claude <noreply@anthropic.com> |
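
A minimal sketch of the transport swap, assuming the `google-cloud-logging` package; the handler name and logger wiring are illustrative:

```python
# CloudLoggingHandler with BackgroundThreadTransport: emit() only queues the
# record and releases the logging lock immediately; the gRPC I/O happens on a
# separate worker thread, so a stuck network call can no longer hold the lock.
import logging

import google.cloud.logging
from google.cloud.logging_v2.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers.transports import BackgroundThreadTransport

client = google.cloud.logging.Client()
handler = CloudLoggingHandler(
    client,
    name="autogpt-backend",
    transport=BackgroundThreadTransport,
)

logging.getLogger().addHandler(handler)
logging.getLogger(__name__).info("Job executed successfully")  # no blocking network I/O
```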

41f500790f | fix(marketplace): loading state (#10629)

## Changes 🏗️

Use a skeleton for the marketplace loading state, visually representing how the page will look. It looks a bit more stylish than the previous `Loading...` text.

### Before

<img width="800" height="774" alt="Screenshot 2025-08-12 at 16 01 22" src="https://github.com/user-attachments/assets/29e44a1a-2089-468c-a253-3a6b763ada5a" />

### After

<img width="800" height="761" alt="Screenshot 2025-08-12 at 16 01 01" src="https://github.com/user-attachments/assets/5ad362ae-df1d-4a1b-90ae-9349a81a4d75" />

## Checklist 📋

### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Marketplace loading state looks good across screen sizes

### For configuration changes: None

793de77e76 | ref(backend): update Gmail blocks to unify architecture and improve email handling (#10588)

## Summary
This PR refactors all Gmail blocks to share a common base class
(`GmailBase`) and adds several improvements to email handling, including
proper HTML content support, async API calls, and fixing the
78-character line wrapping issue for plain text emails.
## Changes
### Architecture Improvements
- **Unified base class**: Created `GmailBase` abstract class that
consolidates common functionality across all Gmail blocks
- **Async API calls**: Converted all Gmail API calls to use
`asyncio.to_thread` for better performance and non-blocking operations
- **Code deduplication**: Moved shared methods like `_build_service`,
`_get_email_body`, `_get_attachments`, and `_get_label_id` to the base
class
### Email Content Handling
- **Smart content type detection**: Added automatic detection of HTML vs
plain text content
- **Fix 78-char line wrapping**: Plain text emails now use a no-wrap
policy (`max_line_length=0`) to prevent Gmail's default 78-character
hard line wrapping
- **Content type parameter**: Added optional `content_type` field to
Send, Draft, Reply, and Forward blocks allowing manual override ("auto",
"plain", or "html")
- **Proper MIME handling**: Created `_make_mime_text` helper function to
properly configure MIME types and policies
### New Features
- **Gmail Forward Block**: Added new `GmailForwardBlock` for forwarding
emails with proper thread preservation
- **Reply improvements**: Reply block now properly reads the original
email content when replying
### Bug Fixes
- Fixed issue where reply block wasn't reading the email it was replying
to
- Fixed attachment handling in multipart messages
- Improved error handling for base64 decoding
## Technical Details
The refactoring introduces:
- `NO_WRAP_POLICY = SMTP.clone(max_line_length=0)` to prevent line
wrapping in plain text emails (see the sketch after this list)
- UTF-8 charset support for proper Unicode/emoji handling
- Consistent async patterns using `asyncio.to_thread` for all Gmail API
calls
- Proper HTML to text conversion using html2text library when available
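
A minimal sketch of the no-wrap MIME construction listed above; `NO_WRAP_POLICY` matches the PR, while the helper body and the auto-detection heuristic are simplified assumptions rather than the actual `GmailBase` code:

```python
# Build MIME text without Gmail's default 78-character hard wrapping.
import base64
from email.mime.text import MIMEText
from email.policy import SMTP

NO_WRAP_POLICY = SMTP.clone(max_line_length=0)  # disable line wrapping


def _make_mime_text(body: str, content_type: str = "auto") -> MIMEText:
    if content_type == "auto":
        # Crude heuristic: treat anything that looks like markup as HTML.
        content_type = "html" if "<html" in body.lower() or "</" in body else "plain"
    return MIMEText(body, content_type, _charset="utf-8", policy=NO_WRAP_POLICY)


message = _make_mime_text(
    "A long plain-text line that Gmail would otherwise hard-wrap at 78 characters."
)
raw = base64.urlsafe_b64encode(message.as_bytes()).decode()  # payload for messages().send()
```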
## Testing
All existing tests pass. The changes maintain backward compatibility
while adding new optional parameters.
## Breaking Changes
None - all changes are backward compatible. The new `content_type`
parameter is optional and defaults to "auto" detection.
---------
Co-authored-by: Claude <claude@users.noreply.github.com>

a2059c6023 | refactor(backend): consolidate LaunchDarkly feature flag management (#10632)

This PR consolidates LaunchDarkly feature flag management by moving it from autogpt_libs to backend and fixing several issues with boolean handling and configuration management. ### Changes 🏗️ **Code Structure:** - Move LaunchDarkly client from `autogpt_libs/feature_flag` to `backend/util/feature_flag.py` - Delete redundant `config.py` file and merge LaunchDarkly settings into `backend/util/settings.py` - Update all imports throughout the codebase to use `backend.util.feature_flag` - Move test file to `backend/util/feature_flag_test.py` **Bug Fixes:** - Fix `is_feature_enabled` function to properly return boolean values instead of arbitrary objects that were always evaluating to `True` - Add proper async/await handling for all `is_feature_enabled` calls - Add better error handling when LaunchDarkly client is not initialized **Performance & Architecture:** - Load Settings at module level instead of creating new instances inside functions - Remove unnecessary `sdk_key` parameter from `initialize_launchdarkly()` function - Simplify initialization by using centralized settings management **Configuration:** - Add `launch_darkly_sdk_key` field to `Secrets` class in settings.py with proper validation alias - Remove environment variable fallback in favor of centralized settings ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] All existing feature flag tests pass (6/6 tests passing) - [x] LaunchDarkly initialization works correctly with settings - [x] Boolean feature flags return correct values instead of objects - [x] Non-boolean flag values are properly handled with warnings - [x] Async/await calls work correctly in AutoMod and activity status generator - [x] Code formatting and imports are correct #### For configuration changes: - [x] `.env.example` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) **Configuration Changes:** - LaunchDarkly SDK key is now managed through the centralized Settings system instead of a separate config file - Uses existing `LAUNCH_DARKLY_SDK_KEY` environment variable (no changes needed to env files) 🤖 Generated with [Claude Code](https://claude.ai/code) --------- Co-authored-by: Claude <noreply@anthropic.com> |
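
A hedged sketch of the boolean-coercion behaviour described above, assuming the `launchdarkly-server-sdk`; the function body is illustrative, not the actual `backend.util.feature_flag` implementation:

```python
# Return a real bool from a LaunchDarkly flag evaluation, falling back to the
# default when the client is not initialized or the flag value is non-boolean.
import logging

import ldclient
from ldclient import Context

logger = logging.getLogger(__name__)


async def is_feature_enabled(flag_key: str, user_id: str, default: bool = False) -> bool:
    client = ldclient.get()  # assumes ldclient.set_config(...) ran at startup
    if not client.is_initialized():
        logger.warning("LaunchDarkly not initialized; returning default for %s", flag_key)
        return default
    value = client.variation(flag_key, Context.builder(user_id).build(), default)
    if not isinstance(value, bool):
        logger.warning("Flag %s returned non-boolean value %r", flag_key, value)
        return bool(value)
    return value
```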

b9c3920227 | fix(backend): Support dynamic values_#_* fields in CreateDictionaryBlock (#10587)

## Summary Fixed Smart Decision Maker's function signature generation to properly handle dynamic fields (e.g., `values_#_*`, `items_$_*`) when connecting to any block as a tool. ### Context When Smart Decision Maker calls other blocks as tools, it needs to generate OpenAI-compatible function signatures. Previously, when connected to blocks via dynamic fields (which get merged by the executor at runtime), the signature generation would fail because blocks don't inherently know about these dynamic field patterns. ### Changes 🏗️ - **Modified `SmartDecisionMakerBlock._create_block_function_signature()`** to detect and handle dynamic fields: - Detects fields containing `_#_` (dict merge), `_$_` (list merge), or `_@_` (object merge) - Provides generic string schema for dynamic fields (OpenAI API compatible) - Falls back gracefully for unknown fields - **Added comprehensive tests** for dynamic field handling with both dictionary and list patterns - **No changes needed to individual blocks** - this solution works universally ### Why This Approach Instead of modifying every block to handle dynamic fields (original PR approach), we handle it centrally in Smart Decision Maker where the function signatures are generated. This is cleaner and more maintainable. ### Test Plan 📋 - [x] Created test cases for Smart Decision Maker generating function signatures with dynamic dict fields (`_#_`) - [x] Created test cases for Smart Decision Maker generating function signatures with dynamic list fields (`_$_`) - [x] Verified Smart Decision Maker can successfully call blocks like CreateDictionaryBlock via dynamic connections - [x] All existing Smart Decision Maker tests pass - [x] Linting and formatting pass --------- Co-authored-by: Claude <claude@users.noreply.github.com> |
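
A minimal sketch of the dynamic-field detection described above; the marker strings come from the PR, while the function name and fallback schema are illustrative rather than the actual Smart Decision Maker code:

```python
# Fields carrying merge markers are merged by the executor at runtime, so the
# generated tool signature exposes them as plain strings for OpenAI.
DYNAMIC_FIELD_MARKERS = ("_#_", "_$_", "_@_")  # dict, list, and object merge patterns


def field_schema_for(field_name: str, block_schema: dict) -> dict:
    if any(marker in field_name for marker in DYNAMIC_FIELD_MARKERS):
        return {"type": "string", "description": f"Dynamic value for {field_name}"}
    # Fall back to the block's own schema, or a generic string if unknown.
    return block_schema.get("properties", {}).get(field_name, {"type": "string"})


print(field_schema_for("values_#_name", {"properties": {}}))  # {'type': 'string', ...}
```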

abba10b649 | feat(block): Remove parallel tool-call system prompting (#10627)

We force this note onto the end of the SDM block's system prompt: "Only provide EXACTLY one function call; multiple tool calls are strictly prohibited." GPT-5 interprets this as "only call one tool per task," which results in many agent runs that use a tool only once (i.e., useless, low-effort answers).

### Changes 🏗️

Remove parallel tool-call system prompting entirely.

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Automated tests

6c34790b42 | Revert "feat(platform): add py-spy profiling support"

This reverts commit c168277b1d.

c168277b1d | feat(platform): add py-spy profiling support

Add py-spy for production-safe Python profiling across all backend services:

- Add py-spy dependency to pyproject.toml
- Grant SYS_PTRACE capability to Docker services for profiling access
- Enable low-overhead performance monitoring in development and production

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

89eb5d1189 | feat(feature-flag): add LaunchDarkly user context and metadata support (#10595)

## Summary Enable LaunchDarkly feature flags to use rich user context and metadata for advanced targeting, including user segments, account age, email domains, and custom attributes. This unlocks LaunchDarkly's powerful targeting capabilities beyond simple user ID checks. ## Problem LaunchDarkly feature flags were only receiving basic user IDs, preventing the use of: - **Segment-based targeting** (e.g., "employees", "beta users", "new accounts") - **Contextual rules** (e.g., account age, email domain, custom metadata) - **Advanced LaunchDarkly features** like percentage rollouts by user attributes This limited feature flag flexibility and required manual user ID management for targeting. ## Solution ### 🎯 **LaunchDarkly Context Enhancement** - **Rich user context**: Send user metadata, segments, account age, email domain to LaunchDarkly - **Automatic segmentation**: Users automatically categorized as "employee", "new_user", "established_user" etc. - **Custom metadata support**: Any user metadata becomes available for LaunchDarkly targeting - **24-hour caching**: Efficient user context retrieval with TTL cache to reduce database calls ### 📊 **User Context Data** ```python # Before: Only user ID context = Context.builder("user-123").build() # After: Full context with targeting data context = { "email": "user@agpt.co", "created_at": "2023-01-15T10:00:00Z", "segments": ["employee", "established_user"], "email_domain": "agpt.co", "account_age_days": 365, "custom_role": "admin" } ``` ### 🏗️ **Required Infrastructure Changes** To support proper LaunchDarkly serialization, we needed to implement clean application models: #### **Application-Layer User Model** - Created snake_case User model (`created_at`, `email_verified`) for proper JSON serialization - LaunchDarkly expects consistent field naming - camelCase Prisma objects caused validation errors - Added `User.from_db()` converter to safely transform database objects #### **HTTP Client Reliability** - Fixed HTTP 4xx retry issue that was causing unnecessary load - Added layer validation to prevent database objects leaking to external services #### **Type Safety** - Eliminated `Any` types and defensive coding patterns - Proper typing enables better IDE support and catches errors early ## Technical Implementation ### **Core LaunchDarkly Enhancement** ```python # autogpt_libs/feature_flag/client.py @async_ttl_cache(maxsize=1000, ttl_seconds=86400) # 24h cache async def _fetch_user_context_data(user_id: str) -> dict[str, Any]: user = await get_user_by_id(user_id) return _build_launchdarkly_context(user) def _build_launchdarkly_context(user: User) -> dict[str, Any]: return { "email": user.email, "created_at": user.created_at.isoformat(), # snake_case for serialization "segments": determine_user_segments(user), "account_age_days": calculate_account_age(user), # ... 
more context data } ``` ### **User Segmentation Logic** - **Role-based**: `admin`, `user`, `system` segments - **Domain-based**: `employee` for @agpt.co emails - **Account age**: `new_user` (<7 days), `recent_user` (7-30 days), `established_user` (>30 days) - **Custom metadata**: Any user metadata becomes available for targeting ### **Infrastructure Updates** - `backend/data/model.py`: Application User model with proper serialization - `backend/util/service.py`: HTTP client improvements and layer validation - Multiple files: Migration to use application models for consistency ## LaunchDarkly Usage Examples With this enhancement, you can now create LaunchDarkly rules like: ```yaml # Target employees only - variation: true targets: - values: ["employee"] contextKind: "user" attribute: "segments" # Target new users for gradual rollout - variation: true rollout: variations: - variation: true weight: 25000 # 25% of new users contextKind: "user" bucketBy: "segments" filters: - attribute: "segments" op: "contains" values: ["new_user"] ``` ## Performance & Caching - **24-hour TTL cache**: Dramatically reduces database calls for user context - **Graceful fallbacks**: Simple user ID context if database unavailable - **Efficient caching**: 1000 entry LRU cache with automatic TTL expiration ## Testing - [x] LaunchDarkly context includes all expected user attributes - [x] Segmentation logic correctly categorizes users - [x] 24-hour cache reduces database load - [x] Fallback to simple context works when database unavailable - [x] All existing feature flag functionality preserved - [x] HTTP retry improvements work correctly ## Breaking Changes ✅ **No external API changes** - all existing feature flag usage continues to work ⚠️ **Internal changes only**: - `get_user_by_id()` returns application User model instead of Prisma model - Test utilities need to import User from `backend.data.model` ## Impact 🎯 **Product Impact**: - **Advanced targeting**: Product teams can now use sophisticated LaunchDarkly rules - **Better user experience**: Gradual rollouts, A/B testing, and segment-based features - **Operational efficiency**: Reduced need for manual user ID management 🚀 **Performance Impact**: - **Reduced database load**: 24-hour caching minimizes repeated user context queries - **Improved reliability**: Fixed HTTP retry inefficiencies - **Better monitoring**: Cleaner logs without 4xx retry noise --- **Primary goal**: Enable rich LaunchDarkly targeting with user context and segments **Infrastructure changes**: Required for proper serialization and reliability 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com> --------- Co-authored-by: Claude <noreply@anthropic.com> |
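
A hedged sketch of building the richer context with the `launchdarkly-server-sdk`; the attribute names follow the PR, while the user fields and segmentation thresholds are simplified assumptions:

```python
# Build a LaunchDarkly Context carrying email, domain, account age, and segments.
from datetime import datetime, timezone

from ldclient import Context


def build_launchdarkly_context(user_id: str, email: str, created_at: datetime) -> Context:
    # created_at must be timezone-aware for the subtraction below.
    age_days = (datetime.now(timezone.utc) - created_at).days
    segments = ["employee"] if email.endswith("@agpt.co") else []
    if age_days < 7:
        segments.append("new_user")
    elif age_days > 30:
        segments.append("established_user")
    else:
        segments.append("recent_user")
    return (
        Context.builder(user_id)
        .set("email", email)
        .set("email_domain", email.split("@")[-1])
        .set("created_at", created_at.isoformat())
        .set("account_age_days", age_days)
        .set("segments", segments)
        .build()
    )
```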

e13e0d4376 | test(frontend): add e2e test for profile form page (#10596)

This PR adds end-to-end tests for the profile form page. These tests cover:

- Redirects to the login page when the user is not authenticated.
- Can save profile changes successfully.
- Can cancel profile changes (skipped because we need to fix the form for this test).

### Changes 🏗️

- Added test IDs inside the ProfileInfoForm.
- Created a page object for the profile form page.
- Added a test for this page in `profile-form.spec.ts`.

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All tests pass locally

f4a732373b | fix(frontend): remove state limits from agent activity dropdown

28d85ad61c | feat(backend/AM): Integrate AutoMod content moderation (#10539)

Copy of [feat(backend/AM): Integrate AutoMod content moderation - By Bentlybro - PR #10490](https://github.com/Significant-Gravitas/AutoGPT/pull/10490) cos i messed it up 🤦 Adds AutoMod input and output moderation to the execution flow. Introduces a new AutoMod manager and models, updates settings for moderation configuration, and modifies execution result handling to support moderation-cleared data. Moderation failures now clear sensitive data and mark executions as failed. <img width="921" height="816" alt="image" src="https://github.com/user-attachments/assets/65c0fee8-d652-42bc-9553-ff507bc067c5" /> ### Changes 🏗️ I have made some small changes to ``autogpt_platform\backend\backend\executor\manager.py`` to send the needed into to the AutoMod system which collects the data, combines and makes the api call to AM and based on its reply lets it run or not! I also had to make small changes to ``autogpt_platform\backend\backend\data\execution.py`` to add checks that allow me to clear the content from the blocks if it was flagged I am working on finalizing the AM repo then that will be public To note: we will want to set this up behind launch darkly first for testing on the team before we roll it out any more ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Setup and run the platform with ``automod_enabled`` set to False and it works normally - [x] Setup and run the platform with ``automod_enabled`` set to True, set the AM URL and API Key and test it runs safe blocks normally - [x] Test AM with content that would trigger it to flag and watch it stop and clear all the blocks outputs Message @Bentlybro for the URL and an API key to AM for local testing! ## Changes made to Settings.py I have added a few new options to the settings.py for AutoMod Config! ``` # AutoMod configuration automod_enabled: bool = Field( default=False, description="Whether AutoMod content moderation is enabled", ) automod_api_url: str = Field( default="", description="AutoMod API base URL - Make sure it ends in /api", ) automod_timeout: int = Field( default=30, description="Timeout in seconds for AutoMod API requests", ) automod_retry_attempts: int = Field( default=3, description="Number of retry attempts for AutoMod API requests", ) automod_retry_delay: float = Field( default=1.0, description="Delay between retries for AutoMod API requests in seconds", ) automod_fail_open: bool = Field( default=False, description="If True, allow execution to continue if AutoMod fails", ) automod_moderate_inputs: bool = Field( default=True, description="Whether to moderate block inputs", ) automod_moderate_outputs: bool = Field( default=True, description="Whether to moderate block outputs", ) ``` and ``` automod_api_key: str = Field(default="", description="AutoMod API key") ``` --------- Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co> |

d4b5508ed1 | fix(backend): resolve scheduler deadlock and improve health checks (#10589)

## Summary Fix critical deadlock issue where scheduler pods would freeze completely and become unresponsive to health checks, causing pod restarts and stuck QUEUED executions. ## Root Cause Analysis The scheduler was using `BlockingScheduler` which blocked the main thread, and when concurrent jobs deadlocked in the async event loop, the entire process would freeze - unable to respond to health checks or process any requests. From crash analysis: - At 01:18:00, two jobs started executing concurrently - At 01:18:01.482, last successful health check - Process completely froze - no more logs until pod was killed at 01:18:46 - Execution `8174c459-c975-4308-bc01-331ba67f26ab` was created in DB but never published to RabbitMQ ## Changes Made ### Core Deadlock Fix - **Switch from BlockingScheduler to BackgroundScheduler**: Prevents main thread blocking, allows health checks to work even if scheduler jobs deadlock - **Make all health_check methods async**: Makes health checks completely independent of thread pools and more resilient to blocking operations ### Enhanced Monitoring & Debugging - **Add execution timing**: Track and log how long each graph execution takes to create and publish - **Warn on slow operations**: Alert when operations take >10 seconds, indicating resource contention - **Enhanced error logging**: Include elapsed time and exception types in error messages - **Better APScheduler event listeners**: Add listeners for missed jobs and max instances with actionable messages ### Files Modified - `backend/executor/scheduler.py` - Switch to BackgroundScheduler, async health_check, timing monitoring - `backend/util/service.py` - Base async health_check method - `backend/executor/database.py` - Async health_check override - `backend/notifications/notifications.py` - Async health_check override ## Test Plan - [x] All existing tests pass (914 passed, 1 failed unrelated connection issue) - [x] Scheduler starts correctly with BackgroundScheduler - [x] Health checks respond properly under load - [x] Enhanced logging provides visibility into execution timing ## Impact - **Prevents pod freezes**: Scheduler remains responsive even when jobs deadlock - **Better observability**: Clear visibility into slow operations and failures - **No dropped executions**: Jobs won't get stuck in QUEUED state due to process freezes - **Faster incident response**: Health checks and logs provide actionable debugging info 🤖 Generated with [Claude Code](https://claude.ai/code) Co-authored-by: Claude <noreply@anthropic.com> |
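
A minimal sketch of the two changes, assuming APScheduler and FastAPI; the health-check payload and app wiring are illustrative:

```python
# BackgroundScheduler keeps the main thread free, and the async health check
# never blocks on the scheduler's worker threads, so it responds even if a
# scheduled job is wedged.
from apscheduler.schedulers.background import BackgroundScheduler
from fastapi import FastAPI

scheduler = BackgroundScheduler()
scheduler.start()  # returns immediately; jobs run on worker threads

app = FastAPI()


@app.get("/health")
async def health_check() -> dict:
    return {"status": "ok", "scheduler_running": scheduler.running}
```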

0116866199 | feat(backend): add more discord blocks support (#10586)

# Enhanced Discord Integration Blocks Introduces new blocks for sending DMs, embeds, files, and replies in Discord, as well as blocks for retrieving user and channel information. Enhances existing message blocks with additional metadata fields and server/channel identification. Improves test coverage and input/output schemas for all Discord-related blocks. Co-Authored-By: Claude <claude@users.noreply.github.com> ## Why These Changes Are Needed 🎯 The existing Discord integration was limited to basic message sending and reading. Users needed more sophisticated Discord functionality to build comprehensive automation workflows: 1. **Limited messaging options** - Could only send plain text to channels, no DMs, embeds, or file attachments 2. **Poor graph connectivity** - Blocks didn't output IDs needed for chaining operations (e.g., couldn't reply to a message after sending it) 3. **No user management** - Couldn't get user information or send direct messages 4. **Type safety issues** - Discord.py's incomplete type hints caused linting errors 5. **No channel resolution** - Had to manually find channel IDs instead of using names ### Changes 🏗️ #### New Blocks Added - **SendDiscordDMBlock** - Send direct messages to users via their Discord ID - **SendDiscordEmbedBlock** - Create rich embedded messages with images, fields, and formatting - **SendDiscordFileBlock** - Upload any file type (images, PDFs, videos, etc.) using MediaFileType - **ReplyToDiscordMessageBlock** - Reply to specific messages in threads - **DiscordUserInfoBlock** - Retrieve user profile information (username, avatar, creation date, etc.) - **DiscordChannelInfoBlock** - Resolve channel names to IDs and get channel metadata #### Enhanced Existing Blocks - **ReadDiscordMessagesBlock**: - Now outputs: `message_id`, `channel_id`, `user_id` (previously missing all IDs) - Enables workflows like: read message → reply to it, or read message → DM the author - **SendDiscordMessageBlock**: - Now outputs: `message_id`, `channel_id` (previously had no outputs except status) - Enables tracking sent messages and replying to them later #### Technical Improvements - **MediaFileType Support**: SendDiscordFileBlock accepts data URIs, URLs, or local paths - **Defensive Programming**: Added runtime type checks for Discord.py's incomplete typing - **ID Passthrough**: DiscordUserInfoBlock passes through user_id for chaining - **Better Error Messages**: Clear feedback when operations fail (e.g., "Channel cannot receive messages") - **Channel Flexibility**: Blocks accept both channel names and IDs ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: #### Test Plan 🧪 - [x] **Import and initialization**: All 8 Discord blocks import and initialize without errors - [x] **Type checking**: `poetry run format` passes with no type errors - [x] **Interface connectivity**: Verified blocks can chain together: - [x] ReadDiscordMessages → ReplyToDiscordMessage (via message_id, channel_id) - [x] ReadDiscordMessages → SendDiscordDM (via user_id) - [x] SendDiscordMessage → ReplyToDiscordMessage (via message_id, channel_id) - [x] DiscordUserInfo → SendDiscordDM (via user_id passthrough) - [x] DiscordChannelInfo → SendDiscordEmbed/File (via channel_id) - [x] **MediaFileType handling**: SendDiscordFileBlock correctly processes: - [x] Data URIs (base64 encoded files) - [x] URLs (downloads from web) - [x] Local paths (from other blocks) - [x] 
**Defensive checks**: Verified error handling for: - [x] Non-text channels (forums, categories) - [x] Private/DM channels without guilds - [x] Missing attributes on channel objects - [x] **Mock test data**: All blocks have appropriate test inputs/outputs defined ## Example Workflows Now Possible 🚀 1. **Auto-reply to mentions**: Read messages → Check if bot mentioned → Reply in thread 2. **File distribution**: Generate report → Send as PDF to Discord channel 3. **User notifications**: Get user info → Check if online → Send DM with alert 4. **Cross-platform sync**: Receive email attachment → Forward to Discord channel 5. **Rich notifications**: Create embed with thumbnail → Add fields → Send to announcement channel ## Breaking Changes ⚠️ None - all changes are backward compatible. Existing workflows using SendDiscordMessageBlock and ReadDiscordMessagesBlock will continue to work, they just now have additional outputs available. ## Dependencies 📦 No new dependencies added. Uses existing: - `discord.py` (already in project) - `aiohttp` (already in project) - Backend utilities: `MediaFileType`, `store_media_file` (already in project) --------- Co-authored-by: Claude <claude@users.noreply.github.com> |
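
An illustrative sketch of the core of the new SendDiscordDMBlock, using `discord.py` directly; the surrounding block plumbing and credential handling are omitted:

```python
# Send a direct message to a user by ID and return the created message's ID,
# which downstream blocks can chain into replies or follow-ups.
import discord


async def send_dm(bot_token: str, user_id: int, message: str) -> int:
    client = discord.Client(intents=discord.Intents.default())
    await client.login(bot_token)
    try:
        user = await client.fetch_user(user_id)  # HTTP lookup, no shared guild needed
        sent = await user.send(message)          # returns the created discord.Message
        return sent.id
    finally:
        await client.close()


# Usage (for example): asyncio.run(send_dm("BOT_TOKEN", 123456789012345678, "Hello"))
```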

b68e490868 | fix(backend): correct LLM configurations (#10585)

## Summary

Corrects the context window for GPT5_CHAT, fixes the provider for CLAUDE_4_1_OPUS from 'openai' to 'anthropic', and adds a 600s timeout to the Anthropic client call in llm_call.

## Changes 🏗️

- Changed GPT-5's context limit to be smaller (16k)
- Changed Claude's provider from openai to anthropic
- Added a 600s timeout to the Anthropic client call

## Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test all models and confirm they work

Tag: autogpt-platform-beta-v0.6.21
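
A minimal sketch of the 600-second timeout, assuming the `anthropic` Python SDK; the model id and prompt are illustrative:

```python
# Per-request timeout (in seconds) on the Anthropic call prevents indefinite
# hangs on long generations.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-1",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
    timeout=600,
)
print(response.content[0].text)
```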

c1c5571fd5 | feat(blocks): Add 5 additional GitHub Integration blocks (#10561)

### Summary Implemented 5 additional GitHub blocks on top of the existing GitHub Integration to enhance CI/CD workflows and code review automation capabilities. [New Github Blocks_v41.json](https://github.com/user-attachments/files/21684665/New.Github.Blocks_v41.json) <img width="902" height="1073" alt="Screenshot 2025-08-08 at 15 09 40" src="https://github.com/user-attachments/assets/ebb6d33b-f3cd-4a56-acc6-56ace5a01274" /> ### Changes 🏗️ - Added **GitHub CI Results Block** (`github/ci.py`): Fetch and analyze CI/CD check runs, workflow statuses, and logs - Added **GitHub Review Blocks** (`github/reviews.py`): - Create PR reviews with comments - Approve/request changes on PRs - Add review comments to specific lines - Fetch existing reviews and comments - Dismiss stale reviews ### Related Tickets - SECRT-1423: GitHub CI Results Integration - SECRT-1426: GitHub PR Review Creation - SECRT-1425: GitHub Review Comments - SECRT-1424: GitHub Review Approval/Changes - SECRT-1427: GitHub Review Management ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Created and tested CI results block with various repositories - [x] Tested PR review creation with comments - [x] Verified review approval and change request functionality - [x] Tested adding line-specific review comments - [x] Confirmed fetching and dismissing reviews works correctly |
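
An illustrative sketch of the kind of request the CI Results block makes, using the GitHub REST check-runs endpoint; this is not the block's actual code, and the token handling is a placeholder:

```python
# List CI check runs for a commit via the GitHub REST API.
import os

import requests


def list_check_runs(owner: str, repo: str, ref: str) -> list[dict]:
    response = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits/{ref}/check-runs",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["check_runs"]  # each has name, status, conclusion, details_url


# for run in list_check_runs("Significant-Gravitas", "AutoGPT", "master"):
#     print(run["name"], run["status"], run["conclusion"])
```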
||
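For context on what the review blocks wrap, here is a hedged sketch of creating a PR review with a line comment through the GitHub REST API; the repository names, token handling, and helper name are assumptions, not the blocks' actual code.

```python
# Illustrative sketch of the underlying GitHub API call for creating a PR
# review with a line comment (not the actual block implementation).
import requests

def create_pr_review(token: str, owner: str, repo: str, pr_number: int) -> dict:
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/reviews"
    payload = {
        "body": "Automated review from an agent workflow",
        "event": "COMMENT",  # alternatively "APPROVE" or "REQUEST_CHANGES"
        "comments": [
            {"path": "README.md", "line": 1, "body": "Example line comment"},
        ],
    }
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    response = requests.post(url, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()
```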
|
|
da16397882 |
feat(blocks): update exa websets implementation (#10521)
## Summary This PR fixes and enhances the Exa Websets implementation to resolve issues with the expand_items parameter and improve the overall block functionality. The changes address UI limitations with nested response objects while providing a more comprehensive and user-friendly interface for creating and managing Exa websets. [Websets_v14.json](https://github.com/user-attachments/files/21596313/Websets_v14.json) <img width="1335" height="949" alt="Screenshot 2025-08-05 at 11 45 07" src="https://github.com/user-attachments/assets/3a9b3da0-3950-4388-96b2-e5dfa9df9b67" /> **Why these changes are necessary:** 1. **UI Compatibility**: The current implementation returns deeply nested objects that cause the UI to crash. This PR flattens the input parameters and returns simplified response objects to work around these UI limitations. 2. **Expand Items Issue**: The `expand_items` toggle in the GetWebset block was causing failures. This parameter has been removed as it's not essential for the basic functionality. 3. **Missing SDK Integration**: The previous implementation used raw HTTP requests instead of the official Exa SDK, making it harder to maintain and more prone to errors. 4. **Limited Functionality**: The original implementation lacked support for many Exa API features like imports, enrichments, and scope configuration. ### Changes 🏗️ <!-- Concisely describe all of the changes made in this pull request: --> 1. **Added Pydantic models** (`model.py`): - Created comprehensive type definitions for all Exa webset objects - Added proper enums for status values and types - Structured models to match the Exa API response format 2. **Refactored websets.py**: - Replaced raw HTTP requests with the official `exa-py` SDK - Flattened nested input parameters to avoid UI issues with complex objects - Enhanced `ExaCreateWebsetBlock` with support for: - Search configuration with entity types, criteria, exclude/scope sources - Import functionality from existing sources - Enrichment configuration with multiple formats - Removed problematic `expand_items` parameter from `ExaGetWebsetBlock` - Updated response objects to use simplified `Webset` model that returns dicts for nested objects 3. **Updated webhook_blocks.py**: - Disabled the webhook block temporarily (`disabled=True`) as it needs further testing 4. 
**Added exa-py dependency**: - Added official Exa Python SDK to `pyproject.toml` and `poetry.lock` ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Created a new webset using the ExaCreateWebsetBlock with basic search parameters - [x] Verified the webset was created successfully in the Exa dashboard - [x] Listed websets using ExaListWebsetsBlock and confirmed pagination works - [x] Retrieved individual webset details using ExaGetWebsetBlock without expand_items - [x] Tested advanced features including entity types, criteria, and exclude sources - [x] Confirmed the UI no longer crashes when displaying webset responses - [x] Verified the Docker environment builds successfully with the new exa-py dependency #### For configuration changes: - [x] `.env.example` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) - Added `exa-py` dependency to backend requirements ### Additional Notes - The webhook functionality has been temporarily disabled pending further testing and UI improvements - The flattened parameter approach is a workaround for current UI limitations with nested objects - Future improvements could include re-enabling nested objects once the UI supports them better |
||
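A small, hedged sketch of the "simplified model" idea described above: nested webset objects are kept as plain dicts so the builder UI never has to render deep structures. Class names and fields are assumptions based on the description, not the actual `model.py` contents.

```python
# Sketch only: a flattened Pydantic model in the spirit of the change above.
from enum import Enum
from typing import Optional
from pydantic import BaseModel

class WebsetStatus(str, Enum):
    IDLE = "idle"
    RUNNING = "running"
    PAUSED = "paused"

class Webset(BaseModel):
    id: str
    status: WebsetStatus
    external_id: Optional[str] = None
    searches: list[dict] = []      # nested objects surfaced as plain dicts
    enrichments: list[dict] = []   # so the UI can display them without crashing
```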
|
|
098c12a961 |
feat(backend): Enable Ayrshare TikTok support (#10537)
## Summary - Enabled the TikTok posting block that was previously disabled - The block provides comprehensive TikTok-specific posting options ## Changes 🏗️ - Removed `disabled=True` from TikTok posting block to enable functionality - Added full TikTok API integration with all supported options ## Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Verified the TikTok block is now available in the block list --------- Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co> |
||
|
|
a28b2cf04f | fix(backend/scheduler): Reconfigure scheduling setting & Add more logging on execution scheduling logic autogpt-platform-beta-v0.6.20 | ||
|
|
de7b6b503f | fix(backend): Add timeout on stopping message consumer on manager | ||
|
|
5338ab5b80 | feat(backend): standardize service health checks with UnhealthyServiceError (#10584) | ||
|
|
e8f897ead1 |
feat(backend): standardize service health checks with UnhealthyServiceError (#10584)
This PR standardizes health check error handling across all services by introducing and using a consistent `UnhealthyServiceError` exception type. This improves monitoring, debugging, and service reliability by providing uniform error reporting when services are unhealthy. ### Changes 🏗️ - **Added `UnhealthyServiceError` class** in `backend/util/service.py`: - Custom exception for unhealthy service states - Includes service name in error message - Added to `EXCEPTION_MAPPING` for proper serialization - **Updated health checks across services** to use `UnhealthyServiceError`: - **Database service** (`backend/executor/database.py`): Replace `RuntimeError` with `UnhealthyServiceError` for database connection failures - **Scheduler service** (`backend/executor/scheduler.py`): Replace `RuntimeError` with `UnhealthyServiceError` for scheduler initialization and running state checks - **Notification service** (`backend/notifications/notifications.py`): - Replace `RuntimeError` with `UnhealthyServiceError` for RabbitMQ configuration issues - Added new `health_check()` method to verify RabbitMQ readiness - **REST API** (`backend/server/rest_api.py`): Replace `RuntimeError` with `UnhealthyServiceError` for database health checks - **Updated imports** across all affected files to include `UnhealthyServiceError` ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Verified health check endpoints return appropriate errors when services are unhealthy - [x] Confirmed services start up properly and health checks pass when healthy - [x] Tested error serialization through API responses - [x] Verified no breaking changes to existing functionality #### For configuration changes: - [x] `.env.example` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) No configuration changes were made in this PR - only code changes to improve error handling consistency. --------- Co-authored-by: Claude <noreply@anthropic.com> |
||
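Based only on the description above, a minimal sketch of what such an exception and its registration for serialization might look like; the exact constructor and mapping name are assumptions.

```python
# Hedged sketch, not the actual backend/util/service.py code.
class UnhealthyServiceError(Exception):
    """Raised when a service fails its health check."""

    def __init__(self, service_name: str, detail: str = ""):
        message = f"Service '{service_name}' is unhealthy"
        if detail:
            message += f": {detail}"
        super().__init__(message)

# Registering it in the existing exception mapping (name taken from the PR text)
# would let it be serialized across service boundaries, e.g.:
# EXCEPTION_MAPPING[UnhealthyServiceError.__name__] = UnhealthyServiceError
```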
|
|
fbe432919d | fix(backend/scheduler): Add more robust health check mechanism for scheduler service | ||
|
|
4f208d262e |
test(frontend): add e2e tests for agent dashboard page (#10572)
I have added e2e tests for the agent dashboard page. They cover: - dashboard page loads successfully - submit agent button works correctly - agent table displays data correctly - agent table actions work correctly I’ve also updated the e2e test script to include some static agent submissions, so I can verify that they load on the frontend. #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] All tests pass locally <img width="469" height="177" alt="Screenshot 2025-08-08 at 12 13 42 PM" src="https://github.com/user-attachments/assets/5e37afc3-c151-476a-84de-0a06f44a0722" /> |
||
|
|
ac9265c40d | Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev | ||
|
|
e60deba05f |
refactor(backend): separate notification service from scheduler (#10579)
## Summary - Create dedicated notification service entry point (backend.notification:main) - Remove NotificationManager from scheduler service for better separation of concerns - Update docker-compose to run notification service on dedicated port 8007 - Configure all services to communicate with separate notification service This refactoring separates the notification service from the scheduler service, allowing them to run as independent microservices instead of two processes in the same pod. ## Changes Made - **New notification service entry point**: Created `backend/backend/notification.py` with dedicated main function - **Updated pyproject.toml**: Added notification service entry point registration - **Modified scheduler service**: Removed NotificationManager from `backend/backend/scheduler.py` - **Docker Compose updates**: Added notification_server service on port 8007, updated NOTIFICATIONMANAGER_HOST references ## Test plan - [x] Verify notification service starts correctly with new entry point - [x] Confirm scheduler service runs without notification manager - [x] Test docker-compose configuration with separate services - [x] Validate service discovery between microservices - [x] Run linting and type checking 🤖 Generated with [Claude Code](https://claude.ai/code) |
||
|
|
3131e2e856 |
fix(backend): resolve unclosed HTTP client session errors (#10566)
## Summary This PR resolves unclosed HTTP client session errors that were occurring in the backend, particularly during file uploads and service-to-service communication. ### Key Changes - **Fixed GCS storage operations**: Convert `gcloud.aio.storage.Storage()` to use async context managers in `media.py` and `cloud_storage.py` - **Enhanced service client cleanup**: Added proper cleanup methods to `DynamicClient` class in `service.py` with `__del__` fallback and context manager support - **Application shutdown cleanup**: Added cloud storage handler cleanup to FastAPI application lifespan - **Updated test mocks**: Fixed test fixtures to properly mock async context manager behavior ### Root Cause Analysis The "Unclosed client session" and "Unclosed connector" errors were caused by: 1. **GCS storage clients** not using context managers (agent image uploads) 2. **Service HTTP clients** (`httpx.Client`/`AsyncClient`) not being properly cleaned up in the `DynamicClient` class ### Technical Details - All `gcloud.aio.storage.Storage()` instances now use `async with` context managers - `DynamicClient` class now has proper cleanup methods and context manager support - Application shutdown hook ensures cloud storage handlers are properly closed - Test fixtures updated to mock async context manager protocol ### Testing - ✅ All media upload tests pass - ✅ Service client tests pass - ✅ Linting and formatting pass ## Test plan - [ ] Deploy to staging environment - [ ] Monitor logs for "Unclosed client session" errors (should be eliminated) - [ ] Verify file upload functionality works correctly - [ ] Check service-to-service communication operates normally 🤖 Generated with [Claude Code](https://claude.ai/code) --------- Co-authored-by: Claude <noreply@anthropic.com> |
||
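A minimal sketch of the context-manager pattern described above for `gcloud.aio.storage`, with an assumed bucket and object path; not the project's actual upload code.

```python
# Sketch only: ensure the underlying aiohttp session is closed even on error.
from gcloud.aio.storage import Storage

async def upload_media(bucket: str, object_name: str, data: bytes) -> None:
    async with Storage() as client:                  # session closed on exit
        await client.upload(bucket, object_name, data)
```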
|
|
378d256b58 |
fix(backend): add graph validation before scheduling recurring jobs (#10568)
## Summary This PR addresses the recurring job validation failures by adding graph validation before scheduling jobs. Previously, validation errors only occurred at runtime during job execution, making it difficult to communicate errors to users for scheduled recurring jobs. ### Changes 🏗️ - **Extract validation logic**: Created `validate_and_construct_node_execution_input` wrapper function that centralizes graph fetching, credential mapping, and validation logic - **Add pre-scheduling validation**: Modified `add_graph_execution_schedule` to validate graphs before creating scheduled jobs - **Make construct function private**: Renamed `construct_node_execution_input` to `_construct_node_execution_input` to prevent direct usage and encourage use of the wrapper - **Reduce code duplication**: Eliminated duplicate validation logic between scheduler and execution paths - **Improve scheduler lifecycle management**: - Enhanced cleanup process with proper event loop shutdown sequence - Added graceful event loop thread termination with timeout - Fixed thread lifecycle management to prevent resource leaks - **Add helper utilities**: - Created `run_async` helper to reduce `asyncio.run_coroutine_threadsafe` boilerplate - Added `SCHEDULER_OPERATION_TIMEOUT_SECONDS` constant for consistent timeout handling across all scheduler operations ### Technical Details **Validation Flow:** The validation now happens in `add_graph_execution_schedule` before calling `scheduler.add_job()`, ensuring that: 1. Graph exists and is accessible to the user 2. All credentials are valid and available 3. Graph structure and node configurations are valid 4. Starting nodes are present and properly configured This uses the same validation logic as runtime execution, guaranteeing consistency. 
**Scheduler Lifecycle Improvements:** - **Proper cleanup sequence**: Event loop is stopped before thread termination - **Thread management**: Added global tracking of event loop thread for proper cleanup - **Timeout consistency**: All scheduler operations now use the same 300-second timeout - **Resource management**: Prevents potential memory leaks from unclosed event loops **Code Quality Improvements:** - **DRY principle**: `run_async` helper eliminates repeated `asyncio.run_coroutine_threadsafe` patterns - **Single source of truth**: All timeout values use `SCHEDULER_OPERATION_TIMEOUT_SECONDS` constant - **Cleaner abstractions**: Direct utility function calls instead of unnecessary wrapper methods ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Verified imports work correctly for both scheduler and utils modules - [x] Confirmed code passes all linting and type checking - [x] Validated that existing functionality remains intact - [x] Tested that validation logic is properly extracted and reused - [x] Verified scheduler cleanup process works correctly - [x] Confirmed thread lifecycle management improvements #### For configuration changes: - [x] `.env.example` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) *Note: No configuration changes were required for this fix.* ## Impact - **Prevents runtime failures**: Invalid graphs are caught before scheduling instead of failing silently during execution - **Better error communication**: Validation errors surface immediately when scheduling - **Improved resource management**: Proper event loop and thread cleanup prevents memory leaks - **Enhanced maintainability**: Single source of truth for validation logic and consistent timeout handling - **Reduced code duplication**: Eliminated ~30+ lines of duplicate code across validation and async execution patterns - **Better developer experience**: Cleaner code with helper functions and consistent patterns Resolves the TODO comment: "We need to communicate this error to the user somehow" in scheduler.py:107 Co-authored-by: Claude <noreply@anthropic.com> |
||
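Under the assumptions stated in the PR text (a dedicated scheduler event loop and a shared 300-second operation timeout), the `run_async` helper might look roughly like this; the exact signature is not taken from the codebase.

```python
# Hedged sketch of the boilerplate-reducing helper described above.
import asyncio
from typing import Any, Coroutine

SCHEDULER_OPERATION_TIMEOUT_SECONDS = 300  # shared timeout from the description

def run_async(coro: Coroutine[Any, Any, Any], loop: asyncio.AbstractEventLoop) -> Any:
    """Run a coroutine on the scheduler's event loop from another thread."""
    future = asyncio.run_coroutine_threadsafe(coro, loop)
    return future.result(timeout=SCHEDULER_OPERATION_TIMEOUT_SECONDS)
```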
|
|
3c52b75278 |
fix(frontend): marketplace top agents section (#10571)
Currently, the marketplace only shows the top 20 agents; we need to display all of them until "see more" call-to-action buttons are added. #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] All tests pass - [x] Verified manually as well |
||
|
|
40601f1616 |
fix(backend): Fix executor running RabbitMQ operations on closed/closing connection (#10578)
The RabbitMQ connection is unreliable (fixing it is a separate issue) and sometimes gets restarted. The scope of this PR is to prevent operations from breaking when they execute on a stale, broken connection (see the sketch below). ### Changes 🏗️ Fix the executor running RabbitMQ operations on a closed/closing connection ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Manually killed RabbitMQ and observed behavior while executing an agent |
||
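To illustrate the kind of guard this implies, here is a hedged sketch using aio-pika (the library and all names are assumptions, not the executor's actual code): check the cached connection before publishing and reconnect if it is closed or closing.

```python
# Sketch only: avoid publishing on a stale/closed RabbitMQ connection.
import aio_pika

async def publish_safely(url: str, queue: str, body: bytes, connection=None):
    if connection is None or connection.is_closed:
        connection = await aio_pika.connect_robust(url)  # re-establish before use
    channel = await connection.channel()
    await channel.default_exchange.publish(
        aio_pika.Message(body=body), routing_key=queue
    )
    return connection  # caller keeps the (possibly new) connection for reuse
```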
|
|
178c91d6b9 |
ref(backend): time/date blocks to support ISO 8601 and custom formats (#10576)
Introduces discriminated unions for time, date, and date-time format selection, supporting both strftime and ISO 8601 (with timezone and microsecond options). Updates schemas, test cases, and block logic to handle the new format types, improving flexibility and standards compliance for time and date outputs. <!-- Clearly explain the need for these changes: --> ### Why these changes are needed Users need to output timestamps in ISO 8601/RFC 3339 format for API integrations and standardized data exchange. The previous implementation only supported strftime formatting, which made it difficult to generate properly formatted timestamps with timezone information. This change enables: - **Standards compliance**: ISO 8601 and RFC 3339 compliant timestamps - **Timezone support**: 38 timezone options covering all UTC offsets globally - **API compatibility**: Many APIs require RFC 3339 timestamps (e.g., "2011-06-03T10:00:00-07:00") - **Backward compatibility**: Existing workflows continue to work with default strftime format ### Changes 🏗️ <!-- Concisely describe all of the changes made in this pull request: --> - **Added discriminated union format types** for all time/date blocks: - `GetCurrentTimeBlock`: Now supports `TimeStrftimeFormat` and `TimeISO8601Format` - `GetCurrentDateBlock`: Now supports `DateStrftimeFormat` and `DateISO8601Format` - `GetCurrentDateAndTimeBlock`: Now supports `StrftimeFormat` and `ISO8601Format` - **Implemented shared timezone support**: - Created `TimezoneLiteral` type with 38 timezone options (all UTC offsets) - Supports fractional offsets (e.g., India UTC+05:30, Nepal UTC+05:45) - Deduplicated timezone lists across all format classes - **Added ISO 8601 format features**: - Timezone-aware timestamps with proper offset formatting - Optional microseconds inclusion - RFC 3339 compliance (subset of ISO 8601 with mandatory timezone) - **Updated test cases** for all three blocks to verify: - Default behavior unchanged (backward compatibility) - Custom strftime formats still work - ISO 8601 format produces correct output ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Verified backward compatibility - default strftime format unchanged - [x] Tested ISO 8601 format with UTC timezone - [x] Tested ISO 8601 format with various timezones (India, New York, etc.) - [x] Tested microseconds option for ISO formats - [x] Verified all existing tests pass for GetCurrentTimeBlock - [x] Verified all existing tests pass for GetCurrentDateBlock - [x] Verified all existing tests pass for GetCurrentDateAndTimeBlock - [x] Manually tested each block with different format configurations - [x] Confirmed RFC 3339 compliance for timestamps with mandatory timezone --------- Co-authored-by: Claude <claude@users.noreply.github.com> |
||
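A hedged sketch of the discriminated-union idea for format selection; the class names, the numeric UTC-offset field (the real change uses a 38-entry TimezoneLiteral), and the defaults are simplifications, not the blocks' actual schema.

```python
# Sketch only: strftime vs ISO 8601 output selected via a discriminated union.
from datetime import datetime, timedelta, timezone
from typing import Annotated, Literal, Union
from pydantic import BaseModel, Field

class StrftimeFormat(BaseModel):
    type: Literal["strftime"] = "strftime"
    format: str = "%H:%M:%S"

class ISO8601Format(BaseModel):
    type: Literal["iso8601"] = "iso8601"
    utc_offset_hours: float = 0.0        # e.g. 5.5 for India, -7.0 for UTC-7
    include_microseconds: bool = False

FormatChoice = Annotated[Union[StrftimeFormat, ISO8601Format], Field(discriminator="type")]

def format_now(fmt: FormatChoice) -> str:
    if isinstance(fmt, StrftimeFormat):
        return datetime.now().strftime(fmt.format)
    now = datetime.now(timezone(timedelta(hours=fmt.utc_offset_hours)))
    if not fmt.include_microseconds:
        now = now.replace(microsecond=0)
    return now.isoformat()               # RFC 3339-style, offset always included
```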
|
|
c972f34713 |
Revert "feat(docker): add frontend service to docker-compose with env config improvements" (#10577)
Reverts Significant-Gravitas/AutoGPT#10536 to bring platform back up due to this error: ``` │ Error creating Supabase client Error: @supabase/ssr: Your project's URL and API key are required to create a Supabase client! │ │ │ │ Check your Supabase project's API settings to find these values │ │ │ │ https://supabase.com/dashboard/project/_/settings/api │ │ at <unknown> (https://supabase.com/dashboard/project/_/settings/api) │ │ at bX (.next/server/chunks/3873.js:6:90688) │ │ at <unknown> (.next/server/chunks/150.js:6:13460) │ │ at n (.next/server/chunks/150.js:6:13419) │ │ at o (.next/server/chunks/150.js:6:14187) │ │ ⨯ Error: Your project's URL and Key are required to create a Supabase client! │ │ │ │ Check your Supabase project's API settings to find these values │ │ │ │ https://supabase.com/dashboard/project/_/settings/api │ │ at <unknown> (https://supabase.com/dashboard/project/_/settings/api) │ │ at bY (.next/server/chunks/3006.js:10:486) │ │ at g (.next/server/app/(platform)/auth/callback/route.js:1:5890) │ │ at async e (.next/server/chunks/9836.js:1:101814) │ │ at async k (.next/server/chunks/9836.js:1:15611) │ │ at async l (.next/server/chunks/9836.js:1:15817) { │ │ digest: '424987633' │ │ } │ │ Error creating Supabase client Error: @supabase/ssr: Your project's URL and API key are required to create a Supabase client! │ │ │ │ Check your Supabase project's API settings to find these values │ │ │ │ https://supabase.com/dashboard/project/_/settings/api │ │ at <unknown> (https://supabase.com/dashboard/project/_/settings/api) │ │ at bX (.next/server/chunks/3873.js:6:90688) │ │ at <unknown> (.next/server/chunks/150.js:6:13460) │ │ at n (.next/server/chunks/150.js:6:13419) │ │ at j (.next/server/chunks/150.js:6:7482) │ │ Error creating Supabase client Error: @supabase/ssr: Your project's URL and API key are required to create a Supabase client! │ │ │ │ Check your Supabase project's API settings to find these values │ │ │ │ https://supabase.com/dashboard/project/_/settings/api │ │ at <unknown> (https://supabase.com/dashboard/project/_/settings/api) │ │ at bX (.next/server/chunks/3873.js:6:90688) │ │ at <unknown> (.next/server/chunks/150.js:6:13460) │ │ at n (.next/server/chunks/150.js:6:13419) │ │ at h (.next/server/chunks/150.js:6:10561) │ │ Error creating Supabase client Error: @supabase/ssr: Your project's URL and API key are required to create a Supabase client! │ │ │ │ Check your Supabase project's API settings to find these values │ │ │ │ https://supabase.com/dashboard/project/_/settings/api │ │ at <unknown> (https://supabase.com/dashboard/project/_/settings/api) │ │ at bX (.next/server/chunks/3873.js:6:90688) │ │ at <unknown> (.next/server/chunks/150.js:6:13460) │ │ at n (.next/server/chunks/150.js:6:13419) ``` |
||
|
|
7b3ee66247 |
feat(blocks): Add Anthropics new Claude Opus 4.1 model (#10575)
This adds the latest Claude Opus 4.1 model to the platform.
It adds the following model:
- claude-opus-4-1-20250805
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Tested Claude Opus 4.1 to make sure it works
|
||
|
|
2d10ac92b5 |
feat(blocks): Add GPT-5 models to the platform (#10574)
This adds the latest ChatGPT models (GPT-5) to the platform ahead of their release. The prices and context limits are still to be properly set; for now they match GPT-4.1, with the price set at 5 until we know more. This adds the following models - gpt-5 - gpt-5-mini - gpt-5-nano - gpt-5-chat ### Changes 🏗️ <!-- Concisely describe all of the changes made in this pull request: --> ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: <!-- Put your test plan here: --> - [x] Tested all of the models to make sure they work |
||
|
|
377b5ef01c |
fix id not preserved through airtable oauth refresh (#10573)
|