Commit Graph

658 Commits

Author SHA1 Message Date
Reinier van der Leer
4928ce3f90 feat(library): Create presets from runs (#10823)
- Resolves #9307

### Changes 🏗️

- feat(library): Create presets from runs
  - Prevent creating preset from run with unknown credentials
- Fix running presets with credentials
  - Add `credential_inputs` parameter to `execute_preset` endpoint

API:
- Return `GraphExecutionMeta` from `*/execute` endpoints

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to `/library/agents/[id]` for an agent that *does not* require
credentials
- Click the menu on any run and select "Pin as a preset"; fill out the
dialog and submit
    - [x] -> UI works
    - [x] -> Operation succeeds and dialog closes
    - [x] -> New preset is shown at the top of the runs list
- Go to `/library/agents/[id]` for an agent that *does* require
credentials
- Click the menu on any run and select "Pin as a preset"; fill out the
dialog and submit
    - [x] -> UI works
    - [x] -> Error toast appears with descriptive message
- Initiate a new run; once finished, click "Create preset from run";
fill out the dialog and submit
    - [x] -> UI works
    - [x] -> Operation succeeds and dialog closes
    - [x] -> New preset is shown at the top of the runs list
2025-09-03 01:26:12 +00:00
Reinier van der Leer
e16e69ca55 feat(library, executor): Make "Run Again" work with credentials (#10821)
- Resolves [OPEN-2549: Make "Run again" work with credentials in
`AgentRunDetailsView`](https://linear.app/autogpt/issue/OPEN-2549/make-run-again-work-with-credentials-in-agentrundetailsview)
- Resolves #10237

### Changes 🏗️

- feat(frontend/library): Make "Run Again" button work for runs with
credentials
- feat(backend/executor): Store passed-in credentials on
`GraphExecution`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Go to `/library/agents/[id]` for an agent with credentials inputs
  - Run the agent manually
    - [x] -> runs successfully
- [x] -> "Run again" shows among the action buttons on the newly created
run
  - Click "Run again"
    - [x] -> runs successfully
2025-09-02 18:34:56 +00:00
Reinier van der Leer
0e755a5c85 feat(platform/library): Support UX for manual-setup triggers (#10309)
- Resolves #10234

### Preview

#### Manual setup triggers
![preview of the setup screen for manual-setup
triggers](https://github.com/user-attachments/assets/295d2968-ad11-4291-b360-2eb2acb03397)
![preview of the view for an active manual-setup
trigger](https://github.com/user-attachments/assets/d0ae2246-2305-48f5-aea8-8adb37336401)

#### Auto-setup triggers
![preview of the view for an active auto-setup
trigger](https://github.com/user-attachments/assets/63856311-fc99-450c-ae1f-86951e40dc26)

### Changes 🏗️

- Add "Trigger status" section to `AgentRunDraftView`
- Add `AgentPreset.webhook`, so we can show webhook URL in library
  - Add `AGENT_PRESET_INCLUDE` to `backend.data.includes`
- Add `BaseGraph.trigger_setup_info` (computed field)
- Rename `LibraryAgentTriggerInfo` to `GraphTriggerInfo`; move to
`backend.data.graph`

Refactor:
- Move contents of `@/components/agents/` to
`@/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/`
- Fix small type difference between legacy & generated
`LibraryAgent.image_url`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Setting up GitHub trigger works
  - [x] Setting up manual trigger works
  - [x] Enabling/disabling manual trigger through Library works
2025-09-02 10:23:32 +00:00
Reinier van der Leer
dfdc71f97f feat(backend/external-api): Make API key auth work in Swagger UI (#10783)
![Swagger UI API key auth
dialog](https://github.com/user-attachments/assets/02026802-51f9-410d-bdb8-53840d5eb17b)

- Resolves #10782

### Changes 🏗️

- Use `Security(..)` for security dependencies (see the sketch below)
- Minor tweaks to auth mechanism (similar to #10720)
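
For illustration, a minimal sketch of the `Security(..)` pattern (the header name and validation logic here are placeholders, not the platform's actual implementation):

```python
from fastapi import FastAPI, HTTPException, Security
from fastapi.security import APIKeyHeader

app = FastAPI()
# Declaring the scheme via fastapi.security puts it in the OpenAPI spec,
# which is what renders the "Authorize" dialog in Swagger UI.
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)

async def require_api_key(api_key: str | None = Security(api_key_header)) -> str:
    if api_key != "expected-api-key":  # placeholder check
        raise HTTPException(status_code=401, detail="Invalid or missing API key")
    return api_key

@app.get("/external-api/resource", dependencies=[Security(require_api_key)])
async def get_resource():
    return {"ok": True}
```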

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] API key auth feature appears in Swagger UI
  - [ ] API key auth *works* in Swagger UI (@ntindle wanna test this?)
2025-09-02 09:24:14 +00:00
Reinier van der Leer
12cdd45551 refactor(backend): Improve auth setup & OpenAPI generation (#10720)
Our current auth setup (`autogpt_libs.auth` + its usage) is quite
inconsistent and doesn't do all of its jobs properly. The 401 responses
you get when unauthenticated are not included in the OpenAPI spec,
leaving them unaccounted for in the generated frontend API client. The
FastAPI dependencies supplied by `autogpt_libs.auth.depends` aren't
used consistently, making them hard to maintain and oversee. API tests
use many different ways to get around the auth requirement, which is
similarly hard to maintain and oversee. This pull request aims to fix
all of this and give us a consistent, clean, and self-documenting API
auth implementation.

- Resolves #10715

### Changes 🏗️

- Homogenize use of `autogpt_libs.auth` security dependencies throughout
the backend
- Fix OpenAPI schema generation for 401 responses (see the sketch below)
  - Handle possible 401 responses in frontend
- Tighten validation and add warnings for weak settings in
`autogpt_libs.auth.config`
- Increase test coverage for `autogpt_libs.auth` to 100%
- Standardize auth setup for API tests
- Rename `APIKeyValidator` to `APIKeyAuthenticator` and move to its own
module in `backend.server`
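
As a sketch of the 401-response fix: one way to do this in FastAPI is to declare the response on the router, so every authenticated route's OpenAPI entry (and thus the generated client) accounts for it; the exact mechanism in `autogpt_libs.auth` may differ:

```python
from fastapi import APIRouter, HTTPException, Security
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

bearer = HTTPBearer(auto_error=False)

async def requires_user(
    creds: HTTPAuthorizationCredentials | None = Security(bearer),
) -> HTTPAuthorizationCredentials:
    if creds is None:  # token validation elided for brevity
        raise HTTPException(status_code=401, detail="Not authenticated")
    return creds

# The shared `responses` entry adds the 401 to the OpenAPI spec of every
# route on this router, so the generated client can account for it.
router = APIRouter(
    dependencies=[Security(requires_user)],
    responses={401: {"description": "Authentication required"}},
)

@router.get("/graphs")
async def list_graphs() -> list:
    return []
```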

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All tests for `autogpt_libs.auth` pass
  - [x] All tests for `backend.server` pass
  - [x] @ntindle does a security audit for these changes
- [x] OpenAPI spec for authenticated routes is generated with the
appropriate `401` response

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-08-28 14:46:50 +00:00
Reinier van der Leer
df3c81a7a6 fix(backend): Fix 4 (deprecation) warnings on startup (#10759)
Fixes these warnings on startup:
```
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_config.py:373: UserWarning: Valid config keys have changed in V2:
* 'schema_extra' has been renamed to 'json_schema_extra'
  warnings.warn(message, UserWarning)
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_config.py:323: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.11/migration/
  warnings.warn(DEPRECATION_MESSAGE, DeprecationWarning)
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_generate_schema.py:298: PydanticDeprecatedSince20: `json_encoders` is deprecated. See https://docs.pydantic.dev/2.11/concepts/serialization/#custom-serializers for alternatives. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.11/migration/
  warnings.warn(
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_fields.py:294: UserWarning: `alias` specification on field "created_at" must be set on outermost annotation to take effect.
  warnings.warn(
/home/reinier/code/agpt/AutoGPT/autogpt_platform/backend/.venv/lib/python3.11/site-packages/pydantic/_internal/_fields.py:294: UserWarning: `alias` specification on field "updated_at" must be set on outermost annotation to take effect.
  warnings.warn(
```

- Resolves #10758

### Changes 🏗️

- Fix field annotations in `backend/blocks/exa/websets.py`
- Replace deprecated JSON encoder specification in
`backend/blocks/wordpress/_api.py` by field serializer
- Move deprecated `schema_extra` example specification in
`backend/server/integrations/models.py` to `Field(examples=...)`

The two remaining warnings that appear on start-up aren't trivial to
fix.
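
For reference, a sketch of the replacement patterns (the model and fields are made up): `Field(examples=...)` instead of `schema_extra`, and `@field_serializer` instead of the deprecated `json_encoders`:

```python
from datetime import datetime
from pydantic import BaseModel, Field, field_serializer

class Example(BaseModel):
    # Replaces the deprecated class-based Config.schema_extra example
    name: str = Field(examples=["my-webhook"])
    created_at: datetime

    # Replaces the deprecated Config.json_encoders entry for datetime
    @field_serializer("created_at")
    def _serialize_created_at(self, value: datetime) -> str:
        return value.isoformat()
```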

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Changes are trivial and do not require further testing
2025-08-28 11:34:58 +00:00
Bently
3718b948ea feat(blocks): Add Ideogram V3 model (#10752)
Adds support for the Ideogram V3 model while maintaining backward
compatibility with existing models (V1, V1_TURBO, V2, V2_TURBO).
Updates the default model to V3 and implements smart API routing to
handle Ideogram's new V3 endpoint requirements.

Changes Made

- Added V3 model support: Added V_3 to IdeogramModelName enum and set as default
- Dual API endpoint handling:
  - V3 models route to new /v1/ideogram-v3/generate endpoint with updated payload format
  - Legacy models (V1, V2, Turbo variants) continue using /generate endpoint
- Model-specific feature filtering:
  - V1 models: Basic parameters only (no style_type or color_palette support)
  - V2/V2_TURBO: Full legacy feature support including style_type and color_palette
  - V3: New endpoint with aspect ratio mapping and updated parameter structure
- Aspect ratio compatibility: Added mapping between internal enum values and V3's expected format (ASPECT_1_1 → 1x1)
- Updated pricing: V3 model costs 18 credits (vs 16 for other models)
- Updated default usage: Store image generation now uses V3 by default

Technical Details

Ideogram updated their API with a separate V3 endpoint that has
different requirements:
- Different URL path (/v1/ideogram-v3/generate)
- Different aspect ratio format (e.g., 1x1 instead of ASPECT_1_1)
- Model-specific feature support (V1 models don't support style_type,
etc.)

The implementation intelligently routes requests to the appropriate
endpoint based on the selected model
while maintaining a single unified interface.
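
A rough sketch of that routing (the endpoint paths are from the description above; the enum values, payload shapes, and helper are illustrative assumptions):

```python
V3_ASPECT_RATIOS = {"ASPECT_1_1": "1x1", "ASPECT_16_9": "16x9", "ASPECT_9_16": "9x16"}

def build_request(model: str, prompt: str, aspect_ratio: str) -> tuple[str, dict]:
    """Pick the endpoint and payload format based on the selected model."""
    if model == "V_3":
        # V3 uses a dedicated endpoint and the compact aspect ratio format
        return "/v1/ideogram-v3/generate", {
            "prompt": prompt,
            "aspect_ratio": V3_ASPECT_RATIOS.get(aspect_ratio, "1x1"),
        }
    payload: dict = {"prompt": prompt, "aspect_ratio": aspect_ratio, "model": model}
    if model in ("V_2", "V_2_TURBO"):
        # Only V2-generation models support style_type / color_palette
        payload["style_type"] = "AUTO"
    return "/generate", payload
```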

I tested all the models and they are all working, as shown here:
<img width="1804" height="887" alt="image"
src="https://github.com/user-attachments/assets/9f2e44ca-50a4-487f-987c-3230dd72fb5e"
/>


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test the Ideogram model block and watch as they all work!
2025-08-27 08:33:47 +00:00
Swifty
44d739386b feat(platform/blocks): Added stagehand integration (#10751)
Added basic stagehand integration:

<img width="667" height="609" alt="Screenshot 2025-08-27 at 09 20 18"
src="https://github.com/user-attachments/assets/11ab2941-0913-4346-a1d4-45980711e0f9"
/>


[stagehand_v35.json](https://github.com/user-attachments/files/22002924/stagehand_v35.json)

### Changes 🏗️

- Act Block
- Extract Block
- Observe Block

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have added a sample agent
- [x] I have created an agent that uses these blocks and ensured it runs
2025-08-27 08:33:41 +00:00
Zamil Majdy
c0172c93aa fix(backend/executor): prevent infinite requeueing of malformed messages (#10746)
### Changes 🏗️

This PR fixes an infinite loop issue in the execution manager where
malformed or unparseable messages would be continuously requeued,
causing high CPU usage and preventing the system from processing
legitimate messages.

**Key changes:**
- Modified `_ack_message()` function to accept explicit `requeue`
parameter
- Set `requeue=False` for malformed/unparseable messages that cannot be
fixed by retrying
- Set `requeue=False` for duplicate execution attempts (graph already
running)
- Kept `requeue=True` for legitimate failures that may succeed on retry
(e.g., temporary resource constraints, network issues)

**Technical details:**
The previous implementation always set `requeue=True` when rejecting
messages with `basic_nack()`. This caused problematic messages to be
immediately re-delivered to the consumer, creating an infinite loop for:
1. Messages with invalid JSON that cannot be parsed
2. Messages for executions that are already running (duplicates)

These scenarios will never succeed regardless of how many times they're
retried, so they should be rejected without requeueing to prevent
resource exhaustion.
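
A simplified sketch of the resulting ack/nack flow (pika-style calls for illustration; the actual `_ack_message()` differs):

```python
import json

class TransientError(Exception):
    """Failure that may succeed on retry (e.g. temporary resource constraints)."""

def is_already_running(request: dict) -> bool:  # placeholder duplicate check
    return False

def execute(request: dict) -> None:  # placeholder execution dispatch
    ...

def _ack_message(channel, delivery_tag: int, success: bool, requeue: bool) -> None:
    """Ack successes; nack failures with an explicit requeue decision."""
    if success:
        channel.basic_ack(delivery_tag=delivery_tag)
    else:
        channel.basic_nack(delivery_tag=delivery_tag, requeue=requeue)

def on_message(channel, method, properties, body: bytes) -> None:
    try:
        request = json.loads(body)
    except ValueError:
        # Malformed payload: retrying can never succeed, so don't requeue
        _ack_message(channel, method.delivery_tag, success=False, requeue=False)
        return
    if is_already_running(request):
        # Duplicate execution attempt: also pointless to retry
        _ack_message(channel, method.delivery_tag, success=False, requeue=False)
        return
    try:
        execute(request)
        _ack_message(channel, method.delivery_tag, success=True, requeue=False)
    except TransientError:
        # Legitimate failure (e.g. pool full): requeue so it can be retried
        _ack_message(channel, method.delivery_tag, success=False, requeue=True)
```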

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified malformed messages are rejected without requeue
- [x] Confirmed duplicate execution messages are rejected without
requeue
- [x] Ensured legitimate failures (shutdown, pool full) still requeue
properly
- [x] Tested that normal message processing continues to work correctly
2025-08-26 18:34:58 +07:00
Krzysztof Czerwinski
8a68e03eb1 feat(backend): Blocks Menu redesign backend (#10128)
Backend for the Blocks Menu Redesign.

### Changes 🏗️

- Add optional `agent_name` to the `AgentExecutorBlock` - displayed as
the block name in the Builder
- Include `output_schema` in the `LibraryAgent` model
- Make `v2.store.db.py:get_store_agents` accept a multiple-creators filter
- Add `api/builder` router with endpoints (and accompanying logic in
`v2/builder/db` and models in `v2/builder/models`)
  - `/suggestions`: elements for the suggestions tab
  - `/categories`: categories with the number of blocks in each
  - `/blocks`: blocks based on category, type, or provider
  - `/providers`: integration providers with their block counts
  - `/search`: search blocks (including integrations), marketplace agents, and user library agents
  - `/counts`: element counts for each category in the Blocks Menu

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Modified function `get_store_agents` works in existing code paths
  - [x] Agent executor block works
  - [x] New endpoints work
  - [x] Existing Builder menu is unaffected

---------

Co-authored-by: Abhimanyu Yadav <abhimanyu1992002@gmail.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-08-26 02:23:10 +00:00
Mitansh Jadhav
469b1fccbb fix(blocks): handle invalid or empty response from MusicGen model (#10533)
Handle invalid or empty response from MusicGen model


Fixes: #9145
> ⚠️ Note: This PR does not directly fix issue #9145 (failed run marked
as success), but improves the validation of the URL to reduce the
chances of invalid states entering the system. This is a related
improvement, but not the root cause fix.


### Description
During execution of the meta/musicgen model via the Replicate API, the
application failed with an error indicating the model returned an empty
or invalid response. Although some API calls succeeded, this error
showed that the logic was not properly checking the structure and
content of the result before processing it.

Problem context:
- API: Replicate
- Model: meta/musicgen:671ac645
- Status: Failed after 3 attempts
- Error message: "Unexpected error: Model returned empty or invalid response"

Cause:
- The original logic did not validate the result structure.
- It assumed any non-null output was valid, including strings like "No output received".
- This led to invalid/malformed results being passed to the frontend.


### Changes 🏗️

- Updated `AIMusicGeneratorBlock`'s response validation for music generation using Meta's MusicGen models via the Replicate API.
- Supports configurable inputs like prompt, model version, duration, temperature, top_k/p, and normalization.
- Uses robust retry logic for reliability.
- On success, the output is the audio URL; errors return a user-friendly message.

Before:

```python
if result and result != "No output received":
    yield "result", result
    return
```

After:

```python
if result and isinstance(result, str) and result.startswith("http"):
    yield "result", result
    return
```

### Checklist 📋

#### For code changes:
- [x] Clearly listed changes in the PR description
- [x] Added test plan and mock outputs
- [x] Tested with various prompts and confirmed working output

### Test Plan

- [x] Ran locally with valid Replicate API key
- [x] Generated audio with different prompts
- [x] Simulated failure to verify retry and error message

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-08-25 16:35:12 +00:00
Bently
1aa7e10cbd feat(AM): fix moderation id message (#10733)
This fixes the moderation message so it properly shows the moderation ID.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Trigger moderation and verify it shows the moderation ID in the error message
2025-08-25 16:08:26 +00:00
Nicholas Tindle
890bb3b8b4 feat(backend): implement low balance and insufficient funds notifications (#10656)
Co-authored-by: SwiftyOS <craigswift13@gmail.com>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: majdyz <zamil@agpt.co>
2025-08-25 11:17:40 -05:00
Nicholas Tindle
2bb8e91040 feat(backend): Add user timezone support to backend (#10707)
Co-authored-by: Swifty <craigswift13@gmail.com>
Resolves issue #10692, where the scheduled time and the actual run time could differ.
2025-08-25 11:00:07 -05:00
Nicholas Tindle
76090f0ba2 feat(backend,frontend): Send applicant email on review response (#10718)
### Changes 🏗️

This PR implements email notifications for agent creators when their
agent submissions are approved or rejected by an admin in the
marketplace.

Specifically, the changes include:
- Added `AGENT_APPROVED` and `AGENT_REJECTED` notification types to
`schema.prisma`.
- Created `AgentApprovalData` and `AgentRejectionData` Pydantic models
for notification data.
- Configured the notification system to use immediate queues and new
Jinja2 templates for these types.
- Designed two new email templates: `agent_approved.html.jinja2` and
`agent_rejected.html.jinja2`, with dynamic content for agent details,
reviewer feedback, and relevant action links.
- Modified the `review_store_submission` function to:
  - Include `User` and `Reviewer` data in the database query.
  - Construct and queue the appropriate email notification based on the approval/rejection status.
  - Ensure email sending failures do not block the agent review process.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Approve an agent via the admin dashboard.
- [x] Verify the agent creator receives an "Agent Approved" email with
correct details and a link to the store.
  - [x] Reject an agent via the admin dashboard (providing a reason).
- [x] Verify the agent creator receives an "Agent Rejected" email with
correct details, the rejection reason, and a link to resubmit.
- [x] Verify that if email sending fails (e.g., misconfigured SMTP), the
agent approval/rejection process still completes successfully without
error.

<img width="664" height="975" alt="image"
src="https://github.com/user-attachments/assets/d397f2dc-56eb-45ab-877e-b17f1fc234d1"
/>
<img width="664" height="975" alt="image"
src="https://github.com/user-attachments/assets/25597752-f68c-46fe-8888-6c32f5dada01"
/>


---
Linear Issue: [SECRT-1168](https://linear.app/autogpt/issue/SECRT-1168)

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-08-25 14:24:16 +00:00
Bently
5b12e02c4e feat(AutoMod): add `moderation id` to moderation message (#10728)
AutoModManager now captures and propagates the content_id from the
moderation API for both input and output moderation. AutoModResponse and
ModerationError are updated to include content_id, allowing better
traceability of moderation actions and error reporting. With this, the
error message will now show ``Failed due to content moderation
(Moderation ID: uuid-here)``.

This is useful when a user has an issue with AutoMod falsely flagging
their runs: we can use the moderation ID to look into AutoMod and see
what's going on.

### Changes 🏗️

Just some updates to receive the content_id from AutoMod and then show
it in the "Failed due to content moderation" message

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run autogpt with AM and trigger it and it will show the content_id
in the error message
2025-08-25 12:47:11 +00:00
Nicholas Tindle
476bfc6c84 feat(backend): add store meta blocks (#10633)
This PR implements blocks that enable users to interact with the AutoGPT
store and library programmatically. This addresses the need for agents
to be able to add other agents from the store to their library and
manage agent collections automatically, as requested in Linear issue
OPEN-2602. These are locked behind LaunchDarkly for now.


https://github.com/user-attachments/assets/b8518961-abbf-4e9d-a31e-2f3d13fa6b0d


### Changes 🏗️

- **Added new store operations blocks**
(`backend/blocks/system/store_operations.py`):
- `GetStoreAgentDetailsBlock`: Retrieves detailed information about an
agent from the store
- `SearchStoreAgentsBlock`: Searches for agents in the store with
various filters

  
- **Added new library operations blocks**
(`backend/blocks/system/library_operations.py`):
  - `ListLibraryAgentsBlock`: Lists all agents in the user's library
- `AddToLibraryFromStoreBlock`: Adds an agent from the store to user's
library

- **Updated block exports** in `backend/blocks/system/__init__.py` to
include new blocks

- **Added comprehensive tests** for store operations in
`backend/blocks/test/test_store_operations.py`

- **Enhanced executor database utilities** in
`backend/executor/database.py` with new helper methods for agent
management

- **Updated frontend marketplace page** to properly handle the new store
operations

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Created unit tests for all new store operation blocks
- [x] Tested GetStoreAgentDetailsBlock retrieves correct agent
information
- [x] Tested SearchStoreAgentsBlock filters and returns agents correctly
- [x] Tested AddToLibraryFromStoreBlock successfully adds agents to
library
  - [x] Tested error handling for non-existent agents and invalid inputs
  - [x] Verified all blocks integrate properly with the database manager
  - [x] Confirmed blocks appear in the block registry and are accessible

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-08-25 07:56:23 +00:00
Nicholas Tindle
5502256bea feat(backend): DiscordGetCurrentUserBlock to fetch authenticated user details via OAuth2 (#10723)

We want a way to get the user's ID from Discord without requiring them
to enable dev mode, so this adds one: OAuth login.

<img width="2551" height="1202" alt="image"
src="https://github.com/user-attachments/assets/71be07a9-fd37-4ea7-91a1-ced8972fda29"
/>


### Changes 🏗️
- Created DiscordOAuthHandler for managing OAuth2 flow, including login
URL generation, token exchange, and revocation.
- Implemented support for PKCE in the OAuth2 flow.
- Enhanced error handling for user info retrieval and token management.
- Add discord block for getting the logged in user
- Add new client secret field to .env.default

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] add the blocks and test they all work


#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-08-24 20:53:48 +00:00
Nicholas Tindle
bd97727763 feat(platform): add ability to reject approved agents in admin dashboard (#10671)
We need the ability to reject or remove already approved agents from the
marketplace via the Admin Dashboard. Previously, once an agent was
approved, there was no easy way to remove it from the marketplace
without direct database intervention.

This addresses several use cases:
- Removing agents that require credentials (short-term solution
discussed with Reinier)
- Handling broken agents mistakenly approved
- Managing outdated or problematic agents
- Quick response to issues without engineering support

### Changes 🏗️

- **Backend**: Modified `review_store_submission` function in
`/backend/server/v2/store/db.py` to handle rejecting already approved
agents
  - Added logic to detect when rejecting an approved agent
  - Updates StoreListing to remove agent from marketplace when rejected
  - Handles multiple approved versions correctly
- **Frontend**: Updated admin marketplace UI components
- `expandable-row.tsx`: Show action buttons for both PENDING and
APPROVED agents
  - `approve-reject-buttons.tsx`: 
- Show only "Revoke" button for approved agents (hide Approve button)
    - Update button text from "Reject" to "Revoke" for approved agents
    - Update dialog titles and descriptions appropriately

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Navigate to Admin Dashboard > Marketplace Management
- [x] Find a PENDING agent and verify both Approve and Reject buttons
appear
  - [x] Find an APPROVED agent and verify only "Revoke" button appears
- [x] Click Revoke on an approved agent and verify dialog shows "Revoke
Approved Agent" title
- [x] Submit revocation with comments and verify agent status changes to
REJECTED
  - [x] Verify the agent is removed from the public marketplace
- [x] Test with an agent that has multiple approved versions - verify it
switches to another approved version
- [x] Test with an agent that has only one approved version - verify
hasApprovedVersion is set to false

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - this uses existing admin
authentication and database schema.

Fixes SECRT-1218

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-08-23 21:30:18 +00:00
Reinier van der Leer
aa256f21cd feat(platform/library): Infinite scroll in Agent Runs list (#10709)
- Resolves #10645

### Changes 🏗️

- Implement infinite scroll in the Agent Runs list (on
`/library/agents/[id]`)
- Add horizontal scroll support to `ScrollArea` and `InfiniteScroll`
components
- Fix `InfiniteScroll` triggering twice
- Fix date handling by React Queries
  - Add response mutator to parse dates coming out of API
  - Make legacy `GraphExecutionMeta` compatible with generated type

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Open `/library/agents/[id]`
    - [x] Agent runs list loads
  - Scroll agent runs list to the end
    - [x] More runs are loaded and appear in the list
2025-08-22 15:35:09 +00:00
Swifty
2848e62f8a feat(backend): Add BaaS integration blocks (#10350)
### Changes 🏗️

This PR adds Meeting BaaS (Bot-as-a-Service) integration to the AutoGPT
platform, enabling automated meeting recording and transcription
capabilities.

<img width="1157" height="633" alt="Screenshot 2025-08-22 at 15 06 15"
src="https://github.com/user-attachments/assets/53b88bc8-5580-4287-b6ed-3ae249aed69f"
/>

[BAAS
Test_v12.json](https://github.com/user-attachments/files/21938290/BAAS.Test_v12.json)

**New Features:**
- **Meeting Recording Bot Management:**
  - Deploy bots to join and record meetings automatically
  - Support for multiple meeting platforms via meeting URL
  - Scheduled bot deployment with Unix timestamp support
  - Custom bot avatars and entry messages
  - Webhook support for real-time event notifications
  
- **Meeting Data Operations:**
  - Retrieve MP4 recordings and transcripts from completed meetings
  - Delete recording data for privacy/storage management
  - Force bots to leave ongoing meetings
  
**Technical Implementation:**
- Added 4 new files under `backend/blocks/baas/`:
  - `__init__.py`: Package initialization
  - `_api.py`: Meeting BaaS API client with comprehensive endpoints
  - `_config.py`: Provider configuration using SDK pattern
- `bots.py`: 4 bot management blocks (Join, Leave, Fetch Data, Delete
Recording)

**Key Capabilities:**
- Join meetings with customizable bot names and avatars
- Automatic transcription with configurable speech-to-text providers
- Time-limited MP4 download URLs for recordings
- Reserved bot slots for joining 4 minutes before meetings
- Automatic leave timeouts configuration
- Custom metadata support for tracking

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created and executed an agent with Meeting BaaS bot deployment
blocks
  - [x] Tested bot joining meetings with various configurations
  - [x] Verified recording retrieval and transcript functionality
  - [x] Tested bot removal from ongoing meetings
  - [x] Confirmed data deletion operations work correctly
  - [x] Verified error handling for invalid API keys and bot IDs
  - [x] Tested webhook URL configuration

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-08-22 15:03:35 +00:00
Swifty
7c908c10b8 feat(blocks): Add DataForSEO keyword research blocks (#10711)
This PR adds two new DataForSEO blocks for keyword research
functionality, enabling users to get keyword suggestions and related
keywords using the DataForSEO Labs API.



https://github.com/user-attachments/assets/55b3f64b-20b2-4c6d-b307-01327d476fe2

[DataForSeo
Poc_v3.json](https://github.com/user-attachments/files/21916605/DataForSeo.Poc_v3.json)


### Changes 🏗️

- Added `DataForSeoKeywordSuggestionsBlock` for getting keyword
suggestions from DataForSEO Labs
- Added `DataForSeoRelatedKeywordsBlock` for getting related keywords
from DataForSEO Labs
- Implemented proper Pydantic models (`KeywordSuggestion` and
`RelatedKeyword`) for type-safe outputs
- Added mockable private methods (`_fetch_keyword_suggestions` and
`_fetch_related_keywords`) for better testability
- Included comprehensive test mocks to allow testing without actual API
credentials
- Both blocks support optional SERP info and clickstream data
- Added DataForSEO provider configuration using the SDK's
ProviderBuilder pattern

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run block tests for DataForSeoKeywordSuggestionsBlock
  - [x] Run block tests for DataForSeoRelatedKeywordsBlock
  - [x] Verify mocks work correctly without API credentials
  - [x] Confirm proper Pydantic model serialization
  - [x] Run poetry format and fix any linting issues
2025-08-21 21:17:34 +00:00
Reinier van der Leer
f4538d6f5a build(backend): Change base image to debian:13-slim w/ Python 3.13 (#10654)
- Resolves #10653

The objective is to move to a base image with fewer active
vulnerabilities. Hence the choice for `debian:13-slim` (0 high, 1
medium, 21 low severity), a huge improvement compared to our current
base image `python:3.11.10-slim-bookworm` (4 high, 11 medium, 15 low
severity).

### Changes 🏗️

- Change backend base image to `debian:13-slim`
  - Use Python 3.13
- Fix now-deprecated use of class property in `AppProcess` and
`BaseAppService`
- Expand backend CI matrix to run with Python 3.11 through 3.13
- Update Python version constraint in `pyproject.toml` to include Python
3.13

Also, unrelated:
- Update `autogpt-platform-backend` package version to `v0.6.22`, the
latest release

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] CI passes
  - [x] No new errors in deployment logs
  - [x] Everything seems to work normally in deployment
2025-08-20 19:34:33 +00:00
Abhimanyu Yadav
2610c4579f feat(platform/dashboard): Enable editing for agent submissions (#10545)
- Resolves https://github.com/Significant-Gravitas/AutoGPT/issues/10511

In this PR, I've added backend endpoints and a frontend UI for edit
functionality on the Agent Dashboard. Users can now update their store
submission if its status is `PENDING` or `APPROVED`, but not if it is
`REJECTED` or `DRAFT`. Changes to a pending submission are applied to
the same version, while changes to an approved submission create a new
store listing version.
Backend works something like this: 

<img width="866" height="832" alt="Screenshot 2025-08-15 at 9 39 02 AM"
src="https://github.com/user-attachments/assets/209c60ac-8350-43c1-ba4c-7378d95ecba7"
/>

### Changes
- I’ve updated the `StoreSubmission` view to include `video_url` and
`categories`.
- I’ve added a new frontend UI for editing submissions.
- I’ve created an endpoint for editing submissions.
- I’ve added more end-to-end tests to ensure the edit submission
functionality works as expected.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] I have checked manually, everything is working perfectly.
  - [x] All e2e tests are also passing.

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: neo <neo.dowithless@gmail.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Lluis Agusti <hi@llu.lu>
2025-08-20 02:49:29 +00:00
Abhimanyu Yadav
0c09b0c459 chore(api): remove launch darkly feature flags from api key endpoints (#10694)
Some API key endpoints have the LaunchDarkly feature flag enabled,
while others don't. To ensure consistency and remove the API key flag
from the LaunchDarkly dashboard, I'm also removing it from the
remaining endpoints.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Everything is working fine locally
2025-08-19 17:10:43 +00:00
Abhimanyu Yadav
1105e6c0d2 tests(frontend): e2e tests for api key page (#10683)
I’ve added three tests for the API keys page:

- The test checks if the user is redirected to the login page when
they’re not authenticated.
- The test verifies that a new API key is created successfully.
- The test ensures that an existing API key can be revoked.

<img width="470" height="143" alt="Screenshot 2025-08-19 at 10 56 19 AM"
src="https://github.com/user-attachments/assets/d27bf736-61ec-435b-a6c4-820e4f3a5e2f"
/>

I’ve also removed the feature flag from the `delete_api_key` endpoint,
so we can use it on CI and in the local environment.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] tests are working perfectly locally.

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
2025-08-19 16:04:15 +00:00
Swifty
650be0d1f7 fix(integration): FirecrawlExtractBlock returns 400 Invalid JSON schema when output_schema is passed as a string (#10669)
When the FirecrawlExtractBlock receives an output_schema, we currently
declare the field as a `str`. Pydantic therefore serialises the
JSON-looking value into a string, and the Firecrawl API rejects the
request with:

`400 Bad Request – Invalid JSON schema. path: ['schema']`

Direct curl requests work because the same structure is sent as a proper
JSON object.

### Changes 🏗️

- Changed `output_schema` to `dict` instead of `str` (see the sketch below)
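
A sketch of the fix (the input model shown is illustrative): declared as a `dict`, the schema stays a JSON object on the wire instead of a JSON-encoded string:

```python
from pydantic import BaseModel, Field

class FirecrawlExtractInput(BaseModel):
    urls: list[str]
    # dict (not str): serialised as a proper JSON object, which the API accepts
    output_schema: dict = Field(
        default_factory=dict, description="JSON schema for the extraction result"
    )
```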

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test firecrawl.extract(..., schema) works with a dict rather than a str

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-19 07:04:04 +00:00
Zamil Majdy
312cb0227f fix(backend/credit): prevent double-application of transactions due to race condition (#10672)
## 🚨 CRITICAL: Double Transaction Bug

**Critical Issue:** Top-up transactions were being applied TWICE to user
balances, causing severe accounting errors.

**Example:**
- User with $160 balance tops up $50
- Expected: $210 balance  
- Actual: $260 balance (extra $50 incorrectly credited)

This compromises the financial integrity of our credit system and
requires immediate fix.

### Changes 🏗️

1. **Added double-checked locking pattern in `_enable_transaction`**
(backend/data/credit.py)
- Added transaction re-check INSIDE the locked transaction block (lines
294-298)
- Prevents race condition when concurrent requests try to activate the
same transaction
- Ensures transaction can only be activated once, even with webhook
retries

2. **Enhanced error messages in Stripe webhook handler**
(backend/server/routers/v1.py)
- Added detailed error messages for better debugging of webhook failures
- Helps identify issues with payload validation or signature
verification

### Root Cause Analysis 🔍

**TOCTOU (Time-of-Check to Time-of-Use) Race Condition:**

The original code checked `transaction.isActive` outside the database
lock. Between this check and acquiring the lock, another concurrent
request (webhook retry or duplicate) could enter, causing both to
proceed with activation.

**Sequence:**
1. Request A: Checks `isActive=False` 
2. Request B: Checks `isActive=False`  (webhook retry)  
3. Request A: Acquires lock, activates transaction, adds $50
4. Request B: Waits for lock, then ALSO adds $50 

**Contributing Factors:**
- Stripe webhook retry mechanism
- `@func_retry` decorator (up to 5 attempts)
- No database-level unique constraint on active transactions
- Missing atomicity between check and update
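
The pattern, reduced to a generic in-memory sketch (the real fix uses a database-level lock in `backend/data/credit.py`):

```python
import threading

_lock = threading.Lock()
_activated: set[str] = set()
balances: dict[str, int] = {"user-1": 160}

def apply_topup(user_id: str, transaction_key: str, amount: int) -> None:
    if transaction_key in _activated:  # first check: cheap fast path
        return
    with _lock:  # serializes concurrent activation attempts
        if transaction_key in _activated:  # re-check inside the lock closes the TOCTOU window
            return
        balances[user_id] += amount
        _activated.add(transaction_key)

# Even if a webhook retry calls this twice, the balance moves only once:
apply_topup("user-1", "tx-abc", 50)
apply_topup("user-1", "tx-abc", 50)
assert balances["user-1"] == 210
```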

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified the double-check prevents duplicate transaction
activation
- [x] Tested concurrent webhook calls - only one succeeds in activating
transaction
  - [x] Confirmed balance is only incremented once per transaction
- [x] Verified idempotency - multiple calls with same transaction_key
are safe
  - [x] All existing credit system tests pass
  - [x] Tested webhook error handling with invalid payloads/signatures

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

*Note: No configuration changes required - this is a code-only fix*
2025-08-18 17:16:08 +00:00
Reinier van der Leer
5da5c2ecd6 Merge branch 'master' into dev 2025-08-18 16:42:59 +02:00
Reinier van der Leer
ba65fee862 hotfix(backend/executor): Fix propagation of passed-in credentials to sub-agents (#10668)
This should fix sub-agent execution issues with passed-in credentials after a crucial data path was removed in #10568.

Additionally, some of the changes are to ensure the `credentials_input_schema` gets refreshed correctly when saving a new version of a graph in the builder.

### Changes 🏗️

- Include `graph_credentials_inputs` in `nodes_input_masks` passed into sub-agent execution
- Fix credentials input schema in `update_graph` and `get_library_agent_by_graph_id` return
- Improve error message on sub-graph validation failure

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Import agent with sub-agent(s) with required credentials inputs & run it -> should work
2025-08-18 16:42:28 +02:00
Zamil Majdy
542f951dd8 Merge branch 'master' of https://github.com/Significant-Gravitas/AutoGPT into dev 2025-08-18 07:08:39 +00:00
Zamil Majdy
72938590f2 hotfix: reduce scheduler max_workers to match database pool size (#10665)
## Summary
- Fixes scheduler pod crashes during peak scheduling periods (e.g.,
03:00:00)
- Reduces APScheduler ThreadPoolExecutor max_workers from 10 to 3
(matching scheduler_db_pool_size)
- Prevents event loop saturation that blocks health checks and causes
pod restarts

## Root Cause Analysis
During peak scheduling periods, multiple jobs execute simultaneously and
compete for the shared event loop through `run_async()`. This creates a
resource bottleneck where:

1. **ThreadPoolExecutor** runs up to 10 jobs concurrently
2. Each job calls `run_async()` which submits to the **same event loop**
that FastAPI health check needs
3. **Health check blocks** waiting for event loop availability 
4. **Liveness probe fails** after 5 consecutive timeouts (50s)
5. **Pod gets killed** with SIGKILL (exit code 137)
6. **Executions orphaned** - created in DB but never published to
RabbitMQ

## Solution
Match `max_workers` to `scheduler_db_pool_size` (3) to prevent more
concurrent jobs than the system can handle without blocking critical
health checks.
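
In APScheduler terms, the change is roughly this (the setting name is an illustrative stand-in):

```python
from apscheduler.executors.pool import ThreadPoolExecutor
from apscheduler.schedulers.background import BackgroundScheduler

SCHEDULER_DB_POOL_SIZE = 3  # mirrors the database pool size

scheduler = BackgroundScheduler(
    executors={
        # Was max_workers=10; capping at the DB pool size keeps concurrent
        # jobs from saturating the shared event loop and starving the
        # health check.
        "default": ThreadPoolExecutor(max_workers=SCHEDULER_DB_POOL_SIZE)
    }
)
```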

## Evidence
- Pod restart at exactly 03:05:48 when executions
e47cd564-ed87-4a52-999b-40804c41537a and
eae69811-4c7c-4cd5-b084-41872293185b were created
- 7 scheduled jobs triggered simultaneously at 03:00:00
- Health check normally responds in 0.007s but times out during high
concurrency
- Exit code 137 indicates SIGKILL from liveness probe failure

## Test Plan
- [ ] Monitor scheduler pod stability during peak scheduling periods
- [ ] Verify no executions remain QUEUED without being published to
RabbitMQ
- [ ] Confirm health checks remain responsive under load
- [ ] Check that job execution still works correctly with reduced
concurrency

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-18 05:49:39 +00:00
Zamil Majdy
32513b26ab Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-08-17 08:18:15 +07:00
Zamil Majdy
bf92e7dbc8 hotfix(backend/executor): Fix RabbitMQ channel retry logic in executor (#10661)
## Summary
**HOTFIX for production** - Fixes executor being stuck in infinite retry
loop when RabbitMQ channels are closed
- Ensures proper reconnection by checking channel state before
attempting to consume messages
- Prevents accumulation of thousands of retry attempts (was seeing 7000+
retries)

## Changes
The executor was stuck repeatedly failing with "Channel is closed"
errors because the `continuous_retry` decorator was attempting to reuse
closed channels instead of creating new ones.

Added channel state checks (`is_ready`) before connecting in both:
- `_consume_execution_run()` 
- `_consume_execution_cancel()`

When a channel is not ready (closed), the code now:
1. Disconnects the client (safe operation, checks if already
disconnected)
2. Establishes a fresh connection with new channel
3. Proceeds with message consumption
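
Roughly, the guard looks like this (the queue client interface shown is an assumption based on the description):

```python
def consume_with_fresh_channel(client, queue_name: str) -> None:
    # is_ready checks both the connection and the channel state
    if not client.is_ready:
        client.disconnect()  # safe no-op if already disconnected
        client.connect()     # fresh connection implies a fresh channel
    client.consume(queue_name)
```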

## Test plan
- [x] Verified the disconnect() method is safe to call on already
disconnected clients
- [x] Confirmed is_ready property checks both connection and channel
state
- [ ] Deploy to environment and verify executors reconnect properly
after channel failures
- [ ] Monitor logs to ensure no more "Channel is closed" retry loops

## Related Issues
Fixes critical production issue where:
- Executor pods show repeated "Channel is closed" errors
- 757 messages stuck in `graph_execution_queue`
- 102,286 messages in `failed_notifications` queue
- RabbitMQ logs show connections being closed due to missed heartbeats

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-16 17:06:06 -05:00
Nicholas Tindle
6bb6a081a2 feat(backend): add support for v0 by Vercel models and credentials (#10641)
## Summary
This PR adds support for v0 by Vercel's Model API to the AutoGPT
platform, enabling users to leverage v0's framework-aware AI models
optimized for React and Next.js code generation.

v0 provides OpenAI-compatible endpoints with models specifically trained
for frontend development, making them ideal for generating UI components
and web applications.

### Changes 🏗️

#### Backend Changes
- **Added v0 Provider**: Added `V0 = "v0"` to `ProviderName` enum in
`/backend/backend/integrations/providers.py`
- **Added v0 Models**: Added three v0 models to `LlmModel` enum in
`/backend/backend/blocks/llm.py`:
- `V0_1_5_MD = "v0-1.5-md"` - Everyday tasks and UI generation (128K
context, 64K output)
- `V0_1_5_LG = "v0-1.5-lg"` - Advanced reasoning (512K context, 64K
output)
  - `V0_1_0_MD = "v0-1.0-md"` - Legacy model (128K context, 64K output)
- **Implemented v0 Provider**: Added v0 support in `llm_call()` function
using OpenAI-compatible client with base URL `https://api.v0.dev/v1`
- **Added Credentials Support**: Created `v0_credentials` in
`/backend/backend/integrations/credentials_store.py` with UUID
`c4e6d1a0-3b5f-4789-a8e2-9b123456789f`
- **Cost Configuration**: Added model costs in
`/backend/backend/data/block_cost_config.py`:
  - v0-1.5-md: 1 credit
  - v0-1.5-lg: 2 credits
  - v0-1.0-md: 1 credit

#### Configuration Changes
- **Settings**: Added `v0_api_key` field to `Secrets` class in
`/backend/backend/util/settings.py`
- **Environment Variables**: Added `V0_API_KEY=` to
`/backend/.env.default`

### Features
- Full OpenAI-compatible API support
- Tool/function calling support
- JSON response format support
- Framework-aware completions optimized for React/Next.js
- Large context windows (up to 512K tokens)
- Integrated with platform credit system
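
Because v0 is OpenAI-compatible, calling it amounts to pointing the standard OpenAI client at the v0 base URL; roughly:

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["V0_API_KEY"], base_url="https://api.v0.dev/v1")

response = client.chat.completions.create(
    model="v0-1.5-md",
    messages=[{"role": "user", "content": "Generate a React button component"}],
)
print(response.choices[0].message.content)
```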

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run existing block tests to ensure no regressions: `poetry run
pytest backend/blocks/test/test_block.py`
  - [x] Verify AITextGeneratorBlock works with v0 models
  - [x] Confirm all model metadata is correctly configured
  - [x] Validate cost configuration is properly set up
  - [x] Check that v0_credentials has a valid UUID4

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
  - Added `V0_API_KEY=` to `/backend/.env.default`
- [x] `docker-compose.yml` is updated or already compatible with my
changes
  - No changes needed - uses existing environment variable patterns
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

### Configuration Requirements
Users need to:
1. Obtain a v0 API key from [v0.app](https://v0.app) (requires Premium
or Team plan)
2. Add `V0_API_KEY=your-api-key` to their `.env` file

### API Documentation
- v0 API Docs: https://v0.app/docs/api
- Model API Docs: https://v0.app/docs/api/model

### Testing
All existing tests pass with the new v0 integration:
```bash
poetry run pytest backend/blocks/test/test_block.py::test_available_blocks -k "AITextGeneratorBlock" -xvs
# Result: PASSED
```
2025-08-15 05:59:43 +00:00
Nicholas Tindle
df20b70f44 feat(blocks): Enrichlayer integration (#9924)

We want to support ~~proxy curl~~ enrichlayer as an integration, and
this is a baseline way to get there

### Changes 🏗️
- Adds some subset of proxycurl blocks based on the API docs:
~~https://nubela.co/proxycurl/docs#people-api-person-profile-endpoint~~
https://enrichlayer.com/docs/pc/#people-api

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] manually test the blocks with an API key
  - [x] make sure the automated tests pass

---------

Co-authored-by: SwiftyOS <craigswift13@gmail.com>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: majdyz <zamil@agpt.co>
2025-08-15 05:57:09 +00:00
Nicholas Tindle
21faf1b677 fix(backend): update and fix weekly summary email (#10343)

Our weekly summary emails are currently broken, hard-coded, and so ugly.

### Changes 🏗️
- Update the email template to look better
- Update the way we queue messages to work after other changes have occurred

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test by sending a self email with the cron job set to every
minute, so you can see what it would look like

---------

Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-08-14 15:39:13 +00:00
Zamil Majdy
4bfeddc03d feat(platform/docker): add frontend service to docker-compose with env config improvements (#10615)
## Summary
This PR adds the frontend service to the Docker Compose configuration,
enabling `docker compose up` to run the complete stack, including the
frontend. It also implements comprehensive environment variable
improvements, unified .env file support, and fixes Docker networking
issues.

## Key Changes

### 🐳 Docker Compose Improvements
- **Added frontend service** to `docker-compose.yml` and
`docker-compose.platform.yml`
- **Production build**: Uses `pnpm build + serve` instead of dev server
for better stability and lower memory usage
- **Service dependencies**: Frontend now waits for backend services
(`rest_server`, `websocket_server`) to be ready
- **YAML anchors**: Implemented DRY configuration to avoid duplicating
environment values

### 📁 Unified .env File Support
- **Frontend .env loading**: Automatically loads `.env` file during
Docker build and runtime
- **Backend .env loading**: Optional `.env` file support with fallback
to sensible defaults in `settings.py`
- **Single source of truth**: All `NEXT_PUBLIC_*` and API keys can be
defined in respective `.env` files
- **Docker integration**: Updated `.dockerignore` to include `.env`
files in build context
- **Git tracking**: Frontend and backend `.env` files are now trackable
(removed from gitignore)

### 🔧 Environment Variable Architecture
- **Dual environment strategy**:
  - Server-side code uses Docker service names (`http://rest_server:8006/api`)
  - Client-side code uses localhost URLs (`http://localhost:8006/api`)
- **Comprehensive config**: Added build args and runtime environment
variables
- **Network compatibility**: Fixes connection issues between frontend
and backend containers
- **Shared backend variables**: Common environment variables (service
hosts, auth settings) centralized using YAML anchors

### 🛠️ Code Improvements
- **Centralized env-config helper** (`/frontend/src/lib/env-config.ts`)
with server-side priority
- **Updated all frontend code** to use shared environment helpers
instead of direct `process.env` access
- **Consistent API**: All environment variable access now goes through
helper functions
- **Settings.py improvements**: Better defaults for CORS origins and
optional .env file loading

### 🔗 Files Changed
- `docker-compose.yml` & `docker-compose.platform.yml` - Added frontend
service and shared backend env vars
- `frontend/Dockerfile` - Simplified build process to use .env files
directly
- `backend/settings.py` - Optional .env loading and better defaults
- `frontend/src/lib/env-config.ts` - New centralized environment
configuration
- `.dockerignore` - Allow .env files in build context
- `.gitignore` - Updated to allow frontend/backend .env files
- Multiple frontend files - Updated to use env helpers
- Updates to both auto installer scripts to work with the latest setup!

## Benefits
- **Single command deployment**: `docker compose up` now runs everything
- **Better reliability**: Production build reduces memory usage and crashes
- **Network compatibility**: Proper container-to-container communication
- **Maintainable config**: Centralized environment variable management with .env files
- **Development friendly**: Works in both Docker and local development
- **API key management**: Easy configuration through .env files for all services
- **No more manual env vars**: Frontend and backend automatically load their respective .env files

## Testing
- Verified Docker service communication works correctly
- Frontend responds and serves content properly
- Environment variables are correctly resolved in both server and client contexts
- No connection errors after implementing service dependencies
- .env file loading works correctly in both build and runtime phases
- Backend services work with and without .env files present

### Checklist 📋

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
2025-08-14 03:28:18 +00:00
Zamil Majdy
af7d56612d fix(logging): remove uvicorn log config to prevent startup deadlock (#10638)
## Problem
After applying the CloudLoggingHandler fix to use
BackgroundThreadTransport (#10634), scheduler pods entered a new
deadlock during startup when uvicorn reconfigures logging.

## Root Cause
When uvicorn starts with a log_config parameter, it calls
`logging.config.dictConfig()` which:
1. Calls `_clearExistingHandlers()` 
2. Which calls `logging.shutdown()`
3. Which tries to `flush()` all handlers including CloudLoggingHandler
4. CloudLoggingHandler with BackgroundThreadTransport tries to flush its
queue
5. The background worker thread tries to acquire the logging module lock
to check log levels
6. **Deadlock**: shutdown holds lock waiting for flush to complete,
worker thread needs lock to continue

## Thread Dump Evidence
From py-spy analysis of the stuck pod:
- **Thread 21 (FastAPI)**: Stuck in `flush()` waiting for background
thread to drain queue
- **Thread 13 (google.cloud.logging.Worker)**: Waiting for logging lock
in `isEnabledFor()`
- **Thread 1 (MainThread)**: Waiting for logging lock in `getLogger()`
during SQLAlchemy import
- **Threads 30, 31 (Sentry)**: Also waiting for logging lock

## Solution
Set `log_config=None` for all uvicorn servers. This prevents uvicorn
from calling `dictConfig()` and avoids the deadlock entirely.

**Trade-off**: Uvicorn will use its default logging configuration which
may produce duplicate log entries (one from uvicorn, one from the app),
but the application will start successfully without deadlocks.
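
For reference, a minimal sketch of the change (host, port, and app are placeholders):

```python
import uvicorn
from fastapi import FastAPI

app = FastAPI()

# log_config=None stops uvicorn from calling logging.config.dictConfig(),
# which would trigger _clearExistingHandlers() -> logging.shutdown() ->
# handler flush and re-enter the CloudLoggingHandler deadlock above.
config = uvicorn.Config(app, host="0.0.0.0", port=8006, log_config=None)
server = uvicorn.Server(config)

if __name__ == "__main__":
    server.run()
```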

## Changes
- Set `log_config=None` in all uvicorn.Config() calls
- Remove unused `generate_uvicorn_config` imports

## Testing
- [x] Verified scheduler pods can start and become healthy
- [x] Health checks respond properly  
- [x] No deadlocks during startup
- [x] Application logs still appear (though may be duplicated)

## Related Issues
- Fixes the startup deadlock introduced after #10634
2025-08-14 05:31:47 +07:00
Dmitry
0dd30e275c docs(blocks): Add AI/ML API integration guide and update LLM headers (#10402)
### Summary
Added a new documentation page and images for integrating AI/ML API with
AutoGPT, including step-by-step instructions. Updated LLM block to send
additional headers for requests to aimlapi.com. Improved provider
listing in index.md and added the new guide to mkdocs navigation. Builds
on and extends the integration work from
https://github.com/Significant-Gravitas/AutoGPT/pull/9996


### Changes 🏗️

This PR introduces official support and documentation for using **AI/ML
API** with the **AutoGPT platform**:

* 📄 **Added a new documentation page** `platform/aimlapi.md` with a
detailed step-by-step integration guide.
* 🖼️ **Added 12+ reference images** to `docs/content/imgs/aimlapi/` for
clear visual walkthrough.
* 🧠 **Updated the LLM block** (`llm.py`) to send extra headers
(`X-Project`, `X-Title`, `Referer`) in requests to `aimlapi.com` for
analytics and source attribution.
* 📚 **Improved provider listing** in `index.md` — added section about
AI/ML API models and benefits.
* 🧭 **Added the new guide to the mkdocs navigation** via `mkdocs.yml`.

---

### Checklist 📋

#### For code changes:

* [x] I have clearly listed my changes in the PR description
* [x] I have made a test plan
* [x] I have tested my changes according to the test plan:

  * [x] Successfully authenticated against `api.aimlapi.com`
  * [x] Verified requests use correct headers
* [x] Confirmed `AI Text Generator` block returns completions for all
supported models
* [x] End-to-end tested: created, saved, and ran agent with AI/ML API
successfully
  * [x] Verified outputs render correctly in the Output panel


No breaking changes introduced. Let me know if you'd like this guide
cross-referenced from other onboarding pages. 

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-08-13 18:25:58 +00:00
Bently
2d436caa84 fix(backend/AM): Fix AutoMod api key issue (#10635)
### Changes 🏗️
Calls to the moderation API now strip whitespace from the API key before
including it in the 'X-API-Key' header, preventing authentication issues
due to accidental leading or trailing spaces.
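
Illustratively (the helper name is hypothetical, not the actual code):

```python
def automod_headers(api_key: str) -> dict[str, str]:
    # .strip() removes accidental leading/trailing whitespace that would
    # otherwise make X-API-Key authentication fail.
    return {"X-API-Key": api_key.strip()}
```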

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Setup and run the platform with moderation and test it works
2025-08-13 13:47:40 +00:00
Nicholas Tindle
793de77e76 ref(backend): update Gmail blocks to unify architecture and improve email handling (#10588)
## Summary
This PR refactors all Gmail blocks to share a common base class
(`GmailBase`) and adds several improvements to email handling, including
proper HTML content support, async API calls, and fixing the
78-character line wrapping issue for plain text emails.

## Changes

### Architecture Improvements
- **Unified base class**: Created `GmailBase` abstract class that
consolidates common functionality across all Gmail blocks
- **Async API calls**: Converted all Gmail API calls to use
`asyncio.to_thread` for better performance and non-blocking operations
- **Code deduplication**: Moved shared methods like `_build_service`,
`_get_email_body`, `_get_attachments`, and `_get_label_id` to the base
class

### Email Content Handling
- **Smart content type detection**: Added automatic detection of HTML vs
plain text content
- **Fix 78-char line wrapping**: Plain text emails now use a no-wrap
policy (`max_line_length=0`) to prevent Gmail's default 78-character
hard line wrapping
- **Content type parameter**: Added optional `content_type` field to
Send, Draft, Reply, and Forward blocks allowing manual override ("auto",
"plain", or "html")
- **Proper MIME handling**: Created `_make_mime_text` helper function to
properly configure MIME types and policies

### New Features
- **Gmail Forward Block**: Added new `GmailForwardBlock` for forwarding
emails with proper thread preservation
- **Reply improvements**: Reply block now properly reads the original
email content when replying

### Bug Fixes
- Fixed issue where reply block wasn't reading the email it was replying
to
- Fixed attachment handling in multipart messages
- Improved error handling for base64 decoding

## Technical Details

The refactoring introduces:
- `NO_WRAP_POLICY = SMTP.clone(max_line_length=0)` to prevent line
wrapping in plain text emails
- UTF-8 charset support for proper Unicode/emoji handling
- Consistent async patterns using `asyncio.to_thread` for all Gmail API
calls
- Proper HTML to text conversion using html2text library when available
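
A hedged sketch of the helper described above (detection heuristics are simplified; the real `_make_mime_text` may differ):

```python
from email.mime.text import MIMEText
from email.policy import SMTP

# Clone the SMTP policy with max_line_length=0 so plain-text bodies are
# not hard-wrapped at 78 characters when the message is serialized.
NO_WRAP_POLICY = SMTP.clone(max_line_length=0)

def make_mime_text(body: str, content_type: str = "auto") -> MIMEText:
    if content_type == "auto":
        # Crude HTML detection; the real implementation may be smarter.
        lowered = body.lower()
        content_type = "html" if "<html" in lowered or "<p>" in lowered else "plain"
    subtype = "html" if content_type == "html" else "plain"
    # UTF-8 charset keeps Unicode/emoji intact; the no-wrap policy only
    # matters for plain text but is harmless for HTML.
    return MIMEText(body, subtype, "utf-8", policy=NO_WRAP_POLICY)
```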

## Testing
All existing tests pass. The changes maintain backward compatibility
while adding new optional parameters.

## Breaking Changes
None - all changes are backward compatible. The new `content_type`
parameter is optional and defaults to "auto" detection.

---------

Co-authored-by: Claude <claude@users.noreply.github.com>
2025-08-13 02:17:10 +00:00
Zamil Majdy
a2059c6023 refactor(backend): consolidate LaunchDarkly feature flag management (#10632)
This PR consolidates LaunchDarkly feature flag management by moving it
from autogpt_libs to backend and fixing several issues with boolean
handling and configuration management.

### Changes 🏗️

**Code Structure:**
- Move LaunchDarkly client from `autogpt_libs/feature_flag` to
`backend/util/feature_flag.py`
- Delete redundant `config.py` file and merge LaunchDarkly settings into
`backend/util/settings.py`
- Update all imports throughout the codebase to use
`backend.util.feature_flag`
- Move test file to `backend/util/feature_flag_test.py`

**Bug Fixes:**
- Fix `is_feature_enabled` function to properly return boolean values
instead of arbitrary objects that were always evaluating to `True`
- Add proper async/await handling for all `is_feature_enabled` calls
- Add better error handling when LaunchDarkly client is not initialized

**Performance & Architecture:**
- Load Settings at module level instead of creating new instances inside
functions
- Remove unnecessary `sdk_key` parameter from
`initialize_launchdarkly()` function
- Simplify initialization by using centralized settings management

**Configuration:**
- Add `launch_darkly_sdk_key` field to `Secrets` class in settings.py
with proper validation alias
- Remove environment variable fallback in favor of centralized settings
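
To illustrate the boolean-handling fix above, a minimal sketch of what `is_feature_enabled` can look like (names and settings wiring are simplified, not the exact backend code):

```python
import logging

import ldclient
from ldclient import Context

logger = logging.getLogger(__name__)

async def is_feature_enabled(flag_key: str, user_id: str, default: bool = False) -> bool:
    client = ldclient.get()
    if not client.is_initialized():
        logger.warning("LaunchDarkly not initialized; defaulting %s", flag_key)
        return default
    value = client.variation(flag_key, Context.builder(user_id).build(), default)
    if not isinstance(value, bool):
        # Previously an arbitrary object could leak out here and always
        # evaluate truthy; coerce to the default with a warning instead.
        logger.warning("Flag %s returned non-boolean %r", flag_key, value)
        return default
    return value
```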

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All existing feature flag tests pass (6/6 tests passing)
  - [x] LaunchDarkly initialization works correctly with settings
  - [x] Boolean feature flags return correct values instead of objects
  - [x] Non-boolean flag values are properly handled with warnings
- [x] Async/await calls work correctly in AutoMod and activity status
generator
  - [x] Code formatting and imports are correct

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

**Configuration Changes:**
- LaunchDarkly SDK key is now managed through the centralized Settings
system instead of a separate config file
- Uses existing `LAUNCH_DARKLY_SDK_KEY` environment variable (no changes
needed to env files)

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-13 01:15:10 +00:00
Nicholas Tindle
b9c3920227 fix(backend): Support dynamic values_#_* fields in CreateDictionaryBlock (#10587)
## Summary

Fixed Smart Decision Maker's function signature generation to properly
handle dynamic fields (e.g., `values_#_*`, `items_$_*`) when connecting
to any block as a tool.

### Context

When Smart Decision Maker calls other blocks as tools, it needs to
generate OpenAI-compatible function signatures. Previously, when
connected to blocks via dynamic fields (which get merged by the executor
at runtime), the signature generation would fail because blocks don't
inherently know about these dynamic field patterns.

### Changes 🏗️

- **Modified
`SmartDecisionMakerBlock._create_block_function_signature()`** to detect
and handle dynamic fields:
- Detects fields containing `_#_` (dict merge), `_$_` (list merge), or
`_@_` (object merge)
- Provides generic string schema for dynamic fields (OpenAI API
compatible)
  - Falls back gracefully for unknown fields
- **Added comprehensive tests** for dynamic field handling with both
dictionary and list patterns
- **No changes needed to individual blocks** - this solution works
universally
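
A hedged sketch of the detection logic described above (`field_schema` is a hypothetical helper; the real `_create_block_function_signature` covers more cases):

```python
DYNAMIC_MARKERS = ("_#_", "_$_", "_@_")  # dict, list, and object merge patterns

def field_schema(field_name: str, block_schema: dict) -> dict:
    if any(marker in field_name for marker in DYNAMIC_MARKERS):
        # Dynamic fields are merged by the executor at runtime, so the
        # block's schema cannot describe them; fall back to a generic
        # string schema that the OpenAI function-calling API accepts.
        return {"type": "string", "description": f"Dynamic value for {field_name}"}
    # Known static field: use the block's own schema if present.
    return block_schema.get("properties", {}).get(field_name, {"type": "string"})
```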

### Why This Approach

Instead of modifying every block to handle dynamic fields (original PR
approach), we handle it centrally in Smart Decision Maker where the
function signatures are generated. This is cleaner and more
maintainable.

### Test Plan 📋

- [x] Created test cases for Smart Decision Maker generating function
signatures with dynamic dict fields (`_#_`)
- [x] Created test cases for Smart Decision Maker generating function
signatures with dynamic list fields (`_$_`)
- [x] Verified Smart Decision Maker can successfully call blocks like
CreateDictionaryBlock via dynamic connections
- [x] All existing Smart Decision Maker tests pass
- [x] Linting and formatting pass

---------

Co-authored-by: Claude <claude@users.noreply.github.com>
2025-08-12 22:59:56 +00:00
Zamil Majdy
abba10b649 feat(block): Remove parallel tool-call system prompting (#10627)
We're forcing this note onto the end of the SDM block's system prompt:
"Only provide EXACTLY one function call; multiple tool calls are
strictly prohibited." GPT-5 is interpreting this as "only call one tool
per task," which results in many agent runs that use a tool only once
(i.e., useless, low-effort answers).

### Changes 🏗️

Remove parallel tool-call system prompting entirely.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] automated tests.
2025-08-12 12:46:52 +00:00
Zamil Majdy
89eb5d1189 feat(feature-flag): add LaunchDarkly user context and metadata support (#10595)
## Summary

Enable LaunchDarkly feature flags to use rich user context and metadata
for advanced targeting, including user segments, account age, email
domains, and custom attributes. This unlocks LaunchDarkly's powerful
targeting capabilities beyond simple user ID checks.

## Problem

LaunchDarkly feature flags were only receiving basic user IDs,
preventing the use of:
- **Segment-based targeting** (e.g., "employees", "beta users", "new
accounts")
- **Contextual rules** (e.g., account age, email domain, custom
metadata)
- **Advanced LaunchDarkly features** like percentage rollouts by user
attributes

This limited feature flag flexibility and required manual user ID
management for targeting.

## Solution

### 🎯 **LaunchDarkly Context Enhancement**
- **Rich user context**: Send user metadata, segments, account age,
email domain to LaunchDarkly
- **Automatic segmentation**: Users automatically categorized as
"employee", "new_user", "established_user" etc.
- **Custom metadata support**: Any user metadata becomes available for
LaunchDarkly targeting
- **24-hour caching**: Efficient user context retrieval with TTL cache
to reduce database calls

### 📊 **User Context Data**
```python
# Before: Only user ID
context = Context.builder("user-123").build()

# After: Full context with targeting data
context = {
    "email": "user@agpt.co",
    "created_at": "2023-01-15T10:00:00Z",
    "segments": ["employee", "established_user"],
    "email_domain": "agpt.co", 
    "account_age_days": 365,
    "custom_role": "admin"
}
```

### 🏗️ **Required Infrastructure Changes**

To support proper LaunchDarkly serialization, we needed to implement
clean application models:

#### **Application-Layer User Model**
- Created snake_case User model (`created_at`, `email_verified`) for
proper JSON serialization
- LaunchDarkly expects consistent field naming - camelCase Prisma
objects caused validation errors
- Added `User.from_db()` converter to safely transform database objects

#### **HTTP Client Reliability**  
- Fixed HTTP 4xx retry issue that was causing unnecessary load
- Added layer validation to prevent database objects leaking to external
services

#### **Type Safety**
- Eliminated `Any` types and defensive coding patterns
- Proper typing enables better IDE support and catches errors early

## Technical Implementation

### **Core LaunchDarkly Enhancement**
```python
# autogpt_libs/feature_flag/client.py
@async_ttl_cache(maxsize=1000, ttl_seconds=86400)  # 24h cache
async def _fetch_user_context_data(user_id: str) -> dict[str, Any]:
    user = await get_user_by_id(user_id)
    return _build_launchdarkly_context(user)

def _build_launchdarkly_context(user: User) -> dict[str, Any]:
    return {
        "email": user.email,
        "created_at": user.created_at.isoformat(),  # snake_case for serialization
        "segments": determine_user_segments(user),
        "account_age_days": calculate_account_age(user),
        # ... more context data
    }
```

### **User Segmentation Logic**
- **Role-based**: `admin`, `user`, `system` segments
- **Domain-based**: `employee` for @agpt.co emails  
- **Account age**: `new_user` (<7 days), `recent_user` (7-30 days),
`established_user` (>30 days)
- **Custom metadata**: Any user metadata becomes available for targeting

### **Infrastructure Updates**
- `backend/data/model.py`: Application User model with proper
serialization
- `backend/util/service.py`: HTTP client improvements and layer
validation
- Multiple files: Migration to use application models for consistency

## LaunchDarkly Usage Examples

With this enhancement, you can now create LaunchDarkly rules like:

```yaml
# Target employees only
- variation: true
  targets:
    - values: ["employee"]
      contextKind: "user"
      attribute: "segments"

# Target new users for gradual rollout  
- variation: true
  rollout:
    variations:
      - variation: true
        weight: 25000  # 25% of new users
    contextKind: "user" 
    bucketBy: "segments"
    filters:
      - attribute: "segments"
        op: "contains"
        values: ["new_user"]
```

## Performance & Caching

- **24-hour TTL cache**: Dramatically reduces database calls for user
context
- **Graceful fallbacks**: Simple user ID context if database unavailable
- **Efficient caching**: 1000 entry LRU cache with automatic TTL
expiration

## Testing

- [x] LaunchDarkly context includes all expected user attributes
- [x] Segmentation logic correctly categorizes users
- [x] 24-hour cache reduces database load
- [x] Fallback to simple context works when database unavailable
- [x] All existing feature flag functionality preserved
- [x] HTTP retry improvements work correctly

## Breaking Changes

✅ **No external API changes** - all existing feature flag usage
continues to work

⚠️ **Internal changes only**:
- `get_user_by_id()` returns application User model instead of Prisma
model
- Test utilities need to import User from `backend.data.model`

## Impact

🎯 **Product Impact**:
- **Advanced targeting**: Product teams can now use sophisticated
LaunchDarkly rules
- **Better user experience**: Gradual rollouts, A/B testing, and
segment-based features
- **Operational efficiency**: Reduced need for manual user ID management

🚀 **Performance Impact**:
- **Reduced database load**: 24-hour caching minimizes repeated user
context queries
- **Improved reliability**: Fixed HTTP retry inefficiencies
- **Better monitoring**: Cleaner logs without 4xx retry noise

---

**Primary goal**: Enable rich LaunchDarkly targeting with user context
and segments
**Infrastructure changes**: Required for proper serialization and
reliability

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-12 05:25:56 +00:00
Bently
28d85ad61c feat(backend/AM): Integrate AutoMod content moderation (#10539)
Copy of [feat(backend/AM): Integrate AutoMod content moderation - By
Bentlybro - PR
#10490](https://github.com/Significant-Gravitas/AutoGPT/pull/10490)
because I messed up the original 🤦

Adds AutoMod input and output moderation to the execution flow.
Introduces a new AutoMod manager and models, updates settings for
moderation configuration, and modifies execution result handling to
support moderation-cleared data. Moderation failures now clear sensitive
data and mark executions as failed.

<img width="921" height="816" alt="image"
src="https://github.com/user-attachments/assets/65c0fee8-d652-42bc-9553-ff507bc067c5"
/>


### Changes 🏗️

I have made some small changes to
``autogpt_platform\backend\backend\executor\manager.py`` to send the
needed info to the AutoMod system, which collects the data, combines it,
and makes the API call to AutoMod; based on the reply, the run is
allowed to proceed or not.

I also had to make small changes to
``autogpt_platform\backend\backend\data\execution.py`` to add checks
that allow clearing the content from the blocks if it was flagged.

I am working on finalizing the AutoMod repo; it will then be made
public.

To note: we will want to put this behind LaunchDarkly first for testing
with the team before we roll it out any further.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Setup and run the platform with ``automod_enabled`` set to False
and it works normally
- [x] Setup and run the platform with ``automod_enabled`` set to True,
set the AM URL and API Key and test it runs safe blocks normally
- [x] Test AM with content that would trigger it to flag and watch it
stop and clear all the blocks outputs

Message @Bentlybro for the URL and an API key to AM for local testing!

## Changes made to Settings.py 

I have added a few new options to the settings.py for AutoMod Config!

```python
    # AutoMod configuration
    automod_enabled: bool = Field(
        default=False,
        description="Whether AutoMod content moderation is enabled",
    )
    automod_api_url: str = Field(
        default="",
        description="AutoMod API base URL - Make sure it ends in /api",
    )
    automod_timeout: int = Field(
        default=30,
        description="Timeout in seconds for AutoMod API requests",
    )
    automod_retry_attempts: int = Field(
        default=3,
        description="Number of retry attempts for AutoMod API requests",
    )
    automod_retry_delay: float = Field(
        default=1.0,
        description="Delay between retries for AutoMod API requests in seconds",
    )
    automod_fail_open: bool = Field(
        default=False,
        description="If True, allow execution to continue if AutoMod fails",
    )
    automod_moderate_inputs: bool = Field(
        default=True,
        description="Whether to moderate block inputs",
    )
    automod_moderate_outputs: bool = Field(
        default=True,
        description="Whether to moderate block outputs",
    )
```
and
```python
automod_api_key: str = Field(default="", description="AutoMod API key")
```

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-08-11 09:39:28 +00:00
Zamil Majdy
d4b5508ed1 fix(backend): resolve scheduler deadlock and improve health checks (#10589)
## Summary
Fix critical deadlock issue where scheduler pods would freeze completely
and become unresponsive to health checks, causing pod restarts and stuck
QUEUED executions.

## Root Cause Analysis
The scheduler was using `BlockingScheduler` which blocked the main
thread, and when concurrent jobs deadlocked in the async event loop, the
entire process would freeze - unable to respond to health checks or
process any requests.

From crash analysis:
- At 01:18:00, two jobs started executing concurrently
- At 01:18:01.482, last successful health check  
- Process completely froze - no more logs until pod was killed at
01:18:46
- Execution `8174c459-c975-4308-bc01-331ba67f26ab` was created in DB but
never published to RabbitMQ

## Changes Made

### Core Deadlock Fix
- **Switch from BlockingScheduler to BackgroundScheduler**: Prevents
main thread blocking, allows health checks to work even if scheduler
jobs deadlock
- **Make all health_check methods async**: Makes health checks
completely independent of thread pools and more resilient to blocking
operations
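
A minimal sketch of the switch (job wiring omitted; `health_check` is simplified):

```python
from apscheduler.schedulers.background import BackgroundScheduler

# BackgroundScheduler runs jobs on worker threads and returns control to
# the main thread immediately, so health checks stay responsive even if
# a job deadlocks; BlockingScheduler.start() would occupy the main thread.
scheduler = BackgroundScheduler()
scheduler.start()

async def health_check() -> str:
    # Async health check: independent of the scheduler's thread pools.
    return "OK"
```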

### Enhanced Monitoring & Debugging  
- **Add execution timing**: Track and log how long each graph execution
takes to create and publish
- **Warn on slow operations**: Alert when operations take >10 seconds,
indicating resource contention
- **Enhanced error logging**: Include elapsed time and exception types
in error messages
- **Better APScheduler event listeners**: Add listeners for missed jobs
and max instances with actionable messages

### Files Modified
- `backend/executor/scheduler.py` - Switch to BackgroundScheduler, async
health_check, timing monitoring
- `backend/util/service.py` - Base async health_check method
- `backend/executor/database.py` - Async health_check override  
- `backend/notifications/notifications.py` - Async health_check override

## Test Plan
- [x] All existing tests pass (914 passed, 1 failed unrelated connection
issue)
- [x] Scheduler starts correctly with BackgroundScheduler
- [x] Health checks respond properly under load
- [x] Enhanced logging provides visibility into execution timing

## Impact
- **Prevents pod freezes**: Scheduler remains responsive even when jobs
deadlock
- **Better observability**: Clear visibility into slow operations and
failures
- **No dropped executions**: Jobs won't get stuck in QUEUED state due to
process freezes
- **Faster incident response**: Health checks and logs provide
actionable debugging info

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-08-09 02:41:10 +00:00
Nicholas Tindle
0116866199 feat(backend): add more discord blocks support (#10586)
# Enhanced Discord Integration Blocks

Introduces new blocks for sending DMs, embeds, files, and replies in
Discord, as well as blocks for retrieving user and channel information.
Enhances existing message blocks with additional metadata fields and
server/channel identification. Improves test coverage and input/output
schemas for all Discord-related blocks.

Co-Authored-By: Claude <claude@users.noreply.github.com>

## Why These Changes Are Needed 🎯

The existing Discord integration was limited to basic message sending
and reading. Users needed more sophisticated Discord functionality to
build comprehensive automation workflows:

1. **Limited messaging options** - Could only send plain text to
channels, no DMs, embeds, or file attachments
2. **Poor graph connectivity** - Blocks didn't output IDs needed for
chaining operations (e.g., couldn't reply to a message after sending it)
3. **No user management** - Couldn't get user information or send direct
messages
4. **Type safety issues** - Discord.py's incomplete type hints caused
linting errors
5. **No channel resolution** - Had to manually find channel IDs instead
of using names

### Changes 🏗️

#### New Blocks Added
- **SendDiscordDMBlock** - Send direct messages to users via their
Discord ID
- **SendDiscordEmbedBlock** - Create rich embedded messages with images,
fields, and formatting
- **SendDiscordFileBlock** - Upload any file type (images, PDFs, videos,
etc.) using MediaFileType
- **ReplyToDiscordMessageBlock** - Reply to specific messages in threads
- **DiscordUserInfoBlock** - Retrieve user profile information
(username, avatar, creation date, etc.)
- **DiscordChannelInfoBlock** - Resolve channel names to IDs and get
channel metadata

#### Enhanced Existing Blocks
- **ReadDiscordMessagesBlock**:
- Now outputs: `message_id`, `channel_id`, `user_id` (previously missing
all IDs)
- Enables workflows like: read message → reply to it, or read message →
DM the author
  
- **SendDiscordMessageBlock**:
- Now outputs: `message_id`, `channel_id` (previously had no outputs
except status)
  - Enables tracking sent messages and replying to them later

#### Technical Improvements
- **MediaFileType Support**: SendDiscordFileBlock accepts data URIs,
URLs, or local paths
- **Defensive Programming**: Added runtime type checks for Discord.py's
incomplete typing
- **ID Passthrough**: DiscordUserInfoBlock passes through user_id for
chaining
- **Better Error Messages**: Clear feedback when operations fail (e.g.,
"Channel cannot receive messages")
- **Channel Flexibility**: Blocks accept both channel names and IDs
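
For a sense of the DM flow, a hedged discord.py sketch (the function name and wiring are illustrative; the actual block handles credentials and error cases differently):

```python
import discord

async def send_discord_dm(token: str, user_id: int, content: str) -> int:
    """Send a DM and return the message ID for downstream chaining."""
    client = discord.Client(intents=discord.Intents.default())
    await client.login(token)  # HTTP session only; no gateway connection needed
    try:
        user = await client.fetch_user(user_id)  # look up the target user by ID
        message = await user.send(content)       # opens the DM channel and sends
        return message.id                        # exposed as the message_id output
    finally:
        await client.close()
```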

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

#### Test Plan 🧪
- [x] **Import and initialization**: All 8 Discord blocks import and
initialize without errors
- [x] **Type checking**: `poetry run format` passes with no type errors
- [x] **Interface connectivity**: Verified blocks can chain together:
- [x] ReadDiscordMessages → ReplyToDiscordMessage (via message_id,
channel_id)
  - [x] ReadDiscordMessages → SendDiscordDM (via user_id)
- [x] SendDiscordMessage → ReplyToDiscordMessage (via message_id,
channel_id)
  - [x] DiscordUserInfo → SendDiscordDM (via user_id passthrough)
  - [x] DiscordChannelInfo → SendDiscordEmbed/File (via channel_id)
- [x] **MediaFileType handling**: SendDiscordFileBlock correctly
processes:
  - [x] Data URIs (base64 encoded files)
  - [x] URLs (downloads from web)
  - [x] Local paths (from other blocks)
- [x] **Defensive checks**: Verified error handling for:
  - [x] Non-text channels (forums, categories)
  - [x] Private/DM channels without guilds
  - [x] Missing attributes on channel objects
- [x] **Mock test data**: All blocks have appropriate test
inputs/outputs defined

## Example Workflows Now Possible 🚀

1. **Auto-reply to mentions**: Read messages → Check if bot mentioned →
Reply in thread
2. **File distribution**: Generate report → Send as PDF to Discord
channel
3. **User notifications**: Get user info → Check if online → Send DM
with alert
4. **Cross-platform sync**: Receive email attachment → Forward to
Discord channel
5. **Rich notifications**: Create embed with thumbnail → Add fields →
Send to announcement channel

## Breaking Changes ⚠️

None - all changes are backward compatible. Existing workflows using
SendDiscordMessageBlock and ReadDiscordMessagesBlock will continue to
work, they just now have additional outputs available.

## Dependencies 📦

No new dependencies added. Uses existing:
- `discord.py` (already in project)
- `aiohttp` (already in project)
- Backend utilities: `MediaFileType`, `store_media_file` (already in
project)

---------

Co-authored-by: Claude <claude@users.noreply.github.com>
2025-08-08 18:45:04 +00:00
Bently
b68e490868 fix(backend): correct LLM configurations (#10585)
## Summary
Corrects the context window for GPT5_CHAT, fixes provider for
CLAUDE_4_1_OPUS from 'openai' to 'anthropic', and adds a 600s timeout to
the Anthropic client call in llm_call.

## Changes 🏗️
- Reduced GPT5_CHAT's context window to 16k
- Changed CLAUDE_4_1_OPUS's provider from 'openai' to 'anthropic'
- Added a 600s timeout to the Anthropic client call (sketched below)
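
For reference, a hedged sketch of the timeout change (the model name and message shape are placeholders, not the block's exact call):

```python
from anthropic import AsyncAnthropic

client = AsyncAnthropic()  # reads ANTHROPIC_API_KEY from the environment

async def call_claude(prompt: str) -> str:
    response = await client.messages.create(
        model="claude-opus-4-1",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
        timeout=600,  # 600s cap so long generations cannot hang the caller
    )
    return response.content[0].text
```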

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] test all models and they work
2025-08-08 15:45:18 +00:00