Compare commits


121 Commits

Author SHA1 Message Date
Otto
a286b1d06e refactor: address review feedback
- Move helpers after search_agents function
- Simplify list_all logic by setting query='' early
- Update find_library_agent description to 'Search for or list'
2026-02-17 18:09:40 +00:00
Otto
2b0654b9e5 fix(copilot): handle 'all' keyword in find_library_agent tool
When users ask CoPilot to 'show all my agents', the LLM was passing
the literal string 'all' as a search query, which matched no agents.

Changes:
- Make query parameter optional in FindLibraryAgentTool
- Add _LIST_ALL_KEYWORDS set for special keywords ('all', '*', 'everything', 'any', '')
- When query matches a list-all keyword, pass None to list_library_agents
- Update response messages to reflect 'list all' vs 'search' behavior

Fixes SECRT-2002
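
A minimal sketch of the keyword handling, assuming a small helper that normalizes the query before the tool calls `list_library_agents` (helper name and signature are illustrative, not the exact tool code):

```python
_LIST_ALL_KEYWORDS = {"all", "*", "everything", "any", ""}

def resolve_query(query: str | None) -> str | None:
    """Map list-all keywords to None so list_library_agents lists everything."""
    if query is None or query.strip().lower() in _LIST_ALL_KEYWORDS:
        return None  # assumed convention: None means "list all" downstream
    return query.strip()
```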
2026-02-17 18:09:09 +00:00
Reinier van der Leer
d23248f065 feat(backend/copilot): Copilot Executor Microservice (#12057)
Uncouple Copilot task execution from the REST API server. This should
help performance and scalability, and allow task execution to continue
regardless of the state of the user's connection.

- Resolves #12023

### Changes 🏗️

- Add `backend.copilot.executor`->`CoPilotExecutor` (setup similar to
`backend.executor`->`ExecutionManager`).

This executor service uses RabbitMQ-based task distribution, and sticks
with the existing Redis Streams setup for task output. It uses a cluster
lock mechanism to ensure a task is only executed by one pod, and the
`DatabaseManager` for pooled DB access.

- Add `backend.data.db_accessors` for automatic choice of direct/proxied
DB access

Chat requests now flow: API → RabbitMQ → CoPilot Executor → Redis
Streams → SSE Client. This enables horizontal scaling of chat processing
and isolates long-running LLM operations from the API service.
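
A sketch of how the single-pod guarantee could look, assuming RabbitMQ delivery via `pika` and a Redis lock keyed by task ID (queue name, lock key, payload shape, and `run_copilot_task` are illustrative, not the actual executor code):

```python
import json

import pika
import redis

r = redis.Redis(host="redis", port=6379)

def handle_task(ch, method, properties, body):
    task = json.loads(body)  # assumed payload shape: {"id": ..., ...}
    # Cluster lock: only one executor pod may process a given task.
    lock = r.lock(f"copilot:task:{task['id']}", timeout=300)
    if not lock.acquire(blocking=False):
        ch.basic_ack(delivery_tag=method.delivery_tag)  # another pod owns it
        return
    try:
        run_copilot_task(task)  # hypothetical: streams output to Redis Streams
        ch.basic_ack(delivery_tag=method.delivery_tag)
    finally:
        lock.release()

conn = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = conn.channel()
channel.basic_consume(queue="copilot_tasks", on_message_callback=handle_task)
channel.start_consuming()
```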

- Move non-API Copilot code into `backend.copilot` (from
`backend.api.features.chat`)
  - Updated import paths for all usages

- Move `backend.executor.database` to `backend.data.db_manager` and add
methods for copilot executor
  - Updated import paths for all usages
- Make `backend.copilot.db` RPC-compatible (-> DB ops return ~~Prisma~~
Pydantic models)
  - Make `backend.data.workspace` RPC-compatible
  - Make `backend.data.graphs.get_store_listed_graphs` RPC-compatible

DX:
- Add `copilot_executor` service to Docker setup

Config:
- Add `Config.num_copilot_workers` (default 5) and
`Config.copilot_executor_port` (default 8008)
- Remove unused `Config.agent_server_port`

> [!WARNING]
> **This change adds a new microservice to the system, with entrypoint
`backend.copilot.executor`.**
> The `docker compose` setup has been updated, but if you run the
Platform on something else, you'll have to update your deployment config
to include this new service.
>
> When running locally, the `CoPilotExecutor` uses port 8008 by default.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Copilot works
    - [x] Processes messages when triggered
    - [x] Can use its tools

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2026-02-17 16:15:28 +00:00
Bently
905373a712 fix(frontend): use singleton Shiki highlighter for code syntax highlighting (#12144)
## Summary
Addresses SENTRY-1051: Shiki warning about multiple highlighter
instances.

## Problem
The `@streamdown/code` package creates a **new Shiki highlighter for
each language** encountered. When users view AI chat responses with code
blocks in multiple languages (JavaScript, Python, JSON, YAML, etc.),
this creates 10+ highlighter instances, triggering Shiki's warning:

> "10 instances have been created. Shiki is supposed to be used as a
singleton, consider refactoring your code to cache your highlighter
instance"

This causes memory bloat and performance degradation.

## Solution
Introduced a custom code highlighting plugin that properly implements
the singleton pattern:

### New files:
- `src/lib/shiki-highlighter.ts` - Singleton highlighter management
- `src/lib/streamdown-code-plugin.ts` - Drop-in replacement for
`@streamdown/code`

### Key features:
- **Single shared highlighter** - One instance serves all code blocks (sketched below)
- **Preloaded common languages** - JS, TS, Python, JSON, Bash, YAML,
etc.
- **Lazy loading** - Additional languages loaded on demand
- **Result caching** - Avoids re-highlighting identical code blocks
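
A minimal sketch of the singleton-plus-cache pattern (shown in Python for brevity; the shipped files are TypeScript, and the factory/loader calls here are hypothetical stand-ins for the Shiki API):

```python
from functools import lru_cache

_highlighter = None            # the one shared instance
_loaded_langs: set[str] = set()

def get_highlighter():
    """Create the expensive highlighter once, then reuse it."""
    global _highlighter
    if _highlighter is None:
        preloaded = ["js", "ts", "python", "json", "bash", "yaml"]
        _highlighter = create_highlighter(langs=preloaded)  # hypothetical factory
        _loaded_langs.update(preloaded)
    return _highlighter

@lru_cache(maxsize=512)
def highlight(code: str, lang: str):
    """Cache results so identical code blocks are only highlighted once."""
    hl = get_highlighter()
    if lang not in _loaded_langs:    # lazy-load uncommon languages on demand
        hl.load_language(lang)       # hypothetical loader
        _loaded_langs.add(lang)
    return hl.code_to_tokens(code, lang)  # hypothetical call
```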

### Changes:
- Added `shiki` as direct dependency
- Updated `message.tsx` to use the new plugin

## Testing
- [ ] Verify code blocks render correctly in AI chat
- [ ] Confirm no Shiki singleton warnings in console
- [ ] Test with multiple languages in same conversation

## Related
- Linear: SENTRY-1051
- Sentry: Multiple Shiki instances warning


<details><summary><h3>Greptile Summary</h3></summary>

Replaced `@streamdown/code` with a custom singleton-based Shiki
highlighter implementation to resolve memory bloat from creating
multiple highlighter instances per language. The new implementation
creates a single shared highlighter with preloaded common languages (JS,
TS, Python, JSON, etc.) and lazy-loads additional languages on demand.
Results are cached to avoid re-highlighting identical code blocks.

**Key changes:**
- Added `shiki` v3.21.0 as a direct dependency
- Created `shiki-highlighter.ts` with singleton pattern and language
management utilities
- Created `streamdown-code-plugin.ts` as a drop-in replacement for
`@streamdown/code`
- Updated `message.tsx` to import from the new plugin instead of
`@streamdown/code`

The implementation follows React best practices with async highlighting
and callback-based notifications. The cache key uses code length +
prefix/suffix for efficient lookups on large code blocks.
</details>


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- Safe to merge with minor considerations for edge cases
- The implementation is solid with proper singleton pattern, caching,
and async handling. The code is well-structured and addresses the stated
problem. However, there's a subtle potential race condition in the
callback handling where multiple concurrent requests for the same cache
key could trigger duplicate highlight operations before the first
completes. The cache key generation using prefix/suffix could
theoretically cause false cache hits for large files with identical
prefixes and suffixes. Despite these edge cases, the implementation
should work correctly for the vast majority of use cases.
- No files require special attention
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant UI as Streamdown Component
    participant Plugin as Custom Code Plugin
    participant Cache as Token Cache
    participant Singleton as Shiki Highlighter (Singleton)
    participant Callbacks as Pending Callbacks

    UI->>Plugin: highlight(code, lang)
    Plugin->>Cache: Check cache key
    
    alt Cache hit
        Cache-->>Plugin: Return cached result
        Plugin-->>UI: Return highlighted tokens
    else Cache miss
        Plugin->>Callbacks: Register callback
        Plugin->>Singleton: Get highlighter instance
        
        alt First call
            Singleton->>Singleton: Create highlighter with preloaded languages
        end
        
        Singleton-->>Plugin: Return highlighter
        
        alt Language not loaded
            Plugin->>Singleton: Load language dynamically
        end
        
        Plugin->>Singleton: codeToTokens(code, lang, themes)
        Singleton-->>Plugin: Return tokens
        Plugin->>Cache: Store result
        Plugin->>Callbacks: Notify all waiting callbacks
        Callbacks-->>UI: Async callback with result
    end
```
</details>


<sub>Last reviewed commit: 96c793b</sub>

2026-02-17 12:15:53 +00:00
Otto
ee9d39bc0f refactor(copilot): Replace legacy delete dialog with molecules/Dialog (#12136)
## Summary
Updates the session delete confirmation in CoPilot to use the new
`Dialog` component from `molecules/Dialog` instead of the legacy
`DeleteConfirmDialog`.

## Changes
- **ChatSidebar**: Use Dialog component for delete confirmation
(desktop)
- **CopilotPage**: Use Dialog component for delete confirmation (mobile)

## Behavior
- Dialog stays **open** during deletion with loading state on button
- Cancel button **disabled** while delete is in progress
- Delete button shows **loading spinner** during deletion
- Dialog only closes on successful delete or when cancel is clicked (if
not deleting)

## Screenshots
*Dialog uses the same styling as other molecules/Dialog instances in the
app*

## Requested by
@0ubbe


<details><summary><h3>Greptile Summary</h3></summary>

Replaces the legacy `DeleteConfirmDialog` component with the new
`molecules/Dialog` component for session delete confirmations in both
desktop (ChatSidebar) and mobile (CopilotPage) views. The new
implementation maintains the same behavior: dialog stays open during
deletion with a loading state on the delete button and disabled cancel
button, closing only on successful deletion or cancel click.
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with minimal risk
- This is a straightforward component replacement that maintains the
same behavior and UX. The Dialog component API is properly used with
controlled state, the loading states are correctly implemented, and both
mobile and desktop views are handled consistently. The changes are
well-tested patterns used elsewhere in the codebase.
- No files require special attention
</details>


<details><summary><h3>Flowchart</h3></summary>

```mermaid
flowchart TD
    A[User clicks delete button] --> B{isMobile?}
    B -->|Yes| C[CopilotPage Dialog]
    B -->|No| D[ChatSidebar Dialog]
    
    C --> E[Set sessionToDelete state]
    D --> E
    
    E --> F[Dialog opens with controlled.isOpen]
    F --> G{User action?}
    
    G -->|Cancel| H{isDeleting?}
    H -->|No| I[handleCancelDelete: setSessionToDelete null]
    H -->|Yes| J[Cancel button disabled]
    
    G -->|Confirm Delete| K[handleConfirmDelete called]
    K --> L[deleteSession mutation]
    L --> M[isDeleting = true]
    M --> N[Button shows loading spinner]
    M --> O[Cancel button disabled]
    
    L --> P{Mutation result?}
    P -->|Success| Q[Invalidate sessions query]
    Q --> R[Clear sessionId if current]
    R --> S[setSessionToDelete null]
    S --> T[Dialog closes]
    
    P -->|Error| U[Show toast error]
    U --> V[setSessionToDelete null]
    V --> W[Dialog closes]
```
</details>


<sub>Last reviewed commit: 275950c</sub>


---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2026-02-17 19:12:27 +07:00
Swifty
05aaf7a85e fix(backend): Rename LINEAR_API_KEY to COPILOT_LINEAR_API_KEY to prevent global access (#12143)
The `LINEAR_API_KEY` environment variable name is too generic — it
matches the key name used by integrations/blocks, meaning that if set
globally, it could inadvertently grant all users access to Linear
through the blocks system rather than restricting it to the copilot
feature-request tool.

This renames the setting to `COPILOT_LINEAR_API_KEY` to make it clear
this key is scoped exclusively to the copilot's feature-request
functionality, preventing it from being picked up as a general-purpose
Linear credential.
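
A sketch of the scoped setting, assuming a pydantic-settings model where the field name maps to the environment variable (field shape is illustrative):

```python
from pydantic_settings import BaseSettings

class Secrets(BaseSettings):
    # pydantic-settings matches the env var by field name (case-insensitive),
    # so this reads COPILOT_LINEAR_API_KEY and ignores a bare LINEAR_API_KEY
    # that may be set for the Linear blocks integration.
    copilot_linear_api_key: str = ""
```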

### Changes 🏗️

- Renamed `linear_api_key` → `copilot_linear_api_key` in `Secrets`
settings model (`backend/util/settings.py`)
- Updated all references in the copilot feature-request tool
(`backend/api/features/chat/tools/feature_requests.py`)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified the rename is consistent across all references (settings
+ feature_requests tool)
  - [x] No other files reference the old `linear_api_key` setting name

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

> **Note:** The env var changes from `LINEAR_API_KEY` to
`COPILOT_LINEAR_API_KEY`. Any deployment using the old name will need to
update accordingly.


<details><summary><h3>Greptile Summary</h3></summary>

Renamed `LINEAR_API_KEY` to `COPILOT_LINEAR_API_KEY` in settings and the
copilot feature-request tool to prevent unintended access through Linear
blocks.

**Key changes:**
- Updated `Secrets.linear_api_key` → `Secrets.copilot_linear_api_key` in
`backend/util/settings.py`
- Updated all references in
`backend/api/features/chat/tools/feature_requests.py`
- The rename prevents the copilot Linear key from being picked up by the
Linear blocks integration (which uses `LINEAR_API_KEY` via
`ProviderBuilder` in `backend/blocks/linear/_config.py`)

**Issues found:**
- `.env.default` still references `LINEAR_API_KEY` instead of
`COPILOT_LINEAR_API_KEY`
- Frontend styleguide has a hardcoded error message with the old
variable name
</details>


<details><summary><h3>Confidence Score: 3/5</h3></summary>

- Generally safe but requires fixing `.env.default` before deployment
- The code changes are correct and achieve the intended security
improvement by preventing scope leakage. However, the PR is incomplete -
`.env.default` wasn't updated (critical for deployment) and a frontend
error message reference was missed. These issues will cause
configuration problems for anyone deploying with the new variable name.
- Check `autogpt_platform/backend/.env.default` and
`autogpt_platform/frontend/src/app/(platform)/copilot/styleguide/page.tsx`
- both need updates to match the renamed variable
</details>


<details><summary><h3>Flowchart</h3></summary>

```mermaid
flowchart TD
    A[".env file<br/>COPILOT_LINEAR_API_KEY"] --> B["Secrets model<br/>copilot_linear_api_key"]
    B --> C["feature_requests.py<br/>_get_linear_config()"]
    C --> D["Creates APIKeyCredentials<br/>for copilot feature requests"]
    
    E[".env file<br/>LINEAR_API_KEY"] --> F["ProviderBuilder<br/>in blocks/linear/_config.py"]
    F --> G["Linear blocks integration<br/>for user workflows"]
    
    style A fill:#90EE90
    style B fill:#90EE90
    style C fill:#90EE90
    style D fill:#90EE90
    style E fill:#FFD700
    style F fill:#FFD700
    style G fill:#FFD700
```
</details>


<sub>Last reviewed commit: 86dc57a</sub>

2026-02-17 11:16:43 +01:00
Reinier van der Leer
9d4dcbd9e0 fix(backend/docker): Make server last (= default) build stage
Without an explicit build target, Docker builds the last stage in the Dockerfile, which was `migrate`; this caused deployment failures.

- Follow-up to #12124 and 074be7ae
2026-02-16 14:49:30 +01:00
Reinier van der Leer
074be7aea6 fix(backend/docker): Update run commands to match deployment
- Follow-up to #12124

Changes:
- Update `run` commands for all backend services in `docker-compose.platform.yml` to match the deployment commands used in production
- Add trigger on `docker-compose(.platform)?.yml` changes to the Frontend CI workflow
2026-02-16 14:23:29 +01:00
Otto
39d28b24fc ci(backend): Upgrade RabbitMQ from 3.12 (EOL) to 4.1.4 (#12118)
## Summary
Upgrades RabbitMQ from the end-of-life `rabbitmq:3.12-management` to
`rabbitmq:4.1.4`, aligning CI, local dev, and e2e testing with
production.

## Changes

### CI Workflow (`.github/workflows/platform-backend-ci.yml`)
- **Image:** `rabbitmq:3.12-management` → `rabbitmq:4.1.4`
- **Port:** Removed 15672 (management UI) — not used
- **Health check:** Added to prevent flaky tests from race conditions
during startup

### Docker Compose (`docker-compose.platform.yml`,
`docker-compose.test.yaml`)
- **Image:** `rabbitmq:management` → `rabbitmq:4.1.4`
- **Port:** Removed 15672 (management UI) — not used

## Why
- RabbitMQ 3.12 is EOL
- We don't use the management interface, so the `-management` variant is
unnecessary
- CI and local dev/e2e should match production (4.1.4)

## Testing
CI validates that backend tests pass against RabbitMQ 4.1.4 on Python
3.11, 3.12, and 3.13.

---
Closes SECRT-1703
2026-02-16 12:45:39 +00:00
Reinier van der Leer
bf79a7748a fix(backend/build): Update stale Poetry usage in Dockerfile (#12124)
[SECRT-2006: Dev deployment failing: poetry not found in container
PATH](https://linear.app/autogpt/issue/SECRT-2006)

- Follow-up to #12090

### Changes 🏗️

- Remove now-broken Poetry path config values
- Remove usage of now-broken `poetry run` in container run command
- Add trigger on `backend/Dockerfile` changes to Frontend CI workflow

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - If it works, CI will pass
2026-02-16 13:54:20 +01:00
Otto
649d4ab7f5 feat(chat): Add delete chat session endpoint and UI (#12112)
## Summary

Adds the ability to delete chat sessions from the CoPilot interface.

## Changes

### Backend
- Add `DELETE /api/chat/sessions/{session_id}` endpoint in `routes.py` (sketched below)
- Returns 204 on success, 404 if not found or not owned by user
- Reuses existing `delete_chat_session` function from `model.py`
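
A sketch of the endpoint shape, assuming FastAPI conventions used elsewhere in the codebase (the auth dependency name is illustrative):

```python
from fastapi import APIRouter, Depends, HTTPException

router = APIRouter()

@router.delete("/sessions/{session_id}", status_code=204)
async def delete_session(
    session_id: str,
    user_id: str = Depends(get_user_id),  # hypothetical auth dependency
):
    # Reuses the existing model function; scoped to the requesting user.
    deleted = await delete_chat_session(session_id, user_id)
    if not deleted:
        raise HTTPException(status_code=404, detail="Chat session not found")
```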

### Frontend
- Add delete button (trash icon) that appears on hover for each chat
session
- Add confirmation dialog before deletion using existing
`DeleteConfirmDialog` component
- Refresh session list after successful delete
- Clear current session selection if the deleted session was active
- Update OpenAPI spec with new endpoint

## Testing

1. Hover over a chat session in sidebar → trash icon appears
2. Click trash icon → confirmation dialog
3. Confirm deletion → session removed, list refreshes
4. If deleted session was active, selection is cleared

## Screenshots

Delete button appears on hover, confirmation dialog on click.

## Related Issues

Closes SECRT-1928


<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Adds the ability to delete chat sessions from the CoPilot interface — a
new `DELETE /api/chat/sessions/{session_id}` backend endpoint and a
corresponding delete button with confirmation dialog in the
`ChatSidebar` frontend component.

- **Backend route** (`routes.py`): Clean implementation reusing the
existing `delete_chat_session` model function with proper auth guards
and 204/404 responses. No issues.
- **Frontend** (`ChatSidebar.tsx`): Adds hover-visible trash icon per
session, confirmation dialog, mutation with cache invalidation, and
active session clearing on delete. However, it uses a `__legacy__`
component (`DeleteConfirmDialog`) which violates the project's style
guide — new code should use the modern design system components. Error
handling only logs to console without user-facing feedback (project
convention is to use toast notifications for mutation errors).
`isDeleting` is destructured but unused.
- **OpenAPI spec** updated correctly.
- **Unrelated file included**:
`notes/plan-SECRT-1959-graph-edge-desync.md` is a planning document for
a different ticket and should be removed from this PR. The `notes/`
directory is newly introduced and both plan files should be reconsidered
for inclusion.
</details>


<details><summary><h3>Confidence Score: 3/5</h3></summary>

- Functionally correct but has style guide violations and includes
unrelated files that should be addressed before merge.
- The core feature implementation (backend DELETE endpoint and frontend
mutation logic) is sound and follows existing patterns. Score is lowered
because: (1) the frontend uses a legacy component explicitly prohibited
by the project's style guide, (2) mutation errors are not surfaced to
the user, and (3) the PR includes an unrelated planning document for a
different ticket.
- Pay close attention to `ChatSidebar.tsx` for the legacy component
import and error handling, and
`notes/plan-SECRT-1959-graph-edge-desync.md` which should be removed.
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant ChatSidebar as ChatSidebar (Frontend)
    participant ReactQuery as React Query
    participant API as DELETE /api/chat/sessions/{id}
    participant Model as model.delete_chat_session
    participant DB as db.delete_chat_session (Prisma)
    participant Redis as Redis Cache

    User->>ChatSidebar: Click trash icon on session
    ChatSidebar->>ChatSidebar: Show DeleteConfirmDialog
    User->>ChatSidebar: Confirm deletion
    ChatSidebar->>ReactQuery: deleteSession({ sessionId })
    ReactQuery->>API: DELETE /api/chat/sessions/{session_id}
    API->>Model: delete_chat_session(session_id, user_id)
    Model->>DB: delete_many(where: {id, userId})
    DB-->>Model: bool (deleted count > 0)
    Model->>Redis: Delete session cache key
    Model->>Model: Clean up session lock
    Model-->>API: True
    API-->>ReactQuery: 204 No Content
    ReactQuery->>ChatSidebar: onSuccess callback
    ChatSidebar->>ReactQuery: invalidateQueries(sessions list)
    ChatSidebar->>ChatSidebar: Clear sessionId if deleted was active
```
</details>


<sub>Last reviewed commit: 44a92c6</sub>


---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2026-02-16 12:19:18 +00:00
Ubbe
223df9d3da feat(frontend): improve create/edit copilot UX (#12117)
## Changes 🏗️

Make the UX nicer when running long tasks in Copilot, like creating an
agent, editing it or running a task.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and play the game!


<details><summary><h3>Greptile Summary</h3></summary>

This PR replaces the static progress bar and idle wait screens with an
interactive mini-game across the Create, Edit, and Run Agent copilot
tools. The existing mini-game (a simple runner with projectile-dodge
boss encounters) is significantly overhauled into a two-mode game: a
runner mode with animated tree obstacles and a duel mode featuring a
melee boss fight with attack, guard, and movement mechanics.
Sprite-based rendering replaces the previous shape-drawing approach.

- **Create/Edit/Run Agent UX**: All three tool views now show the
mini-game with contextual overlays during long-running operations,
replacing the progress bar in EditAgent and adding the game to RunAgent
- **Game mechanics overhaul**: Boss encounters changed from
projectile-dodging to melee duel with attack (Z), block (X), movement
(arrows), and jump (Space) controls
- **Sprite rendering**: Added 9 sprite sheet assets for characters,
trees, and boss animations with fallback to shape rendering if images
fail to load
- **UI overlays**: Added React-managed overlay states for idle,
boss-intro, boss-defeated, and game-over screens with continue/retry
buttons
- **Minor issues found**: Unused `isRunActive` variable in
`MiniGame.tsx`, unreachable "leaving" boss phase in `useMiniGame.ts`,
and a missing `expanded` property in `getAccordionMeta` return type
annotation in `EditAgent.tsx`
- **Unused asset**: `archer-shoot.png` is included in the PR but never
imported or referenced in any code
</details>


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- This PR is safe to merge — it only affects the copilot mini-game UX
with no backend or data model changes.
- The changes are entirely frontend/cosmetic, scoped to the copilot
tools' waiting UX. The mini-game logic is self-contained in a
canvas-based hook and doesn't affect any application state, API calls,
or routing. The issues found are minor (unused variable, dead code, type
annotation gap, unused asset) and don't impact runtime behavior.
- `useMiniGame.ts` has the most complex logic changes (boss AI, death
animations, sprite rendering) and contains unreachable dead code in the
"leaving" phase handler. `EditAgent.tsx` has a return type annotation
that doesn't include `expanded`.
</details>


<details><summary><h3>Flowchart</h3></summary>

```mermaid
flowchart TD
    A[Game Idle] -->|"Start button"| B[Run Mode]
    B -->|"Jump over trees"| C{Score >= Threshold?}
    C -->|No| B
    C -->|"Yes, obstacles clear"| D[Boss Intro Overlay]
    D -->|"Continue button"| E[Duel Mode]
    E -->|"Attack Z / Guard X / Move ←→"| F{Boss HP <= 0?}
    F -->|No| G{Player hit & not guarding?}
    G -->|No| E
    G -->|Yes| H[Player Death Animation]
    H --> I[Game Over Overlay]
    I -->|"Retry button"| B
    F -->|Yes| J[Boss Death Animation]
    J --> K[Boss Defeated Overlay]
    K -->|"Continue button"| L[Reset Boss & Resume Run]
    L --> B
```
</details>


<sub>Last reviewed commit: ad80e24</sub>

2026-02-16 10:53:08 +00:00
Ubbe
187ab04745 refactor(frontend): remove OldAgentLibraryView and NEW_AGENT_RUNS flag (#12088)
## Summary
- Removes the deprecated `OldAgentLibraryView` directory (13 files,
~2200 lines deleted)
- Removes the `NEW_AGENT_RUNS` feature flag from the `Flag` enum and
defaults
- Removes the legacy agent library page at `library/legacy/[id]`
- Moves shared `CronScheduler` components to
`src/components/contextual/CronScheduler/`
- Moves `agent-run-draft-view` and `agent-status-chip` to
`legacy-builder/` (co-located with their only consumer)
- Updates all import paths in consuming files (`AgentInfoStep`,
`SaveControl`, `RunnerInputUI`, `useRunGraph`)

## Test plan
- [x] `pnpm format` passes
- [x] `pnpm types` passes (no TypeScript errors)
- [x] No remaining references to `OldAgentLibraryView`,
`NEW_AGENT_RUNS`, or `new-agent-runs` in the codebase
- [x] Verify `RunnerInputUI` dialog still works in the legacy builder
- [x] Verify `AgentInfoStep` cron scheduling works in the publish modal
- [x] Verify `SaveControl` cron scheduling works in the legacy builder

🤖 Generated with [Claude Code](https://claude.com/claude-code)


<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

This PR removes deprecated code from the legacy agent library view
system and consolidates the codebase to use the new agent runs
implementation exclusively. The refactor successfully removes ~2200
lines of code across 13 deleted files while properly relocating shared
components.

**Key changes:**
- Removed the entire `OldAgentLibraryView` directory and its 13
component files
- Removed the `NEW_AGENT_RUNS` feature flag from the `Flag` enum and
defaults
- Deleted the legacy agent library page route at `library/legacy/[id]`
- Moved `CronScheduler` components to
`src/components/contextual/CronScheduler/` for shared use across the
application
- Moved `agent-run-draft-view` and `agent-status-chip` to
`legacy-builder/` directory, co-locating them with their only consumer
- Updated `useRunGraph.ts` to import `GraphExecutionMeta` from the
generated API models instead of the deleted custom type definition
- Updated all import paths in consuming components (`AgentInfoStep`,
`SaveControl`, `RunnerInputUI`)

**Technical notes:**
- The new import path for `GraphExecutionMeta`
(`@/app/api/__generated__/models/graphExecutionMeta`) will be generated
when running `pnpm generate:api` from the OpenAPI spec
- All references to the old code have been cleanly removed from the
codebase
- The refactor maintains proper separation of concerns by moving shared
components to contextual locations
</details>


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- This PR is safe to merge with minimal risk, pending manual
verification of the UI components mentioned in the test plan
- The refactor is well-structured and all code changes are correct. The
score of 4 (rather than 5) reflects that the PR author has marked three
manual testing items as incomplete in the test plan: verifying
`RunnerInputUI` dialog, `AgentInfoStep` cron scheduling, and
`SaveControl` cron scheduling. While the code changes are sound, these
UI components should be manually tested before merging to ensure the
moved components work correctly in their new locations.
- No files require special attention. The author should complete the
manual testing checklist items for `RunnerInputUI`, `AgentInfoStep`, and
`SaveControl` as noted in the test plan.
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant FE as Frontend Build
    participant API as Backend API
    participant Gen as Generated Types

    Note over Dev,Gen: Refactor: Remove OldAgentLibraryView & NEW_AGENT_RUNS flag

    Dev->>FE: Delete OldAgentLibraryView (13 files, ~2200 lines)
    Dev->>FE: Remove NEW_AGENT_RUNS from Flag enum
    Dev->>FE: Delete library/legacy/[id]/page.tsx
    
    Dev->>FE: Move CronScheduler → src/components/contextual/
    Dev->>FE: Move agent-run-draft-view → legacy-builder/
    Dev->>FE: Move agent-status-chip → legacy-builder/
    
    Dev->>FE: Update RunnerInputUI import path
    Dev->>FE: Update SaveControl import path
    Dev->>FE: Update AgentInfoStep import path
    
    Dev->>FE: Update useRunGraph.ts
    FE->>Gen: Import GraphExecutionMeta from generated models
    Note over Gen: Type available after pnpm generate:api
    
    Gen-->>API: Uses OpenAPI spec schema
    API-->>FE: Type-safe GraphExecutionMeta model
```
</details>



Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-16 18:29:59 +08:00
Abhimanyu Yadav
e2d3c8a217 fix(frontend): Prevent node drag when selecting text in object editor key input (#11955)
## Summary
- Add `nodrag` class to the key name input wrapper in
`WrapIfAdditionalTemplate.tsx`
- This prevents the node from being dragged when users try to select
text in the key name input field
- Follows the same pattern used by other input components like
`TextWidget.tsx`

## Test plan
- [x] Open the new builder
- [x] Add a custom node with an Object input field
- [x] Try to select text in the key name input by clicking and dragging
- [x] Verify that text selection works without moving the block

Co-authored-by: Claude <noreply@anthropic.com>
2026-02-16 06:59:33 +00:00
Eve
647c8ed8d4 feat(backend/blocks): enhance list concatenation with advanced operations (#12105)
## Summary

Enhances the existing `ConcatenateListsBlock` and adds five new
companion blocks for comprehensive list manipulation, addressing issue
#11139 ("Implement block to concatenate lists").

### Changes

- **Enhanced `ConcatenateListsBlock`** with optional deduplication
(`deduplicate`) and None-value filtering (`remove_none`), plus an output
`length` field
- **New `FlattenListBlock`**: Recursively flattens nested list
structures with configurable `max_depth`
- **New `InterleaveListsBlock`**: Round-robin interleaving of elements
from multiple lists
- **New `ZipListsBlock`**: Zips corresponding elements from multiple
lists with support for padding to longest or truncating to shortest
- **New `ListDifferenceBlock`**: Computes set difference between two
lists (regular or symmetric)
- **New `ListIntersectionBlock`**: Finds common elements between two
lists, preserving order

### Helper Utilities

Extracted reusable helper functions for validation, flattening,
deduplication, interleaving, chunking, and statistics computation to
support the blocks and enable future reuse.
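
For instance, the dedup helper can reduce each item to a stable key, falling back to `hash(repr(item))` for unhashable values (a sketch consistent with the behavior described here; names are illustrative):

```python
def _make_hashable(item):
    """Return a hashable key; structurally equal dicts/lists intentionally collide."""
    try:
        hash(item)
        return item
    except TypeError:
        return hash(repr(item))

def _deduplicate_list(items: list) -> list:
    """Drop duplicates while preserving first-seen order."""
    seen = set()
    result = []
    for item in items:
        key = _make_hashable(item)
        if key not in seen:
            seen.add(key)
            result.append(item)
    return result
```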

### Test Coverage

Comprehensive test suite with 188 test functions across 29 test classes
covering:
- Built-in block test harness validation for all 6 blocks
- Manual edge-case tests for each block (empty inputs, large lists,
mixed types, nested structures)
- Internal method tests for all block classes
- Unit tests for all helper utility functions

Closes #11139

## Test plan

- [x] All files pass Python syntax validation (`ast.parse`)
- [x] Built-in `test_input`/`test_output` tests defined for all blocks
- [x] Manual tests cover edge cases: empty lists, large lists, mixed
types, nested structures, deduplication, None removal
- [x] Helper function tests validate all utility functions independently
- [x] All block IDs are valid UUID4
- [x] Block categories set to `BlockCategory.BASIC` for consistency with
existing list blocks



<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Enhanced `ConcatenateListsBlock` with deduplication and None-filtering
options, and added five new list manipulation blocks
(`FlattenListBlock`, `InterleaveListsBlock`, `ZipListsBlock`,
`ListDifferenceBlock`, `ListIntersectionBlock`) with comprehensive
helper functions and test coverage.

**Key Changes:**
- Enhanced `ConcatenateListsBlock` with `deduplicate` and `remove_none`
options, plus `length` output field
- Added `FlattenListBlock` for recursively flattening nested lists with
configurable `max_depth`
- Added `InterleaveListsBlock` for round-robin element interleaving
- Added `ZipListsBlock` with support for padding/truncation
- Added `ListDifferenceBlock` and `ListIntersectionBlock` for set
operations
- Extracted 12 reusable helper functions for validation, flattening,
deduplication, etc.
- Comprehensive test suite with 188 test functions covering edge cases

**Minor Issues:**
- Helper function `_deduplicate_list` has redundant logic in the `else`
branch that duplicates the `if` branch
- Three helper functions (`_filter_empty_collections`,
`_compute_list_statistics`, `_chunk_list`) are defined but unused -
consider removing unless planned for future use
- The `_make_hashable` function uses `hash(repr(item))` for unhashable
types, which correctly treats structurally identical dicts/lists as
duplicates
</details>


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- Safe to merge with minor style improvements recommended
- The implementation is well-structured with comprehensive test coverage
(188 tests), proper error handling, and follows existing block patterns.
All blocks use valid UUID4 IDs and correct categories. The helper
functions provide good code reuse. The minor issues are purely stylistic
(redundant code, unused helpers) and don't affect functionality or
safety.
- No files require special attention - both files are well-tested and
follow project conventions
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant Block as List Block
    participant Helper as Helper Functions
    participant Output
    
    User->>Block: Input (lists/parameters)
    Block->>Helper: _validate_all_lists()
    Helper-->>Block: validation result
    
    alt validation fails
        Block->>Output: error message
    else validation succeeds
        Block->>Helper: _concatenate_lists_simple() / _flatten_nested_list() / etc.
        Helper-->>Block: processed result
        
        opt deduplicate enabled
            Block->>Helper: _deduplicate_list()
            Helper-->>Block: deduplicated result
        end
        
        opt remove_none enabled
            Block->>Helper: _filter_none_values()
            Helper-->>Block: filtered result
        end
        
        Block->>Output: result + length
    end
    
    Output-->>User: Block outputs
```
</details>


<sub>Last reviewed commit: a6d5445</sub>


---------

Co-authored-by: Otto <otto@agpt.co>
2026-02-16 05:39:53 +00:00
Zamil Majdy
27d94e395c feat(backend/sdk): enable WebSearch, block WebFetch, consolidate tool constants (#12108)
## Summary
- Enable Claude Agent SDK built-in **WebSearch** tool (Brave Search via
Anthropic API) for the CoPilot SDK agent
- Explicitly **block WebFetch** via `SDK_DISALLOWED_TOOLS`. The agent
uses the SSRF-protected `mcp__copilot__web_fetch` MCP tool instead
- **Consolidate** all tool security constants (`BLOCKED_TOOLS`,
`WORKSPACE_SCOPED_TOOLS`, `DANGEROUS_PATTERNS`, `SDK_DISALLOWED_TOOLS`)
into `tool_adapter.py` as a single source of truth — previously
scattered across `tool_adapter.py`, `security_hooks.py`, and inline in
`service.py`

## Changes
- `tool_adapter.py`: Add `WebSearch` to `_SDK_BUILTIN_TOOLS`, add
`SDK_DISALLOWED_TOOLS`, move security constants here
- `security_hooks.py`: Import constants from `tool_adapter.py` instead
of defining locally
- `service.py`: Use `SDK_DISALLOWED_TOOLS` instead of inline `["Bash"]`

## Test plan
- [x] All 21 security hooks tests pass
- [x] Ruff lint clean
- [x] All pre-commit hooks pass
- [ ] Verify WebSearch works in CoPilot chat (manual test)


<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Consolidates tool security constants into `tool_adapter.py` as single
source of truth, enables WebSearch (Brave via Anthropic API), and
explicitly blocks WebFetch to prevent SSRF attacks. The change improves
security by ensuring the agent uses the SSRF-protected
`mcp__copilot__web_fetch` tool instead of the built-in WebFetch which
can access internal networks like `localhost:8006`.
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with minimal risk
- The changes improve security by blocking WebFetch (SSRF risk) while
enabling safe WebSearch. The consolidation of constants into a single
source of truth improves maintainability. All existing tests pass (21
security hooks tests), and the refactoring is straightforward with no
behavioral changes to existing security logic. The only suggestions are
minor improvements: adding a test for WebFetch blocking and considering
a lowercase alias for consistency.
- No files require special attention
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant Agent as SDK Agent
    participant Hooks as Security Hooks
    participant TA as tool_adapter.py
    participant MCP as MCP Tools
    
    Note over TA: SDK_DISALLOWED_TOOLS = ["Bash", "WebFetch"]
    Note over TA: _SDK_BUILTIN_TOOLS includes WebSearch
    
    Agent->>Hooks: Request WebSearch (Brave API)
    Hooks->>TA: Check BLOCKED_TOOLS
    TA-->>Hooks: Not blocked
    Hooks-->>Agent: Allowed ✓
    Agent->>Agent: Execute via Anthropic API
    
    Agent->>Hooks: Request WebFetch (SSRF risk)
    Hooks->>TA: Check BLOCKED_TOOLS
    Note over TA: WebFetch in SDK_DISALLOWED_TOOLS
    TA-->>Hooks: Blocked
    Hooks-->>Agent: Denied ✗
    Note over Agent: Use mcp__copilot__web_fetch instead
    
    Agent->>Hooks: Request mcp__copilot__web_fetch
    Hooks->>MCP: Validate (MCP tool, not SDK builtin)
    MCP-->>Hooks: Has SSRF protection
    Hooks-->>Agent: Allowed ✓
    Agent->>MCP: Execute with SSRF checks
```
</details>


<sub>Last reviewed commit: 2d9975f</sub>

2026-02-15 06:51:25 +00:00
DEEVEN SERU
b8f5c208d0 Handle errors in Jina ExtractWebsiteContentBlock (#12048)
## Summary
- catch Jina reader client/server errors in ExtractWebsiteContentBlock
and surface a clear error output keyed to the user URL
- guard empty responses to return an explicit error instead of yielding
blank content (see the sketch below)
- add regression tests covering the happy path and HTTP client failures
via a monkeypatched fetch
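
A sketch of the guard logic, assuming `httpx` and the Jina reader endpoint `https://r.jina.ai/<url>` (output names follow the block convention of yielding an `error` output on failure; exact field names may differ):

```python
import httpx

async def extract_website_content(url: str):
    """Yield ("content", text) on success, ("error", message) keyed to the URL."""
    try:
        async with httpx.AsyncClient(timeout=30) as client:
            resp = await client.get(f"https://r.jina.ai/{url}")
            resp.raise_for_status()
    except httpx.HTTPStatusError as e:
        yield "error", f"Jina reader returned {e.response.status_code} for {url}"
        return
    except httpx.HTTPError as e:
        yield "error", f"Request to Jina reader failed for {url}: {e}"
        return
    if not resp.text.strip():  # guard empty responses
        yield "error", f"Jina reader returned empty content for {url}"
        return
    yield "content", resp.text
```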

## Testing
- not run (pytest unavailable in this environment)

---------

Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-02-13 19:15:09 +00:00
Otto
ca216dfd7f ci(docs-claude-review): Update comments instead of creating new ones (#12106)
## Changes 🏗️

This PR updates the Claude Block Docs Review CI workflow to update
existing comments instead of creating new ones on each push.

### What's Changed:
1. **Concurrency group** - Prevents race conditions if the workflow runs
twice simultaneously
2. **Comment cleanup step** - Deletes any previous Claude review comment
before posting a new one (sketched below)
3. **Marker instruction** - Instructs Claude to include a `<!--
CLAUDE_DOCS_REVIEW -->` marker in its comment for identification
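
A REST-level sketch of the cleanup step (the workflow itself runs in GitHub Actions; this shows the equivalent API calls, with pagination omitted):

```python
import requests

MARKER = "<!-- CLAUDE_DOCS_REVIEW -->"

def delete_previous_review(repo: str, pr_number: int, token: str) -> None:
    """Find the comment carrying the marker and delete it before re-reviewing."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    comments = requests.get(
        f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
        headers=headers, timeout=10,
    ).json()
    for comment in comments:
        if MARKER in comment.get("body", ""):
            requests.delete(
                f"https://api.github.com/repos/{repo}/issues/comments/{comment['id']}",
                headers=headers, timeout=10,
            ).raise_for_status()
            break  # at most one previous review comment should exist
```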

### Why:
Previously, every PR push would create a new review comment, cluttering
the PR with multiple comments. Now only the most recent review is shown.

### Testing:
1. Create a PR that triggers this workflow (modify a file in
`docs/integrations/` or `autogpt_platform/backend/backend/blocks/`)
2. Verify first run creates comment with marker
3. Push another commit
4. Verify old comment is deleted and new comment is created (not
accumulated)

Requested by @Bentlybro

---

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan (will be
tested on merge)

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)


<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Added concurrency control and comment deduplication to prevent multiple
Claude review comments from accumulating on PRs. The workflow now
deletes previous review comments (identified by `<!-- CLAUDE_DOCS_REVIEW
-->` marker) before posting new ones, and uses concurrency groups to
prevent race conditions.
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with minimal risk
- The changes are well-contained, follow GitHub Actions best practices,
and use built-in GitHub APIs safely. The concurrency control prevents
race conditions, and the comment cleanup logic uses proper filtering
with `head -1` to handle edge cases. The HTML comment marker approach is
standard and reliable.
- No files require special attention
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant GH as GitHub PR Event
    participant WF as Workflow
    participant API as GitHub API
    participant Claude as Claude Action
    
    GH->>WF: PR opened/synchronized
    WF->>WF: Check concurrency group
    Note over WF: Cancel any in-progress runs<br/>for same PR number
    WF->>API: Query PR comments
    API-->>WF: Return all comments
    WF->>WF: Filter for CLAUDE_DOCS_REVIEW marker
    alt Previous comment exists
        WF->>API: DELETE comment by ID
        API-->>WF: Comment deleted
    else No previous comment
        WF->>WF: Skip deletion
    end
    WF->>Claude: Run code review
    Claude->>API: POST new comment with marker
    API-->>Claude: Comment created
```
</details>


<sub>Last reviewed commit: fb1b436</sub>

2026-02-13 16:46:23 +00:00
Zamil Majdy
f9f358c526 feat(mcp): Add MCP tool block with OAuth, tool discovery, and standard credential integration (#12011)
## Summary

<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/18e8ef34-d222-453c-8b0a-1b25ef8cf806"
/>


<img width="250" alt="image"
src="https://github.com/user-attachments/assets/ba97556c-09c5-4f76-9f4e-49a2e8e57468"
/>

<img width="250" alt="image"
src="https://github.com/user-attachments/assets/68f7804a-fe74-442d-9849-39a229c052cf"
/>

<img width="250" alt="image"
src="https://github.com/user-attachments/assets/700690ba-f9fe-4726-8871-3bfbab586001"
/>

Full-stack MCP (Model Context Protocol) tool block integration that
allows users to connect to any MCP server, discover available tools,
authenticate via OAuth, and execute tools — all through the standard
AutoGPT credential system.

### Backend

- **MCPToolBlock** (`blocks/mcp/block.py`): New block using
`CredentialsMetaInput` pattern with optional credentials (`default={}`),
supporting both authenticated (OAuth) and public MCP servers. Includes
auto-lookup fallback for backward compatibility.
- **MCP Client** (`blocks/mcp/client.py`): HTTP transport with JSON-RPC
2.0, tool discovery, tool execution with robust error handling
(type-checked error fields, non-JSON response handling)
- **MCP OAuth Handler** (`blocks/mcp/oauth.py`): RFC 8414 discovery,
dynamic per-server OAuth with PKCE, token storage and refresh via
`raise_for_status=True`
- **MCP API Routes** (`api/features/mcp/routes.py`): `discover-tools`,
`oauth/login`, `oauth/callback` endpoints with credential cleanup,
defensive OAuth metadata validation
- **Credential system integration**:
- `CredentialsMetaInput` model_validator normalizes legacy
`"ProviderName.MCP"` format from Python 3.13's `str(StrEnum)` change
- `CredentialsFieldInfo.combine()` supports URL-based credential
discrimination (each MCP server gets its own credential entry)
- `aggregate_credentials_inputs` checks block schema defaults for
credential optionality
- Executor normalizes credential data for both Pydantic and JSON schema
validation paths
  - Chat credential matching handles MCP server URL filtering
- `provider_matches()` helper used consistently for Python 3.13 StrEnum
compatibility
- **Pre-run validation**: `_validate_graph_get_errors` now calls
`get_missing_input()` for custom block-level validation (MCP tool
arguments)
- **Security**: HTML tag stripping loop to prevent XSS bypass, SSRF
protection (removed trusted_origins)

### Frontend

- **MCPToolDialog** (`MCPToolDialog.tsx`): Full tool discovery UI —
enter server URL, authenticate if needed, browse tools, select tool and
configure
- **OAuth popup** (`oauth-popup.ts`): Shared utility supporting
cross-origin MCP OAuth flows with BroadcastChannel + localStorage
fallback
- **Credential integration**: MCP-specific OAuth flow in
`useCredentialsInput`, server URL filtering in `useCredentials`, MCP
callback page
- **CredentialsSelect**: Auto-selects first available credential instead
of defaulting to "None", credentials listed before "None" in dropdown
- **Node rendering**: Dynamic tool input schema rendering on MCP nodes,
proper handling in both legacy and new flow editors
- **Block title persistence**: `customized_name` set at block creation
for both MCP and Agent blocks — no fallback logic needed, titles survive
save/load reliably
- **Stable credential ordering**: Removed `sortByUnsetFirst` that caused
credential inputs to jump when selected

### Tests (~2060 lines)

- Unit tests: block, client, tool execution
- Integration tests: mock MCP server with auth
- OAuth flow tests
- API endpoint tests
- Credential combining/optionality tests
- E2e tests (skipped in CI, run manually)

## Key Design Decisions

1. **Optional credentials via `default={}`**: MCP servers can be public
(no auth) or private (OAuth). The `credentials` field has `default={}`
making it optional at the schema level, so public servers work without
prompting for credentials.

2. **URL-based credential discrimination**: Each MCP server URL gets its
own credential entry in the "Run agent" form (via
`discriminator="server_url"`), so agents using multiple MCP servers
prompt for each independently.

3. **Model-level normalization**: Python 3.13 changed `str(StrEnum)` to
return `"ClassName.MEMBER"`. Rather than scattering fixes across the
codebase, a Pydantic `model_validator(mode="before")` on
`CredentialsMetaInput` handles normalization centrally, and
`provider_matches()` handles lookups (see the sketch after this list).

4. **Credential auto-select**: `CredentialsSelect` component defaults to
the first available credential and notifies the parent state, ensuring
credentials are pre-filled in the "Run agent" dialog without requiring
manual selection.

5. **customized_name for block titles**: Both MCP and Agent blocks set
`customized_name` in metadata at creation time. This eliminates
convoluted runtime fallback logic (`agent_name`, hostname extraction) —
the title is persisted once and read directly.
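
A sketch of the normalization from design decision 3, assuming a reduced `CredentialsMetaInput` (the field set is illustrative):

```python
from pydantic import BaseModel, model_validator

class CredentialsMetaInput(BaseModel):  # reduced for illustration
    provider: str
    id: str = ""

    @model_validator(mode="before")
    @classmethod
    def _normalize_provider(cls, data):
        # Python 3.13's str(StrEnum) yields "ProviderName.MCP"; strip the
        # class prefix so legacy serialized values keep matching.
        if isinstance(data, dict):
            provider = data.get("provider")
            if isinstance(provider, str) and provider.startswith("ProviderName."):
                data["provider"] = provider.split(".", 1)[1].lower()
        return data
```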

## Test plan

- [x] Unit/integration tests pass (68 MCP + 11 graph = 79 tests)
- [x] Manual: MCP block with public server (DeepWiki) — no credentials
needed, tools discovered and executable
- [x] Manual: MCP block with OAuth server (Linear, Sentry) — OAuth flow
prompts correctly
- [x] Manual: "Run agent" form shows correct credential requirements per
MCP server
- [x] Manual: Credential auto-selects when exactly one matches,
pre-selects first when multiple exist
- [x] Manual: Credential ordering stays stable when
selecting/deselecting
- [x] Manual: MCP block title persists after save and refresh
- [x] Manual: Agent block title persists after save and refresh (via
customized_name)
- [ ] Manual: Shared agent with MCP block prompts new user for
credentials

---------

Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: Ubbe <hi@ubbe.dev>
2026-02-13 16:17:03 +00:00
Zamil Majdy
52b3aebf71 feat(backend/sdk): Claude Agent SDK integration for CoPilot (#12103)
## Summary

Full integration of the **Claude Agent SDK** to replace the existing
one-turn OpenAI-compatible CoPilot implementation with a multi-turn,
tool-using AI agent.

### What changed

**Core SDK Integration** (`chat/sdk/` — new module)
- **`service.py`**: Main orchestrator — spawns Claude Code CLI as a
subprocess per user message, streams responses back via SSE. Handles
conversation history compression, session lifecycle, and error recovery.
- **`response_adapter.py`**: Translates Claude Agent SDK events (text
deltas, tool use, errors, result messages) into the existing CoPilot
`StreamEvent` protocol so the frontend works unchanged.
- **`tool_adapter.py`**: Bridges CoPilot's MCP tools (find_block,
run_block, create_agent, etc.) into the SDK's tool format. Handles
schema conversion and result serialization.
- **`security_hooks.py`**: Pre/Post tool-use hooks that enforce a strict
allowlist of tools, block path traversal, sandbox file operations to
per-session workspace directories, cap sub-agent spawning, and prevent
the model from accessing unauthorized system resources.
- **`transcript.py`**: JSONL transcript I/O utilities for the stateless
`--resume` feature (see below).

**Stateless Multi-Turn Resume** (new)
- Instead of compressing conversation history via LLM on every turn
(lossy and expensive), we capture Claude Code's native JSONL session
transcript via a **Stop hook** callback, persist it in the DB
(`ChatSession.sdkTranscript`), and restore it on the next turn via
`--resume <file>` (see the sketch below).
- This preserves full tool call/result context across turns with zero
token overhead for history.
- Feature-flagged via `CLAUDE_AGENT_USE_RESUME` (default: off).
- DB migration: `ALTER TABLE "ChatSession" ADD COLUMN "sdkTranscript"
TEXT`.
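
A sketch of the transcript round-trip, using only the standard library (the Stop-hook wiring and persistence calls are hypothetical; the SDK side is just handed a file path):

```python
import tempfile
from pathlib import Path

async def on_stop(session, transcript_path: str) -> None:
    """Stop-hook callback: persist the CLI's JSONL transcript verbatim."""
    session.sdk_transcript = Path(transcript_path).read_text()  # hypothetical field
    await session.save()                                        # hypothetical persistence

def transcript_file_for_resume(session) -> str | None:
    """Next turn: write the stored transcript back out for --resume <file>."""
    if not session.sdk_transcript:
        return None
    f = tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False)
    f.write(session.sdk_transcript)
    f.close()
    return f.name
```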

**Sandboxed Tool Execution** (`chat/tools/`)
- **`bash_exec.py`**: Sandboxed bash execution using bubblewrap
(`bwrap`) with read-only root filesystem, per-session writable
workspace, resource limits (CPU, memory, file size), and network
isolation (see the sketch after this list).
- **`sandbox.py`**: Shared bubblewrap sandbox infrastructure — generates
`bwrap` command lines with configurable mounts, environment, and
resource constraints.
- **`web_fetch.py`**: URL fetching tool with domain allowlist, size
limits, and content-type filtering.
- **`check_operation_status.py`**: Polling tool for long-running
operations (agent creation, block execution) so the SDK doesn't block
waiting.
- **`find_block.py`** / **`run_block.py`**: Enhanced with category
filtering, optimized response size (removed raw JSON schemas), and
better error handling.
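
A sketch of the sandbox invocation, with a representative (not exhaustive) `bwrap` flag set:

```python
import subprocess

def run_sandboxed(cmd: list[str], workspace: str, timeout: int = 30):
    """Run cmd under bubblewrap: read-only root, writable session workspace,
    no network. Mount layout and limits here are illustrative."""
    bwrap = [
        "bwrap",
        "--ro-bind", "/", "/",              # read-only view of the root fs
        "--dev", "/dev",
        "--proc", "/proc",
        "--tmpfs", "/tmp",
        "--bind", workspace, "/workspace",  # only the session dir is writable
        "--chdir", "/workspace",
        "--unshare-net",                    # network isolation
        "--die-with-parent",
    ]
    return subprocess.run(
        bwrap + cmd, capture_output=True, text=True, timeout=timeout
    )
```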

**Security**
- Path traversal prevention: session IDs sanitized, all file ops
confined to workspace dirs, symlink resolution.
- Tool allowlist enforcement via SDK hooks — model cannot call arbitrary
tools.
- Built-in `Bash` tool blocked via `disallowed_tools` to prevent
bypassing sandboxed `bash_exec`.
- Sub-agent (`Task`) spawning capped at configurable limit (default:
10).
- CodeQL-clean path sanitization patterns.

**Streaming & Reconnection**
- SSE stream registry backed by Redis Streams for crash-resilient
reconnection (sketched below).
- Long-running operation tracking with TTL-based cleanup.
- Atomic message append to prevent race conditions on concurrent writes.
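
A sketch of the Redis Streams plumbing with `redis-py` (stream key and field shapes are illustrative):

```python
import redis

r = redis.Redis()

def append_event(session_id: str, event: dict) -> None:
    """Producer side: append each stream event; maxlen caps memory per session."""
    r.xadd(f"chat:stream:{session_id}", event, maxlen=1000)

def follow_events(session_id: str, last_id: str = "0"):
    """Consumer side: a reconnecting SSE client resumes from the last ID it saw."""
    while True:
        batches = r.xread({f"chat:stream:{session_id}": last_id}, block=5000)
        for _stream, entries in batches or []:
            for entry_id, fields in entries:
                last_id = entry_id
                yield entry_id, fields
```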

**Configuration** (`config.py`)
- `use_claude_agent_sdk` — master toggle (default: on)
- `claude_agent_model` — model override for SDK path
- `claude_agent_max_buffer_size` — JSON parsing buffer (10MB)
- `claude_agent_max_subtasks` — sub-agent cap (10)
- `claude_agent_use_resume` — transcript-based resume (default: off)
- `thinking_enabled` — extended thinking for Claude models

**Tests**
- `sdk/response_adapter_test.py` — 366 lines covering all event
translation paths
- `sdk/security_hooks_test.py` — 165 lines covering tool blocking, path
traversal, subtask limits
- `chat/model_test.py` — 214 lines covering session model serialization
- `chat/service_test.py` — Integration tests including multi-turn resume
keyword recall
- `tools/find_block_test.py` / `run_block_test.py` — Extended with new
tool behavior tests

## Test plan
- [x] Unit tests pass (`sdk/response_adapter_test.py`,
`security_hooks_test.py`, `model_test.py`)
- [x] Integration test: multi-turn keyword recall via `--resume`
(`service_test.py::test_sdk_resume_multi_turn`)
- [x] Manual E2E: CoPilot chat sessions with tool calls, bash execution,
and multi-turn context
- [x] Pre-commit hooks pass (ruff, isort, black, pyright, flake8)
- [ ] Staging deployment with `claude_agent_use_resume=false` initially
- [ ] Enable resume in staging, verify transcript capture and recall


<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

This PR replaces the existing OpenAI-compatible CoPilot with a full
Claude Agent SDK integration, introducing multi-turn conversations,
stateless resume via JSONL transcripts, and sandboxed tool execution.

**Key changes:**
- **SDK integration** (`chat/sdk/`): spawns Claude Code CLI subprocess
per message, translates events to frontend protocol, bridges MCP tools
- **Stateless resume**: captures JSONL transcripts via Stop hook,
persists in `ChatSession.sdkTranscript`, restores with `--resume`
(feature-flagged, default off)
- **Sandboxed execution**: bubblewrap sandbox for bash commands with
filesystem whitelist, network isolation, resource limits
- **Security hooks**: tool allowlist enforcement, path traversal
prevention, workspace-scoped file operations, sub-agent spawn limits
- **Long-running operations**: delegates `create_agent`/`edit_agent` to
existing stream_registry infrastructure for SSE reconnection
- **Feature flag**: `CHAT_USE_CLAUDE_AGENT_SDK` with LaunchDarkly
support, defaults to enabled

**Security issues found:**
- Path traversal validation has logic errors in `security_hooks.py:82`
(tilde expansion order) and `service.py:266` (redundant `..` check)
- Config validator always prefers env var over explicit `False` value
(`config.py:162`)
- Race condition in `routes.py:323` — message persisted before task
registration, could duplicate on retry
- Resource limits in sandbox may fail silently (`sandbox.py:109`)

**Test coverage is strong** with 366 lines for response adapter, 165 for
security hooks, and integration tests for multi-turn resume.
</details>


<details><summary><h3>Confidence Score: 3/5</h3></summary>

- This PR is generally safe but has critical security issues in path
validation that must be fixed before merge
- Score reflects strong architecture and test coverage offset by real
security vulnerabilities: the tilde expansion bug in `security_hooks.py`
could allow sandbox escape, the race condition could cause message
duplication, and the silent ulimit failures could bypass resource
limits. The bubblewrap sandbox and allowlist enforcement are
well-designed, but the path validation bugs need fixing. The transcript
resume feature is properly feature-flagged. Overall the implementation
is solid but the security issues prevent a higher score.
- Pay close attention to
`backend/api/features/chat/sdk/security_hooks.py` (path traversal
vulnerability), `backend/api/features/chat/routes.py` (race condition),
`backend/api/features/chat/tools/sandbox.py` (silent resource limit
failures), and `backend/api/features/chat/sdk/service.py` (redundant
security check)
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant Frontend
    participant Routes as routes.py
    participant SDKService as sdk/service.py
    participant ClaudeSDK as Claude Agent SDK CLI
    participant SecurityHooks as security_hooks.py
    participant ToolAdapter as tool_adapter.py
    participant CoPilotTools as tools/*
    participant Sandbox as sandbox.py (bwrap)
    participant DB as Database
    participant Redis as stream_registry

    Frontend->>Routes: POST /chat (user message)
    Routes->>SDKService: stream_chat_completion_sdk()
    
    SDKService->>DB: get_chat_session()
    DB-->>SDKService: session + messages
    
    alt Resume enabled AND transcript exists
        SDKService->>SDKService: validate_transcript()
        SDKService->>SDKService: write_transcript_to_tempfile()
        Note over SDKService: Pass --resume to SDK
    else No resume
        SDKService->>SDKService: _compress_conversation_history()
        Note over SDKService: Inject history into user message
    end
    
    SDKService->>SecurityHooks: create_security_hooks()
    SDKService->>ToolAdapter: create_copilot_mcp_server()
    SDKService->>ClaudeSDK: spawn subprocess with MCP server
    
    loop Streaming Conversation
        ClaudeSDK->>SDKService: AssistantMessage (text/tool_use)
        SDKService->>Frontend: StreamTextDelta / StreamToolInputAvailable
        
        alt Tool Call
            ClaudeSDK->>SecurityHooks: PreToolUse hook
            SecurityHooks->>SecurityHooks: validate path, check allowlist
            alt Tool blocked
                SecurityHooks-->>ClaudeSDK: deny
            else Tool allowed
                SecurityHooks-->>ClaudeSDK: allow
                ClaudeSDK->>ToolAdapter: call MCP tool
                
                alt Long-running tool (create_agent, edit_agent)
                    ToolAdapter->>Redis: register task
                    ToolAdapter->>DB: save OperationPendingResponse
                    ToolAdapter->>ToolAdapter: spawn background task
                    ToolAdapter-->>ClaudeSDK: OperationStartedResponse
                else Regular tool (find_block, bash_exec)
                    ToolAdapter->>CoPilotTools: execute()
                    alt bash_exec
                        CoPilotTools->>Sandbox: run_sandboxed()
                        Sandbox->>Sandbox: build bwrap command
                        Note over Sandbox: Network isolation,<br/>filesystem whitelist,<br/>resource limits
                        Sandbox-->>CoPilotTools: stdout, stderr, exit_code
                    end
                    CoPilotTools-->>ToolAdapter: result
                    ToolAdapter->>ToolAdapter: stash full output
                    ToolAdapter-->>ClaudeSDK: MCP response
                end
                
                SecurityHooks->>SecurityHooks: PostToolUse hook (log)
            end
        end
        
        ClaudeSDK->>SDKService: UserMessage (ToolResultBlock)
        SDKService->>ToolAdapter: pop_pending_tool_output()
        SDKService->>Frontend: StreamToolOutputAvailable
    end
    
    ClaudeSDK->>SecurityHooks: Stop hook
    SecurityHooks->>SDKService: transcript_path callback
    SDKService->>SDKService: read_transcript_file()
    SDKService->>DB: save transcript to session.sdkTranscript
    
    ClaudeSDK->>SDKService: ResultMessage (success)
    SDKService->>Frontend: StreamFinish
    SDKService->>DB: upsert_chat_session()
```
</details>


<sub>Last reviewed commit: 28c1121</sub>

<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2026-02-13 15:49:03 +00:00
Otto
965b7d3e04 dx: Add PR overlap detection & alert (#12104)
## Summary

Adds an automated workflow that detects potential merge conflicts
between open PRs, helping contributors coordinate proactively.

**Example output:** [See comment on PR
#12057](https://github.com/Significant-Gravitas/AutoGPT/pull/12057#issuecomment-3897330632)

## How it works

1. **Triggered on PR events** — runs when a PR is opened, pushed to, or
reopened
2. **Compares against all open PRs** targeting the same base branch
3. **Detects overlaps** at multiple levels:
   - File overlap (same files modified)
   - Line overlap (same line ranges modified; sketched after this list)
   - Actual merge conflicts (attempts real merges)
4. **Posts a comment** on the PR with findings
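
A toy sketch of the line-overlap level, assuming each PR's diff has already been reduced to per-file modified line ranges; the real `detect_overlaps.py` additionally performs merge testing and risk scoring:

```python
# file -> [(start_line, end_line), ...], both bounds inclusive
Ranges = dict[str, list[tuple[int, int]]]


def line_overlaps(pr_a: Ranges, pr_b: Ranges) -> Ranges:
    overlaps: Ranges = {}
    for path in pr_a.keys() & pr_b.keys():  # file overlap: same files touched
        hits = [
            (max(a0, b0), min(a1, b1))
            for a0, a1 in pr_a[path]
            for b0, b1 in pr_b[path]
            if a0 <= b1 and b0 <= a1  # closed ranges intersect
        ]
        if hits:
            overlaps[path] = hits
    return overlaps
```

Attempting real merges (level 3) then catches conflicts this cheap range check cannot see.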

## Features

- Full file paths with common prefix extraction for readability
- Conflict size (number of conflict regions + lines affected)
- Conflict types (content, added, deleted, modified/deleted, etc.)
- Last-updated timestamps for each PR
- Risk categorization (conflict, medium, low)
- Ignores noise files (openapi.json, lock files)
- Updates existing comment on subsequent pushes (no spam)
- Filters out PRs older than 14 days
- Clone-once optimization for fast merge testing (~48s for 19 PRs)

## Files

- `.github/scripts/detect_overlaps.py` — main detection script
- `.github/workflows/pr-overlap-check.yml` — workflow definition
2026-02-13 15:45:10 +00:00
Bently
c2368f15ff fix(blocks): disable PrintToConsoleBlock (#12100)
## Summary
Disables the Print to Console block as requested by Nick Tindle.

## Changes
- Added `disabled=True` to PrintToConsoleBlock in `basic.py`

## Testing
- Block will no longer appear in the platform UI
- Existing graphs using this block should be checked (block ID:
`f3b1c1b2-4c4f-4f0d-8d2f-4c4f0d8d2f4c`)

Closes OPEN-3000

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Added `disabled=True` parameter to `PrintToConsoleBlock` in `basic.py`
per Nick Tindle's request (OPEN-3000).

- Block follows the same disabling pattern used by other blocks in the
codebase (e.g., `BlockInstallationBlock`, video blocks, Ayrshare blocks)
- Block will no longer appear in the platform UI for new graph creation
- Existing graphs using this block (ID:
`f3b1c1b2-4c4f-4f0d-8d2f-4c4f0d8d2f4c`) will need to be checked for
compatibility
- Comment properly documents the reason for disabling
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with minimal risk
- Single-line change that adds a well-documented flag following existing
patterns used throughout the codebase. The change is non-destructive and
only affects UI visibility of the block for new graphs.
- No files require special attention
</details>


<sub>Last reviewed commit: 759003b</sub>

<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
2026-02-13 15:20:23 +00:00
dependabot[bot]
9ac3f64d56 chore(deps): bump github/codeql-action from 3 to 4 (#12033)
Bumps [github/codeql-action](https://github.com/github/codeql-action)
from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/releases">github/codeql-action's
releases</a>.</em></p>
<blockquote>
<h2>v3.32.2</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.1">2.24.1</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3460">#3460</a></li>
</ul>
<h2>v3.32.1</h2>
<ul>
<li>A warning is now shown in Default Setup workflow logs if a <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registry is configured</a> using a GitHub Personal Access Token
(PAT), but no username is configured. <a
href="https://redirect.github.com/github/codeql-action/pull/3422">#3422</a></li>
<li>Fixed a bug which caused the CodeQL Action to fail when repository
properties cannot successfully be retrieved. <a
href="https://redirect.github.com/github/codeql-action/pull/3421">#3421</a></li>
</ul>
<h2>v3.32.0</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.0">2.24.0</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3425">#3425</a></li>
</ul>
<h2>v3.31.11</h2>
<ul>
<li>When running a Default Setup workflow with <a
href="https://docs.github.com/en/actions/how-tos/monitor-workflows/enable-debug-logging">Actions
debugging enabled</a>, the CodeQL Action will now use more unique names
when uploading logs from the Dependabot authentication proxy as workflow
artifacts. This ensures that the artifact names do not clash between
multiple jobs in a build matrix. <a
href="https://redirect.github.com/github/codeql-action/pull/3409">#3409</a></li>
<li>Improved error handling throughout the CodeQL Action. <a
href="https://redirect.github.com/github/codeql-action/pull/3415">#3415</a></li>
<li>Added experimental support for automatically excluding <a
href="https://docs.github.com/en/repositories/working-with-files/managing-files/customizing-how-changed-files-appear-on-github">generated
files</a> from the analysis. This feature is not currently enabled for
any analysis. In the future, it may be enabled by default for some
GitHub-managed analyses. <a
href="https://redirect.github.com/github/codeql-action/pull/3318">#3318</a></li>
<li>The changelog extracts that are included with releases of the CodeQL
Action are now shorter to avoid duplicated information from appearing in
Dependabot PRs. <a
href="https://redirect.github.com/github/codeql-action/pull/3403">#3403</a></li>
</ul>
<h2>v3.31.10</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>3.31.10 - 12 Jan 2026</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.9. <a
href="https://redirect.github.com/github/codeql-action/pull/3393">#3393</a></li>
</ul>
<p>See the full <a
href="https://github.com/github/codeql-action/blob/v3.31.10/CHANGELOG.md">CHANGELOG.md</a>
for more information.</p>
<h2>v3.31.9</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>3.31.9 - 16 Dec 2025</h2>
<p>No user facing changes.</p>
<p>See the full <a
href="https://github.com/github/codeql-action/blob/v3.31.9/CHANGELOG.md">CHANGELOG.md</a>
for more information.</p>
<h2>v3.31.8</h2>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>3.31.8 - 11 Dec 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.8. <a
href="https://redirect.github.com/github/codeql-action/pull/3354">#3354</a></li>
</ul>
<p>See the full <a
href="https://github.com/github/codeql-action/blob/v3.31.8/CHANGELOG.md">CHANGELOG.md</a>
for more information.</p>
<h2>v3.31.7</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/blob/main/CHANGELOG.md">github/codeql-action's
changelog</a>.</em></p>
<blockquote>
<h2>4.31.11 - 23 Jan 2026</h2>
<ul>
<li>When running a Default Setup workflow with <a
href="https://docs.github.com/en/actions/how-tos/monitor-workflows/enable-debug-logging">Actions
debugging enabled</a>, the CodeQL Action will now use more unique names
when uploading logs from the Dependabot authentication proxy as workflow
artifacts. This ensures that the artifact names do not clash between
multiple jobs in a build matrix. <a
href="https://redirect.github.com/github/codeql-action/pull/3409">#3409</a></li>
<li>Improved error handling throughout the CodeQL Action. <a
href="https://redirect.github.com/github/codeql-action/pull/3415">#3415</a></li>
<li>Added experimental support for automatically excluding <a
href="https://docs.github.com/en/repositories/working-with-files/managing-files/customizing-how-changed-files-appear-on-github">generated
files</a> from the analysis. This feature is not currently enabled for
any analysis. In the future, it may be enabled by default for some
GitHub-managed analyses. <a
href="https://redirect.github.com/github/codeql-action/pull/3318">#3318</a></li>
<li>The changelog extracts that are included with releases of the CodeQL
Action are now shorter to avoid duplicated information from appearing in
Dependabot PRs. <a
href="https://redirect.github.com/github/codeql-action/pull/3403">#3403</a></li>
</ul>
<h2>4.31.10 - 12 Jan 2026</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.9. <a
href="https://redirect.github.com/github/codeql-action/pull/3393">#3393</a></li>
</ul>
<h2>4.31.9 - 16 Dec 2025</h2>
<p>No user facing changes.</p>
<h2>4.31.8 - 11 Dec 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.8. <a
href="https://redirect.github.com/github/codeql-action/pull/3354">#3354</a></li>
</ul>
<h2>4.31.7 - 05 Dec 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.7. <a
href="https://redirect.github.com/github/codeql-action/pull/3343">#3343</a></li>
</ul>
<h2>4.31.6 - 01 Dec 2025</h2>
<p>No user facing changes.</p>
<h2>4.31.5 - 24 Nov 2025</h2>
<ul>
<li>Update default CodeQL bundle version to 2.23.6. <a
href="https://redirect.github.com/github/codeql-action/pull/3321">#3321</a></li>
</ul>
<h2>4.31.4 - 18 Nov 2025</h2>
<p>No user facing changes.</p>
<h2>4.31.3 - 13 Nov 2025</h2>
<ul>
<li>CodeQL Action v3 will be deprecated in December 2026. The Action now
logs a warning for customers who are running v3 but could be running v4.
For more information, see <a
href="https://github.blog/changelog/2025-10-28-upcoming-deprecation-of-codeql-action-v3/">Upcoming
deprecation of CodeQL Action v3</a>.</li>
<li>Update default CodeQL bundle version to 2.23.5. <a
href="https://redirect.github.com/github/codeql-action/pull/3288">#3288</a></li>
</ul>
<h2>4.31.2 - 30 Oct 2025</h2>
<p>No user facing changes.</p>
<h2>4.31.1 - 30 Oct 2025</h2>
<ul>
<li>The <code>add-snippets</code> input has been removed from the
<code>analyze</code> action. This input has been deprecated since CodeQL
Action 3.26.4 in August 2024 when this removal was announced.</li>
</ul>
<h2>4.31.0 - 24 Oct 2025</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8aac4e47ac"><code>8aac4e4</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3448">#3448</a>
from github/mergeback/v4.32.1-to-main-6bc82e05</li>
<li><a
href="e8d7df4f04"><code>e8d7df4</code></a>
Rebuild</li>
<li><a
href="c1bba77db0"><code>c1bba77</code></a>
Update changelog and version after v4.32.1</li>
<li><a
href="6bc82e05fd"><code>6bc82e0</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3447">#3447</a>
from github/update-v4.32.1-f52cbc830</li>
<li><a
href="42f00f2d33"><code>42f00f2</code></a>
Add a couple of change notes</li>
<li><a
href="cedee6de9f"><code>cedee6d</code></a>
Update changelog for v4.32.1</li>
<li><a
href="f52cbc8309"><code>f52cbc8</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3445">#3445</a>
from github/dependabot/npm_and_yarn/fast-xml-parser-...</li>
<li>See full diff in <a
href="https://github.com/github/codeql-action/compare/v3...v4">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github/codeql-action&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-13 15:04:05 +00:00
Swifty
5035b69c79 feat(platform): add feature request tools for CoPilot chat (#12102)
Users can now search for existing feature requests and submit new ones
directly through the CoPilot chat interface. Requests are tracked in
Linear with customer need attribution.

### Changes 🏗️

**Backend:**
- Added `SearchFeatureRequestsTool` and `CreateFeatureRequestTool` to
the CoPilot chat tools registry
- Integrated with Linear GraphQL API for searching issues in the feature
requests project, creating new issues, upserting customers, and
attaching customer needs (a client sketch follows this list)
- Added `linear_api_key` secret to settings for system-level Linear API
access
- Added response models (`FeatureRequestSearchResponse`,
`FeatureRequestCreatedResponse`, `FeatureRequestInfo`) to the tools
models
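
For reference, a hedged sketch of the kind of Linear GraphQL call involved: an async `httpx` client posting a `searchIssues` query with the system API key. The query document and response handling here are assumptions, not the PR's actual code:

```python
import httpx

LINEAR_GRAPHQL_URL = "https://api.linear.app/graphql"

# Illustrative query shape; the PR's SEARCH_ISSUES_QUERY may differ.
SEARCH_ISSUES_QUERY = """
query SearchIssues($term: String!) {
  searchIssues(term: $term) {
    nodes { id identifier title url }
  }
}
"""


async def search_feature_requests(linear_api_key: str, term: str) -> list[dict]:
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            LINEAR_GRAPHQL_URL,
            headers={"Authorization": linear_api_key},  # Linear personal keys
            json={"query": SEARCH_ISSUES_QUERY, "variables": {"term": term}},
        )
        resp.raise_for_status()
        return resp.json()["data"]["searchIssues"]["nodes"]
```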

**Frontend:**
- Added `SearchFeatureRequestsTool` and `CreateFeatureRequestTool` UI
components with full streaming state handling (input-streaming,
input-available, output-available, output-error)
- Added helper utilities for output parsing, type guards, animation
text, and icon rendering
- Wired tools into `ChatMessagesContainer` for rendering in the chat
- Added styleguide examples covering all tool states

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified search returns matching feature requests from Linear
- [x] Verified creating a new feature request creates an issue and
customer need in Linear
- [x] Verified adding a need to an existing issue works via
`existing_issue_id`
  - [x] Verified error states render correctly in the UI
  - [x] Verified styleguide page renders all tool states

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

New secret: `LINEAR_API_KEY` — required for system-level Linear API
operations (defaults to empty string).

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Adds feature request search and creation tools to CoPilot chat,
integrating with Linear's GraphQL API to track user feedback. Users can
now search existing feature requests and submit new ones (or add their
need to existing issues) directly through conversation.

**Key changes:**
- Backend: `SearchFeatureRequestsTool` and `CreateFeatureRequestTool`
with Linear API integration via system-level `LINEAR_API_KEY`
- Frontend: React components with streaming state handling and accordion
UI for search results and creation confirmations
- Models: Added `FeatureRequestSearchResponse` and
`FeatureRequestCreatedResponse` to response types
- Customer need tracking: Upserts customers in Linear and attaches needs
to issues for better feedback attribution

**Issues found:**
- Missing `LINEAR_API_KEY` entry in `.env.default` (required per PR
description checklist)
- Hardcoded project/team IDs reduce maintainability
- Global singleton pattern could cause issues in async contexts
- Using `user_id` as customer name reduces readability in Linear
</details>


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- Safe to merge with minor configuration fix required
- The implementation is well-structured with proper error handling, type
safety, and follows existing patterns in the codebase. The missing
`.env.default` entry is a straightforward configuration issue that must
be fixed before deployment but doesn't affect code quality. The other
findings are style improvements that don't impact functionality.
- Verify that `LINEAR_API_KEY` is added to `.env.default` before merging
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant CoPilot UI
    participant LLM
    participant FeatureRequestTool
    participant LinearClient
    participant Linear API

    User->>CoPilot UI: Request feature via chat
    CoPilot UI->>LLM: Send user message
    
    LLM->>FeatureRequestTool: search_feature_requests(query)
    FeatureRequestTool->>LinearClient: query(SEARCH_ISSUES_QUERY)
    LinearClient->>Linear API: POST /graphql (search)
    Linear API-->>LinearClient: searchIssues.nodes[]
    LinearClient-->>FeatureRequestTool: Feature request data
    FeatureRequestTool-->>LLM: FeatureRequestSearchResponse
    
    alt No existing requests found
        LLM->>FeatureRequestTool: create_feature_request(title, description)
        FeatureRequestTool->>LinearClient: mutate(CUSTOMER_UPSERT_MUTATION)
        LinearClient->>Linear API: POST /graphql (upsert customer)
        Linear API-->>LinearClient: customer {id, name}
        LinearClient-->>FeatureRequestTool: Customer data
        
        FeatureRequestTool->>LinearClient: mutate(ISSUE_CREATE_MUTATION)
        LinearClient->>Linear API: POST /graphql (create issue)
        Linear API-->>LinearClient: issue {id, identifier, url}
        LinearClient-->>FeatureRequestTool: Issue data
        
        FeatureRequestTool->>LinearClient: mutate(CUSTOMER_NEED_CREATE_MUTATION)
        LinearClient->>Linear API: POST /graphql (attach need)
        Linear API-->>LinearClient: need {id, issue}
        LinearClient-->>FeatureRequestTool: Need data
        FeatureRequestTool-->>LLM: FeatureRequestCreatedResponse
    else Existing request found
        LLM->>FeatureRequestTool: create_feature_request(title, description, existing_issue_id)
        FeatureRequestTool->>LinearClient: mutate(CUSTOMER_UPSERT_MUTATION)
        LinearClient->>Linear API: POST /graphql (upsert customer)
        Linear API-->>LinearClient: customer {id}
        LinearClient-->>FeatureRequestTool: Customer data
        
        FeatureRequestTool->>LinearClient: mutate(CUSTOMER_NEED_CREATE_MUTATION)
        LinearClient->>Linear API: POST /graphql (attach need to existing)
        Linear API-->>LinearClient: need {id, issue}
        LinearClient-->>FeatureRequestTool: Need data
        FeatureRequestTool-->>LLM: FeatureRequestCreatedResponse
    end
    
    LLM-->>CoPilot UI: Tool response + continuation
    CoPilot UI-->>User: Display result with accordion UI
```
</details>


<sub>Last reviewed commit: af2e093</sub>

<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
2026-02-13 15:27:00 +01:00
Otto
86af8fc856 ci: apply E2E CI optimizations to Claude workflows (#12097)
## Summary

Applies the CI performance optimizations from #12090 to Claude Code
workflows.

## Changes

### `claude.yml` & `claude-dependabot.yml`
- **pnpm caching**: Replaced manual `actions/cache` with `setup-node`
built-in `cache: "pnpm"`
- Replaces 4 steps (set pnpm store dir, cache step, manual config) with
1 step

### `claude-ci-failure-auto-fix.yml`
- **Added dev environment setup** with optimized caching
- Now Claude can run lint/tests when fixing CI failures (previously
could only edit files)
- Uses the same optimized caching patterns

## Dependency

This PR is based on #12090 and will merge after it.

## Testing

- Workflow YAML syntax validated
- Patterns match proven #12090 implementation
- CI caching changes fail gracefully, falling back to uncached builds

## Linear

Fixes [SECRT-1950](https://linear.app/autogpt/issue/SECRT-1950)

## Future Enhancements

E2E test data caching could be added to Claude workflows if needed for
running integration tests. Currently Claude workflows set up a dev
environment but don't run E2E tests by default.

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Applies proven CI performance optimizations to Claude workflows by
simplifying pnpm caching and adding dev environment setup to the
auto-fix workflow.

**Key changes:**
- Replaced manual pnpm cache configuration (4 steps) with built-in
`setup-node` `cache: "pnpm"` support in `claude.yml` and
`claude-dependabot.yml`
- Added complete dev environment setup (Python/Poetry + Node.js/pnpm) to
`claude-ci-failure-auto-fix.yml` so Claude can run linting and tests
when fixing CI failures
- Correctly orders `corepack enable` before `setup-node` to ensure pnpm
is available for caching

The changes mirror the optimizations from PR #12090 and maintain
consistency across all Claude workflows.
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with minimal risk
- The changes are CI infrastructure optimizations that mirror proven
patterns from PR #12090. The pnpm caching simplification reduces
complexity without changing functionality (caching failures gracefully
fall back to uncached builds). The dev environment setup in the auto-fix
workflow is additive and enables Claude to run linting/tests. All YAML
syntax is correct and the step ordering follows best practices.
- No files require special attention
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant GHA as GitHub Actions
    participant Corepack as Corepack
    participant SetupNode as setup-node@v6
    participant Cache as GHA Cache
    participant pnpm as pnpm

    Note over GHA,pnpm: Before (Manual Caching)
    GHA->>SetupNode: Set up Node.js 22
    SetupNode-->>GHA: Node.js ready
    GHA->>Corepack: Enable corepack
    Corepack-->>GHA: pnpm available
    GHA->>pnpm: Configure store directory
    pnpm-->>GHA: Store path set
    GHA->>Cache: actions/cache (manual key)
    Cache-->>GHA: Cache restored/missed
    GHA->>pnpm: Install dependencies
    pnpm-->>GHA: Dependencies installed

    Note over GHA,pnpm: After (Built-in Caching)
    GHA->>Corepack: Enable corepack
    Corepack-->>GHA: pnpm available
    GHA->>SetupNode: Set up Node.js 22<br/>cache: "pnpm"<br/>cache-dependency-path: pnpm-lock.yaml
    SetupNode->>Cache: Auto-detect pnpm store
    Cache-->>SetupNode: Cache restored/missed
    SetupNode-->>GHA: Node.js + cache ready
    GHA->>pnpm: Install dependencies
    pnpm-->>GHA: Dependencies installed
```
</details>


<sub>Last reviewed commit: f1681a0</sub>

<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Ubbe <hi@ubbe.dev>
2026-02-13 13:48:04 +00:00
Otto
dfa517300b debug(copilot): Add detailed API error logging (#11942)
## Summary
Adds comprehensive error logging for OpenRouter/OpenAI API errors to
help diagnose issues like provider routing failures, context length
exceeded, rate limits, etc.

## Background
While investigating
[SECRT-1859](https://linear.app/autogpt/issue/SECRT-1859), we found that
when OpenRouter returns errors, the actual error details weren't being
captured or logged. Langfuse traces showed `provider_name: 'unknown'`
and `completion: null` without any insight into WHY all providers
rejected the request.

## Changes
- Add `_extract_api_error_details()` (sketched below) to extract rich
information from API errors, including:
  - Status code and request ID
  - Response body (contains OpenRouter's actual error message)
  - OpenRouter-specific headers (provider, model)
  - Rate limit headers
- Add `_log_api_error()` helper that logs errors with context:
  - Session ID for correlation
  - Message count (helps identify context length issues)
  - Model being used
  - Retry count
- Update error handling in `_stream_chat_chunks()` and
`_generate_llm_continuation()` to use the new logging
- Extract the provider's error message from the response body for better
user feedback
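
A hedged sketch of what the extraction helper might look like, assuming openai-style exceptions that expose `status_code`, `request_id`, and an httpx-style `response`; the OpenRouter header name is a placeholder, and the field names follow the example log output below:

```python
from typing import Any


def _extract_api_error_details(err: Exception) -> dict[str, Any]:
    """Pull structured context out of an API error for logging."""
    details: dict[str, Any] = {
        "error_type": type(err).__name__,
        "error_message": str(err),
        "status_code": getattr(err, "status_code", None),
        "request_id": getattr(err, "request_id", None),
    }
    response = getattr(err, "response", None)
    if response is not None:
        try:
            details["response_body"] = response.json()  # provider's real error
        except Exception:
            details["response_body"] = getattr(response, "text", None)
        headers = getattr(response, "headers", {})
        # Header name below is an assumption about OpenRouter's response headers.
        details["openrouter_provider"] = headers.get(
            "x-openrouter-provider", "unknown"
        )
    return details
```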

## Example log output
```
API error: {
  'error_type': 'APIStatusError',
  'error_message': 'Provider returned error',
  'status_code': 400,
  'request_id': 'req_xxx',
  'response_body': {'error': {'message': 'context_length_exceeded', 'type': 'invalid_request_error'}},
  'openrouter_provider': 'unknown',
  'session_id': '44fbb803-...',
  'message_count': 52,
  'model': 'anthropic/claude-opus-4.5',
  'retry_count': 0
}
```

## Testing
- [ ] Verified code passes linting (black, isort, ruff)
- [ ] Error details are properly extracted from different error types

## Refs
- Linear: SECRT-1859
- Thread:
https://discord.com/channels/1126875755960336515/1467066151002571034

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2026-02-13 13:15:17 +00:00
Reinier van der Leer
43b25b5e2f ci(frontend): Speed up E2E test job (#12090)
The frontend `e2e_test` doesn't have a working build cache setup,
causing really slow builds and, in turn, slow test jobs. These changes
reduce total test runtime from ~12 minutes to ~5 minutes.

### Changes 🏗️

- Inject build cache config into docker compose config; let `buildx
bake` use GHA cache directly
  - Add `docker-ci-fix-compose-build-cache.py` script
- Optimize `backend/Dockerfile` + root `.dockerignore`
- Replace broken DIY pnpm store caching with `actions/setup-node`
built-in cache management
- Add caching for test seed data created in DB

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI
2026-02-13 11:09:41 +01:00
Swifty
ab0b537cc7 refactor(backend): optimize find_block response size by removing raw JSON schemas (#12020)
### Changes 🏗️

The `find_block` AutoPilot tool was returning ~90K characters per
response (10 blocks). The bloat came from including full JSON Schema
objects (`input_schema`, `output_schema`) with all nested `$defs`,
`anyOf`, and type definitions for every block.

**What changed:**

- **`BlockInfoSummary` model**: Removed `input_schema` (raw JSON
Schema), `output_schema` (raw JSON Schema), and `categories`. Added
`output_fields` (compact field-level summaries matching the existing
`required_inputs` format).
- **`BlockListResponse` model**: Removed `usage_hint` (info now in
`message`).
- **`FindBlockTool._execute()`**: Now extracts compact `output_fields`
from output schema properties instead of including the entire raw
schema (see the sketch after this list). Credentials handling is
unchanged.
- **Test**: Added `test_response_size_average_chars_per_block` with
realistic block schemas (HTTP, Email, Claude Code) to measure and assert
response size stays under 2K chars/block.
- **`CLAUDE.md`**: Clarified `dev` vs `master` branching strategy.
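
The compaction itself is straightforward; a minimal sketch assuming `output_fields` carries name/type/description per schema property (the exact field set is illustrative):

```python
from typing import Any


def extract_output_fields(output_schema: dict[str, Any]) -> list[dict[str, str]]:
    """Flatten a block's raw output JSON Schema into short field summaries,
    omitting the $defs/anyOf machinery that inflated responses."""
    return [
        {
            "name": name,
            "type": str(prop.get("type", "any")),
            "description": prop.get("description", ""),
        }
        for name, prop in output_schema.get("properties", {}).items()
    ]
```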

**Result:** Average response size reduced from ~9,000 to ~1,300 chars
per block (~85% reduction). This directly reduces LLM token consumption,
latency, and API costs for AutoPilot interactions.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified models import and serialize correctly
- [x] Verified response size: 3,970 chars for 3 realistic blocks (avg
1,323/block)
- [x] Lint (`ruff check`) and type check (`pyright`) pass on changed
files
- [x] Frontend compatibility preserved: `blocks[].name` and `count`
fields retained for `block_list` handler

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2026-02-13 11:08:51 +01:00
dependabot[bot]
9a8c6ad609 chore(libs/deps): bump the production-dependencies group across 1 directory with 4 updates (#12056)
Bumps the production-dependencies group with 4 updates in the
/autogpt_platform/autogpt_libs directory:
[cryptography](https://github.com/pyca/cryptography),
[fastapi](https://github.com/fastapi/fastapi),
[launchdarkly-server-sdk](https://github.com/launchdarkly/python-server-sdk)
and [supabase](https://github.com/supabase/supabase-py).

Updates `cryptography` from 46.0.4 to 46.0.5
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's
changelog</a>.</em></p>
<blockquote>
<p>46.0.5 - 2026-02-10</p>
<pre><code>
* An attacker could create a malicious public key that reveals portions
of your
private key when using certain uncommon elliptic curves (binary curves).
This version now includes additional security checks to prevent this
attack.
This issue only affects binary elliptic curves, which are rarely used in
real-world applications. Credit to **XlabAI Team of Tencent Xuanwu Lab
and
Atuin Automated Vulnerability Discovery Engine** for reporting the
issue.
  **CVE-2026-26007**
* Support for ``SECT*`` binary elliptic curves is deprecated and will be
  removed in the next release.
</code></pre>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="06e120e682"><code>06e120e</code></a>
bump version for 46.0.5 release (<a
href="https://redirect.github.com/pyca/cryptography/issues/14289">#14289</a>)</li>
<li><a
href="0eebb9dbb6"><code>0eebb9d</code></a>
EC check key on cofactor &gt; 1 (<a
href="https://redirect.github.com/pyca/cryptography/issues/14287">#14287</a>)</li>
<li><a
href="bedf6e186b"><code>bedf6e1</code></a>
fix openssl version on 46 branch (<a
href="https://redirect.github.com/pyca/cryptography/issues/14220">#14220</a>)</li>
<li>See full diff in <a
href="https://github.com/pyca/cryptography/compare/46.0.4...46.0.5">compare
view</a></li>
</ul>
</details>
<br />

Updates `fastapi` from 0.128.0 to 0.128.7
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/fastapi/fastapi/releases">fastapi's
releases</a>.</em></p>
<blockquote>
<h2>0.128.7</h2>
<h3>Features</h3>
<ul>
<li> Show a clear error on attempt to include router into itself. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14258">#14258</a>
by <a
href="https://github.com/JavierSanchezCastro"><code>@​JavierSanchezCastro</code></a>.</li>
<li> Replace <code>dict</code> by <code>Mapping</code> on
<code>HTTPException.headers</code>. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/12997">#12997</a>
by <a
href="https://github.com/rijenkii"><code>@​rijenkii</code></a>.</li>
</ul>
<h3>Refactors</h3>
<ul>
<li>♻️ Simplify reading files in memory, do it sequentially instead of
(fake) parallel. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14884">#14884</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Docs</h3>
<ul>
<li>📝 Use <code>dfn</code> tag for definitions instead of
<code>abbr</code> in docs. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14744">#14744</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
</ul>
<h3>Internal</h3>
<ul>
<li> Tweak comment in test to reference PR. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14885">#14885</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
<li>🔧 Update LLM-prompt for <code>abbr</code> and <code>dfn</code> tags.
PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14747">#14747</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
<li> Test order for the submitted byte Files. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14828">#14828</a>
by <a
href="https://github.com/valentinDruzhinin"><code>@​valentinDruzhinin</code></a>.</li>
<li>🔧 Configure <code>test</code> workflow to run tests with
<code>inline-snapshot=review</code>. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14876">#14876</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
</ul>
<h2>0.128.6</h2>
<h3>Fixes</h3>
<ul>
<li>🐛 Fix <code>on_startup</code> and <code>on_shutdown</code>
parameters of <code>APIRouter</code>. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14873">#14873</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
</ul>
<h3>Translations</h3>
<ul>
<li>🌐 Update translations for zh (update-outdated). PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14843">#14843</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Internal</h3>
<ul>
<li> Fix parameterized tests with snapshots. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14875">#14875</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
</ul>
<h2>0.128.5</h2>
<h3>Refactors</h3>
<ul>
<li>♻️ Refactor and simplify Pydantic v2 (and v1) compatibility internal
utils. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14862">#14862</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Internal</h3>
<ul>
<li> Add inline snapshot tests for OpenAPI before changes from Pydantic
v2. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14864">#14864</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h2>0.128.4</h2>
<h3>Refactors</h3>
<ul>
<li>♻️ Refactor internals, simplify Pydantic v2/v1 utils,
<code>create_model_field</code>, better types for
<code>lenient_issubclass</code>. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14860">#14860</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
<li>♻️ Simplify internals, remove Pydantic v1 only logic, no longer
needed. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14857">#14857</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
<li>♻️ Refactor internals, cleanup unneeded Pydantic v1 specific logic.
PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14856">#14856</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8f82c94de0"><code>8f82c94</code></a>
🔖 Release version 0.128.7</li>
<li><a
href="5bb3423205"><code>5bb3423</code></a>
📝 Update release notes</li>
<li><a
href="6ce5e3e961"><code>6ce5e3e</code></a>
 Tweak comment in test to reference PR (<a
href="https://redirect.github.com/fastapi/fastapi/issues/14885">#14885</a>)</li>
<li><a
href="65da3dde12"><code>65da3dd</code></a>
📝 Update release notes</li>
<li><a
href="81f82fd955"><code>81f82fd</code></a>
🔧 Update LLM-prompt for <code>abbr</code> and <code>dfn</code> tags (<a
href="https://redirect.github.com/fastapi/fastapi/issues/14747">#14747</a>)</li>
<li><a
href="ff721017df"><code>ff72101</code></a>
📝 Update release notes</li>
<li><a
href="ca76a4eba9"><code>ca76a4e</code></a>
📝 Use <code>dfn</code> tag for definitions instead of <code>abbr</code>
in docs (<a
href="https://redirect.github.com/fastapi/fastapi/issues/14744">#14744</a>)</li>
<li><a
href="1133a4594d"><code>1133a45</code></a>
📝 Update release notes</li>
<li><a
href="38f965985e"><code>38f9659</code></a>
 Test order for the submitted byte Files (<a
href="https://redirect.github.com/fastapi/fastapi/issues/14828">#14828</a>)</li>
<li><a
href="3f1cc8f8f5"><code>3f1cc8f</code></a>
📝 Update release notes</li>
<li>Additional commits viewable in <a
href="https://github.com/fastapi/fastapi/compare/0.128.0...0.128.7">compare
view</a></li>
</ul>
</details>
<br />

Updates `launchdarkly-server-sdk` from 9.14.1 to 9.15.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/launchdarkly/python-server-sdk/releases">launchdarkly-server-sdk's
releases</a>.</em></p>
<blockquote>
<h2>v9.15.0</h2>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.14.1...9.15.0">9.15.0</a>
(2026-02-10)</h2>
<h3>Features</h3>
<ul>
<li>Drop support for python 3.9 (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/393">#393</a>)
(<a
href="5b761bd306">5b761bd</a>)</li>
<li>Update ChangeSet to always require a Selector (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/405">#405</a>)
(<a
href="5dc4f81688">5dc4f81</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>Add context manager for clearer, safer locks (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/396">#396</a>)
(<a
href="beca0fa498">beca0fa</a>)</li>
<li>Address potential race condition in FeatureStore update_availability
(<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/391">#391</a>)
(<a
href="31cf4875c3">31cf487</a>)</li>
<li>Allow modifying fdv2 data source options independent of main config
(<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/403">#403</a>)
(<a
href="d78079e7f3">d78079e</a>)</li>
<li>Mark copy_with_new_sdk_key method as deprecated (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/353">#353</a>)
(<a
href="e471ccc3d5">e471ccc</a>)</li>
<li>Prevent immediate polling on recoverable error (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/399">#399</a>)
(<a
href="da565a2dce">da565a2</a>)</li>
<li>Redis store is considered initialized when <code>$inited</code> key
is written (<a
href="e99a27d48f">e99a27d</a>)</li>
<li>Stop FeatureStoreClientWrapper poller on close (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/397">#397</a>)
(<a
href="468afdfef3">468afdf</a>)</li>
<li>Update DataSystemConfig to accept list of synchronizers (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/404">#404</a>)
(<a
href="c73ad14090">c73ad14</a>)</li>
<li>Update reason documentation with inExperiment value (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/401">#401</a>)
(<a
href="cbfc3dd887">cbfc3dd</a>)</li>
<li>Update Redis to write missing <code>$inited</code> key (<a
href="e99a27d48f">e99a27d</a>)</li>
</ul>
<hr />
<p>This PR was generated with <a
href="https://github.com/googleapis/release-please">Release Please</a>.
See <a
href="https://github.com/googleapis/release-please#release-please">documentation</a>.</p>
<!-- raw HTML omitted -->
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/launchdarkly/python-server-sdk/blob/main/CHANGELOG.md">launchdarkly-server-sdk's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.14.1...9.15.0">9.15.0</a>
(2026-02-10)</h2>
<h3>⚠ BREAKING CHANGES</h3>
<p><strong>Note:</strong> The following breaking changes apply only to
FDv2 (Flag Delivery v2) early access features, which are not subject to
semantic versioning and may change without a major version bump.</p>
<ul>
<li>Update ChangeSet to always require a Selector (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/405">#405</a>)
(<a
href="5dc4f81688">5dc4f81</a>)
<ul>
<li>The <code>ChangeSetBuilder.finish()</code> method now requires a
<code>Selector</code> parameter.</li>
</ul>
</li>
<li>Update DataSystemConfig to accept list of synchronizers (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/404">#404</a>)
(<a
href="c73ad14090">c73ad14</a>)
<ul>
<li>The <code>DataSystemConfig.synchronizers</code> field now accepts a
list of synchronizers, and the
<code>ConfigBuilder.synchronizers()</code> method accepts variadic
arguments.</li>
</ul>
</li>
</ul>
<h3>Features</h3>
<ul>
<li>Drop support for python 3.9 (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/393">#393</a>)
(<a
href="5b761bd306">5b761bd</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>Add context manager for clearer, safer locks (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/396">#396</a>)
(<a
href="beca0fa498">beca0fa</a>)</li>
<li>Address potential race condition in FeatureStore update_availability
(<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/391">#391</a>)
(<a
href="31cf4875c3">31cf487</a>)</li>
<li>Allow modifying fdv2 data source options independent of main config
(<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/403">#403</a>)
(<a
href="d78079e7f3">d78079e</a>)</li>
<li>Mark copy_with_new_sdk_key method as deprecated (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/353">#353</a>)
(<a
href="e471ccc3d5">e471ccc</a>)</li>
<li>Prevent immediate polling on recoverable error (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/399">#399</a>)
(<a
href="da565a2dce">da565a2</a>)</li>
<li>Redis store is considered initialized when <code>$inited</code> key
is written (<a
href="e99a27d48f">e99a27d</a>)</li>
<li>Stop FeatureStoreClientWrapper poller on close (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/397">#397</a>)
(<a
href="468afdfef3">468afdf</a>)</li>
<li>Update reason documentation with inExperiment value (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/401">#401</a>)
(<a
href="cbfc3dd887">cbfc3dd</a>)</li>
<li>Update Redis to write missing <code>$inited</code> key (<a
href="e99a27d48f">e99a27d</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e542f737a6"><code>e542f73</code></a>
chore(main): release 9.15.0 (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/394">#394</a>)</li>
<li><a
href="e471ccc3d5"><code>e471ccc</code></a>
fix: Mark copy_with_new_sdk_key method as deprecated (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/353">#353</a>)</li>
<li><a
href="5dc4f81688"><code>5dc4f81</code></a>
feat: Update ChangeSet to always require a Selector (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/405">#405</a>)</li>
<li><a
href="f20fffeb1e"><code>f20fffe</code></a>
chore: Remove dead code, clarify names, other cleanup (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/398">#398</a>)</li>
<li><a
href="c73ad14090"><code>c73ad14</code></a>
fix: Update DataSystemConfig to accept list of synchronizers (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/404">#404</a>)</li>
<li><a
href="d78079e7f3"><code>d78079e</code></a>
fix: Allow modifying fdv2 data source options independent of main config
(<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/403">#403</a>)</li>
<li><a
href="e99a27d48f"><code>e99a27d</code></a>
chore: Support persistent data store verification in contract tests (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/402">#402</a>)</li>
<li><a
href="cbfc3dd887"><code>cbfc3dd</code></a>
fix: Update reason documentation with inExperiment value (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/401">#401</a>)</li>
<li><a
href="5a1adbb2de"><code>5a1adbb</code></a>
chore: Update sdk_metadata features (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/400">#400</a>)</li>
<li><a
href="da565a2dce"><code>da565a2</code></a>
fix: Prevent immediate polling on recoverable error (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/399">#399</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.14.1...9.15.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `supabase` from 2.27.2 to 2.28.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-py/releases">supabase's
releases</a>.</em></p>
<blockquote>
<h2>v2.28.0</h2>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.27.3...v2.28.0">2.28.0</a>
(2026-02-10)</h2>
<h3>Features</h3>
<ul>
<li><strong>storage:</strong> add list_v2 method to file_api client (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1377">#1377</a>)
(<a
href="259f4ad42d">259f4ad</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>auth:</strong> add missing is_sso_user, deleted_at,
banned_until to User model (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1375">#1375</a>)
(<a
href="7f84a62996">7f84a62</a>)</li>
<li><strong>realtime:</strong> ensure remove_channel removes channel
from channels dict (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1373">#1373</a>)
(<a
href="0923314039">0923314</a>)</li>
<li><strong>realtime:</strong> use pop with default in _handle_message
to prevent KeyError (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1388">#1388</a>)
(<a
href="baea26f7ce">baea26f</a>)</li>
<li><strong>storage3:</strong> replace print() with warnings.warn() for
trailing slash notice (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1380">#1380</a>)
(<a
href="50b099fa06">50b099f</a>)</li>
</ul>
<h2>v2.27.3</h2>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.27.2...v2.27.3">2.27.3</a>
(2026-02-03)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>deprecate python 3.9 in all packages (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1365">#1365</a>)
(<a
href="cc72ed75d4">cc72ed7</a>)</li>
<li>ensure storage_url has trailing slash to prevent warning (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1367">#1367</a>)
(<a
href="4267ff1345">4267ff1</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-py/blob/main/CHANGELOG.md">supabase's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.27.3...v2.28.0">2.28.0</a>
(2026-02-10)</h2>
<h3>Features</h3>
<ul>
<li><strong>storage:</strong> add list_v2 method to file_api client (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1377">#1377</a>)
(<a
href="259f4ad42d">259f4ad</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>auth:</strong> add missing is_sso_user, deleted_at,
banned_until to User model (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1375">#1375</a>)
(<a
href="7f84a62996">7f84a62</a>)</li>
<li><strong>realtime:</strong> ensure remove_channel removes channel
from channels dict (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1373">#1373</a>)
(<a
href="0923314039">0923314</a>)</li>
<li><strong>realtime:</strong> use pop with default in _handle_message
to prevent KeyError (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1388">#1388</a>)
(<a
href="baea26f7ce">baea26f</a>)</li>
<li><strong>storage3:</strong> replace print() with warnings.warn() for
trailing slash notice (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1380">#1380</a>)
(<a
href="50b099fa06">50b099f</a>)</li>
</ul>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.27.2...v2.27.3">2.27.3</a>
(2026-02-03)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>deprecate python 3.9 in all packages (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1365">#1365</a>)
(<a
href="cc72ed75d4">cc72ed7</a>)</li>
<li>ensure storage_url has trailing slash to prevent warning (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1367">#1367</a>)
(<a
href="4267ff1345">4267ff1</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="59e338400b"><code>59e3384</code></a>
chore(main): release 2.28.0 (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1378">#1378</a>)</li>
<li><a
href="baea26f7ce"><code>baea26f</code></a>
fix(realtime): use pop with default in _handle_message to prevent
KeyError (#...</li>
<li><a
href="259f4ad42d"><code>259f4ad</code></a>
feat(storage): add list_v2 method to file_api client (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1377">#1377</a>)</li>
<li><a
href="50b099fa06"><code>50b099f</code></a>
fix(storage3): replace print() with warnings.warn() for trailing slash
notice...</li>
<li><a
href="0923314039"><code>0923314</code></a>
fix(realtime): ensure remove_channel removes channel from channels dict
(<a
href="https://redirect.github.com/supabase/supabase-py/issues/1373">#1373</a>)</li>
<li><a
href="7f84a62996"><code>7f84a62</code></a>
fix(auth): add missing is_sso_user, deleted_at, banned_until to User
model (#...</li>
<li><a
href="57dd6e2195"><code>57dd6e2</code></a>
chore(deps): bump the uv group across 1 directory with 3 updates (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1369">#1369</a>)</li>
<li><a
href="c357def670"><code>c357def</code></a>
chore(main): release 2.27.3 (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1368">#1368</a>)</li>
<li><a
href="4267ff1345"><code>4267ff1</code></a>
fix: ensure storage_url has trailing slash to prevent warning (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1367">#1367</a>)</li>
<li><a
href="cc72ed75d4"><code>cc72ed7</code></a>
fix: deprecate python 3.9 in all packages (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1365">#1365</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/supabase/supabase-py/compare/v2.27.2...v2.28.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Dependency update bumps 4 packages in the production-dependencies group,
including a **critical security patch for `cryptography`**
(CVE-2026-26007) that prevents malicious public key attacks on binary
elliptic curves. The update also includes bug fixes for `fastapi`,
`launchdarkly-server-sdk`, and `supabase`.

- **cryptography** 46.0.4 → 46.0.5: patches CVE-2026-26007, deprecates
SECT* binary curves
- **fastapi** 0.128.0 → 0.128.7: bug fixes, improved error handling,
relaxed Starlette constraint
- **launchdarkly-server-sdk** 9.14.1 → 9.15.0: drops Python 3.9 support
(requires >=3.10), fixes race conditions
- **supabase** 2.27.2/2.27.3 → 2.28.0: realtime fixes, new User model
fields

The lock files correctly resolve all dependencies. Python 3.10+
requirement is already enforced in both packages. However, backend's
`pyproject.toml` still specifies `launchdarkly-server-sdk = "^9.14.1"`
while the lock file uses 9.15.0 (pulled from autogpt_libs dependency),
creating a minor version constraint inconsistency.
</details>


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- This PR is safe to merge with one minor style suggestion
- Automated dependency update with critical security patch for
cryptography. All updates are backwards-compatible within semver
constraints. Lock files correctly resolve all dependencies. Python 3.10+
is already enforced. The only minor issue is a version constraint
inconsistency in backend's pyproject.toml for launchdarkly-server-sdk,
which doesn't affect functionality but should be aligned for clarity.
- autogpt_platform/backend/pyproject.toml needs launchdarkly-server-sdk
version constraint updated to ^9.15.0
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Otto <otto@agpt.co>
2026-02-13 09:10:11 +00:00
Ubbe
e8c50b96d1 fix(frontend): improve CoPilot chat table styling (#12094)
## Summary
- Remove left and right borders from tables rendered in CoPilot chat
- Increase cell padding (py-3 → py-3.5) for better spacing between text
and lines
- Applies to both Streamdown (main chat) and MarkdownRenderer (tool
outputs)

Design feedback from Olivia to make tables "breathe" more.

## Test plan
- [ ] Open CoPilot chat and trigger a response containing a table
- [ ] Verify tables no longer have left/right borders
- [ ] Verify increased spacing between rows
- [ ] Check both light and dark modes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Improved CoPilot chat table styling by removing left and right borders
and increasing vertical padding from `py-3` to `py-3.5`. Changes apply
to both:
- Streamdown-rendered tables (via CSS selector in `globals.css`)  
- MarkdownRenderer tables (via Tailwind classes)

The changes make tables "breathe" more per design feedback from Olivia.

**Issue Found:**
- The CSS padding value in `globals.css:192` is `0.625rem` (`py-2.5`)
but should be `0.875rem` (`py-3.5`) to match the PR description and the
MarkdownRenderer implementation.
</details>


<details><summary><h3>Confidence Score: 2/5</h3></summary>

- This PR has a logical error that will cause inconsistent table styling
between Streamdown and MarkdownRenderer tables
- The implementation has an inconsistency where the CSS file uses
`py-2.5` padding while the PR description and MarkdownRenderer use
`py-3.5`. This will result in different table padding between the two
rendering systems, contradicting the goal of consistent styling
improvements.
- Pay close attention to `autogpt_platform/frontend/src/app/globals.css`
- the padding value needs to be corrected to match the intended design
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2026-02-13 09:38:59 +08:00
Ubbe
30e854569a feat(frontend): add exact timestamp tooltip on run timestamps (#12087)
Resolves OPEN-2693: Make exact timestamp of runs accessible through UI.

The NewAgentLibraryView shows relative timestamps ("2 days ago") for
runs and schedules, but unlike the OldAgentLibraryView it didn't show
the exact timestamp on hover. This PR adds a native `title` tooltip so
users can see the full date/time by hovering.

### Changes 🏗️

- Added `descriptionTitle` prop to `SidebarItemCard` that renders as a
`title` attribute on the description text (see the sketch after this
list)
- `TaskListItem` now passes the exact `run.started_at` timestamp via
`descriptionTitle`
- `ScheduleListItem` now passes the exact `schedule.next_run_time`
timestamp via `descriptionTitle`

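A minimal sketch of the prop plumbing described above; the component
shape and everything other than `descriptionTitle` are simplified
stand-ins, not the actual implementation:

```tsx
// Hypothetical, simplified SidebarItemCard: the real component renders
// more than a span, but the tooltip mechanism is the same.
interface SidebarItemCardProps {
  description: string; // relative time, e.g. "2 days ago"
  descriptionTitle?: string; // exact timestamp, shown as a native tooltip
}

export function SidebarItemCard({
  description,
  descriptionTitle,
}: SidebarItemCardProps) {
  // The native `title` attribute produces the browser hover tooltip.
  return <span title={descriptionTitle}>{description}</span>;
}

// Usage, roughly as in TaskListItem:
// <SidebarItemCard description="2 days ago" descriptionTitle={run.started_at} />
```
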
### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [ ] Open an agent in the library view
- [ ] Hover over a run's relative timestamp (e.g. "2 days ago") and
confirm the full date/time tooltip appears
- [ ] Hover over a schedule's relative timestamp and confirm the full
date/time tooltip appears

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Added native tooltip functionality to show exact timestamps in the
library view. The implementation adds a `descriptionTitle` prop to
`SidebarItemCard` that renders as a `title` attribute on the description
text. This allows users to hover over relative timestamps (e.g., "2 days
ago") to see the full date/time.

**Changes:**
- Added optional `descriptionTitle` prop to `SidebarItemCard` component
(SidebarItemCard.tsx:10)
- `TaskListItem` passes `run.started_at` as the tooltip value
(TaskListItem.tsx:84-86)
- `ScheduleListItem` passes `schedule.next_run_time` as the tooltip
value (ScheduleListItem.tsx:32)
- Unrelated fix included: Sentry configuration updated to suppress
cross-origin stylesheet errors (instrumentation-client.ts:25-28)

**Note:** The PR includes two separate commits - the main timestamp
tooltip feature and a Sentry error suppression fix. The PR description
only documents the timestamp feature.
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with minimal risk
- The changes are straightforward and limited in scope - adding an
optional prop that forwards a native HTML attribute for tooltip
functionality. The Text component already supports forwarding arbitrary
HTML attributes through its spread operator (...rest), ensuring the
`title` attribute works correctly. Both the timestamp tooltip feature
and the Sentry configuration fix are low-risk improvements with no
breaking changes.
- No files require special attention
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant TaskListItem
    participant ScheduleListItem
    participant SidebarItemCard
    participant Text
    participant Browser

    User->>TaskListItem: Hover over run timestamp
    TaskListItem->>SidebarItemCard: Pass descriptionTitle (run.started_at)
    SidebarItemCard->>Text: Render with title attribute
    Text->>Browser: Forward title attribute to DOM
    Browser->>User: Display native tooltip with exact timestamp

    User->>ScheduleListItem: Hover over schedule timestamp
    ScheduleListItem->>SidebarItemCard: Pass descriptionTitle (schedule.next_run_time)
    SidebarItemCard->>Text: Render with title attribute
    Text->>Browser: Forward title attribute to DOM
    Browser->>User: Display native tooltip with exact timestamp
```
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 09:38:16 +08:00
Ubbe
301d7cbada fix(frontend): suppress cross-origin stylesheet security error (#12086)
## Summary
- Adds `ignoreErrors` to the Sentry client configuration
(`instrumentation-client.ts`) to filter out `SecurityError:
CSSStyleSheet.cssRules getter: Not allowed to access cross-origin
stylesheet` errors
- These errors are caused by Sentry Replay (rrweb) attempting to
serialize DOM snapshots that include cross-origin stylesheets (from
browser extensions or CDN-loaded CSS)
- This was reported via Sentry on production, occurring on any page when
logged in

## Changes
- **`frontend/instrumentation-client.ts`**: Added `ignoreErrors: [/Not
allowed to access cross-origin stylesheet/]` to `Sentry.init()` config

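For reference, a sketch of what this looks like in `Sentry.init()`
(assuming the Next.js SDK; all other options omitted):

```typescript
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  // ...existing options...
  // Filter out the benign SecurityError raised when Sentry Replay (rrweb)
  // serializes DOM snapshots that include cross-origin stylesheets.
  ignoreErrors: [/Not allowed to access cross-origin stylesheet/],
});
```
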
## Test plan
- [ ] Verify the error no longer appears in Sentry after deployment
- [ ] Verify Sentry Replay still works correctly for other errors
- [ ] Verify no regressions in error tracking (other errors should still
be captured)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Adds error filtering to Sentry client configuration to suppress
cross-origin stylesheet security errors that occur when Sentry Replay
(rrweb) attempts to serialize DOM snapshots containing stylesheets from
browser extensions or CDN-loaded CSS. This prevents noise in Sentry
error logs without affecting the capture of legitimate errors.
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with minimal risk
- The change adds a simple error filter to suppress benign cross-origin
stylesheet errors that are caused by Sentry Replay itself. The regex
pattern is specific and only affects client-side error reporting, with
no impact on application functionality or legitimate error capture
- No files require special attention
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 09:37:54 +08:00
Ubbe
d95aef7665 fix(copilot): stream timeout, long-running tool polling, and CreateAgent UI refresh (#12070)
Agent generation completes on the backend but the UI does not
update/refresh to show the result.

### Changes 🏗️

- **Stream start timeout (12s):** If the backend doesn't begin streaming
within 12 seconds of submitting a message, the stream is aborted and a
destructive toast is shown to the user.
- **Long-running tool polling:** Added `useLongRunningToolPolling` hook
that polls the session endpoint every 1.5s while a tool output is in an
operating state (`operation_started` / `operation_pending` /
`operation_in_progress`). When the backend completes, messages are
refreshed so the UI reflects the final result (see the hook sketch after
this list).
- **CreateAgent UI improvements:** Replaced the orbit loader / progress
bar with a mini-game, added expanded accordion for saved agents, and
improved the saved-agent card with image, icons, and links that open in
new tabs.
- **Backend tweaks:** Added `image_url` to `CreateAgentToolOutput`,
minor model/service updates for the dummy agent generator.

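A rough sketch of the polling behavior described above; the endpoint
path, the `toolState` field, and the `refreshMessages` callback are
assumed names, not the actual implementation:

```typescript
import { useEffect } from "react";

const OPERATING_STATES = new Set([
  "operation_started",
  "operation_pending",
  "operation_in_progress",
]);

export function useLongRunningToolPolling(
  sessionId: string,
  toolState: string | undefined,
  refreshMessages: () => Promise<void>,
) {
  useEffect(() => {
    // Only poll while a tool output is in an operating state.
    if (!toolState || !OPERATING_STATES.has(toolState)) return;

    const interval = setInterval(async () => {
      // Poll the session endpoint (path is an assumption); once the
      // backend reports completion, refresh messages so the UI shows
      // the final result.
      const res = await fetch(`/api/chat/sessions/${sessionId}`);
      const session = await res.json();
      if (!OPERATING_STATES.has(session.toolState)) {
        await refreshMessages();
      }
    }, 1500); // poll every 1.5s, per the description above

    return () => clearInterval(interval);
  }, [sessionId, toolState, refreshMessages]);
}
```
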
### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Send a message and verify the stream starts within 12s or a toast
appears
- [x] Trigger agent creation and verify the UI updates when the backend
completes
- [x] Verify the saved-agent card renders correctly with image, links,
and icons

---------

Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 20:06:40 +00:00
Nicholas Tindle
cb166dd6fb feat(blocks): Store sandbox files to workspace (#12073)
Store files created by sandbox blocks (Claude Code, Code Executor) to
the user's workspace for persistence across runs.

### Changes 🏗️

- **New `sandbox_files.py` utility** (`backend/util/sandbox_files.py`)
  - Shared module for extracting files from E2B sandboxes
  - Stores files to workspace via `store_media_file()` (includes virus
scanning, size limits)
  - Returns `SandboxFileOutput` with path, content, and `workspace_ref`
(flow sketched after this list)

- **Claude Code block** (`backend/blocks/claude_code.py`)
  - Added `workspace_ref` field to `FileOutput` schema
  - Replaced inline `_extract_files()` with shared utility
  - Files from working directory now stored to workspace automatically

- **Code Executor block** (`backend/blocks/code_executor.py`)
  - Added `files` output field to `ExecuteCodeBlock.Output`
  - Creates `/output` directory in sandbox before execution
  - Extracts all files (text + binary) from `/output` after execution
- Updated `execute_code()` to support file extraction with
`extract_files` param

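A conceptual sketch of that extraction flow, expressed in TypeScript for
illustration only; the real utility is Python, and the `SandboxLike` and
`storeMediaFile` interfaces below are hypothetical stand-ins rather than
the E2B SDK or the platform API:

```typescript
interface SandboxLike {
  listFiles(dir: string): Promise<string[]>;
  readFile(path: string): Promise<Uint8Array>;
}

interface SandboxFileOutput {
  path: string;
  content: Uint8Array;
  workspace_ref: string;
}

async function extractAndStoreFiles(
  sandbox: SandboxLike,
  dir: string,
  storeMediaFile: (path: string, data: Uint8Array) => Promise<string>,
): Promise<SandboxFileOutput[]> {
  const outputs: SandboxFileOutput[] = [];
  for (const path of await sandbox.listFiles(dir)) {
    const content = await sandbox.readFile(path);
    // Storage applies virus scanning and size limits before persisting,
    // and returns a workspace reference usable across runs.
    const workspace_ref = await storeMediaFile(path, content);
    outputs.push({ path, content, workspace_ref });
  }
  return outputs;
}
```
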
### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create agent with Claude Code block, have it create a file, verify
`workspace_ref` in output
- [x] Create agent with Code Executor block, write file to `/output`,
verify `workspace_ref` in output
  - [x] Verify files persist in workspace after sandbox disposal
- [x] Verify binary files (images, etc.) work correctly in Code Executor
- [x] Verify existing graphs using `content` field still work (backward
compat)

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - this is purely additive backend
code.

---

**Related:** Closes SECRT-1931

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Medium Risk**
> Adds automatic extraction and workspace storage of sandbox-written
files (including binaries for code execution), which can affect output
payload size, performance, and file-handling edge cases.
> 
> **Overview**
> **Sandbox blocks now persist generated files to workspace.** A new
shared utility (`backend/util/sandbox_files.py`) extracts files from an
E2B sandbox (scoped by a start timestamp) and stores them via
`store_media_file`, returning `SandboxFileOutput` with `workspace_ref`.
> 
> `ClaudeCodeBlock` replaces its inline file-scraping logic with this
utility and updates the `files` output schema to include
`workspace_ref`.
> 
> `ExecuteCodeBlock` adds a `files` output and extends the executor
mixin to optionally extract/store files (text + binary) when an
`execution_context` is provided; related mocks/tests and docs are
updated accordingly.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
343854c0cf. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 15:56:59 +00:00
Swifty
3d31f62bf1 Revert "added feature request tooling"
This reverts commit b8b6c9de23.
2026-02-12 16:39:24 +01:00
Swifty
b8b6c9de23 added feature request tooling 2026-02-12 16:38:17 +01:00
Abhimanyu Yadav
4f6055f494 refactor(frontend): remove default expiration date from API key credentials form (#12092)
### Changes 🏗️

Removed the default expiration date for API keys in the credentials
modal. Previously, API keys were set to expire the next day by default,
but now the expiration date field starts empty, allowing users to
explicitly choose whether they want to set an expiration date.

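A sketch of the submission behavior described above (field and helper
names are illustrative, not the actual code):

```typescript
interface ApiKeyFormValues {
  expiresAt: string; // now defaults to "" instead of tomorrow's date
}

function toCreatePayload(values: ApiKeyFormValues) {
  return {
    // An empty field means "no expiration": pass undefined so the
    // backend's optional expires_at is simply omitted.
    expires_at: values.expiresAt
      ? new Date(values.expiresAt).getTime()
      : undefined,
  };
}
```
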
### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Open the API key credentials modal and verify the expiration date
field is empty by default
  - [x] Test creating an API key with and without an expiration date
  - [x] Verify both scenarios work correctly

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Removed the default expiration date for API key credentials in the
credentials modal. Previously, API keys were automatically set to expire
the next day at midnight. Now the expiration date field starts empty,
allowing users to explicitly choose whether to set an expiration.

- Removed `getDefaultExpirationDate()` helper function that calculated
tomorrow's date
- Changed default `expiresAt` value from calculated date to empty string
- Backend already supports optional expiration (`expires_at?: number`),
so no backend changes needed
- Form submission correctly handles empty expiration by passing
`undefined` to the API
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with minimal risk
- The changes are straightforward and well-contained. The refactor
removes a helper function and changes a default value. The backend API
already supports optional expiration dates, and the form submission
logic correctly handles empty values by passing undefined. The change
improves UX by not forcing a default expiration date on users.
- No files require special attention
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
2026-02-12 12:57:06 +00:00
Otto
695a185fa1 fix(frontend): remove fixed min-height from CoPilot message container (#12091)
## Summary

Removes the `min-h-screen` class from `ConversationContent` in
ChatMessagesContainer, which was causing fixed height layout issues in
the CoPilot chat interface.

## Changes

- Removed `min-h-screen` from ConversationContent className

## Linear

Fixes [SECRT-1944](https://linear.app/autogpt/issue/SECRT-1944)

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Removes the `min-h-screen` (100vh) class from `ConversationContent` that
was causing the chat message container to enforce a minimum viewport
height. The parent container already handles height constraints with
`h-full min-h-0` and flexbox layout, so the fixed minimum height was
creating layout conflicts. The component now properly grows within its
flex container using `flex-1`.
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with minimal risk
- The change removes a single problematic CSS class that was causing
fixed height layout issues. The parent container already handles height
constraints properly with flexbox, and removing min-h-screen allows the
component to size correctly within its flex parent. This is a targeted,
low-risk bug fix with no logic changes.
- No files require special attention
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
2026-02-12 12:46:29 +00:00
Reinier van der Leer
113e87a23c refactor(backend): Reduce circular imports (#12068)
I'm getting circular import issues because there is a lot of
cross-importing between `backend.data`, `backend.blocks`, and other
modules. This change reduces block-related cross-imports and thus the
risk of introducing circular imports.

### Changes 🏗️

- Strip down `backend.data.block`
- Move `Block` base class and related class/enum defs to
`backend.blocks._base`
  - Move `is_block_auth_configured` to `backend.blocks._utils`
- Move `get_blocks()`, `get_io_block_ids()` etc. to `backend.blocks`
(`__init__.py`)
  - Update imports everywhere
- Remove unused and poorly typed `Block.create()`
  - Change usages from `block_cls.create()` to `block_cls()`
- Improve typing of `load_all_blocks` and `get_blocks`
- Move cross-import of `backend.api.features.library.model` from
`backend/data/__init__.py` to `backend/data/integrations.py`
- Remove deprecated attribute `NodeModel.webhook`
  - Re-generate OpenAPI spec and fix frontend usage
- Eliminate module-level `backend.blocks` import from `blocks/agent.py`
- Eliminate module-level `backend.data.execution` and
`backend.executor.manager` imports from `blocks/helpers/review.py`
- Replace `BlockInput` with `GraphInput` for graph inputs

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI static type-checking + tests should be sufficient for this
2026-02-12 12:07:49 +00:00
Abhimanyu Yadav
d09f1532a4 feat(frontend): replace legacy builder with new flow editor (#12081)

### Changes 🏗️

This PR completes the migration from the legacy builder to the new Flow
editor by removing all legacy code and feature flags.

**Removed:**
- Old builder view toggle functionality (`BuilderViewTabs.tsx`)
- Legacy debug panel (`RightSidebar.tsx`)
- Feature flags: `NEW_FLOW_EDITOR` and `BUILDER_VIEW_SWITCH`
- `useBuilderView` hook and related view-switching logic

**Updated:**
- Simplified `build/page.tsx` to always render the new Flow editor
- Added CSS styling (`flow.css`) to properly render Phosphor icons in
React Flow handles

**Tests:**
- Skipped e2e test suite in `build.spec.ts` (legacy builder tests)
- Follow-up PR (#12082) will add new e2e tests for the Flow editor

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
    - [x] Create a new flow and verify it loads correctly
    - [x] Add nodes and connections to verify basic functionality works
    - [x] Verify that node handles render correctly with the new CSS
- [x] Check that the UI is clean without the old debug panel or view
toggles

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
2026-02-12 11:16:01 +00:00
Zamil Majdy
a78145505b fix(copilot): merge split assistant messages to prevent Anthropic API errors (#12062)
## Summary
- When the copilot model responds with both text content AND a
long-running tool call (e.g., `create_agent`), the streaming code
created two separate consecutive assistant messages — one with text, one
with `tool_calls`. This caused Anthropic's API to reject with
`"unexpected tool_use_id found in tool_result blocks"` because the
`tool_result` couldn't find a matching `tool_use` in the immediately
preceding assistant message.
- Added a defensive merge of consecutive assistant messages in
`to_openai_messages()` (fixes existing corrupt sessions too; sketched
below)
- Fixed `_yield_tool_call` to add tool_calls to the existing
current-turn assistant message instead of creating a new one
- Changed `accumulated_tool_calls` assignment to use `extend` to prevent
overwriting tool_calls added by long-running tool flow

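The merge logic, sketched in TypeScript for illustration; the actual
implementation is in the Python backend, and the message shape here is
simplified:

```typescript
interface ChatMessage {
  role: "assistant" | "user" | "tool";
  content?: string;
  tool_calls?: unknown[];
}

function mergeConsecutiveAssistantMessages(
  messages: ChatMessage[],
): ChatMessage[] {
  const merged: ChatMessage[] = [];
  for (const msg of messages) {
    const prev = merged[merged.length - 1];
    if (prev?.role === "assistant" && msg.role === "assistant") {
      // Fold text and tool_calls into the preceding assistant message so
      // a following tool_result can find its matching tool_use.
      prev.content = [prev.content, msg.content].filter(Boolean).join("");
      prev.tool_calls = [...(prev.tool_calls ?? []), ...(msg.tool_calls ?? [])];
    } else {
      merged.push({ ...msg });
    }
  }
  return merged;
}
```
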
## Test plan
- [x] All 23 chat feature tests pass (`backend/api/features/chat/`)
- [x] All 44 prompt utility tests pass (`backend/util/prompt_test.py`)
- [x] All pre-commit hooks pass (ruff, isort, black, pyright)
- [ ] Manual test: create an agent via copilot, then ask a follow-up
question — should no longer get 400 error

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Fixes a critical bug where long-running tool calls (like `create_agent`)
caused Anthropic API 400 errors due to split assistant messages. The fix
ensures tool calls are added to the existing assistant message instead
of creating new ones, and adds a defensive merge function to repair any
existing corrupt sessions.

**Key changes:**
- Added `_merge_consecutive_assistant_messages()` to defensively merge
split assistant messages in `to_openai_messages()`
- Modified `_yield_tool_call()` to append tool calls to the current-turn
assistant message instead of creating a new one
- Changed `accumulated_tool_calls` from assignment to `extend` to
preserve tool calls already added by long-running tool flow

**Impact:** Resolves the issue where users received 400 errors after
creating agents via copilot and asking follow-up questions.
</details>


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- Safe to merge with minor verification recommended
- The changes are well-targeted and solve a real API compatibility
issue. The logic is sound: searching backwards for the current assistant
message is correct, and using `extend` instead of assignment prevents
overwriting. The defensive merge in `to_openai_messages()` also fixes
existing corrupt sessions. All existing tests pass according to the PR
description.
- No files require special attention - changes are localized and
defensive
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant StreamAPI as stream_chat_completion
    participant Chunks as _stream_chat_chunks
    participant ToolCall as _yield_tool_call
    participant Session as ChatSession
    
    User->>StreamAPI: Send message
    StreamAPI->>Chunks: Stream chat chunks
    
    alt Text + Long-running tool call
        Chunks->>StreamAPI: Text delta (content)
        StreamAPI->>Session: Append assistant message with content
        Chunks->>ToolCall: Tool call detected
        
        Note over ToolCall: OLD: Created new assistant message<br/>NEW: Appends to existing assistant
        
        ToolCall->>Session: Search backwards for current assistant
        ToolCall->>Session: Append tool_call to existing message
        ToolCall->>Session: Add pending tool result
    end
    
    StreamAPI->>StreamAPI: Merge accumulated_tool_calls
    Note over StreamAPI: Use extend (not assign)<br/>to preserve existing tool_calls
    
    StreamAPI->>Session: to_openai_messages()
    Session->>Session: _merge_consecutive_assistant_messages()
    Note over Session: Defensive: Merges any split<br/>assistant messages
    Session-->>StreamAPI: Merged messages
    
    StreamAPI->>User: Return response
```
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
2026-02-12 01:52:17 +00:00
Otto
36aeb0b2b3 docs(blocks): clarify HumanInTheLoop output descriptions for agent builder (#12069)
## Problem

The agent builder (LLM) misinterprets the HumanInTheLoop block outputs.
It thinks `approved_data` and `rejected_data` will yield status strings
like "APPROVED" or "REJECTED" instead of understanding that the actual
input data passes through.

This leads to unnecessary complexity - the agent builder adds comparison
blocks to check for status strings that don't exist.

## Solution

Enriched the block docstring and all input/output field descriptions to
make it explicit that:
1. The output is the actual data itself, not a status string
2. The routing is determined by which output pin fires
3. How to use the block correctly (connect downstream blocks to
appropriate output pins)

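To make the routing concrete, an illustrative TypeScript analogy (not
the block implementation): the block emits the original data on exactly
one of two output pins, never a status string.

```typescript
type ReviewOutcome<T> =
  | { pin: "approved_data"; data: T } // the original input, unchanged
  | { pin: "rejected_data"; data: T }; // also the original input

function route<T>(outcome: ReviewOutcome<T>) {
  // Downstream blocks connect to a pin; routing is decided by WHICH pin
  // fired, never by comparing the payload to "APPROVED"/"REJECTED".
  if (outcome.pin === "approved_data") {
    // approved path receives outcome.data (the actual data)
  } else {
    // rejected path receives outcome.data (the actual data)
  }
}
```
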
## Changes

- Updated block docstring with clear "How it works" and "Example usage"
sections
- Enhanced `data` input description to explain data flow
- Enhanced `name` input description for reviewer context
- Enhanced `approved_data` output to explicitly state it's NOT a status
string
- Enhanced `rejected_data` output to explicitly state it's NOT a status
string
- Enhanced `review_message` output for clarity

## Testing

Documentation-only change to schema descriptions. No functional changes.

Fixes SECRT-1930

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Enhanced documentation for the `HumanInTheLoopBlock` to clarify how
output pins work. The key improvement explicitly states that output pins
(`approved_data` and `rejected_data`) yield the actual input data, not
status strings like "APPROVED" or "REJECTED". This prevents the agent
builder (LLM) from misinterpreting the block's behavior and adding
unnecessary comparison blocks.

**Key changes:**
- Added "How it works" and "Example usage" sections to the block
docstring
- Clarified that routing is determined by which output pin fires, not by
comparing output values
- Enhanced all input/output field descriptions with explicit data flow
explanations
- Emphasized that downstream blocks should be connected to the
appropriate output pin based on desired workflow path

This is a documentation-only change with no functional modifications to
the code logic.
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with no risk
- Documentation-only change that accurately reflects the existing code
behavior. No functional changes, no runtime impact, and the enhanced
descriptions correctly explain how the block outputs work based on
verification of the implementation code.
- No files require special attention
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2026-02-11 15:43:58 +00:00
Ubbe
2a189c44c4 fix(frontend): API stream issues leaking into prompt (#12063)
## Changes 🏗️

<img width="800" height="621" alt="Screenshot 2026-02-11 at 19 32 39"
src="https://github.com/user-attachments/assets/e97be1a7-972e-4ae0-8dfa-6ade63cf287b"
/>

When the backend API returns an error, prevent it from leaking into the
chat stream and instead handle it gracefully via a toast.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and test the changes

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

This PR fixes an issue where backend API stream errors were leaking into
the chat prompt instead of being handled gracefully. The fix involves
both backend and frontend changes to ensure error events conform to the
AI SDK's strict schema.

**Key Changes:**
- **Backend (`response_model.py`)**: Added custom `to_sse()` method for
`StreamError` that only emits `type` and `errorText` fields, stripping
extra fields like `code` and `details` that cause AI SDK validation
failures
- **Backend (`prompt.py`)**: Added validation step after context
compression to remove orphaned tool responses without matching tool
calls, preventing "unexpected tool_use_id" API errors
- **Frontend (`route.ts`)**: Implemented SSE stream normalization with
`normalizeSSEStream()` and `normalizeSSEEvent()` functions to strip
non-conforming fields from error events before they reach the AI SDK
(sketched after this summary)
- **Frontend (`ChatMessagesContainer.tsx`)**: Added toast notifications
for errors and improved error display UI with deduplication logic

The changes ensure a clean separation between internal error metadata
(useful for logging/debugging) and the strict schema required by the AI
SDK on the frontend.
</details>
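
A plausible sketch of the per-event normalization; the actual
`normalizeSSEEvent()` in `route.ts` may differ in detail:

```typescript
function normalizeSSEEvent(rawData: string): string {
  try {
    const event = JSON.parse(rawData);
    if (event?.type === "error") {
      // The AI SDK's schema only allows {type, errorText}; strip extra
      // fields such as `code` and `details` before forwarding.
      return JSON.stringify({ type: "error", errorText: event.errorText });
    }
  } catch {
    // Not JSON (e.g. keep-alives): pass through untouched.
  }
  return rawData;
}
```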


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- This PR is safe to merge with low risk
- The changes are well-structured and address a specific bug with proper
error handling. The dual-layer approach (backend filtering in `to_sse()`
+ frontend normalization) provides defense-in-depth. However, the lack
of automated tests for the new error normalization logic and the
potential for edge cases in SSE parsing prevent a perfect score.
- Pay close attention to
`autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts`
- the SSE normalization logic should be tested with various error
scenarios
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant Frontend as ChatMessagesContainer
    participant Proxy as /api/chat/.../stream
    participant Backend as Backend API
    participant AISDK as AI SDK

    User->>Frontend: Send message
    Frontend->>Proxy: POST with message
    Proxy->>Backend: Forward request with auth
    Backend->>Backend: Process message
    
    alt Success Path
        Backend->>Proxy: SSE stream (text-delta, etc.)
        Proxy->>Proxy: normalizeSSEStream (pass through)
        Proxy->>AISDK: Forward SSE events
        AISDK->>Frontend: Update messages
        Frontend->>User: Display response
    else Error Path
        Backend->>Backend: StreamError.to_sse()
        Note over Backend: Only emit {type, errorText}
        Backend->>Proxy: SSE error event
        Proxy->>Proxy: normalizeSSEEvent()
        Note over Proxy: Strip extra fields (code, details)
        Proxy->>AISDK: {type: "error", errorText: "..."}
        AISDK->>Frontend: error state updated
        Frontend->>Frontend: Toast notification (deduplicated)
        Frontend->>User: Show error UI + toast
    end
```
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->

---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Otto-AGPT <otto@agpt.co>
2026-02-11 22:46:37 +08:00
Abhimanyu Yadav
508759610f fix(frontend): add min-width-0 to ContentCard to prevent overflow (#12060)
### Changes 🏗️

Added `min-w-0` class to the ContentCard component in the ToolAccordion
to prevent content overflow issues. This CSS fix ensures that the card
properly respects its container width constraints and allows text
truncation to work correctly when content is too wide.

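To illustrate the underlying CSS behavior with a simplified, hypothetical
layout (not the actual ContentCard markup): by default a grid or flex
item won't shrink below its content's intrinsic width, so `truncate`
never engages until `min-w-0` allows the item to shrink.

```tsx
export function ContentCardExample({ title }: { title: string }) {
  return (
    <div className="grid grid-cols-[auto_1fr] gap-2">
      <span>icon</span>
      {/* min-w-0 lets this grid item shrink below its content width,
          which is what allows the child's `truncate` to take effect */}
      <div className="min-w-0">
        <p className="truncate">{title}</p>
      </div>
    </div>
  );
}
```
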
### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified that tool content displays correctly in the accordion
- [x] Confirmed that long content properly truncates instead of
overflowing
  - [x] Tested with various screen sizes to ensure responsive behavior

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Added `min-w-0` class to `ContentCard` component to fix text truncation
overflow in grid layouts. This is a standard CSS fix that allows grid
items to shrink below their content size, enabling `truncate` classes on
child elements (`ContentCardTitle`, `ContentCardSubtitle`) to work
correctly. The fix follows the same pattern already used in
`ContentCardHeader` (line 54) and `ToolAccordion` (line 54).
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- Safe to merge with no risk
- Single-line CSS fix that addresses a well-known flexbox/grid layout
issue. The change follows existing patterns in the codebase and is
thoroughly tested. No logic changes, no breaking changes, no side
effects.
- No files require special attention
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
2026-02-11 21:09:21 +08:00
Otto
062fe1aa70 fix(security): enforce disabled flag on blocks in graph validation (#12059)
## Summary
Blocks marked `disabled=True` (like BlockInstallationBlock) were not
being checked during graph validation, allowing them to be used via
direct API calls despite being hidden from the UI.

This adds a security check in `_validate_graph_get_errors()` to reject
any graph containing disabled blocks.

## Security Advisory
GHSA-4crw-9p35-9x54

## Linear
SECRT-1927

## Changes
- Added `block.disabled` check in graph validation (6 lines)

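The check, sketched in TypeScript for illustration (the actual change is
~6 lines of Python in `_validate_graph_get_errors()`; types and names
here are simplified):

```typescript
interface Node {
  block_id: string;
}

interface Block {
  name: string;
  disabled: boolean;
}

function validateNoDisabledBlocks(
  nodes: Node[],
  getBlock: (id: string) => Block | undefined,
): string[] {
  const errors: string[] = [];
  for (const node of nodes) {
    const block = getBlock(node.block_id);
    // Reject graphs referencing disabled blocks: hiding them in the UI
    // is not enough, since direct API calls bypass the UI entirely.
    if (block?.disabled) {
      errors.push(`Block "${block.name}" is disabled and cannot be used`);
    }
  }
  return errors;
}
```
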
## Testing
- Graphs with disabled blocks → rejected with clear error message
- Graphs with valid blocks → unchanged behavior

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Adds critical security validation to prevent execution of disabled
blocks (like `BlockInstallationBlock`) via direct API calls. The fix
validates that `block.disabled` is `False` during graph validation in
`_validate_graph_get_errors()` on line 747-750, ensuring disabled blocks
are rejected before graph creation or execution. This closes a
vulnerability where blocks marked disabled in the UI could still be used
through API endpoints.
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge and addresses a critical security
vulnerability
- The fix is minimal (6 lines), correctly placed in the validation flow,
includes clear security context (GHSA reference), and follows existing
validation patterns. The check is positioned after block existence
validation and before input validation, ensuring disabled blocks are
caught early in both graph creation and execution paths.
- No files require special attention
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 03:28:19 +00:00
dependabot[bot]
2cd0d4fe0f chore(deps): bump actions/checkout from 4 to 6 (#12034)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to
6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/releases">actions/checkout's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update README to include Node.js 24 support details and requirements
by <a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2248">actions/checkout#2248</a></li>
<li>Persist creds to a separate file by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2286">actions/checkout#2286</a></li>
<li>v6-beta by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2298">actions/checkout#2298</a></li>
<li>update readme/changelog for v6 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2311">actions/checkout#2311</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v5.0.0...v6.0.0">https://github.com/actions/checkout/compare/v5.0.0...v6.0.0</a></p>
<h2>v6-beta</h2>
<h2>What's Changed</h2>
<p>Updated persist-credentials to store the credentials under
<code>$RUNNER_TEMP</code> instead of directly in the local git
config.</p>
<p>This requires a minimum Actions Runner version of <a
href="https://github.com/actions/runner/releases/tag/v2.329.0">v2.329.0</a>
to access the persisted credentials for <a
href="https://docs.github.com/en/actions/tutorials/use-containerized-services/create-a-docker-container-action">Docker
container action</a> scenarios.</p>
<h2>v5.0.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Port v6 cleanup to v5 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2301">actions/checkout#2301</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v5...v5.0.1">https://github.com/actions/checkout/compare/v5...v5.0.1</a></p>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
<li>Prepare v5.0.0 release by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2238">actions/checkout#2238</a></li>
</ul>
<h2>⚠️ Minimum Compatible Runner Version</h2>
<p><strong>v2.327.1</strong><br />
<a
href="https://github.com/actions/runner/releases/tag/v2.327.1">Release
Notes</a></p>
<p>Make sure your runner is updated to this version or newer to use this
release.</p>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4...v5.0.0">https://github.com/actions/checkout/compare/v4...v5.0.0</a></p>
<h2>v4.3.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Port v6 cleanup to v4 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2305">actions/checkout#2305</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v4...v4.3.1">https://github.com/actions/checkout/compare/v4...v4.3.1</a></p>
<h2>v4.3.0</h2>
<h2>What's Changed</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2>v6.0.2</h2>
<ul>
<li>Fix tag handling: preserve annotations and explicit fetch-tags by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2356">actions/checkout#2356</a></li>
</ul>
<h2>v6.0.1</h2>
<ul>
<li>Add worktree support for persist-credentials includeIf by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2327">actions/checkout#2327</a></li>
</ul>
<h2>v6.0.0</h2>
<ul>
<li>Persist creds to a separate file by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2286">actions/checkout#2286</a></li>
<li>Update README to include Node.js 24 support details and requirements
by <a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2248">actions/checkout#2248</a></li>
</ul>
<h2>v5.0.1</h2>
<ul>
<li>Port v6 cleanup to v5 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2301">actions/checkout#2301</a></li>
</ul>
<h2>v5.0.0</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
</ul>
<h2>v4.3.1</h2>
<ul>
<li>Port v6 cleanup to v4 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2305">actions/checkout#2305</a></li>
</ul>
<h2>v4.3.0</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li>Adjust positioning of user email note and permissions heading by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2044">actions/checkout#2044</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li>Update CODEOWNERS for actions by <a
href="https://github.com/TingluoHuang"><code>@​TingluoHuang</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2224">actions/checkout#2224</a></li>
<li>Update package dependencies by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
</ul>
<h2>v4.2.2</h2>
<ul>
<li><code>url-helper.ts</code> now leverages well-known environment
variables by <a href="https://github.com/jww3"><code>@​jww3</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/1941">actions/checkout#1941</a></li>
<li>Expand unit test coverage for <code>isGhes</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1946">actions/checkout#1946</a></li>
</ul>
<h2>v4.2.1</h2>
<ul>
<li>Check out other refs/* by commit if provided, fall back to ref by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1924">actions/checkout#1924</a></li>
</ul>
<h2>v4.2.0</h2>
<ul>
<li>Add Ref and Commit outputs by <a
href="https://github.com/lucacome"><code>@​lucacome</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1180">actions/checkout#1180</a></li>
<li>Dependency updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>- <a
href="https://redirect.github.com/actions/checkout/pull/1777">actions/checkout#1777</a>,
<a
href="https://redirect.github.com/actions/checkout/pull/1872">actions/checkout#1872</a></li>
</ul>
<h2>v4.1.7</h2>
<ul>
<li>Bump the minor-npm-dependencies group across 1 directory with 4
updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1739">actions/checkout#1739</a></li>
<li>Bump actions/checkout from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1697">actions/checkout#1697</a></li>
<li>Check out other refs/* by commit by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1774">actions/checkout#1774</a></li>
<li>Pin actions/checkout's own workflows to a known, good, stable
version. by <a href="https://github.com/jww3"><code>@​jww3</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1776">actions/checkout#1776</a></li>
</ul>
<h2>v4.1.6</h2>
<ul>
<li>Check platform to set archive extension appropriately by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1732">actions/checkout#1732</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="de0fac2e45"><code>de0fac2</code></a>
Fix tag handling: preserve annotations and explicit fetch-tags (<a
href="https://redirect.github.com/actions/checkout/issues/2356">#2356</a>)</li>
<li><a
href="064fe7f331"><code>064fe7f</code></a>
Add orchestration_id to git user-agent when ACTIONS_ORCHESTRATION_ID is
set (...</li>
<li><a
href="8e8c483db8"><code>8e8c483</code></a>
Clarify v6 README (<a
href="https://redirect.github.com/actions/checkout/issues/2328">#2328</a>)</li>
<li><a
href="033fa0dc0b"><code>033fa0d</code></a>
Add worktree support for persist-credentials includeIf (<a
href="https://redirect.github.com/actions/checkout/issues/2327">#2327</a>)</li>
<li><a
href="c2d88d3ecc"><code>c2d88d3</code></a>
Update all references from v5 and v4 to v6 (<a
href="https://redirect.github.com/actions/checkout/issues/2314">#2314</a>)</li>
<li><a
href="1af3b93b68"><code>1af3b93</code></a>
update readme/changelog for v6 (<a
href="https://redirect.github.com/actions/checkout/issues/2311">#2311</a>)</li>
<li><a
href="71cf2267d8"><code>71cf226</code></a>
v6-beta (<a
href="https://redirect.github.com/actions/checkout/issues/2298">#2298</a>)</li>
<li><a
href="069c695914"><code>069c695</code></a>
Persist creds to a separate file (<a
href="https://redirect.github.com/actions/checkout/issues/2286">#2286</a>)</li>
<li><a
href="ff7abcd0c3"><code>ff7abcd</code></a>
Update README to include Node.js 24 support details and requirements (<a
href="https://redirect.github.com/actions/checkout/issues/2248">#2248</a>)</li>
<li><a
href="08c6903cd8"><code>08c6903</code></a>
Prepare v5.0.0 release (<a
href="https://redirect.github.com/actions/checkout/issues/2238">#2238</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/checkout/compare/v4...v6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=4&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Otto <otto@agpt.co>
2026-02-11 02:25:51 +00:00
dependabot[bot]
1ecae8c87e chore(backend/deps): bump aiofiles from 24.1.0 to 25.1.0 in /autogpt_platform/backend (#12043)
Bumps [aiofiles](https://github.com/Tinche/aiofiles) from 24.1.0 to
25.1.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/Tinche/aiofiles/releases">aiofiles's
releases</a>.</em></p>
<blockquote>
<h2>v25.1.0</h2>
<ul>
<li>Switch to <a href="https://docs.astral.sh/uv/">uv</a> + add Python
v3.14 support. (<a
href="https://redirect.github.com/Tinche/aiofiles/pull/219">#219</a>)</li>
<li>Add <code>ruff</code> formatter and linter. <a
href="https://redirect.github.com/Tinche/aiofiles/pull/216">#216</a></li>
<li>Drop Python 3.8 support. If you require it, use version 24.1.0. <a
href="https://redirect.github.com/Tinche/aiofiles/pull/204">#204</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/danielsmyers"><code>@​danielsmyers</code></a>
made their first contribution in <a
href="https://redirect.github.com/Tinche/aiofiles/pull/185">Tinche/aiofiles#185</a></li>
<li><a
href="https://github.com/stankudrow"><code>@​stankudrow</code></a> made
their first contribution in <a
href="https://redirect.github.com/Tinche/aiofiles/pull/192">Tinche/aiofiles#192</a></li>
<li><a
href="https://github.com/waketzheng"><code>@​waketzheng</code></a> made
their first contribution in <a
href="https://redirect.github.com/Tinche/aiofiles/pull/221">Tinche/aiofiles#221</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/Tinche/aiofiles/compare/v24.1.0...v25.1.0">https://github.com/Tinche/aiofiles/compare/v24.1.0...v25.1.0</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/Tinche/aiofiles/blob/main/CHANGELOG.md">aiofiles's
changelog</a>.</em></p>
<blockquote>
<h2>25.1.0 (2025-10-09)</h2>
<ul>
<li>Switch to <a href="https://docs.astral.sh/uv/">uv</a> + add Python
v3.14 support.
(<a
href="https://redirect.github.com/Tinche/aiofiles/pull/219">#219</a>)</li>
<li>Add <code>ruff</code> formatter and linter.
<a
href="https://redirect.github.com/Tinche/aiofiles/pull/216">#216</a></li>
<li>Drop Python 3.8 support. If you require it, use version 24.1.0.
<a
href="https://redirect.github.com/Tinche/aiofiles/pull/204">#204</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="348f5ef656"><code>348f5ef</code></a>
v25.1.0</li>
<li><a
href="5e1bb8f12b"><code>5e1bb8f</code></a>
docs: update readme to use ruff badge (<a
href="https://redirect.github.com/Tinche/aiofiles/issues/221">#221</a>)</li>
<li><a
href="6fdc25c781"><code>6fdc25c</code></a>
Move to uv. (<a
href="https://redirect.github.com/Tinche/aiofiles/issues/219">#219</a>)</li>
<li><a
href="1989132423"><code>1989132</code></a>
set 'function' as a default fixture loop scope value</li>
<li><a
href="8986452a1b"><code>8986452</code></a>
add the 'asyncio_default_fixture_loop_scope=session' option</li>
<li><a
href="ccab1ff776"><code>ccab1ff</code></a>
update pytest-asyncio==1.0.0</li>
<li><a
href="8727c96f5b"><code>8727c96</code></a>
add PR <a
href="https://redirect.github.com/Tinche/aiofiles/issues/216">#216</a>
into the CHANGELOG</li>
<li><a
href="a9388e5f8d"><code>a9388e5</code></a>
add TID and ignore TID252</li>
<li><a
href="760366489a"><code>7603664</code></a>
remove [ruff].exclude keyval</li>
<li><a
href="7c49a5c5f2"><code>7c49a5c</code></a>
add final newlines</li>
<li>Additional commits viewable in <a
href="https://github.com/Tinche/aiofiles/compare/v24.1.0...v25.1.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=aiofiles&package-manager=pip&previous-version=24.1.0&new-version=25.1.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Otto <otto@agpt.co>
2026-02-10 23:32:30 +00:00
dependabot[bot]
659338f90c chore(deps): bump peter-evans/repository-dispatch from 3 to 4 (#12035)
Bumps
[peter-evans/repository-dispatch](https://github.com/peter-evans/repository-dispatch)
from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/peter-evans/repository-dispatch/releases">peter-evans/repository-dispatch's
releases</a>.</em></p>
<blockquote>
<h2>Repository Dispatch v4.0.0</h2>
<p>⚙️ Requires <a
href="https://github.com/actions/runner/releases/tag/v2.327.1">Actions
Runner v2.327.1</a> or later if you are using a self-hosted runner for
Node 24 support.</p>
<h2>What's Changed</h2>
<ul>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.8 to
18.19.10 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/306">peter-evans/repository-dispatch#306</a></li>
<li>build(deps): bump peter-evans/repository-dispatch from 2 to 3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/307">peter-evans/repository-dispatch#307</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.10 to
18.19.14 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/308">peter-evans/repository-dispatch#308</a></li>
<li>build(deps): bump peter-evans/create-pull-request from 5 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/310">peter-evans/repository-dispatch#310</a></li>
<li>build(deps): bump peter-evans/slash-command-dispatch from 3 to 4 by
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/309">peter-evans/repository-dispatch#309</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.14 to
18.19.15 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/311">peter-evans/repository-dispatch#311</a></li>
<li>build(deps-dev): bump prettier from 3.2.4 to 3.2.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/312">peter-evans/repository-dispatch#312</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.15 to
18.19.17 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/313">peter-evans/repository-dispatch#313</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.17 to
18.19.18 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/314">peter-evans/repository-dispatch#314</a></li>
<li>build(deps-dev): bump eslint-plugin-github from 4.10.1 to 4.10.2 by
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/316">peter-evans/repository-dispatch#316</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.18 to
18.19.21 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/317">peter-evans/repository-dispatch#317</a></li>
<li>build(deps-dev): bump eslint from 8.56.0 to 8.57.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/318">peter-evans/repository-dispatch#318</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.21 to
18.19.22 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/319">peter-evans/repository-dispatch#319</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.22 to
18.19.24 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/320">peter-evans/repository-dispatch#320</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.24 to
18.19.26 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/321">peter-evans/repository-dispatch#321</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.26 to
18.19.29 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/322">peter-evans/repository-dispatch#322</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.29 to
18.19.31 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/323">peter-evans/repository-dispatch#323</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.31 to
18.19.33 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/324">peter-evans/repository-dispatch#324</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.33 to
18.19.34 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/325">peter-evans/repository-dispatch#325</a></li>
<li>build(deps-dev): bump prettier from 3.2.5 to 3.3.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/326">peter-evans/repository-dispatch#326</a></li>
<li>build(deps-dev): bump prettier from 3.3.1 to 3.3.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/327">peter-evans/repository-dispatch#327</a></li>
<li>build(deps-dev): bump braces from 3.0.2 to 3.0.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/328">peter-evans/repository-dispatch#328</a></li>
<li>build(deps-dev): bump ws from 7.5.9 to 7.5.10 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/329">peter-evans/repository-dispatch#329</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.34 to
18.19.38 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/330">peter-evans/repository-dispatch#330</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.38 to
18.19.39 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/332">peter-evans/repository-dispatch#332</a></li>
<li>build(deps-dev): bump prettier from 3.3.2 to 3.3.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/334">peter-evans/repository-dispatch#334</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.39 to
18.19.41 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/335">peter-evans/repository-dispatch#335</a></li>
<li>build(deps-dev): bump eslint-plugin-prettier from 5.1.3 to 5.2.1 by
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/336">peter-evans/repository-dispatch#336</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.41 to
18.19.42 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/337">peter-evans/repository-dispatch#337</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.42 to
18.19.43 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/338">peter-evans/repository-dispatch#338</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.43 to
18.19.44 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/339">peter-evans/repository-dispatch#339</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.44 to
18.19.45 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/340">peter-evans/repository-dispatch#340</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.45 to
18.19.47 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/341">peter-evans/repository-dispatch#341</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.47 to
18.19.50 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/343">peter-evans/repository-dispatch#343</a></li>
<li>build(deps): bump peter-evans/create-pull-request from 6 to 7 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/342">peter-evans/repository-dispatch#342</a></li>
<li>build(deps-dev): bump eslint from 8.57.0 to 8.57.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/344">peter-evans/repository-dispatch#344</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.50 to
18.19.53 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/345">peter-evans/repository-dispatch#345</a></li>
<li>build(deps-dev): bump <code>@​vercel/ncc</code> from 0.38.1 to
0.38.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/346">peter-evans/repository-dispatch#346</a></li>
<li>Update distribution by <a
href="https://github.com/actions-bot"><code>@​actions-bot</code></a> in
<a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/347">peter-evans/repository-dispatch#347</a></li>
<li>build(deps): bump <code>@​actions/core</code> from 1.10.1 to 1.11.0
by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/349">peter-evans/repository-dispatch#349</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.53 to
18.19.54 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/348">peter-evans/repository-dispatch#348</a></li>
<li>Update distribution by <a
href="https://github.com/actions-bot"><code>@​actions-bot</code></a> in
<a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/350">peter-evans/repository-dispatch#350</a></li>
<li>build(deps): bump <code>@​actions/core</code> from 1.11.0 to 1.11.1
by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/351">peter-evans/repository-dispatch#351</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.54 to
18.19.55 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/352">peter-evans/repository-dispatch#352</a></li>
<li>Update distribution by <a
href="https://github.com/actions-bot"><code>@​actions-bot</code></a> in
<a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/353">peter-evans/repository-dispatch#353</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.55 to
18.19.56 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/354">peter-evans/repository-dispatch#354</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="28959ce8df"><code>28959ce</code></a>
Fix node version in actions.yml (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/433">#433</a>)</li>
<li><a
href="25d29c2bbf"><code>25d29c2</code></a>
build(deps-dev): bump <code>@​types/node</code> in the npm group (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/432">#432</a>)</li>
<li><a
href="830136c664"><code>830136c</code></a>
build(deps): bump the github-actions group with 3 updates (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/431">#431</a>)</li>
<li><a
href="2c856c63fe"><code>2c856c6</code></a>
ci: update dependabot config</li>
<li><a
href="66739071c2"><code>6673907</code></a>
build(deps-dev): bump <code>@​types/node</code> from 18.19.127 to
18.19.129 (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/429">#429</a>)</li>
<li><a
href="952a211c1e"><code>952a211</code></a>
build(deps): bump peter-evans/repository-dispatch from 3 to 4 (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/428">#428</a>)</li>
<li><a
href="5fc4efd1a4"><code>5fc4efd</code></a>
docs: update readme</li>
<li><a
href="a628c95fd1"><code>a628c95</code></a>
feat: v4 (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/427">#427</a>)</li>
<li><a
href="de78ac1a71"><code>de78ac1</code></a>
build(deps-dev): bump <code>@​vercel/ncc</code> from 0.38.3 to 0.38.4
(<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/425">#425</a>)</li>
<li><a
href="f49fa7f26b"><code>f49fa7f</code></a>
build(deps-dev): bump <code>@​types/node</code> from 18.19.124 to
18.19.127 (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/426">#426</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/peter-evans/repository-dispatch/compare/v3...v4">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=peter-evans/repository-dispatch&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-02-10 21:28:23 +00:00
Abhimanyu Yadav
4df5b7bde7 refactor(frontend): remove defaultExpanded prop from ToolAccordion components (#12054)
### Changes 🏗️

- Removed `defaultExpanded` prop from `ToolAccordion` in CreateAgent,
EditAgent, RunAgent, and RunBlock components to streamline the code and
improve readability
- Removed conditional expansion logic from all tool components
- Simplified ToolAccordion implementation across all affected components

### Impact

- This refactor enhances maintainability by reducing complexity in the
component structure while preserving existing functionality.

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create and run agents with various tools to verify accordion
behavior works correctly
    - [x] Verify that UI components expand and collapse as expected
    - [x] Test with different output types to ensure proper rendering

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Lluis Agusti <hi@llu.lu>
2026-02-11 00:22:01 +08:00
Otto
017a00af46 feat(copilot): Enable extended thinking for Claude models (#12052)
## Summary

Enables Anthropic's extended thinking feature for Claude models in
CoPilot via OpenRouter. This keeps the model's chain-of-thought
reasoning internal rather than outputting it to users.

## Problem

The CoPilot prompt was designed for a thinking agent (with
`<internal_reasoning>` tags), but extended thinking wasn't enabled on
the API side. This caused the model to output its reasoning as regular
text, leaking internal analysis to users.

## Solution

Added thinking configuration to the OpenRouter `extra_body` for
Anthropic models:
```python
extra_body["provider"] = {
    "anthropic": {
        "thinking": {
            "type": "enabled",
            "budget_tokens": config.thinking_budget_tokens,
        }
    }
}
```

## Configuration

New settings in `ChatConfig`:
| Setting | Default | Description |
|---------|---------|-------------|
| `thinking_enabled` | `True` | Enable extended thinking for Claude
models |
| `thinking_budget_tokens` | `10000` | Token budget for thinking
(1000-100000) |

## Changes

- `config.py`: Added `thinking_enabled` and `thinking_budget_tokens`
settings
- `service.py`: Added thinking config to all 3 places where `extra_body`
is built for LLM calls
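
For illustration, a minimal, hedged sketch of how the gating can work so
non-Anthropic models stay unaffected (the `build_extra_body` helper and
the `anthropic/` model-prefix check are assumptions, not the actual
`service.py` code):

```python
# Sketch only: gate the thinking config on the model being an Anthropic one.
def build_extra_body(model: str, config) -> dict:
    extra_body: dict = {}
    # Assumption: Anthropic models arrive via OpenRouter with an
    # "anthropic/" prefix; the real detection logic may differ.
    if config.thinking_enabled and model.startswith("anthropic/"):
        extra_body["provider"] = {
            "anthropic": {
                "thinking": {
                    "type": "enabled",
                    "budget_tokens": config.thinking_budget_tokens,
                }
            }
        }
    return extra_body
```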

## Testing

- Verify CoPilot responses no longer include internal reasoning text
- Check that Claude's extended thinking is working (should see thinking
tokens in usage)
- Confirm non-Anthropic models are unaffected

## Related

Discussion:
https://discord.com/channels/1126875755960336515/1126875756925046928/1470779843552612607

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2026-02-10 16:18:05 +01:00
Reinier van der Leer
52650eed1d refactor(frontend/auth): Move /copilot auth check to middleware (#12053)
These "is the user authenticated, and should they be?" checks should not
be spread across the codebase, it's complex enough as it is. :')

- Follow-up to #12050

### Changes 🏗️

- Revert "fix(frontend): copilot redirect logout (#12050)"
- Add `/copilot` to `PROTECTED_PAGES` in `@/lib/supabase/helpers`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Trivial change, we know this works for other pages
2026-02-10 14:43:33 +00:00
dependabot[bot]
81c1524658 chore(backend/deps): bump the production-dependencies group in /autogpt_platform/backend with 2 updates (#12037)
Bumps the production-dependencies group in /autogpt_platform/backend
with 2 updates: [fastapi](https://github.com/fastapi/fastapi) and
[langfuse](https://github.com/langfuse/langfuse).

Updates `fastapi` from 0.128.5 to 0.128.6
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/fastapi/fastapi/releases">fastapi's
releases</a>.</em></p>
<blockquote>
<h2>0.128.6</h2>
<h3>Fixes</h3>
<ul>
<li>🐛 Fix <code>on_startup</code> and <code>on_shutdown</code>
parameters of <code>APIRouter</code>. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14873">#14873</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
</ul>
<h3>Translations</h3>
<ul>
<li>🌐 Update translations for zh (update-outdated). PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14843">#14843</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Internal</h3>
<ul>
<li> Fix parameterized tests with snapshots. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14875">#14875</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="fbca586c1d"><code>fbca586</code></a>
📝 Update release notes</li>
<li><a
href="4e879799dd"><code>4e87979</code></a>
📝 Update release notes</li>
<li><a
href="0a4033aeee"><code>0a4033a</code></a>
🔖 Release version 0.128.6</li>
<li><a
href="ed2512a5ec"><code>ed2512a</code></a>
🐛 Fix <code>on_startup</code> and <code>on_shutdown</code> parameters of
<code>APIRouter</code> (<a
href="https://redirect.github.com/fastapi/fastapi/issues/14873">#14873</a>)</li>
<li><a
href="0c0f6332e2"><code>0c0f633</code></a>
📝 Update release notes</li>
<li><a
href="227cb85a03"><code>227cb85</code></a>
 Fix parameterized tests with snapshots (<a
href="https://redirect.github.com/fastapi/fastapi/issues/14875">#14875</a>)</li>
<li><a
href="cd31576d57"><code>cd31576</code></a>
📝 Update release notes</li>
<li><a
href="376e108580"><code>376e108</code></a>
🌐 Update translations for zh (update-outdated) (<a
href="https://redirect.github.com/fastapi/fastapi/issues/14843">#14843</a>)</li>
<li>See full diff in <a
href="https://github.com/fastapi/fastapi/compare/0.128.5...0.128.6">compare
view</a></li>
</ul>
</details>
<br />

Updates `langfuse` from 3.13.0 to 3.14.1
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/langfuse/langfuse/commits">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Otto <otto@agpt.co>
2026-02-10 13:32:48 +00:00
Ubbe
f2ead70f3d fix(frontend): copilot redirect logout (#12050)
## Changes 🏗️

Redirect to `/login` if the user is not authenticated and tries to
access `/copilot`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and tested
2026-02-10 21:39:11 +08:00
Abhimanyu Yadav
7d4c020a9b feat(chat): implement AI SDK integration with custom streaming response handling (#11901)
### Changes 🏗️

- Added AI SDK integration for chat streaming with proper message
handling
- Implemented a custom `to_sse` method in `StreamToolOutputAvailable` to
exclude non-spec fields (see the sketch after this list)
- Modified stream_chat_completion to reuse message IDs for tool call
continuations
- Created new Copilot 2.0 UI with AI SDK React components
- Added streamdown and related packages for markdown rendering
- Built reusable conversation and message components for the chat
interface
- Added support for tool output display in the chat UI
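
A minimal sketch of the `to_sse` idea above; `StreamToolOutputAvailable`'s
real fields and base class are assumptions, not the actual implementation:

```python
import json

from pydantic import BaseModel


class StreamToolOutputAvailable(BaseModel):
    # Fields in the AI SDK stream spec (assumed for illustration):
    type: str = "tool-output-available"
    toolCallId: str
    output: dict
    # Internal bookkeeping that is NOT part of the spec:
    session_id: str | None = None

    def to_sse(self) -> str:
        # Serialize only spec fields, dropping internal-only ones.
        payload = self.model_dump(exclude={"session_id"}, exclude_none=True)
        return f"data: {json.dumps(payload)}\n\n"
```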

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Start a new chat session and verify streaming works correctly
  - [x] Test tool calls and verify they display properly in the UI
  - [x] Verify message continuations don't create duplicate messages
  - [x] Test markdown rendering with code blocks and other formatting
  - [x] Verify the UI is responsive and scrolls correctly

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2026-02-10 21:12:21 +08:00
dependabot[bot]
e596ea87cb chore(libs/deps-dev): bump pytest-cov from 6.2.1 to 7.0.0 in /autogpt_platform/autogpt_libs (#12030)
Bumps [pytest-cov](https://github.com/pytest-dev/pytest-cov) from 6.2.1
to 7.0.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pytest-dev/pytest-cov/blob/master/CHANGELOG.rst">pytest-cov's
changelog</a>.</em></p>
<blockquote>
<h2>7.0.0 (2025-09-09)</h2>
<ul>
<li>
<p>Dropped support for subprocess measurement.</p>
<p>It was a feature added a long time ago, when coverage lacked a nice
way to measure subprocesses created in tests.
It relied on a <code>.pth</code> file, there was no way to opt out, and
it interacted badly with <a
href="https://coverage.readthedocs.io/en/latest/config.html#run-patch">coverage's
new patch system</a> added in <a
href="https://coverage.readthedocs.io/en/7.10.6/changes.html#version-7-10-0-2025-07-24">7.10</a>.</p>
<p>To migrate to this release you might need to enable the subprocess
patch; example for <code>.coveragerc</code>:</p>
<pre lang="ini"><code>[run]
patch = subprocess
</code></pre>
<p>This release also requires at least coverage 7.10.6.</p>
</li>
<li>
<p>Switched packaging to have metadata completely in
<code>pyproject.toml</code> and use <a
href="https://pypi.org/project/hatchling/">hatchling</a> for building.
Contributed by Ofek Lev in <a
href="https://github.com/pytest-dev/pytest-cov/pull/551">#551</a>
with some extras in <a
href="https://github.com/pytest-dev/pytest-cov/pull/716">#716</a>.</p>
</li>
<li>
<p>Removed some not really necessary testing deps like
<code>six</code>.</p>
</li>
</ul>
<h2>6.3.0 (2025-09-06)</h2>
<ul>
<li>Added support for markdown reports.
Contributed by Marcos Boger in <a
href="https://github.com/pytest-dev/pytest-cov/pull/712">#712</a>
and <a
href="https://github.com/pytest-dev/pytest-cov/pull/714">#714</a>.</li>
<li>Fixed some formatting issues in docs.
Anonymous contribution in <a
href="https://github.com/pytest-dev/pytest-cov/pull/706">#706</a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="224d8964ca"><code>224d896</code></a>
Bump version: 6.3.0 → 7.0.0</li>
<li><a
href="73424e3999"><code>73424e3</code></a>
Cleanup the docs a bit.</li>
<li><a
href="36f1cc2967"><code>36f1cc2</code></a>
Bump pins in template.</li>
<li><a
href="f299c590a6"><code>f299c59</code></a>
Bump the github-actions group with 2 updates</li>
<li><a
href="25f0b2e0cd"><code>25f0b2e</code></a>
Update docs/config.rst</li>
<li><a
href="bb23eacc55"><code>bb23eac</code></a>
Improve configuration docs</li>
<li><a
href="a19531e91e"><code>a19531e</code></a>
Switch from build/pre-commit to uv/prek - this should make this
faster.</li>
<li><a
href="82f9993910"><code>82f9993</code></a>
Update changelog.</li>
<li><a
href="211b5cd41c"><code>211b5cd</code></a>
Fix links.</li>
<li><a
href="97aadd74bc"><code>97aadd7</code></a>
Update some ci config, reformat and apply some lint fixes.</li>
<li>Additional commits viewable in <a
href="https://github.com/pytest-dev/pytest-cov/compare/v6.2.1...v7.0.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pytest-cov&package-manager=pip&previous-version=6.2.1&new-version=7.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Otto <otto@agpt.co>
2026-02-10 11:22:25 +00:00
Otto
81f8290f01 debug(backend/db): Add diagnostic logging for vector type errors (#12024)
Adds diagnostic logging when the `type vector does not exist` error
occurs in raw SQL queries.

## Problem

We're seeing intermittent "type vector does not exist" errors on
dev-behave ([Sentry
issue](https://significant-gravitas.sentry.io/issues/7205929979/)). The
pgvector extension should be in the search_path, but occasionally
queries fail to resolve the vector type.

## Solution

When a query fails with this specific error, we now log:
- `SHOW search_path` - what schemas are being searched
- `SELECT current_schema()` - the active schema
- `SELECT current_user, session_user, current_database()` - connection
context

This diagnostic info will help identify why the vector extension isn't
visible in certain cases.
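
A hedged sketch of what such a diagnostic helper can look like (the helper
name matches the PR, but the `query_raw` API and surrounding wiring are
assumptions, not the exact `backend/data/db.py` code):

```python
import logging

logger = logging.getLogger(__name__)


async def _log_vector_error_diagnostics(db) -> None:
    """Log connection/schema context when 'type vector does not exist' occurs."""
    for sql in (
        "SHOW search_path",
        "SELECT current_schema()",
        "SELECT current_user, session_user, current_database()",
    ):
        try:
            result = await db.query_raw(sql)
            logger.error("pgvector diagnostic %r -> %r", sql, result)
        except Exception as exc:
            # Diagnostics must never mask the original error.
            logger.error("pgvector diagnostic %r failed: %s", sql, exc)
```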

## Changes

- Added `_log_vector_error_diagnostics()` helper function in
`backend/data/db.py`
- Wrapped SQL execution in try/except to catch and diagnose vector type
errors
- Original exception is re-raised after logging (no behavior change)

## Testing

This is observational/diagnostic code. It will be validated by waiting
for the error to occur naturally on dev and checking the logs.

## Rollout

Once we've captured diagnostic logs and identified the root cause, this
logging can be removed or reduced in verbosity.
2026-02-10 07:35:13 +00:00
Reinier van der Leer
6467f6734f debug(backend/chat): Add timing logging to chat stream generation mechanism (#12019)
[SECRT-1912: Investigate & eliminate chat session start
latency](https://linear.app/autogpt/issue/SECRT-1912)

### Changes 🏗️

- Add timing logs to `backend.api.features.chat` in `routes.py`,
`service.py`, and `stream_registry.py`
- Remove unneeded DB join in `create_chat_session`
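
For illustration, a minimal sketch of the kind of timing instrumentation
added (the `timed` helper is an assumption, not the actual code):

```python
import logging
import time

logger = logging.getLogger(__name__)


async def timed(label: str, awaitable):
    """Await `awaitable` and log its wall-clock duration."""
    start = time.monotonic()
    try:
        return await awaitable
    finally:
        logger.info("%s took %.1f ms", label, (time.monotonic() - start) * 1000)
```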

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI checks
2026-02-09 14:05:29 +00:00
Otto
5a30d11416 refactor(copilot): Code cleanup and deduplication (#11950)
## Summary

Code cleanup of the AI Copilot codebase - rebased onto latest dev.

## Changes

### New Files
- `backend/util/validation.py` - UUID validation helpers
- `backend/api/features/chat/tools/helpers.py` - Shared tool utilities
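
For example, a hedged sketch of what a UUID validation helper in
`backend/util/validation.py` could look like (the function name is an
assumption):

```python
from uuid import UUID


def is_valid_uuid(value: str) -> bool:
    """Return True if `value` parses as a canonical UUID string."""
    try:
        UUID(value)
    except ValueError:
        return False
    return True
```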

### Credential Matching Consolidation  
- Added shared utilities to `utils.py`
- Refactored `run_block._check_block_credentials()` with discriminator
support
- Extracted `_resolve_discriminated_credentials()` for multi-provider
handling

### Routes Cleanup
- Extracted `_create_stream_generator()` and `SSE_RESPONSE_HEADERS`

### Tool Files Cleanup
- Updated `run_agent.py` and `run_block.py` to use shared helpers

**WIP** - This PR will be updated incrementally.
2026-02-09 13:43:55 +00:00
Bently
1f4105e8f9 fix(frontend): Handle object values in FileInput component (#11948)
Fixes
[#11800](https://github.com/Significant-Gravitas/AutoGPT/issues/11800)

## Problem
The FileInput component crashed with `TypeError: e.startsWith is not a
function` when the value was an object (from external API) instead of a
string.

## Example Input Object
When using the external API
(`/external-api/v1/graphs/{id}/execute/{version}`), file inputs can be
passed as objects:

```json
{
  "node_input": {
    "input_image": {
      "name": "image.jpeg",
      "type": "image/jpeg",
      "size": 131147,
      "data": "/9j/4QAW..."
    }
  }
}
```

## Changes
- Updated `getFileLabelFromValue()` to handle object format: `{ name,
type, size, data }`
- Added type guards for string vs object values
- Graceful fallback for edge cases (null, undefined, empty object)

## Test cases verified
- Object with name: returns filename
- Object with type only: extracts and formats MIME type
- String data URI: parses correctly
- String file path: extracts extension
- Edge cases: returns "File" fallback
2026-02-09 10:25:08 +00:00
Bently
caf9ff34e6 fix(backend): Handle stale RabbitMQ channels on connection drop (#11929)
### Changes 🏗️

Fixes
[**AUTOGPT-SERVER-1TN**](https://autoagpt.sentry.io/issues/?query=AUTOGPT-SERVER-1TN)
(~39K events since Feb 2025) and related connection issues
**6JC/6JD/6JE/6JF** (~6K combined).

#### Problem

When the RabbitMQ TCP connection drops (network blip, server restart,
etc.):

1. `connect_robust` (aio_pika) automatically reconnects the underlying
AMQP connection
2. But `AsyncRabbitMQ._channel` still references the **old dead
channel**
3. `is_ready` checks `not self._channel.is_closed` — but the channel
object doesn't know the transport is gone
4. `publish_message` tries to use the stale channel →
`ChannelInvalidStateError: No active transport in channel`
5. `@func_retry` retries 5 times, but each retry hits the same stale
channel (it passes `is_ready`)

This means every connection drop generates errors until the process is
restarted.

#### Fix

**New `_ensure_channel()` helper** that resets stale channels before
reconnecting, so `connect()` creates a fresh one instead of
short-circuiting on `is_connected`.

**Explicit `ChannelInvalidStateError` handling in `publish_message`:**
1. First attempt uses `_ensure_channel()` (handles normal staleness)
2. If publish throws `ChannelInvalidStateError`, does a full reconnect
(resets both `_channel` and `_connection`) and retries once
3. `@func_retry` provides additional retry resilience on top

**Simplified `get_channel()`** to use the same resilient helper.

**1 file changed, 62 insertions, 24 deletions.**
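
A hedged sketch of the idea (the class internals shown here are
assumptions, not the actual `AsyncRabbitMQ` code):

```python
import aio_pika
from aio_pika.exceptions import ChannelInvalidStateError


class AsyncRabbitMQ:
    def __init__(self, url: str):
        self._url = url
        self._connection = None
        self._channel = None

    async def _ensure_channel(self):
        # Reset a stale channel first, so the connect logic below creates
        # a fresh one instead of short-circuiting on a "connected" state.
        if self._channel is not None and self._channel.is_closed:
            self._channel = None
        if self._connection is None or self._connection.is_closed:
            self._connection = await aio_pika.connect_robust(self._url)
        if self._channel is None:
            self._channel = await self._connection.channel()
        return self._channel

    async def publish_message(self, routing_key: str, body: bytes):
        channel = await self._ensure_channel()
        message = aio_pika.Message(body=body)
        try:
            await channel.default_exchange.publish(message, routing_key)
        except ChannelInvalidStateError:
            # Stale transport slipped through: full reconnect (reset both
            # channel and connection), then retry once.
            self._channel = None
            self._connection = None
            channel = await self._ensure_channel()
            await channel.default_exchange.publish(message, routing_key)
```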

#### Impact
- Eliminates ~39K `ChannelInvalidStateError` Sentry events
- RabbitMQ operations self-heal after connection drops without process
restart
- Related transport EOF errors (6JC/6JD/6JE/6JF) should also reduce
2026-02-09 10:24:08 +00:00
Nicholas Tindle
e8fc8ee623 fix(backend): filter graph-only blocks from CoPilot's find_block results (#11892)
Filters out blocks that are unsuitable for standalone execution from
CoPilot's block search and execution. These blocks serve graph-specific
purposes and will either fail, hang, or confuse users when run outside
of a graph context.

**Important:** This does NOT affect the Builder UI which uses
`load_all_blocks()` directly.

### Changes 🏗️

- **find_block.py**: Added `EXCLUDED_BLOCK_TYPES` and
`EXCLUDED_BLOCK_IDS` constants, skip excluded blocks in search results
- **run_block.py**: Added execution guard that returns clear error
message for excluded blocks
- **content_handlers.py**: Added filtering to
`BlockHandler.get_missing_items()` and `get_stats()` to prevent indexing
excluded blocks

**Excluded by BlockType:**
| BlockType | Reason |
|-----------|--------|
| `INPUT` | Graph interface definition - data enters via chat, not graph
inputs |
| `OUTPUT` | Graph interface definition - data exits via chat, not graph
outputs |
| `WEBHOOK` | Wait for external events - would hang forever in CoPilot |
| `WEBHOOK_MANUAL` | Same as WEBHOOK |
| `NOTE` | Visual annotation only - no runtime behavior |
| `HUMAN_IN_THE_LOOP` | Pauses for human approval - CoPilot IS
human-in-the-loop |
| `AGENT` | AgentExecutorBlock requires graph context - use `run_agent`
tool instead |

**Excluded by ID:**
| Block | Reason |
|-------|--------|
| `SmartDecisionMakerBlock` | Dynamically discovers downstream blocks
via graph topology |
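
A hedged sketch of the filtering described above (the constant names mirror
the PR; the import path, block fields, and surrounding search code are
assumptions):

```python
from backend.data.block import BlockType  # assumed import path

EXCLUDED_BLOCK_TYPES = {
    BlockType.INPUT,
    BlockType.OUTPUT,
    BlockType.WEBHOOK,
    BlockType.WEBHOOK_MANUAL,
    BlockType.NOTE,
    BlockType.HUMAN_IN_THE_LOOP,
    BlockType.AGENT,
}
# Placeholder: the real set holds SmartDecisionMakerBlock's actual ID.
EXCLUDED_BLOCK_IDS = {"<SmartDecisionMakerBlock-id>"}


def filter_block_results(candidates: list, limit: int = 10) -> list:
    """Drop graph-only blocks; callers over-fetch so up to `limit` remain."""
    usable = [
        block
        for block in candidates
        if block.block_type not in EXCLUDED_BLOCK_TYPES
        and block.id not in EXCLUDED_BLOCK_IDS
    ]
    return usable[:limit]
```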

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [ ] Search for "input" in CoPilot - should NOT return AgentInputBlock
variants
- [ ] Search for "output" in CoPilot - should NOT return
AgentOutputBlock
- [ ] Search for "webhook" in CoPilot - should NOT return trigger blocks
- [ ] Search for "human" in CoPilot - should NOT return
HumanInTheLoopBlock
- [ ] Search for "decision" in CoPilot - should NOT return
SmartDecisionMakerBlock
- [ ] Verify functional blocks still appear (e.g., "email", "http",
"text")
  - [ ] Verify Builder UI still shows ALL blocks (no regression)

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required.

---

Resolves: [SECRT-1831](https://linear.app/autogpt/issue/SECRT-1831)

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Low Risk**
> Behavior change is limited to CoPilot’s block discovery/execution
guards and is covered by new tests; main risk is inadvertently excluding
a block that should be runnable.
> 
> **Overview**
> CoPilot now **filters out graph-only blocks** from `find_block`
results and prevents them from being executed via `run_block`, returning
a clear error when a user attempts to run an excluded block.
> 
> `find_block` introduces explicit exclusion lists (by `BlockType` and a
specific block ID), over-fetches search results to maintain up to 10
usable matches after filtering, and adds debug logging when results are
reduced. New unit tests cover both the search filtering and the
`run_block` execution guard; a minor cleanup removes an unused `pytest`
import in `execution_queue_test.py`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
bc50755dcf. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Otto <otto@agpt.co>
2026-02-09 07:19:43 +00:00
dependabot[bot]
1a16e203b8 chore(deps): Bump actions/setup-node from 4 to 6 (#11213)
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 4
to 6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/setup-node/releases">actions/setup-node's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.0</h2>
<h2>What's Changed</h2>
<p><strong>Breaking Changes</strong></p>
<ul>
<li>Limit automatic caching to npm, update workflows and documentation
by <a
href="https://github.com/priyagupta108"><code>@​priyagupta108</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/1374">actions/setup-node#1374</a></li>
</ul>
<p><strong>Dependency Upgrades</strong></p>
<ul>
<li>Upgrade ts-jest from 29.1.2 to 29.4.1 and document breaking changes
in v5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1336">#1336</a></li>
<li>Upgrade prettier from 2.8.8 to 3.6.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1334">#1334</a></li>
<li>Upgrade actions/publish-action from 0.3.0 to 0.4.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1362">#1362</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-node/compare/v5...v6.0.0">https://github.com/actions/setup-node/compare/v5...v6.0.0</a></p>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<h3>Breaking Changes</h3>
<ul>
<li>Enhance caching in setup-node with automatic package manager
detection by <a
href="https://github.com/priya-kinthali"><code>@​priya-kinthali</code></a>
in <a
href="https://redirect.github.com/actions/setup-node/pull/1348">actions/setup-node#1348</a></li>
</ul>
<p>This update introduces automatic caching when a valid
<code>packageManager</code> field is present in your
<code>package.json</code>. This aims to improve workflow performance and
make dependency management more seamless.
To disable this automatic caching, set <code>package-manager-cache:
false</code></p>
<pre lang="yaml"><code>steps:
- uses: actions/checkout@v5
- uses: actions/setup-node@v5
  with:
    package-manager-cache: false
</code></pre>
<ul>
<li>Upgrade action to use node24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/setup-node/pull/1325">actions/setup-node#1325</a></li>
</ul>
<p>Make sure your runner is on version v2.327.1 or later to ensure
compatibility with this release. <a
href="https://github.com/actions/runner/releases/tag/v2.327.1">See
Release Notes</a></p>
<h3>Dependency Upgrades</h3>
<ul>
<li>Upgrade <code>@​octokit/request-error</code> and
<code>@​actions/github</code> by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1227">actions/setup-node#1227</a></li>
<li>Upgrade uuid from 9.0.1 to 11.1.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1273">actions/setup-node#1273</a></li>
<li>Upgrade undici from 5.28.5 to 5.29.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1295">actions/setup-node#1295</a></li>
<li>Upgrade form-data to bring in fix for critical vulnerability by <a
href="https://github.com/gowridurgad"><code>@​gowridurgad</code></a> in
<a
href="https://redirect.github.com/actions/setup-node/pull/1332">actions/setup-node#1332</a></li>
<li>Upgrade actions/checkout from 4 to 5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/setup-node/pull/1345">actions/setup-node#1345</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/priya-kinthali"><code>@​priya-kinthali</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/setup-node/pull/1348">actions/setup-node#1348</a></li>
<li><a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/setup-node/pull/1325">actions/setup-node#1325</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-node/compare/v4...v5.0.0">https://github.com/actions/setup-node/compare/v4...v5.0.0</a></p>
<h2>v4.4.0</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="2028fbc5c2"><code>2028fbc</code></a>
Limit automatic caching to npm, update workflows and documentation (<a
href="https://redirect.github.com/actions/setup-node/issues/1374">#1374</a>)</li>
<li><a
href="13427813f7"><code>1342781</code></a>
Bump actions/publish-action from 0.3.0 to 0.4.0 (<a
href="https://redirect.github.com/actions/setup-node/issues/1362">#1362</a>)</li>
<li><a
href="89d709d423"><code>89d709d</code></a>
Bump prettier from 2.8.8 to 3.6.2 (<a
href="https://redirect.github.com/actions/setup-node/issues/1334">#1334</a>)</li>
<li><a
href="cd2651c462"><code>cd2651c</code></a>
Bump ts-jest from 29.1.2 to 29.4.1 (<a
href="https://redirect.github.com/actions/setup-node/issues/1336">#1336</a>)</li>
<li><a
href="a0853c2454"><code>a0853c2</code></a>
Bump actions/checkout from 4 to 5 (<a
href="https://redirect.github.com/actions/setup-node/issues/1345">#1345</a>)</li>
<li><a
href="b7234cc9fe"><code>b7234cc</code></a>
Upgrade action to use node24 (<a
href="https://redirect.github.com/actions/setup-node/issues/1325">#1325</a>)</li>
<li><a
href="d7a11313b5"><code>d7a1131</code></a>
Enhance caching in setup-node with automatic package manager detection
(<a
href="https://redirect.github.com/actions/setup-node/issues/1348">#1348</a>)</li>
<li><a
href="5e2628c959"><code>5e2628c</code></a>
Bumps form-data (<a
href="https://redirect.github.com/actions/setup-node/issues/1332">#1332</a>)</li>
<li><a
href="65beceff8e"><code>65becef</code></a>
Bump undici from 5.28.5 to 5.29.0 (<a
href="https://redirect.github.com/actions/setup-node/issues/1295">#1295</a>)</li>
<li><a
href="7e24a656e1"><code>7e24a65</code></a>
Bump uuid from 9.0.1 to 11.1.0 (<a
href="https://redirect.github.com/actions/setup-node/issues/1273">#1273</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/setup-node/compare/v4...v6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/setup-node&package-manager=github_actions&previous-version=4&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-09 07:11:21 +00:00
dependabot[bot]
5dae303ce0 chore(frontend/deps): Bump react-window and @types/react-window in /autogpt_platform/frontend (#10943)
Bumps [react-window](https://github.com/bvaughn/react-window) and
[@types/react-window](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/react-window).
These dependencies needed to be updated together.
Updates `react-window` from 1.8.11 to 2.1.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/bvaughn/react-window/releases">react-window's
releases</a>.</em></p>
<blockquote>
<h2>2.1.0</h2>
<p>Improved ARIA support:</p>
<ul>
<li>Add better default ARIA attributes for outer
<code>HTMLDivElement</code></li>
<li>Add optional <code>ariaAttributes</code> prop to row and cell
renderers to simplify better ARIA attributes for user-rendered
cells</li>
<li>Remove intermediate <code>HTMLDivElement</code> from
<code>List</code> and <code>Grid</code>
<ul>
<li>This may enable more/better custom CSS styling</li>
<li>This may also enable adding an optional <code>children</code> prop
to <code>List</code> and <code>Grid</code> for e.g.
overlays/tooltips</li>
</ul>
</li>
<li>Add optional <code>tagName</code> prop; defaults to
<code>&quot;div&quot;</code> but can be changed to e.g.
<code>&quot;ul&quot;</code></li>
</ul>
<pre lang="tsx"><code>// Example of how to use new `ariaAttributes` prop
function RowComponent({
  ariaAttributes,
  index,
  style,
  ...rest
}: RowComponentProps&lt;object&gt;) {
  return (
    &lt;div style={style} {...ariaAttributes}&gt;
      ...
    &lt;/div&gt;
  );
}
</code></pre>
<p>Added optional <code>children</code> prop to better support edge
cases like sticky rows.</p>
<p>Minor changes to <code>onRowsRendered</code> and
<code>onCellsRendered</code> callbacks to make it easier to
differentiate between <em>visible</em> items and items rendered due to
overscan settings. These methods will now receive two params: the first
for <em>visible</em> rows and the second for <em>all</em> rows
(including overscan), e.g.:</p>
<pre lang="ts"><code>function onRowsRendered(
  visibleRows: {
    startIndex: number;
    stopIndex: number;
  },
  allRows: {
    startIndex: number;
    stopIndex: number;
  }
): void {
  // ...
}

function onCellsRendered(
  visibleCells: {
    columnStartIndex: number;
    columnStopIndex: number;
    rowStartIndex: number;
    rowStopIndex: number;
  },
  allCells: {
    columnStartIndex: number;
    columnStopIndex: number;
    rowStartIndex: number;
    rowStopIndex: number;
  }
): void {
  // ...
}
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1b6840ba35"><code>1b6840b</code></a>
Merge pull request <a
href="https://redirect.github.com/bvaughn/react-window/issues/836">#836</a>
from bvaughn/ARIA-roles</li>
<li><a
href="35f651b615"><code>35f651b</code></a>
Revert accidental change to docs example</li>
<li><a
href="8bce7f555b"><code>8bce7f5</code></a>
onRowsRendered/onCellsRendered separate visible and overscan items</li>
<li><a
href="9f1e8f2f0a"><code>9f1e8f2</code></a>
Support custom tagName for outer element and (optional) children</li>
<li><a
href="7f07ac33cb"><code>7f07ac3</code></a>
Improve ARIA attributes</li>
<li><a
href="7234ec3c09"><code>7234ec3</code></a>
Reduced network waterfalls between routes</li>
<li><a
href="5c431a294f"><code>5c431a2</code></a>
Stronger typing for doc website routes</li>
<li><a
href="c9349a4b7b"><code>c9349a4</code></a>
2.0.1 -&gt; 2.0.2</li>
<li><a
href="6adc6c04a1"><code>6adc6c0</code></a>
Merge pull request <a
href="https://redirect.github.com/bvaughn/react-window/issues/832">#832</a>
from bvaughn/issues/831</li>
<li><a
href="bd562c5734"><code>bd562c5</code></a>
Add tests</li>
<li>Additional commits viewable in <a
href="https://github.com/bvaughn/react-window/compare/1.8.11...2.1.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `@types/react-window` from 1.8.8 to 2.0.0
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/react-window">compare
view</a></li>
</ul>
</details>
<br />


You can trigger a rebase of this PR by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-09 06:42:47 +00:00
dependabot[bot]
6cbfbdd013 chore(libs/deps-dev): bump the development-dependencies group across 1 directory with 4 updates (#11349)
Bumps the development-dependencies group with 4 updates in the
/autogpt_platform/autogpt_libs directory:
[pyright](https://github.com/RobertCraigie/pyright-python),
[pytest-asyncio](https://github.com/pytest-dev/pytest-asyncio),
[pytest-mock](https://github.com/pytest-dev/pytest-mock) and
[ruff](https://github.com/astral-sh/ruff).

Updates `pyright` from 1.1.404 to 1.1.407
<details>
<summary>Commits</summary>
<ul>
<li><a
href="53e8efb463"><code>53e8efb</code></a>
Pyright NPM Package update to 1.1.407 (<a
href="https://redirect.github.com/RobertCraigie/pyright-python/issues/356">#356</a>)</li>
<li><a
href="1d515b7129"><code>1d515b7</code></a>
Pyright NPM Package update to 1.1.406 (<a
href="https://redirect.github.com/RobertCraigie/pyright-python/issues/355">#355</a>)</li>
<li><a
href="e211ec8df8"><code>e211ec8</code></a>
Pyright NPM Package update to 1.1.405 (<a
href="https://redirect.github.com/RobertCraigie/pyright-python/issues/353">#353</a>)</li>
<li>See full diff in <a
href="https://github.com/RobertCraigie/pyright-python/compare/v1.1.404...v1.1.407">compare
view</a></li>
</ul>
</details>
<br />

Updates `pytest-asyncio` from 1.1.0 to 1.3.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pytest-dev/pytest-asyncio/releases">pytest-asyncio's
releases</a>.</em></p>
<blockquote>
<h2>pytest-asyncio 1.3.0</h2>
<h1><a
href="https://github.com/pytest-dev/pytest-asyncio/tree/1.3.0">1.3.0</a>
- 2025-11-10</h1>
<h2>Removed</h2>
<ul>
<li>Support for Python 3.9 (<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1278">#1278</a>)</li>
</ul>
<h2>Added</h2>
<ul>
<li>Support for pytest 9 (<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1279">#1279</a>)</li>
</ul>
<h2>Notes for Downstream Packagers</h2>
<ul>
<li>Tested Python versions include free threaded Python 3.14t (<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1274">#1274</a>)</li>
<li>Tests are run in the same pytest process, instead of spawning a
subprocess with <code>pytest.Pytester.runpytest_subprocess</code>. This
prevents the test suite from accidentally using a system installation of
pytest-asyncio, which could result in test errors. (<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1275">#1275</a>)</li>
</ul>
<h2>pytest-asyncio 1.2.0</h2>
<h1><a
href="https://github.com/pytest-dev/pytest-asyncio/tree/1.2.0">1.2.0</a>
- 2025-09-12</h1>
<h2>Added</h2>
<ul>
<li><code>--asyncio-debug</code> CLI option and
<code>asyncio_debug</code> configuration option to enable asyncio debug
mode for the default event loop. (<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/980">#980</a>)</li>
<li>A <code>pytest.UsageError</code> for invalid configuration values of
<code>asyncio_default_fixture_loop_scope</code> and
<code>asyncio_default_test_loop_scope</code>. (<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1189">#1189</a>)</li>
<li>Compatibility with the Pyright type checker (<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/731">#731</a>)</li>
</ul>
<h2>Fixed</h2>
<ul>
<li><code>RuntimeError: There is no current event loop in thread
'MainThread'</code> when any test unsets the event loop (such as when
using <code>asyncio.run</code> and <code>asyncio.Runner</code>). (<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1177">#1177</a>)</li>
<li>Deprecation warning when decorating an asynchronous fixture with
<code>@pytest.fixture</code> in [strict]{.title-ref} mode. The warning
message now refers to the correct package. (<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1198">#1198</a>)</li>
</ul>
<h2>Notes for Downstream Packagers</h2>
<ul>
<li>Bump the minimum required version of tox to v4.28. This change is
only relevant if you use the <code>tox.ini</code> file provided by
pytest-asyncio to run tests.</li>
<li>Extend dependency on typing-extensions&gt;=4.12 from Python&lt;3.10
to Python&lt;3.13.</li>
</ul>
<h2>pytest-asyncio 1.1.1</h2>
<h1><a
href="https://github.com/pytest-dev/pytest-asyncio/tree/v1.1.1">v1.1.1</a>
- 2025-09-12</h1>
<h2>Notes for Downstream Packagers</h2>
<p>- Addresses a build problem with setuptoos-scm &gt;= 9 caused by
invalid setuptools-scm configuration in pytest-asyncio. (<a
href="https://redirect.github.com/pytest-dev/pytest-asyncio/issues/1192">#1192</a>)</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="2e9695fcf8"><code>2e9695f</code></a>
docs: Compile changelog for v1.3.0</li>
<li><a
href="dd0e9ba3fa"><code>dd0e9ba</code></a>
docs: Reference correct issue in news fragment.</li>
<li><a
href="4c31abe5bf"><code>4c31abe</code></a>
Build(deps): Bump nh3 from 0.3.1 to 0.3.2</li>
<li><a
href="13e94770d7"><code>13e9477</code></a>
Link to migration guides from changelog</li>
<li><a
href="4d2cf3c36f"><code>4d2cf3c</code></a>
tests: handle Python 3.14 DefaultEventLoopPolicy deprecation
warnings</li>
<li><a
href="ee3549b6ef"><code>ee3549b</code></a>
test: Remove obsolete test for the event_loop fixture.</li>
<li><a
href="7a67c82c5a"><code>7a67c82</code></a>
tests: Fix failing test by preventing warning conversion to error.</li>
<li><a
href="a17b689a75"><code>a17b689</code></a>
test: add pytest config to isolated test directories</li>
<li><a
href="18afc9df5a"><code>18afc9d</code></a>
fix(tests): replace runpytest_subprocess with runpytest</li>
<li><a
href="cdc6bd1de7"><code>cdc6bd1</code></a>
Add support for pytest 9 and drop Python 3.9 support</li>
<li>Additional commits viewable in <a
href="https://github.com/pytest-dev/pytest-asyncio/compare/v1.1.0...v1.3.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pytest-mock` from 3.14.1 to 3.15.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pytest-dev/pytest-mock/releases">pytest-mock's
releases</a>.</em></p>
<blockquote>
<h2>v3.15.1</h2>
<p><em>2025-09-16</em></p>
<ul>
<li><a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/529">#529</a>:
Fixed <code>itertools._tee object has no attribute error</code> -- now
<code>duplicate_iterators=True</code> must be passed to
<code>mocker.spy</code> to duplicate iterators.</li>
</ul>
<h2>v3.15.0</h2>
<p><em>2025-09-04</em></p>
<ul>
<li>Python 3.8 (EOL) is no longer supported.</li>
<li><a
href="https://redirect.github.com/pytest-dev/pytest-mock/pull/524">#524</a>:
Added <code>spy_return_iter</code> to <code>mocker.spy</code>, which
contains a duplicate of the return value of the spied method if it is an
<code>Iterator</code>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pytest-dev/pytest-mock/blob/main/CHANGELOG.rst">pytest-mock's
changelog</a>.</em></p>
<blockquote>
<h2>3.15.1</h2>
<p><em>2025-09-16</em></p>
<ul>
<li><code>[#529](https://github.com/pytest-dev/pytest-mock/issues/529)
&lt;https://github.com/pytest-dev/pytest-mock/issues/529&gt;</code>_:
Fixed <code>itertools._tee object has no attribute error</code> -- now
<code>duplicate_iterators=True</code> must be passed to
<code>mocker.spy</code> to duplicate iterators.</li>
</ul>
<h2>3.15.0</h2>
<p><em>2025-09-04</em></p>
<ul>
<li>Python 3.8 (EOL) is no longer supported.</li>
<li><code>[#524](https://github.com/pytest-dev/pytest-mock/issues/524)
&lt;https://github.com/pytest-dev/pytest-mock/pull/524&gt;</code>_:
Added <code>spy_return_iter</code> to <code>mocker.spy</code>, which
contains a duplicate of the return value of the spied method if it is an
<code>Iterator</code>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e1b5c62a38"><code>e1b5c62</code></a>
Release 3.15.1</li>
<li><a
href="184eb190d6"><code>184eb19</code></a>
Set <code>spy_return_iter</code> only when explicitly requested (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/537">#537</a>)</li>
<li><a
href="4fa0088a0a"><code>4fa0088</code></a>
[pre-commit.ci] pre-commit autoupdate (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/536">#536</a>)</li>
<li><a
href="f5aff33ce7"><code>f5aff33</code></a>
Fix test failure with pytest 8+ and verbose mode (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/535">#535</a>)</li>
<li><a
href="adc41873c9"><code>adc4187</code></a>
Bump actions/setup-python from 5 to 6 in the github-actions group (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/533">#533</a>)</li>
<li><a
href="95ad570060"><code>95ad570</code></a>
[pre-commit.ci] pre-commit autoupdate (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/532">#532</a>)</li>
<li><a
href="e696bf02c1"><code>e696bf0</code></a>
Fix standalone mock support (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/531">#531</a>)</li>
<li><a
href="5b29b03ce9"><code>5b29b03</code></a>
Fix gen-release-notes script</li>
<li><a
href="7d22ef4e56"><code>7d22ef4</code></a>
Merge pull request <a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/528">#528</a>
from pytest-dev/release-3.15.0</li>
<li><a
href="90b29f89e2"><code>90b29f8</code></a>
Update CHANGELOG for 3.15.0</li>
<li>Additional commits viewable in <a
href="https://github.com/pytest-dev/pytest-mock/compare/v3.14.1...v3.15.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `ruff` from 0.12.11 to 0.14.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.14.4</h2>
<h2>Release Notes</h2>
<p>Released on 2025-11-06.</p>
<h3>Preview features</h3>
<ul>
<li>[formatter] Allow newlines after function headers without docstrings
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21110">#21110</a>)</li>
<li>[formatter] Avoid extra parentheses for long <code>match</code>
patterns with <code>as</code> captures (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21176">#21176</a>)</li>
<li>[<code>refurb</code>] Expand fix safety for keyword arguments and
<code>Decimal</code>s (<code>FURB164</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21259">#21259</a>)</li>
<li>[<code>refurb</code>] Preserve argument ordering in autofix
(<code>FURB103</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20790">#20790</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[server] Fix missing diagnostics for notebooks (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21156">#21156</a>)</li>
<li>[<code>flake8-bugbear</code>] Ignore non-NFKC attribute names in
<code>B009</code> and <code>B010</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21131">#21131</a>)</li>
<li>[<code>refurb</code>] Fix false negative for underscores before sign
in <code>Decimal</code> constructor (<code>FURB157</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21190">#21190</a>)</li>
<li>[<code>ruff</code>] Fix false positives on starred arguments
(<code>RUF057</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21256">#21256</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>airflow</code>] extend deprecated argument
<code>concurrency</code> in <code>airflow..DAG</code>
(<code>AIR301</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21220">#21220</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Improve <code>extend</code> docs (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21135">#21135</a>)</li>
<li>[<code>flake8-comprehensions</code>] Fix typo in <code>C416</code>
documentation (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21184">#21184</a>)</li>
<li>Revise Ruff setup instructions for Zed editor (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20935">#20935</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>Make <code>ruff analyze graph</code> work with jupyter notebooks (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21161">#21161</a>)</li>
</ul>
<h3>Contributors</h3>
<ul>
<li><a
href="https://github.com/chirizxc"><code>@​chirizxc</code></a></li>
<li><a href="https://github.com/Lee-W"><code>@​Lee-W</code></a></li>
<li><a
href="https://github.com/musicinmybrain"><code>@​musicinmybrain</code></a></li>
<li><a
href="https://github.com/MichaReiser"><code>@​MichaReiser</code></a></li>
<li><a href="https://github.com/tjkuson"><code>@​tjkuson</code></a></li>
<li><a
href="https://github.com/danparizher"><code>@​danparizher</code></a></li>
<li><a
href="https://github.com/renovate"><code>@​renovate</code></a></li>
<li><a href="https://github.com/ntBre"><code>@​ntBre</code></a></li>
<li><a
href="https://github.com/gauthsvenkat"><code>@​gauthsvenkat</code></a></li>
<li><a
href="https://github.com/LoicRiegel"><code>@​LoicRiegel</code></a></li>
</ul>
<h2>Install ruff 0.14.4</h2>
<h3>Install prebuilt binaries via shell script</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.14.4</h2>
<p>Released on 2025-11-06.</p>
<h3>Preview features</h3>
<ul>
<li>[formatter] Allow newlines after function headers without docstrings
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21110">#21110</a>)</li>
<li>[formatter] Avoid extra parentheses for long <code>match</code>
patterns with <code>as</code> captures (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21176">#21176</a>)</li>
<li>[<code>refurb</code>] Expand fix safety for keyword arguments and
<code>Decimal</code>s (<code>FURB164</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21259">#21259</a>)</li>
<li>[<code>refurb</code>] Preserve argument ordering in autofix
(<code>FURB103</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20790">#20790</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[server] Fix missing diagnostics for notebooks (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21156">#21156</a>)</li>
<li>[<code>flake8-bugbear</code>] Ignore non-NFKC attribute names in
<code>B009</code> and <code>B010</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21131">#21131</a>)</li>
<li>[<code>refurb</code>] Fix false negative for underscores before sign
in <code>Decimal</code> constructor (<code>FURB157</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21190">#21190</a>)</li>
<li>[<code>ruff</code>] Fix false positives on starred arguments
(<code>RUF057</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21256">#21256</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>airflow</code>] extend deprecated argument
<code>concurrency</code> in <code>airflow..DAG</code>
(<code>AIR301</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21220">#21220</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Improve <code>extend</code> docs (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21135">#21135</a>)</li>
<li>[<code>flake8-comprehensions</code>] Fix typo in <code>C416</code>
documentation (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21184">#21184</a>)</li>
<li>Revise Ruff setup instructions for Zed editor (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20935">#20935</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>Make <code>ruff analyze graph</code> work with jupyter notebooks (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21161">#21161</a>)</li>
</ul>
<h3>Contributors</h3>
<ul>
<li><a
href="https://github.com/chirizxc"><code>@​chirizxc</code></a></li>
<li><a href="https://github.com/Lee-W"><code>@​Lee-W</code></a></li>
<li><a
href="https://github.com/musicinmybrain"><code>@​musicinmybrain</code></a></li>
<li><a
href="https://github.com/MichaReiser"><code>@​MichaReiser</code></a></li>
<li><a href="https://github.com/tjkuson"><code>@​tjkuson</code></a></li>
<li><a
href="https://github.com/danparizher"><code>@​danparizher</code></a></li>
<li><a
href="https://github.com/renovate"><code>@​renovate</code></a></li>
<li><a href="https://github.com/ntBre"><code>@​ntBre</code></a></li>
<li><a
href="https://github.com/gauthsvenkat"><code>@​gauthsvenkat</code></a></li>
<li><a
href="https://github.com/LoicRiegel"><code>@​LoicRiegel</code></a></li>
</ul>
<h2>0.14.3</h2>
<p>Released on 2025-10-30.</p>
<h3>Preview features</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c7ff9826d6"><code>c7ff982</code></a>
Bump 0.14.4 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21306">#21306</a>)</li>
<li><a
href="35640dd853"><code>35640dd</code></a>
Fix main by using <code>infer_expression</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21299">#21299</a>)</li>
<li><a
href="cb2e277482"><code>cb2e277</code></a>
[ty] Understand legacy and PEP 695 <code>ParamSpec</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21139">#21139</a>)</li>
<li><a
href="132d10fb6f"><code>132d10f</code></a>
[ty] Discover site-packages from the environment that ty is installed in
(<a
href="https://redirect.github.com/astral-sh/ruff/issues/21">#21</a>...</li>
<li><a
href="f189aad6d2"><code>f189aad</code></a>
[ty] Make special cases for <code>UnionType</code> slightly narrower (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21276">#21276</a>)</li>
<li><a
href="5517c9943a"><code>5517c99</code></a>
Require ignore 0.4.24 in <code>Cargo.toml</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21292">#21292</a>)</li>
<li><a
href="b5ff96595d"><code>b5ff965</code></a>
[ty] Favour imported symbols over builtin symbols (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21285">#21285</a>)</li>
<li><a
href="c6573b16ac"><code>c6573b1</code></a>
docs: revise Ruff setup instructions for Zed editor (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20935">#20935</a>)</li>
<li><a
href="76127e5fb5"><code>76127e5</code></a>
[ty] Update salsa (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21281">#21281</a>)</li>
<li><a
href="cddc0fedc2"><code>cddc0fe</code></a>
[syntax-error]: no binding for nonlocal PLE0117 as a semantic syntax
error (...</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.12.11...0.14.4">compare
view</a></li>
</ul>
</details>
<br />


You can trigger a rebase of this PR by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-09 04:54:05 +00:00
dependabot[bot]
0c6fa60436 chore(deps): Bump actions/github-script from 7 to 8 (#10870)
Bumps [actions/github-script](https://github.com/actions/github-script)
from 7 to 8.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/github-script/releases">actions/github-script's
releases</a>.</em></p>
<blockquote>
<h2>v8.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update Node.js version support to 24.x by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/637">actions/github-script#637</a></li>
<li>README for updating actions/github-script from v7 to v8 by <a
href="https://github.com/sneha-krip"><code>@​sneha-krip</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/653">actions/github-script#653</a></li>
</ul>
<h2>⚠️ Minimum Compatible Runner Version</h2>
<p><strong>v2.327.1</strong><br />
<a
href="https://github.com/actions/runner/releases/tag/v2.327.1">Release
Notes</a></p>
<p>Make sure your runner is updated to this version or newer to use this
release.</p>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/637">actions/github-script#637</a></li>
<li><a
href="https://github.com/sneha-krip"><code>@​sneha-krip</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/653">actions/github-script#653</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/github-script/compare/v7.1.0...v8.0.0">https://github.com/actions/github-script/compare/v7.1.0...v8.0.0</a></p>
<h2>v7.1.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Upgrade husky to v9 by <a
href="https://github.com/benelan"><code>@​benelan</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/482">actions/github-script#482</a></li>
<li>Add workflow file for publishing releases to immutable action
package by <a
href="https://github.com/Jcambass"><code>@​Jcambass</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/485">actions/github-script#485</a></li>
<li>Upgrade IA Publish by <a
href="https://github.com/Jcambass"><code>@​Jcambass</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/486">actions/github-script#486</a></li>
<li>Fix workflow status badges by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/497">actions/github-script#497</a></li>
<li>Update usage of <code>actions/upload-artifact</code> by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/512">actions/github-script#512</a></li>
<li>Clear up package name confusion by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/514">actions/github-script#514</a></li>
<li>Update dependencies with <code>npm audit fix</code> by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/515">actions/github-script#515</a></li>
<li>Specify that the used script is JavaScript by <a
href="https://github.com/timotk"><code>@​timotk</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/478">actions/github-script#478</a></li>
<li>chore: Add Dependabot for NPM and Actions by <a
href="https://github.com/nschonni"><code>@​nschonni</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/472">actions/github-script#472</a></li>
<li>Define <code>permissions</code> in workflows and update actions by
<a href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in
<a
href="https://redirect.github.com/actions/github-script/pull/531">actions/github-script#531</a></li>
<li>chore: Add Dependabot for .github/actions/install-dependencies by <a
href="https://github.com/nschonni"><code>@​nschonni</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/532">actions/github-script#532</a></li>
<li>chore: Remove .vscode settings by <a
href="https://github.com/nschonni"><code>@​nschonni</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/533">actions/github-script#533</a></li>
<li>ci: Use github/setup-licensed by <a
href="https://github.com/nschonni"><code>@​nschonni</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/473">actions/github-script#473</a></li>
<li>make octokit instance available as octokit on top of github, to make
it easier to seamlessly copy examples from GitHub rest api or octokit
documentations by <a
href="https://github.com/iamstarkov"><code>@​iamstarkov</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/508">actions/github-script#508</a></li>
<li>Remove <code>octokit</code> README updates for v7 by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/557">actions/github-script#557</a></li>
<li>docs: add &quot;exec&quot; usage examples by <a
href="https://github.com/neilime"><code>@​neilime</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/546">actions/github-script#546</a></li>
<li>Bump ruby/setup-ruby from 1.213.0 to 1.222.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/github-script/pull/563">actions/github-script#563</a></li>
<li>Bump ruby/setup-ruby from 1.222.0 to 1.229.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/github-script/pull/575">actions/github-script#575</a></li>
<li>Clearly document passing inputs to the <code>script</code> by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/603">actions/github-script#603</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/610">actions/github-script#610</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/benelan"><code>@​benelan</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/482">actions/github-script#482</a></li>
<li><a href="https://github.com/Jcambass"><code>@​Jcambass</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/485">actions/github-script#485</a></li>
<li><a href="https://github.com/timotk"><code>@​timotk</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/478">actions/github-script#478</a></li>
<li><a
href="https://github.com/iamstarkov"><code>@​iamstarkov</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/508">actions/github-script#508</a></li>
<li><a href="https://github.com/neilime"><code>@​neilime</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/546">actions/github-script#546</a></li>
<li><a href="https://github.com/nebuk89"><code>@​nebuk89</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/610">actions/github-script#610</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/github-script/compare/v7...v7.1.0">https://github.com/actions/github-script/compare/v7...v7.1.0</a></p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ed597411d8"><code>ed59741</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/github-script/issues/653">#653</a>
from actions/sneha-krip/readme-for-v8</li>
<li><a
href="2dc352e4ba"><code>2dc352e</code></a>
Bold minimum Actions Runner version in README</li>
<li><a
href="01e118c8d0"><code>01e118c</code></a>
Update README for Node 24 runtime requirements</li>
<li><a
href="8b222ac82e"><code>8b222ac</code></a>
Apply suggestion from <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a></li>
<li><a
href="adc0eeac99"><code>adc0eea</code></a>
README for updating actions/github-script from v7 to v8</li>
<li><a
href="20fe497b3f"><code>20fe497</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/github-script/issues/637">#637</a>
from actions/node24</li>
<li><a
href="e7b7f222b1"><code>e7b7f22</code></a>
update licenses</li>
<li><a
href="2c81ba05f3"><code>2c81ba0</code></a>
Update Node.js version support to 24.x</li>
<li>See full diff in <a
href="https://github.com/actions/github-script/compare/v7...v8">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/github-script&package-manager=github_actions&previous-version=7&new-version=8)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Update GitHub Actions workflows to use actions/github-script v8.
> 
> - **CI Workflows**:
>   - Update `actions/github-script` from `v7` to `v8` in:
>     - `.github/workflows/claude-ci-failure-auto-fix.yml`
>     - `.github/workflows/platform-dev-deploy-event-dispatcher.yml`
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
cfdccf966b. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-02-09 04:27:07 +00:00
dependabot[bot]
b04e916c23 chore(backend/deps-dev): bump the development-dependencies group across 1 directory with 3 updates (#12005)
Bumps the development-dependencies group with 3 updates in the
/autogpt_platform/backend directory:
[poethepoet](https://github.com/nat-n/poethepoet),
[pytest-watcher](https://github.com/olzhasar/pytest-watcher) and
[ruff](https://github.com/astral-sh/ruff).

Updates `poethepoet` from 0.37.0 to 0.40.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/nat-n/poethepoet/releases">poethepoet's
releases</a>.</em></p>
<blockquote>
<h2>0.40.0</h2>
<h2>Enhancements</h2>
<ul>
<li>Allow optional envfiles without warnings by <a
href="https://github.com/cnaples79"><code>@​cnaples79</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/337">nat-n/poethepoet#337</a></li>
<li>Add support for the <code>capture_output</code> option in ref tasks
by <a href="https://github.com/kzrnm"><code>@​kzrnm</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/343">nat-n/poethepoet#343</a></li>
<li>Set uv to quiet mode during shell completion to avoid console spam
by <a href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/338">nat-n/poethepoet#338</a></li>
<li>Support <code>ignore_fail</code> on execution task types and ref
tasks by <a href="https://github.com/nat-n"><code>@​nat-n</code></a> in
<a
href="https://redirect.github.com/nat-n/poethepoet/pull/347">nat-n/poethepoet#347</a></li>
<li>Add choices option to constrain named arguments by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/348">nat-n/poethepoet#348</a></li>
</ul>
<h2>Fixes</h2>
<ul>
<li>Handle SIGHUP and SIGBREAK signals to stop tasks by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/344">nat-n/poethepoet#344</a></li>
<li>Accept string for type name in global executor option by <a
href="https://github.com/kzrnm"><code>@​kzrnm</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/340">nat-n/poethepoet#340</a></li>
</ul>
<h2>Code improvements</h2>
<ul>
<li>Modernize type annotations by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/339">nat-n/poethepoet#339</a></li>
<li>Ensure test virtual environments are always cleaned up by <a
href="https://github.com/kzrnm"><code>@​kzrnm</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/346">nat-n/poethepoet#346</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/nat-n/poethepoet/compare/v0.39.0...v0.40.0">https://github.com/nat-n/poethepoet/compare/v0.39.0...v0.40.0</a></p>
<h2>0.39.0</h2>
<h2>Enhancements</h2>
<ul>
<li>Add support for uv executor options by <a
href="https://github.com/rochacbruno"><code>@​rochacbruno</code></a> and
<a href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/327">nat-n/poethepoet#327</a>
<ul>
<li>feat: add <a
href="https://poethepoet.natn.io/global_options.html#uv-executor">various
options to the uv executor</a> to be passed to the uv run command</li>
<li>feat: allow task executor to be configure with just the type as a
string</li>
<li>feat executor options to be set at runtime via the new
--executor-opt cli global option</li>
<li>feat: allow inheritance of compatible executor options from global
to task to runtime</li>
<li>refactor: extend PoeOptions to support annotating config fields with
a config_name to parse, separate from the attribute name</li>
<li>refactor: some micro-optimizations to PoeOptions and
AnnotationType</li>
<li>doc: Add <a
href="https://poethepoet.natn.io/guides/tox_replacement_guide.html">guide
for replacing tox with poe + uv</a></li>
<li>doc: tidy up executor docs</li>
<li>doc: fix typo in doc for expr task</li>
<li>test: improve test coverage of PoeOptions</li>
<li>test: disable some test cases on windows that are too flaky</li>
</ul>
</li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/rochacbruno"><code>@​rochacbruno</code></a>
made their first contribution in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/327">nat-n/poethepoet#327</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/nat-n/poethepoet/compare/v0.38.0...v0.39.0">https://github.com/nat-n/poethepoet/compare/v0.38.0...v0.39.0</a></p>
<h2>0.38.0</h2>
<h2>Enhancements</h2>
<ul>
<li>feat: Add parallel task type by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/323">nat-n/poethepoet#323</a></li>
</ul>
<h2>Breaking changes</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0a7247d8f7"><code>0a7247d</code></a>
Bump version to 0.40.0</li>
<li><a
href="312e74a5be"><code>312e74a</code></a>
feat: Add choices option to constrain named arguments (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/348">#348</a>)</li>
<li><a
href="5e0b3e5590"><code>5e0b3e5</code></a>
feat: support ignore_fail on execution task types and ref tasks (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/347">#347</a>)</li>
<li><a
href="a3c97e1e94"><code>a3c97e1</code></a>
test: ensure the test virtual environment is always removed (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/346">#346</a>)</li>
<li><a
href="bc04e2fe18"><code>bc04e2f</code></a>
feat: support <code>capture_output</code> on ref tasks (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/343">#343</a>)</li>
<li><a
href="f7b82ef954"><code>f7b82ef</code></a>
fix: global executor option (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/340">#340</a>)</li>
<li><a
href="8e7b1166a0"><code>8e7b116</code></a>
fix: handle SIGHUP and SIGBREAK signals to stop tasks (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/344">#344</a>)</li>
<li><a
href="8e51f2b79f"><code>8e51f2b</code></a>
refactor: modernize type annotations (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/339">#339</a>)</li>
<li><a
href="72a9225dac"><code>72a9225</code></a>
fix: set uv to quiet during shell completion (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/338">#338</a>)</li>
<li><a
href="c6c7306276"><code>c6c7306</code></a>
feat: allow optional envfiles without warnings (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/337">#337</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/nat-n/poethepoet/compare/v0.37.0...v0.40.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pytest-watcher` from 0.4.3 to 0.6.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/olzhasar/pytest-watcher/releases">pytest-watcher's
releases</a>.</em></p>
<blockquote>
<h2>v0.6.3</h2>
<h3>Features</h3>
<ul>
<li>Add debug mode activated with <code>PTW_DEBUG</code> environment
variable and improve log messages.</li>
</ul>
<h3>Bugfixes</h3>
<ul>
<li>Fix terminal flushing after menu and header prints.</li>
<li>Use monotonic clock for trigger detection to avoid misbehavior on
clock changes.</li>
</ul>
<h2>v0.6.2</h2>
<h3>Bugfixes</h3>
<ul>
<li>Allow specifying blank patterns via CLI</li>
<li>Fix duplicate command entries in menu</li>
</ul>
<h2>v0.6.1</h2>
<h3>Bugfixes</h3>
<ul>
<li>Trigger tests in interactive mode for carriage return character</li>
</ul>
<h3>Improved Documentation</h3>
<ul>
<li>Add contributing guide</li>
</ul>
<h3>Misc</h3>
<ul>
<li>Integrate <a
href="https://towncrier.readthedocs.io/en/stable/index.html">towncrier</a>
into the development process</li>
</ul>
<h2>v0.6.0</h2>
<h2>Features</h2>
<ul>
<li>Add <code>notify-on-failure</code> flag (and config option) to emit
BEL symbol on test suite failure.</li>
</ul>
<h2>Infrastructure</h2>
<ul>
<li>Migrate from poetry to uv.</li>
<li>Remove tox.</li>
</ul>
<h2>v0.5.0</h2>
<h2>Fixes</h2>
<ul>
<li>Merge arguments passed to the runner from config and CLI instead of
overriding.</li>
</ul>
<h2>Changes</h2>
<ul>
<li>Drop support for Python 3.7 &amp; 3.8</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/olzhasar/pytest-watcher/blob/master/CHANGELOG.md">pytest-watcher's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/olzhasar/pytest-watcher/releases/tag/0.6.3">0.6.3</a>
- 2026-01-11</h2>
<h3>Features</h3>
<ul>
<li>Add debug mode activated with <code>PTW_DEBUG</code> environment
variable and improve log messages.</li>
</ul>
<h3>Bugfixes</h3>
<ul>
<li>Fix terminal flushing after menu and header prints.</li>
<li>Use monotonic clock for trigger detection to avoid misbehavior on
clock changes.</li>
</ul>
<h2><a
href="https://github.com/olzhasar/pytest-watcher/releases/tag/0.6.2">0.6.2</a>
- 2025-12-28</h2>
<h3>Bugfixes</h3>
<ul>
<li>Allow specifying blank patterns via CLI</li>
<li>Fix duplicate command entries in menu</li>
</ul>
<h2><a
href="https://github.com/olzhasar/pytest-watcher/releases/tag/0.6.1">0.6.1</a>
- 2025-12-26</h2>
<h3>Bugfixes</h3>
<ul>
<li>Trigger tests in interactive mode for carriage return character</li>
</ul>
<h3>Improved Documentation</h3>
<ul>
<li>Add contributing guide</li>
</ul>
<h3>Misc</h3>
<ul>
<li>Integrate <a
href="https://towncrier.readthedocs.io/en/stable/index.html">towncrier</a>
into the development process</li>
</ul>
<h2><a
href="https://github.com/olzhasar/pytest-watcher/releases/tag/0.6.0">0.6.0</a>
- 2025-12-22</h2>
<h3>Features</h3>
<ul>
<li>Add notify-on-failure flag (and config option) to emit BEL symbol on
test suite failure.</li>
</ul>
<h3>Infrastructure</h3>
<ul>
<li>Migrate from <code>poetry</code> to <code>uv</code>.</li>
<li>Remove <code>tox</code>.</li>
</ul>
<h2><a
href="https://github.com/olzhasar/pytest-watcher/releases/tag/0.5.0">0.5.0</a>
- 2025-12-21</h2>
<h3>Fixes</h3>
<ul>
<li>Merge arguments passed to the runner from config and CLI instead of
overriding.</li>
</ul>
<h3>Changes</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c52925b613"><code>c52925b</code></a>
release v0.6.3</li>
<li><a
href="23d49893f7"><code>23d4989</code></a>
Add debug mode. Improve log messages</li>
<li><a
href="e3dffa1cb3"><code>e3dffa1</code></a>
Fix terminal flushing after menu and header prints</li>
<li><a
href="0eeaf6080e"><code>0eeaf60</code></a>
Use monotonic clock for trigger detection</li>
<li><a
href="5ed9d0e262"><code>5ed9d0e</code></a>
Update CHANGELOG. Fix changelog_reader action</li>
<li><a
href="756f005f5d"><code>756f005</code></a>
release v0.6.2</li>
<li><a
href="902aa9e07b"><code>902aa9e</code></a>
Merge pull request <a
href="https://redirect.github.com/olzhasar/pytest-watcher/issues/51">#51</a>
from olzhasar/fix-duplicate-menu</li>
<li><a
href="e6b20d35b9"><code>e6b20d3</code></a>
Allow specifying empty patterns via CLI</li>
<li><a
href="2d522dabf9"><code>2d522da</code></a>
Fix duplicate menu entries</li>
<li><a
href="171e6f1282"><code>171e6f1</code></a>
Fix towncrier CHANGELOG versioning</li>
<li>Additional commits viewable in <a
href="https://github.com/olzhasar/pytest-watcher/compare/v0.4.3...v0.6.3">compare
view</a></li>
</ul>
</details>
<br />

Updates `ruff` from 0.14.14 to 0.15.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.15.0</h2>
<h2>Release Notes</h2>
<p>Released on 2026-02-03.</p>
<p>Check out the <a href="https://astral.sh/blog/ruff-v0.15.0">blog
post</a> for a migration guide and overview of the changes!</p>
<h3>Breaking changes</h3>
<ul>
<li>
<p>Ruff now formats your code according to the 2026 style guide. See the
formatter section below or in the blog post for a detailed list of
changes.</p>
</li>
<li>
<p>The linter now supports block suppression comments. For example, to
suppress <code>N803</code> for all parameters in this function:</p>
<pre lang="python"><code># ruff: disable[N803]
def foo(
    legacyArg1,
    legacyArg2,
    legacyArg3,
    legacyArg4,
): ...
# ruff: enable[N803]
</code></pre>
<p>See the <a
href="https://docs.astral.sh/ruff/linter/#block-level">documentation</a>
for more details.</p>
</li>
<li>
<p>The <code>ruff:alpine</code> Docker image is now based on Alpine 3.23
(up from 3.21).</p>
</li>
<li>
<p>The <code>ruff:debian</code> and <code>ruff:debian-slim</code> Docker
images are now based on Debian 13 &quot;Trixie&quot; instead of Debian
12 &quot;Bookworm.&quot;</p>
</li>
<li>
<p>Binaries for the <code>ppc64</code> (64-bit big-endian PowerPC)
architecture are no longer included in our releases. It should still be
possible to build Ruff manually for this platform, if needed.</p>
</li>
<li>
<p>Ruff now resolves all <code>extend</code>ed configuration files
before falling back on a default Python version.</p>
</li>
</ul>
<h3>Stabilization</h3>
<p>The following rules have been stabilized and are no longer in
preview:</p>
<ul>
<li><a
href="https://docs.astral.sh/ruff/rules/blocking-http-call-httpx-in-async-function"><code>blocking-http-call-httpx-in-async-function</code></a>
(<code>ASYNC212</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/blocking-path-method-in-async-function"><code>blocking-path-method-in-async-function</code></a>
(<code>ASYNC240</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/blocking-input-in-async-function"><code>blocking-input-in-async-function</code></a>
(<code>ASYNC250</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/map-without-explicit-strict"><code>map-without-explicit-strict</code></a>
(<code>B912</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/if-exp-instead-of-or-operator"><code>if-exp-instead-of-or-operator</code></a>
(<code>FURB110</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/single-item-membership-test"><code>single-item-membership-test</code></a>
(<code>FURB171</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/missing-maxsplit-arg"><code>missing-maxsplit-arg</code></a>
(<code>PLC0207</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/unnecessary-lambda"><code>unnecessary-lambda</code></a>
(<code>PLW0108</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/unnecessary-empty-iterable-within-deque-call"><code>unnecessary-empty-iterable-within-deque-call</code></a>
(<code>RUF037</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/in-empty-collection"><code>in-empty-collection</code></a>
(<code>RUF060</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/legacy-form-pytest-raises"><code>legacy-form-pytest-raises</code></a>
(<code>RUF061</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/non-octal-permissions"><code>non-octal-permissions</code></a>
(<code>RUF064</code>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.15.0</h2>
<p>Released on 2026-02-03.</p>
<p>Check out the <a href="https://astral.sh/blog/ruff-v0.15.0">blog
post</a> for a migration
guide and overview of the changes!</p>
<h3>Breaking changes</h3>
<ul>
<li>
<p>Ruff now formats your code according to the 2026 style guide. See the
formatter section below or in the blog post for a detailed list of
changes.</p>
</li>
<li>
<p>The linter now supports block suppression comments. For example, to
suppress <code>N803</code> for all parameters in this function:</p>
<pre lang="python"><code># ruff: disable[N803]
def foo(
    legacyArg1,
    legacyArg2,
    legacyArg3,
    legacyArg4,
): ...
# ruff: enable[N803]
</code></pre>
<p>See the <a
href="https://docs.astral.sh/ruff/linter/#block-level">documentation</a>
for more details.</p>
</li>
<li>
<p>The <code>ruff:alpine</code> Docker image is now based on Alpine 3.23
(up from 3.21).</p>
</li>
<li>
<p>The <code>ruff:debian</code> and <code>ruff:debian-slim</code> Docker
images are now based on Debian 13 &quot;Trixie&quot; instead of Debian
12 &quot;Bookworm.&quot;</p>
</li>
<li>
<p>Binaries for the <code>ppc64</code> (64-bit big-endian PowerPC)
architecture are no longer included in our releases. It should still be
possible to build Ruff manually for this platform, if needed.</p>
</li>
<li>
<p>Ruff now resolves all <code>extend</code>ed configuration files
before falling back on a default Python version.</p>
</li>
</ul>
<h3>Stabilization</h3>
<p>The following rules have been stabilized and are no longer in
preview:</p>
<ul>
<li><a
href="https://docs.astral.sh/ruff/rules/blocking-http-call-httpx-in-async-function"><code>blocking-http-call-httpx-in-async-function</code></a>
(<code>ASYNC212</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/blocking-path-method-in-async-function"><code>blocking-path-method-in-async-function</code></a>
(<code>ASYNC240</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/blocking-input-in-async-function"><code>blocking-input-in-async-function</code></a>
(<code>ASYNC250</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/map-without-explicit-strict"><code>map-without-explicit-strict</code></a>
(<code>B912</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/if-exp-instead-of-or-operator"><code>if-exp-instead-of-or-operator</code></a>
(<code>FURB110</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/single-item-membership-test"><code>single-item-membership-test</code></a>
(<code>FURB171</code>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ce5f7b6127"><code>ce5f7b6</code></a>
Bump 0.15.0 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/23055">#23055</a>)</li>
<li><a
href="b4e40f539c"><code>b4e40f5</code></a>
[ty] Fix <code>__contains__</code> to respect descriptors (<a
href="https://redirect.github.com/astral-sh/ruff/issues/23056">#23056</a>)</li>
<li><a
href="848cb72dc1"><code>848cb72</code></a>
[ty] Fix narrowing of nonlocal variables with conditional assignments
(<a
href="https://redirect.github.com/astral-sh/ruff/issues/22966">#22966</a>)</li>
<li><a
href="da7f33af22"><code>da7f33a</code></a>
[ty] Add a diagnostic for <code>Final</code> without assignment (<a
href="https://redirect.github.com/astral-sh/ruff/issues/23001">#23001</a>)</li>
<li><a
href="e65f9a6b03"><code>e65f9a6</code></a>
Document markdown formatting feature (<a
href="https://redirect.github.com/astral-sh/ruff/issues/22990">#22990</a>)</li>
<li><a
href="c0c1b985c9"><code>c0c1b98</code></a>
Format markdown code blocks with line-by-line regex parse (<a
href="https://redirect.github.com/astral-sh/ruff/issues/22996">#22996</a>)</li>
<li><a
href="9f8f3e196b"><code>9f8f3e1</code></a>
Allow positional-only params with defaults in method overrides (<a
href="https://redirect.github.com/astral-sh/ruff/issues/23037">#23037</a>)</li>
<li><a
href="ef83810e11"><code>ef83810</code></a>
[ty] ecosystem-analyzer: Support bare git repositories (<a
href="https://redirect.github.com/astral-sh/ruff/issues/23054">#23054</a>)</li>
<li><a
href="54dfee4cb8"><code>54dfee4</code></a>
Customize where the <code>fix_title</code> sub-diagnostic appears (<a
href="https://redirect.github.com/astral-sh/ruff/issues/23044">#23044</a>)</li>
<li><a
href="b53460799b"><code>b534607</code></a>
2026 Ruff Formatter Style (<a
href="https://redirect.github.com/astral-sh/ruff/issues/22735">#22735</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.14.14...0.15.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-02-09 04:26:58 +00:00
dependabot[bot]
1a32ba7d9a chore(deps): bump urllib3 from 2.5.0 to 2.6.0 in /autogpt_platform/backend (#11607)
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.5.0 to 2.6.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/urllib3/urllib3/releases">urllib3's
releases</a>.</em></p>
<blockquote>
<h2>2.6.0</h2>
<h2>🚀 urllib3 is fundraising for HTTP/2 support</h2>
<p><a
href="https://sethmlarson.dev/urllib3-is-fundraising-for-http2-support">urllib3
is raising ~$40,000 USD</a> to release HTTP/2 support and ensure
long-term sustainable maintenance of the project after a sharp decline
in financial support. If your company or organization uses Python and
would benefit from HTTP/2 support in Requests, pip, cloud SDKs, and
thousands of other projects <a
href="https://opencollective.com/urllib3">please consider contributing
financially</a> to ensure HTTP/2 support is developed sustainably and
maintained for the long-haul.</p>
<p>Thank you for your support.</p>
<h2>Security</h2>
<ul>
<li>Fixed a security issue where streaming API could improperly handle
highly compressed HTTP content (&quot;decompression bombs&quot;) leading
to excessive resource consumption even when a small amount of data was
requested. Reading small chunks of compressed data is safer and much
more efficient now. (CVE-2025-66471 reported by <a
href="https://github.com/Cycloctane"><code>@​Cycloctane</code></a>, 8.9
High, GHSA-2xpw-w6gg-jr37)</li>
<li>Fixed a security issue where an attacker could compose an HTTP
response with virtually unlimited links in the
<code>Content-Encoding</code> header, potentially leading to a denial of
service (DoS) attack by exhausting system resources during decoding. The
number of allowed chained encodings is now limited to 5. (CVE-2025-66418
reported by <a
href="https://github.com/illia-v"><code>@​illia-v</code></a>, 8.9 High,
GHSA-gm62-xv2j-4w53)</li>
</ul>
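<p>Callers that already stream responses in bounded chunks keep the same
code and simply benefit from the fixes above. A minimal sketch of that
pattern using urllib3's public streaming API (the URL is a placeholder):</p>

```python
import urllib3

# Read the body in bounded chunks instead of decoding it all at once;
# with 2.6.0 this is also the safe, efficient path for compressed bodies.
http = urllib3.PoolManager()
resp = http.request("GET", "https://httpbin.org/gzip", preload_content=False)
total = 0
for chunk in resp.stream(64 * 1024):  # at most 64 KiB per read
    total += len(chunk)
resp.release_conn()
print(f"read {total} decoded bytes")
```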
<blockquote>
<p>[!IMPORTANT]</p>
<ul>
<li>If urllib3 is not installed with the optional
<code>urllib3[brotli]</code> extra, but your environment contains a
Brotli/brotlicffi/brotlipy package anyway, make sure to upgrade it to at
least Brotli 1.2.0 or brotlicffi 1.2.0.0 to benefit from the security
fixes and avoid warnings. Prefer using <code>urllib3[brotli]</code> to
install a compatible Brotli package automatically.</li>
<li>If you use custom decompressors, please make sure to update them to
respect the changed API of
<code>urllib3.response.ContentDecoder</code>.</li>
</ul>
</blockquote>
<h2>Features</h2>
<ul>
<li>Enabled retrieval, deletion, and membership testing in
<code>HTTPHeaderDict</code> using bytes keys. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3653">#3653</a>)</li>
<li>Added host and port information to string representations of
<code>HTTPConnection</code>. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3666">#3666</a>)</li>
<li>Added support for Python 3.14 free-threading builds explicitly. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3696">#3696</a>)</li>
</ul>
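<p>A quick sketch of the new bytes-key support on
<code>HTTPHeaderDict</code> (assuming the top-level export that urllib3
2.x provides):</p>

```python
from urllib3 import HTTPHeaderDict

headers = HTTPHeaderDict({"Content-Type": "application/json"})
# New in 2.6.0 (#3653): bytes keys work for lookup, membership, and deletion.
assert b"content-type" in headers  # matching stays case-insensitive
print(headers[b"Content-Type"])
del headers[b"content-type"]
```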
<h2>Removals</h2>
<ul>
<li>Removed the <code>HTTPResponse.getheaders()</code> method in favor
of <code>HTTPResponse.headers</code>. Removed the
<code>HTTPResponse.getheader(name, default)</code> method in favor of
<code>HTTPResponse.headers.get(name, default)</code>. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3622">#3622</a>)</li>
</ul>
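<p>The removals above have a mechanical migration; a short sketch against
a placeholder URL:</p>

```python
import urllib3

http = urllib3.PoolManager()
resp = http.request("GET", "https://httpbin.org/get")

# 2.6.0 removes the deprecated accessors in favor of the headers mapping:
content_type = resp.headers.get("Content-Type", "")  # was resp.getheader(...)
all_headers = list(resp.headers.items())             # was resp.getheaders()
print(content_type, len(all_headers))
```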
<h2>Bugfixes</h2>
<ul>
<li>Fixed redirect handling in <code>urllib3.PoolManager</code> when an
integer is passed for the retries parameter. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3649">#3649</a>)</li>
<li>Fixed <code>HTTPConnectionPool</code> when used in Emscripten with
no explicit port. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3664">#3664</a>)</li>
<li>Fixed handling of <code>SSLKEYLOGFILE</code> with expandable
variables. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3700">#3700</a>)</li>
</ul>
<h2>Misc</h2>
<ul>
<li>Changed the <code>zstd</code> extra to install
<code>backports.zstd</code> instead of <code>zstandard</code> on Python
3.13 and before. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3693">#3693</a>)</li>
<li>Improved the performance of content decoding by optimizing
<code>BytesQueueBuffer</code> class. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3710">#3710</a>)</li>
<li>Allowed building the urllib3 package with newer setuptools-scm v9.x.
(<a
href="https://redirect.github.com/urllib3/urllib3/issues/3652">#3652</a>)</li>
<li>Ensured successful urllib3 builds by setting Hatchling requirement
to ≥ 1.27.0. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3638">#3638</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's
changelog</a>.</em></p>
<blockquote>
<h1>2.6.0 (2025-12-05)</h1>
<h2>Security</h2>
<ul>
<li>Fixed a security issue where streaming API could improperly handle
highly
compressed HTTP content (&quot;decompression bombs&quot;) leading to
excessive resource
consumption even when a small amount of data was requested. Reading
small
chunks of compressed data is safer and much more efficient now.
(<a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-2xpw-w6gg-jr37">GHSA-2xpw-w6gg-jr37</a>)</li>
<li>Fixed a security issue where an attacker could compose an HTTP
response with
virtually unlimited links in the <code>Content-Encoding</code> header,
potentially
leading to a denial of service (DoS) attack by exhausting system
resources
during decoding. The number of allowed chained encodings is now limited
to 5.
(<a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-gm62-xv2j-4w53">GHSA-gm62-xv2j-4w53</a>)</li>
</ul>
<p><strong>Caution:</strong></p>
<ul>
<li>
<p>If urllib3 is not installed with the optional
<code>urllib3[brotli]</code> extra, but
your environment contains a Brotli/brotlicffi/brotlipy package anyway,
make
sure to upgrade it to at least Brotli 1.2.0 or brotlicffi 1.2.0.0 to
benefit from the security fixes and avoid warnings. Prefer using
<code>urllib3[brotli]</code> to install a compatible Brotli package
automatically.</p>
</li>
<li>
<p>If you use custom decompressors, please make sure to update them to
respect the changed API of
<code>urllib3.response.ContentDecoder</code>.</p>
</li>
</ul>
<h2>Features</h2>
<ul>
<li>Enabled retrieval, deletion, and membership testing in
<code>HTTPHeaderDict</code> using bytes keys.
(<a href="https://github.com/urllib3/urllib3/issues/3653">#3653</a>)</li>
<li>Added host and port information to string representations of
<code>HTTPConnection</code>.
(<a href="https://github.com/urllib3/urllib3/issues/3666">#3666</a>)</li>
<li>Added support for Python 3.14 free-threading builds explicitly.
(<a href="https://github.com/urllib3/urllib3/issues/3696">#3696</a>)</li>
</ul>
<h2>Removals</h2>
<ul>
<li>Removed the <code>HTTPResponse.getheaders()</code> method in favor
of <code>HTTPResponse.headers</code>.
Removed the <code>HTTPResponse.getheader(name, default)</code> method in
favor of <code>HTTPResponse.headers.get(name, default)</code>.
(<a href="https://github.com/urllib3/urllib3/issues/3622">#3622</a>)</li>
</ul>
<h2>Bugfixes</h2>
<ul>
<li>Fixed redirect handling in <code>urllib3.PoolManager</code> when an
integer is passed
for the retries parameter.
(<a href="https://github.com/urllib3/urllib3/issues/3649">#3649</a>)</li>
<li>Fixed <code>HTTPConnectionPool</code> when used in Emscripten with
no explicit port.
(<a href="https://github.com/urllib3/urllib3/issues/3664">#3664</a>)</li>
<li>Fixed handling of <code>SSLKEYLOGFILE</code> with expandable
variables.
(<a href="https://github.com/urllib3/urllib3/issues/3700">#3700</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="720f484b60"><code>720f484</code></a>
Release 2.6.0</li>
<li><a
href="24d7b67eac"><code>24d7b67</code></a>
Merge commit from fork</li>
<li><a
href="c19571de34"><code>c19571d</code></a>
Merge commit from fork</li>
<li><a
href="816fcf0452"><code>816fcf0</code></a>
Bump actions/setup-python from 6.0.0 to 6.1.0 (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3725">#3725</a>)</li>
<li><a
href="18af0a10ef"><code>18af0a1</code></a>
Improve speed of <code>BytesQueueBuffer.get()</code> by using memoryview
(<a
href="https://redirect.github.com/urllib3/urllib3/issues/3711">#3711</a>)</li>
<li><a
href="1f6abac3e6"><code>1f6abac</code></a>
Bump versions of pre-commit hooks (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3716">#3716</a>)</li>
<li><a
href="1c8fbf787b"><code>1c8fbf7</code></a>
Bump actions/checkout from 5.0.0 to 6.0.0 (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3722">#3722</a>)</li>
<li><a
href="7784b9eee9"><code>7784b9e</code></a>
Add Python 3.15 to CI (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3717">#3717</a>)</li>
<li><a
href="0241c9e728"><code>0241c9e</code></a>
Updated docs to reflect change in optional zstd dependency from
<code>zstandard</code> t...</li>
<li><a
href="7afcabb648"><code>7afcabb</code></a>
Expand environment variable of SSLKEYLOGFILE (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3705">#3705</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/urllib3/urllib3/compare/2.5.0...2.6.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=urllib3&package-manager=pip&previous-version=2.5.0&new-version=2.6.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/Significant-Gravitas/AutoGPT/network/alerts).

</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
> been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-09 03:39:05 +00:00
dependabot[bot]
deccc26f1f chore(deps): bump actions/cache from 4 to 5 (#11665)
Bumps [actions/cache](https://github.com/actions/cache) from 4 to 5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/releases">actions/cache's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.0</h2>
<blockquote>
<p>[!IMPORTANT]
<strong><code>actions/cache@v5</code> runs on the Node.js 24 runtime and
requires a minimum Actions Runner version of
<code>2.327.1</code>.</strong></p>
<p>If you are using self-hosted runners, ensure they are updated before
upgrading.</p>
</blockquote>
<hr />
<h2>What's Changed</h2>
<ul>
<li>Upgrade to use node24 by <a
href="https://github.com/salmanmkc"><code>@salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1630">actions/cache#1630</a></li>
<li>Prepare v5.0.0 release by <a
href="https://github.com/salmanmkc"><code>@salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1684">actions/cache#1684</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v4.3.0...v5.0.0">https://github.com/actions/cache/compare/v4.3.0...v5.0.0</a></p>
<h2>v4.3.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Add note on runner versions by <a
href="https://github.com/GhadimiR"><code>@GhadimiR</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1642">actions/cache#1642</a></li>
<li>Prepare <code>v4.3.0</code> release by <a
href="https://github.com/Link-"><code>@Link-</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1655">actions/cache#1655</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/GhadimiR"><code>@GhadimiR</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1642">actions/cache#1642</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v4...v4.3.0">https://github.com/actions/cache/compare/v4...v4.3.0</a></p>
<h2>v4.2.4</h2>
<h2>What's Changed</h2>
<ul>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@nebuk89</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1620">actions/cache#1620</a></li>
<li>Upgrade <code>@actions/cache</code> to <code>4.0.5</code> and move
<code>@protobuf-ts/plugin</code> to dev dependencies by <a
href="https://github.com/Link-"><code>@Link-</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1634">actions/cache#1634</a></li>
<li>Prepare release <code>4.2.4</code> by <a
href="https://github.com/Link-"><code>@Link-</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1636">actions/cache#1636</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/nebuk89"><code>@nebuk89</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1620">actions/cache#1620</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v4...v4.2.4">https://github.com/actions/cache/compare/v4...v4.2.4</a></p>
<h2>v4.2.3</h2>
<h2>What's Changed</h2>
<ul>
<li>Update to use <code>@actions/cache</code> 4.0.3 package &amp;
prepare for new release by <a
href="https://github.com/salmanmkc"><code>@salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1577">actions/cache#1577</a>
(SAS tokens for cache entries are now masked in debug logs)</li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/salmanmkc"><code>@salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1577">actions/cache#1577</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v4.2.2...v4.2.3">https://github.com/actions/cache/compare/v4.2.2...v4.2.3</a></p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/blob/main/RELEASES.md">actions/cache's
changelog</a>.</em></p>
<blockquote>
<h1>Releases</h1>
<h2>Changelog</h2>
<h3>5.0.1</h3>
<ul>
<li>Update <code>@azure/storage-blob</code> to <code>^12.29.1</code> via
<code>@actions/cache@5.0.1</code> <a
href="https://redirect.github.com/actions/cache/pull/1685">#1685</a></li>
</ul>
<h3>5.0.0</h3>
<blockquote>
<p>[!IMPORTANT]
<code>actions/cache@v5</code> runs on the Node.js 24 runtime and
requires a minimum Actions Runner version of <code>2.327.1</code>.
If you are using self-hosted runners, ensure they are updated before
upgrading.</p>
</blockquote>
<h3>4.3.0</h3>
<ul>
<li>Bump <code>@actions/cache</code> to <a
href="https://redirect.github.com/actions/toolkit/pull/2132">v4.1.0</a></li>
</ul>
<h3>4.2.4</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.5</li>
</ul>
<h3>4.2.3</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.3 (obfuscates SAS token in
debug logs for cache entries)</li>
</ul>
<h3>4.2.2</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.2</li>
</ul>
<h3>4.2.1</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.1</li>
</ul>
<h3>4.2.0</h3>
<p>TLDR; The cache backend service has been rewritten from the ground up
for improved performance and reliability. <a
href="https://github.com/actions/cache">actions/cache</a> now integrates
with the new cache service (v2) APIs.</p>
<p>The new service will gradually roll out as of <strong>February 1st,
2025</strong>. The legacy service will also be sunset on the same date.
Changes in this release are <strong>fully backward
compatible</strong>.</p>
<p><strong>We are deprecating some versions of this action</strong>. We
recommend upgrading to version <code>v4</code> or <code>v3</code> as
soon as possible before <strong>February 1st, 2025.</strong> (Upgrade
instructions below).</p>
<p>If you are using pinned SHAs, please use the SHAs of versions
<code>v4.2.0</code> or <code>v3.4.0</code></p>
<p>If you do not upgrade, all workflow runs using any of the deprecated
<a href="https://github.com/actions/cache">actions/cache</a> will
fail.</p>
<p>Upgrading to the recommended versions will not break your
workflows.</p>
<h3>4.1.2</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9255dc7a25"><code>9255dc7</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1686">#1686</a>
from actions/cache-v5.0.1-release</li>
<li><a
href="8ff5423e8b"><code>8ff5423</code></a>
chore: release v5.0.1</li>
<li><a
href="9233019a15"><code>9233019</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1685">#1685</a>
from salmanmkc/node24-storage-blob-fix</li>
<li><a
href="b975f2bb84"><code>b975f2b</code></a>
fix: add peer property to package-lock.json for dependencies</li>
<li><a
href="d0a0e18134"><code>d0a0e18</code></a>
fix: update license files for <code>@actions/cache</code>,
fast-xml-parser, and strnum</li>
<li><a
href="74de208dcf"><code>74de208</code></a>
fix: update <code>@actions/cache</code> to ^5.0.1 for Node.js 24
punycode fix</li>
<li><a
href="ac7f1152ea"><code>ac7f115</code></a>
peer</li>
<li><a
href="b0f846b50b"><code>b0f846b</code></a>
fix: update <code>@actions/cache</code> with storage-blob fix for
Node.js 24 punycode depr...</li>
<li><a
href="a783357455"><code>a783357</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1684">#1684</a>
from actions/prepare-cache-v5-release</li>
<li><a
href="3bb0d78750"><code>3bb0d78</code></a>
docs: highlight v5 runner requirement in releases</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/cache/compare/v4...v5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/cache&package-manager=github_actions&previous-version=4&new-version=5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
> been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-09 03:28:23 +00:00
dependabot[bot]
9e38bd5b78 chore(backend/deps): bump the production-dependencies group across 1 directory with 8 updates (#12014)
Bumps the production-dependencies group with 8 updates in the
/autogpt_platform/backend directory:

| Package | From | To |
| --- | --- | --- |
| [anthropic](https://github.com/anthropics/anthropic-sdk-python) | `0.59.0` | `0.79.0` |
| [fastapi](https://github.com/fastapi/fastapi) | `0.128.3` | `0.128.5` |
| [ollama](https://github.com/ollama/ollama-python) | `0.5.4` | `0.6.1` |
| [prometheus-client](https://github.com/prometheus/client_python) | `0.22.1` | `0.24.1` |
| [python-multipart](https://github.com/Kludex/python-multipart) | `0.0.20` | `0.0.22` |
| [supabase](https://github.com/supabase/supabase-py) | `2.27.2` | `2.27.3` |
| [tenacity](https://github.com/jd/tenacity) | `9.1.3` | `9.1.4` |
| [tiktoken](https://github.com/openai/tiktoken) | `0.9.0` | `0.12.0` |


Updates `anthropic` from 0.59.0 to 0.79.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/anthropics/anthropic-sdk-python/releases">anthropic's
releases</a>.</em></p>
<blockquote>
<h2>v0.79.0</h2>
<h2>0.79.0 (2026-02-07)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.78.0...v0.79.0">v0.78.0...v0.79.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> enabling fast-mode in claude-opus-4-6 (<a
href="5953ba7b42">5953ba7</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>pass speed parameter through in sync beta count_tokens (<a
href="1dd6119dac">1dd6119</a>)</li>
</ul>
<h2>v0.78.0</h2>
<h2>0.78.0 (2026-02-05)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.77.1...v0.78.0">v0.77.1...v0.78.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> Release Claude Opus 4.6, adaptive thinking,
and other features (<a
href="3ef1529b45">3ef1529</a>)</li>
</ul>
<h2>v0.77.1</h2>
<h2>0.77.1 (2026-02-03)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.77.0...v0.77.1">v0.77.0...v0.77.1</a></p>
<h3>Bug Fixes</h3>
<ul>
<li><strong>structured outputs:</strong> send structured output beta
header when format is omitted (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1158">#1158</a>)
(<a
href="258494e2b8">258494e</a>)</li>
</ul>
<h3>Chores</h3>
<ul>
<li>remove claude-code-review workflow (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1338">#1338</a>)
(<a
href="aec4512305">aec4512</a>)</li>
</ul>
<h2>v0.77.0</h2>
<h2>0.77.0 (2026-01-29)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.76.0...v0.77.0">v0.76.0...v0.77.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> add support for Structured Outputs in the
Messages API (<a
href="ad5667774a">ad56677</a>)</li>
<li><strong>api:</strong> migrate sending message format in
output_config rather than output_format (<a
href="af405e473f">af405e4</a>)</li>
<li><strong>client:</strong> add custom JSON encoder for extended type
support (<a
href="7780e90bd2">7780e90</a>)</li>
<li>use output_config for structured outputs (<a
href="82d669db65">82d669d</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
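For orientation, a minimal call against the bumped SDK; the model id is the
one named in the 0.78/0.79 notes above, and everything else is the standard
Messages API surface (prompt and token budget are placeholders):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-opus-4-6",  # model referenced in the release notes above
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize this changelog in one line."}],
)
print(message.content[0].text)
```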
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/anthropics/anthropic-sdk-python/blob/main/CHANGELOG.md">anthropic's
changelog</a>.</em></p>
<blockquote>
<h2>0.79.0 (2026-02-07)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.78.0...v0.79.0">v0.78.0...v0.79.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> enabling fast-mode in claude-opus-4-6 (<a
href="5953ba7b42">5953ba7</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>pass speed parameter through in sync beta count_tokens (<a
href="1dd6119dac">1dd6119</a>)</li>
</ul>
<h2>0.78.0 (2026-02-05)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.77.1...v0.78.0">v0.77.1...v0.78.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> Release Claude Opus 4.6, adaptive thinking,
and other features (<a
href="3ef1529b45">3ef1529</a>)</li>
</ul>
<h2>0.77.1 (2026-02-03)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.77.0...v0.77.1">v0.77.0...v0.77.1</a></p>
<h3>Bug Fixes</h3>
<ul>
<li><strong>structured outputs:</strong> send structured output beta
header when format is omitted (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1158">#1158</a>)
(<a
href="258494e2b8">258494e</a>)</li>
</ul>
<h3>Chores</h3>
<ul>
<li>remove claude-code-review workflow (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1338">#1338</a>)
(<a
href="aec4512305">aec4512</a>)</li>
</ul>
<h2>0.77.0 (2026-01-29)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.76.0...v0.77.0">v0.76.0...v0.77.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> add support for Structured Outputs in the
Messages API (<a
href="ad5667774a">ad56677</a>)</li>
<li><strong>api:</strong> migrate sending message format in
output_config rather than output_format (<a
href="af405e473f">af405e4</a>)</li>
<li><strong>client:</strong> add custom JSON encoder for extended type
support (<a
href="7780e90bd2">7780e90</a>)</li>
<li>use output_config for structured outputs (<a
href="82d669db65">82d669d</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>client:</strong> run formatter (<a
href="2e4ff86d7b">2e4ff86</a>)</li>
<li>remove class causing breaking change (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1333">#1333</a>)
(<a
href="81ee9533d1">81ee953</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="cd1b39bf07"><code>cd1b39b</code></a>
release: 0.79.0</li>
<li><a
href="fb52a6a09d"><code>fb52a6a</code></a>
fix: pass speed parameter through in sync beta count_tokens</li>
<li><a
href="b7c2df239d"><code>b7c2df2</code></a>
feat(api): enabling fast-mode in claude-opus-4-6</li>
<li><a
href="7c42e4b04b"><code>7c42e4b</code></a>
Update CHANGELOG.md (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1163">#1163</a>)</li>
<li><a
href="f2b61ed11c"><code>f2b61ed</code></a>
release: 0.78.0</li>
<li><a
href="a4a29cab92"><code>a4a29ca</code></a>
feat(api): manual updates</li>
<li><a
href="3955600d74"><code>3955600</code></a>
release: 0.77.1</li>
<li><a
href="eca8ddfb19"><code>eca8ddf</code></a>
fix(structured outputs): send structured output beta header when format
is om...</li>
<li><a
href="ee44c52131"><code>ee44c52</code></a>
chore: remove claude-code-review workflow (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1338">#1338</a>)</li>
<li><a
href="9c485f6966"><code>9c485f6</code></a>
release: 0.77.0 (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1117">#1117</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.59.0...v0.79.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `fastapi` from 0.128.3 to 0.128.5
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/fastapi/fastapi/releases">fastapi's
releases</a>.</em></p>
<blockquote>
<h2>0.128.5</h2>
<h3>Refactors</h3>
<ul>
<li>♻️ Refactor and simplify Pydantic v2 (and v1) compatibility internal
utils. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14862">#14862</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Internal</h3>
<ul>
<li> Add inline snapshot tests for OpenAPI before changes from Pydantic
v2. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14864">#14864</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h2>0.128.4</h2>
<h3>Refactors</h3>
<ul>
<li>♻️ Refactor internals, simplify Pydantic v2/v1 utils,
<code>create_model_field</code>, better types for
<code>lenient_issubclass</code>. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14860">#14860</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
<li>♻️ Simplify internals, remove Pydantic v1 only logic, no longer
needed. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14857">#14857</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
<li>♻️ Refactor internals, cleanup unneeded Pydantic v1 specific logic.
PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14856">#14856</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Translations</h3>
<ul>
<li>🌐 Update translations for fr (outdated pages). PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14839">#14839</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
<li>🌐 Update translations for tr (outdated and missing). PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14838">#14838</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
</ul>
<h3>Internal</h3>
<ul>
<li>⬆️ Upgrade development dependencies. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14854">#14854</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="dedf1409fe"><code>dedf140</code></a>
🔖 Release version 0.128.5</li>
<li><a
href="79d4dfb37f"><code>79d4dfb</code></a>
📝 Update release notes</li>
<li><a
href="9f4ecf562c"><code>9f4ecf5</code></a>
 Add inline snapshot tests for OpenAPI before changes from Pydantic v2
(<a
href="https://redirect.github.com/fastapi/fastapi/issues/14864">#14864</a>)</li>
<li><a
href="c48539f4c6"><code>c48539f</code></a>
📝 Update release notes</li>
<li><a
href="2e7d3754cd"><code>2e7d375</code></a>
♻️ Refactor and simplify Pydantic v2 (and v1) compatibility internal
utils (#...</li>
<li><a
href="8eac94bd91"><code>8eac94b</code></a>
🔖 Release version 0.128.4</li>
<li><a
href="58cdfc7f4b"><code>58cdfc7</code></a>
📝 Update release notes</li>
<li><a
href="d59fbc3494"><code>d59fbc3</code></a>
♻️ Refactor internals, simplify Pydantic v2/v1 utils,
<code>create_model_field</code>, b...</li>
<li><a
href="cc6ced6345"><code>cc6ced6</code></a>
📝 Update release notes</li>
<li><a
href="cf55bade7e"><code>cf55bad</code></a>
♻️ Simplify internals, remove Pydantic v1 only logic, no longer needed
(<a
href="https://redirect.github.com/fastapi/fastapi/issues/14857">#14857</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/fastapi/fastapi/compare/0.128.3...0.128.5">compare
view</a></li>
</ul>
</details>
<br />

Updates `ollama` from 0.5.4 to 0.6.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/ollama/ollama-python/releases">ollama's
releases</a>.</em></p>
<blockquote>
<h2>v0.6.1</h2>
<h2>What's Changed</h2>
<ul>
<li>client/types: add logprobs support by <a
href="https://github.com/ParthSareen"><code>@​ParthSareen</code></a> in
<a
href="https://redirect.github.com/ollama/ollama-python/pull/601">ollama/ollama-python#601</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/ollama/ollama-python/compare/v0.6.0...v0.6.1">https://github.com/ollama/ollama-python/compare/v0.6.0...v0.6.1</a></p>
<h2>v0.6.0</h2>
<h2>What's Changed</h2>
<ul>
<li>
<p>client: add web search and web crawl capabilities by <a
href="https://github.com/ParthSareen"><code>@​ParthSareen</code></a> in
<a
href="https://redirect.github.com/ollama/ollama-python/pull/578">ollama/ollama-python#578</a></p>
</li>
<li>
<p>client: load OLLAMA_API_KEY on init by <a
href="https://github.com/ParthSareen"><code>@​ParthSareen</code></a> in
<a
href="https://redirect.github.com/ollama/ollama-python/pull/583">ollama/ollama-python#583</a></p>
</li>
<li>
<p>client/types: update web search and fetch API by <a
href="https://github.com/npardal"><code>@​npardal</code></a> in <a
href="https://redirect.github.com/ollama/ollama-python/pull/584">ollama/ollama-python#584</a></p>
</li>
<li>
<p>examples: add mcp server for web_search web_crawl by <a
href="https://github.com/ParthSareen"><code>@​ParthSareen</code></a> in
<a
href="https://redirect.github.com/ollama/ollama-python/pull/585">ollama/ollama-python#585</a></p>
</li>
<li>
<p>examples: gpt oss browser tool by <a
href="https://github.com/ParthSareen"><code>@​ParthSareen</code></a> in
<a
href="https://redirect.github.com/ollama/ollama-python/pull/588">ollama/ollama-python#588</a></p>
</li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/npardal"><code>@​npardal</code></a> made
their first contribution in <a
href="https://redirect.github.com/ollama/ollama-python/pull/584">ollama/ollama-python#584</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/ollama/ollama-python/compare/v0.5.4...v0.6.0">https://github.com/ollama/ollama-python/compare/v0.5.4...v0.6.0</a></p>
</blockquote>
</details>
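The 0.6.x notes above add web search to the client. A sketch under the
assumption that `web_search` is the top-level export referenced in #578/#581
and that `OLLAMA_API_KEY` is set (the client loads it on init per #583):

```python
import ollama

# Assumes OLLAMA_API_KEY is exported in the environment (see #583).
results = ollama.web_search("AutoGPT platform")
print(results)  # response shape is defined by the 0.6.x client types
```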
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0008226fda"><code>0008226</code></a>
client/types: add logprobs support (<a
href="https://redirect.github.com/ollama/ollama-python/issues/601">#601</a>)</li>
<li><a
href="9ddd5f0182"><code>9ddd5f0</code></a>
examples: fix model web search (<a
href="https://redirect.github.com/ollama/ollama-python/issues/589">#589</a>)</li>
<li><a
href="d967f048d9"><code>d967f04</code></a>
examples: gpt oss browser tool (<a
href="https://redirect.github.com/ollama/ollama-python/issues/588">#588</a>)</li>
<li><a
href="ab49a669cd"><code>ab49a66</code></a>
examples: add mcp server for web_search web_crawl (<a
href="https://redirect.github.com/ollama/ollama-python/issues/585">#585</a>)</li>
<li><a
href="16f344f635"><code>16f344f</code></a>
client/types: update web search and fetch API (<a
href="https://redirect.github.com/ollama/ollama-python/issues/584">#584</a>)</li>
<li><a
href="d0f71bc8b8"><code>d0f71bc</code></a>
client: load OLLAMA_API_KEY on init (<a
href="https://redirect.github.com/ollama/ollama-python/issues/583">#583</a>)</li>
<li><a
href="b22c5fdabb"><code>b22c5fd</code></a>
init: fix export for web_search (<a
href="https://redirect.github.com/ollama/ollama-python/issues/581">#581</a>)</li>
<li><a
href="4d0b81b37a"><code>4d0b81b</code></a>
client: add web search and web crawl capabilities (<a
href="https://redirect.github.com/ollama/ollama-python/issues/578">#578</a>)</li>
<li>See full diff in <a
href="https://github.com/ollama/ollama-python/compare/v0.5.4...v0.6.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `prometheus-client` from 0.22.1 to 0.24.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/prometheus/client_python/releases">prometheus-client's
releases</a>.</em></p>
<blockquote>
<h2>v0.24.1</h2>
<ul>
<li>[Django] Pass correct registry to MultiProcessCollector by <a
href="https://github.com/jelly"><code>@​jelly</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1152">prometheus/client_python#1152</a></li>
</ul>
<h2>v0.24.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Add an AIOHTTP exporter by <a
href="https://github.com/Lexicality"><code>@​Lexicality</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1139">prometheus/client_python#1139</a></li>
<li>Add remove_matching() method for metric label deletion by <a
href="https://github.com/hazel-shen"><code>@​hazel-shen</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1121">prometheus/client_python#1121</a></li>
<li>fix(multiprocess): avoid double-building child metric names (<a
href="https://redirect.github.com/prometheus/client_python/issues/1035">#1035</a>)
by <a href="https://github.com/hazel-shen"><code>@​hazel-shen</code></a>
in <a
href="https://redirect.github.com/prometheus/client_python/pull/1146">prometheus/client_python#1146</a></li>
<li>Don't interleave histogram metrics in multi-process collector by <a
href="https://github.com/cjwatson"><code>@​cjwatson</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1148">prometheus/client_python#1148</a></li>
<li>Relax registry type annotations for exposition by <a
href="https://github.com/cjwatson"><code>@​cjwatson</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1149">prometheus/client_python#1149</a></li>
<li>Added compression support in pushgateway by <a
href="https://github.com/ritesh-avesha"><code>@​ritesh-avesha</code></a>
in <a
href="https://redirect.github.com/prometheus/client_python/pull/1144">prometheus/client_python#1144</a></li>
<li>Add Django exporter (<a
href="https://redirect.github.com/prometheus/client_python/issues/1088">#1088</a>)
by <a href="https://github.com/Chadys"><code>@​Chadys</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1143">prometheus/client_python#1143</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/prometheus/client_python/compare/v0.23.1...v0.24.0">https://github.com/prometheus/client_python/compare/v0.23.1...v0.24.0</a></p>
<h2>v0.23.1</h2>
<h2>What's Changed</h2>
<ul>
<li>fix: use tuples instead of packaging Version by <a
href="https://github.com/efiop"><code>@​efiop</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1136">prometheus/client_python#1136</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/efiop"><code>@​efiop</code></a> made
their first contribution in <a
href="https://redirect.github.com/prometheus/client_python/pull/1136">prometheus/client_python#1136</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/prometheus/client_python/compare/v0.23.0...v0.23.1">https://github.com/prometheus/client_python/compare/v0.23.0...v0.23.1</a></p>
<h2>v0.23.0</h2>
<h2>What's Changed</h2>
<ul>
<li>UTF-8 Content Negotiation by <a
href="https://github.com/ywwg"><code>@​ywwg</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1102">prometheus/client_python#1102</a></li>
<li>Re include test data by <a
href="https://github.com/mgorny"><code>@​mgorny</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1113">prometheus/client_python#1113</a></li>
<li>Improve parser performance by <a
href="https://github.com/csmarchbanks"><code>@​csmarchbanks</code></a>
in <a
href="https://redirect.github.com/prometheus/client_python/pull/1117">prometheus/client_python#1117</a></li>
<li>Add support to <code>write_to_textfile</code> for custom tmpdir by
<a
href="https://github.com/aadityadhruv"><code>@​aadityadhruv</code></a>
in <a
href="https://redirect.github.com/prometheus/client_python/pull/1115">prometheus/client_python#1115</a></li>
<li>OM text exposition for NH by <a
href="https://github.com/vesari"><code>@​vesari</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1087">prometheus/client_python#1087</a></li>
<li>Fix bug which caused metric publishing to not accept query string
parameters in ASGI app by <a
href="https://github.com/hacksparr0w"><code>@​hacksparr0w</code></a> in
<a
href="https://redirect.github.com/prometheus/client_python/pull/1125">prometheus/client_python#1125</a></li>
<li>Emit native histograms only when OM 2.0.0 is requested by <a
href="https://github.com/vesari"><code>@​vesari</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1128">prometheus/client_python#1128</a></li>
<li>fix: remove space after comma in openmetrics exposition by <a
href="https://github.com/theSuess"><code>@​theSuess</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1132">prometheus/client_python#1132</a></li>
<li>Fix issue parsing double spaces after # HELP/# TYPE by <a
href="https://github.com/csmarchbanks"><code>@​csmarchbanks</code></a>
in <a
href="https://redirect.github.com/prometheus/client_python/pull/1134">prometheus/client_python#1134</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/mgorny"><code>@​mgorny</code></a> made
their first contribution in <a
href="https://redirect.github.com/prometheus/client_python/pull/1113">prometheus/client_python#1113</a></li>
<li><a
href="https://github.com/aadityadhruv"><code>@​aadityadhruv</code></a>
made their first contribution in <a
href="https://redirect.github.com/prometheus/client_python/pull/1115">prometheus/client_python#1115</a></li>
<li><a
href="https://github.com/hacksparr0w"><code>@​hacksparr0w</code></a>
made their first contribution in <a
href="https://redirect.github.com/prometheus/client_python/pull/1125">prometheus/client_python#1125</a></li>
<li><a href="https://github.com/theSuess"><code>@​theSuess</code></a>
made their first contribution in <a
href="https://redirect.github.com/prometheus/client_python/pull/1132">prometheus/client_python#1132</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/prometheus/client_python/compare/v0.22.1...v0.23.0">https://github.com/prometheus/client_python/compare/v0.22.1...v0.23.0</a></p>
</blockquote>
</details>
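For context on the label-deletion change above: the long-standing API removes
one exact label set, and `remove_matching()` (new in 0.24.0, #1121) generalizes
it; the new method's exact signature is documented upstream. A sketch of the
classic pattern:

```python
from prometheus_client import Counter

jobs = Counter("jobs_total", "Jobs processed", ["queue", "status"])
jobs.labels(queue="video", status="ok").inc()
jobs.labels(queue="video", status="error").inc()

# Classic API: delete a single, fully specified label set.
jobs.remove("video", "error")
```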
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f417f6ea8f"><code>f417f6e</code></a>
Release 0.24.1</li>
<li><a
href="6f0e967c1f"><code>6f0e967</code></a>
Pass correct registry to MultiProcessCollector (<a
href="https://redirect.github.com/prometheus/client_python/issues/1152">#1152</a>)</li>
<li><a
href="c5024d310f"><code>c5024d3</code></a>
Release 0.24.0</li>
<li><a
href="e1cdc203b1"><code>e1cdc20</code></a>
Add Django exporter (<a
href="https://redirect.github.com/prometheus/client_python/issues/1088">#1088</a>)
(<a
href="https://redirect.github.com/prometheus/client_python/issues/1143">#1143</a>)</li>
<li><a
href="7b99592094"><code>7b99592</code></a>
Added compression support in pushgateway (<a
href="https://redirect.github.com/prometheus/client_python/issues/1144">#1144</a>)</li>
<li><a
href="13df12421e"><code>13df124</code></a>
Relax registry type annotations for exposition (<a
href="https://redirect.github.com/prometheus/client_python/issues/1149">#1149</a>)</li>
<li><a
href="a264ec0d85"><code>a264ec0</code></a>
Don't interleave histogram metrics in multi-process collector (<a
href="https://redirect.github.com/prometheus/client_python/issues/1148">#1148</a>)</li>
<li><a
href="e8f8bae655"><code>e8f8bae</code></a>
fix(multiprocess): avoid double-building child metric names (<a
href="https://redirect.github.com/prometheus/client_python/issues/1035">#1035</a>)
(<a
href="https://redirect.github.com/prometheus/client_python/issues/1146">#1146</a>)</li>
<li><a
href="1783ca87ac"><code>1783ca8</code></a>
Add support for Python 3.14 (<a
href="https://redirect.github.com/prometheus/client_python/issues/1142">#1142</a>)</li>
<li><a
href="378510b8ae"><code>378510b</code></a>
Add remove_matching() method for metric label deletion (<a
href="https://redirect.github.com/prometheus/client_python/issues/1121">#1121</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/prometheus/client_python/compare/v0.22.1...v0.24.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `python-multipart` from 0.0.20 to 0.0.22
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/Kludex/python-multipart/releases">python-multipart's
releases</a>.</em></p>
<blockquote>
<h2>Version 0.0.22</h2>
<h2>What's Changed</h2>
<ul>
<li>Drop directory path from filename in <code>File</code> <a
href="9433f4bbc9">9433f4b</a>.</li>
</ul>
<hr />
<p><strong>Full Changelog</strong>: <a
href="https://github.com/Kludex/python-multipart/compare/0.0.21...0.0.22">https://github.com/Kludex/python-multipart/compare/0.0.21...0.0.22</a></p>
<h2>Version 0.0.21</h2>
<h2>What's Changed</h2>
<ul>
<li>Add support for Python 3.14 and drop EOL 3.8 and 3.9 by <a
href="https://github.com/hugovk"><code>@​hugovk</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/216">Kludex/python-multipart#216</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/waketzheng"><code>@​waketzheng</code></a> made
their first contribution in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/203">Kludex/python-multipart#203</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/Kludex/python-multipart/compare/0.0.20...0.0.21">https://github.com/Kludex/python-multipart/compare/0.0.20...0.0.21</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md">python-multipart's
changelog</a>.</em></p>
<blockquote>
<h2>0.0.22 (2026-01-25)</h2>
<ul>
<li>Drop directory path from filename in <code>File</code> <a
href="9433f4bbc9">9433f4b</a>.</li>
</ul>
<h2>0.0.21 (2025-12-17)</h2>
<ul>
<li>Add support for Python 3.14 and drop EOL 3.8 and 3.9 <a
href="https://redirect.github.com/Kludex/python-multipart/pull/216">#216</a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="bea7bbb290"><code>bea7bbb</code></a>
Version 0.0.22 (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/222">#222</a>)</li>
<li><a
href="0fb59a9df0"><code>0fb59a9</code></a>
chore: add return type on test (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/221">#221</a>)</li>
<li><a
href="9433f4bbc9"><code>9433f4b</code></a>
Merge commit from fork</li>
<li><a
href="d5c91ecb0a"><code>d5c91ec</code></a>
Bump the github-actions group with 2 updates (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/219">#219</a>)</li>
<li><a
href="5a90631b48"><code>5a90631</code></a>
bump uv (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/218">#218</a>)</li>
<li><a
href="1f72955602"><code>1f72955</code></a>
Version 0.0.21 (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/217">#217</a>)</li>
<li><a
href="47ecfed353"><code>47ecfed</code></a>
Add support for Python 3.14 and drop EOL 3.8 and 3.9 (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/216">#216</a>)</li>
<li><a
href="f18b70941b"><code>f18b709</code></a>
Bump the github-actions group across 1 directory with 4 updates (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/214">#214</a>)</li>
<li><a
href="b388e9a7a8"><code>b388e9a</code></a>
chore: use dependency-groups in <code>pyproject.toml</code> (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/212">#212</a>)</li>
<li><a
href="6113e75097"><code>6113e75</code></a>
Bump the github-actions group across 1 directory with 3 updates (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/210">#210</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/Kludex/python-multipart/compare/0.0.20...0.0.22">compare
view</a></li>
</ul>
</details>
<br />

Updates `supabase` from 2.27.2 to 2.27.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-py/releases">supabase's
releases</a>.</em></p>
<blockquote>
<h2>v2.27.3</h2>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.27.2...v2.27.3">2.27.3</a>
(2026-02-03)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>deprecate python 3.9 in all packages (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1365">#1365</a>)
(<a
href="cc72ed75d4">cc72ed7</a>)</li>
<li>ensure storage_url has trailing slash to prevent warning (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1367">#1367</a>)
(<a
href="4267ff1345">4267ff1</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-py/blob/main/CHANGELOG.md">supabase's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.27.2...v2.27.3">2.27.3</a>
(2026-02-03)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>deprecate python 3.9 in all packages (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1365">#1365</a>)
(<a
href="cc72ed75d4">cc72ed7</a>)</li>
<li>ensure storage_url has trailing slash to prevent warning (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1367">#1367</a>)
(<a
href="4267ff1345">4267ff1</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c357def670"><code>c357def</code></a>
chore(main): release 2.27.3 (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1368">#1368</a>)</li>
<li><a
href="4267ff1345"><code>4267ff1</code></a>
fix: ensure storage_url has trailing slash to prevent warning (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1367">#1367</a>)</li>
<li><a
href="cc72ed75d4"><code>cc72ed7</code></a>
fix: deprecate python 3.9 in all packages (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1365">#1365</a>)</li>
<li><a
href="9d3620da64"><code>9d3620d</code></a>
chore(realtime): move most 'info' level logs into 'debug' (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1358">#1358</a>)</li>
<li><a
href="30f5e84022"><code>30f5e84</code></a>
Upgrade GitHub Actions for Node 24 compatibility (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1357">#1357</a>)</li>
<li><a
href="1df3afcd7c"><code>1df3afc</code></a>
chore(ci): add python package to ci matrix (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1351">#1351</a>)</li>
<li>See full diff in <a
href="https://github.com/supabase/supabase-py/compare/v2.27.2...v2.27.3">compare
view</a></li>
</ul>
</details>
<br />

Updates `tenacity` from 9.1.3 to 9.1.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/jd/tenacity/releases">tenacity's
releases</a>.</em></p>
<blockquote>
<h2>9.1.4</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix <code>retry()</code> annotations with async <code>sleep=</code>
function by <a
href="https://github.com/Zac-HD"><code>@​Zac-HD</code></a> in <a
href="https://redirect.github.com/jd/tenacity/pull/555">jd/tenacity#555</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/jd/tenacity/compare/9.1.3...9.1.4">https://github.com/jd/tenacity/compare/9.1.3...9.1.4</a></p>
</blockquote>
</details>
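The fix above is purely about type annotations for `retry()`'s `sleep=`
parameter with an async sleeper; runtime usage is unchanged. A small sketch:

```python
import asyncio

from tenacity import retry, stop_after_attempt, wait_fixed

# Passing an async sleep function keeps the waits non-blocking; 9.1.4
# fixes the annotations for exactly this combination (see #555).
@retry(sleep=asyncio.sleep, stop=stop_after_attempt(3), wait=wait_fixed(0.2))
async def flaky() -> str:
    return "ok"

print(asyncio.run(flaky()))
```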
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d4e868d6b8"><code>d4e868d</code></a>
Fix <code>retry()</code> annotations with async <code>sleep=</code>
function (<a
href="https://redirect.github.com/jd/tenacity/issues/555">#555</a>)</li>
<li>See full diff in <a
href="https://github.com/jd/tenacity/compare/9.1.3...9.1.4">compare
view</a></li>
</ul>
</details>
<br />

Updates `tiktoken` from 0.9.0 to 0.12.0
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/openai/tiktoken/blob/main/CHANGELOG.md">tiktoken's
changelog</a>.</em></p>
<blockquote>
<h2>[v0.12.0]</h2>
<ul>
<li>Build wheels for Python 3.14</li>
<li>Build musllinux aarch64 wheels</li>
<li>Support for free-threaded Python</li>
<li>Update version of <code>pyo3</code> and <code>rustc-hash</code></li>
<li>Avoid use of <code>blobfile</code> for reading local files</li>
<li>Recognise <code>gpt-5</code> model identifier</li>
<li>Minor performance improvement for file reading</li>
</ul>
<h2>[v0.11.0]</h2>
<ul>
<li>Support for <code>GPT-5</code></li>
<li>Update version of <code>pyo3</code></li>
<li>Use new Rust edition</li>
<li>Fix special token handling in <code>encode_to_numpy</code></li>
<li>Better error handling</li>
<li>Improvements to private APIs</li>
</ul>
<h2>[v0.10.0]</h2>
<ul>
<li>Support for newer models</li>
<li>Improvements to private APIs</li>
</ul>
</blockquote>
</details>
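The `gpt-5` recognition noted above means `encoding_for_model` resolves the
new model name directly (to `o200k_base`, per the commit list below):

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-5")  # recognised as of 0.12.0
tokens = enc.encode("hello world")
print(enc.name, len(tokens), enc.decode(tokens))
```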
<details>
<summary>Commits</summary>
<ul>
<li><a
href="97e49cbadd"><code>97e49cb</code></a>
Release 0.12.0</li>
<li><a
href="948549882b"><code>9485498</code></a>
Partial sync of codebase (<a
href="https://redirect.github.com/openai/tiktoken/issues/451">#451</a>)</li>
<li><a
href="00ff187f59"><code>00ff187</code></a>
Add GPT-5 model support with o200k_base encoding (<a
href="https://redirect.github.com/openai/tiktoken/issues/440">#440</a>)</li>
<li><a
href="5ee89ca1fa"><code>5ee89ca</code></a>
chore: update dependencies (<a
href="https://redirect.github.com/openai/tiktoken/issues/449">#449</a>)</li>
<li><a
href="2ab6d3706d"><code>2ab6d37</code></a>
Support the free-threaded build (<a
href="https://redirect.github.com/openai/tiktoken/issues/443">#443</a>)</li>
<li><a
href="82dc3bbacc"><code>82dc3bb</code></a>
bump PyO3 version (<a
href="https://redirect.github.com/openai/tiktoken/issues/444">#444</a>)</li>
<li><a
href="eedc856364"><code>eedc856</code></a>
Partial sync of codebase</li>
<li><a
href="5818d56626"><code>5818d56</code></a>
Partial sync of codebase</li>
<li><a
href="3591ff175d"><code>3591ff1</code></a>
Sync codebase</li>
<li><a
href="4560a8896f"><code>4560a88</code></a>
Sync codebase (<a
href="https://redirect.github.com/openai/tiktoken/issues/389">#389</a>)</li>
<li>See full diff in <a
href="https://github.com/openai/tiktoken/compare/0.9.0...0.12.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the specified ignore condition for that dependency


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Otto <otto@agpt.co>
2026-02-09 03:28:22 +00:00
Otto
a329831b0b feat(backend): Add ClamAV scanning for local file paths (#11988)
## Context

From PR #11796 review discussion. Files processed by the video blocks
(downloads, uploads, generated videos) should be scanned through ClamAV
for malware detection.

## Problem

`store_media_file()` in `backend/util/file.py` already scans:
- `workspace://` references
- Cloud storage paths  
- Data URIs (`data:...`)
- HTTP/HTTPS URLs

**But local file paths were NOT scanned.** The `else` branch only
verified the file exists.

This gap affected video processing blocks (e.g., `LoopVideoBlock`,
`AddAudioToVideoBlock`) that:
1. Download/receive input media
2. Process it locally (loop, add audio, etc.)
3. Write output to temp directory
4. Call `store_media_file(output_filename, ...)` with a local path →
**skipped virus scanning**

## Solution

Added virus scanning to the local file path branch:
```python
# Virus scan the local file before any further processing
local_content = target_path.read_bytes()
if len(local_content) > MAX_FILE_SIZE_BYTES:
    raise ValueError(...)
await scan_content_safe(local_content, filename=sanitized_file)
```
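
For illustration, a self-contained sketch of this pattern; `scan_content_safe` and the fail-safe behavior are as described above, while the wrapper name, its signature, and the injected scanner callable are assumptions rather than the actual `store_media_file` internals:

```python
# Hedged sketch of the local-path scanning branch; not the actual
# store_media_file code. The `scan` callable stands in for
# scan_content_safe, whose import path is not shown in this PR.
from pathlib import Path
from typing import Awaitable, Callable


async def scan_local_media(
    target_path: Path,
    sanitized_file: str,
    max_bytes: int,
    scan: Callable[..., Awaitable[None]],
) -> None:
    # Read once so the size check and the scan see the same bytes
    local_content = target_path.read_bytes()
    if len(local_content) > max_bytes:
        raise ValueError(f"{sanitized_file} exceeds {max_bytes} bytes")
    # Fail-safe: the scanner raises on detection, so the file is
    # rejected before any further processing (existing behavior)
    await scan(local_content, filename=sanitized_file)
```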

## Changes

- `backend/util/file.py` - Added ~7 lines to scan local files
(consistent with other input types)
- `backend/util/file_test.py` - Added 2 test cases for local file
scanning

## Risk Assessment

- **Low risk:** Single point of change, follows existing pattern
- **Backwards compatible:** No API changes
- **Fail-safe:** If scanning fails, file is rejected (existing behavior)

Closes SECRT-1904

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-02-09 00:24:18 +00:00
dependabot[bot]
98dd1a9480 chore(libs/deps): Bump cryptography from 45.0.6 to 46.0.1 in /autogpt_platform/autogpt_libs (#10968)
Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.6
to 46.0.1.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's
changelog</a>.</em></p>
<blockquote>
<p>46.0.1 - 2025-09-16</p>
<ul>
<li>Fixed an issue where users installing via <code>pip</code> on Python 3.14
development versions would not properly install a dependency.</li>
<li>Fixed an issue building the free-threaded macOS 3.14 wheels.</li>
</ul>
<p>46.0.0 - 2025-09-16</p>
<ul>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Support for Python 3.7 has
been removed.</li>
<li>Support for OpenSSL &lt; 3.0 is deprecated and will be removed in
the next
release.</li>
<li>Support for <code>x86_64</code> macOS (including publishing wheels)
is deprecated
and will be removed in two releases. We will switch to publishing an
<code>arm64</code> only wheel for macOS.</li>
<li>Support for 32-bit Windows (including publishing wheels) is
deprecated
and will be removed in two releases. Users should move to a 64-bit
Python installation.</li>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL
3.5.3.</li>
<li>We now build <code>ppc64le</code> <code>manylinux</code> wheels and
publish them to PyPI.</li>
<li>We now build <code>win_arm64</code> (Windows on Arm) wheels and
publish them to PyPI.</li>
<li>Added support for free-threaded Python 3.14.</li>
<li>Removed the deprecated <code>get_attribute_for_oid</code> method on
:class:<code>~cryptography.x509.CertificateSigningRequest</code>. Users
should use
:meth:<code>~cryptography.x509.Attributes.get_attribute_for_oid</code>
instead.</li>
<li>Removed the deprecated <code>CAST5</code>, <code>SEED</code>,
<code>IDEA</code>, and <code>Blowfish</code>
classes from the cipher module. These are still available in
:doc:<code>/hazmat/decrepit/index</code>.</li>
<li>In X.509, when performing a PSS signature with a SHA-3 hash, it is
now
encoded with the official NIST SHA3 OID.</li>
</ul>
<p>45.0.7 - 2025-09-01</p>
<ul>
<li>Added a function to support an upcoming <code>pyOpenSSL</code> release.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e735cfc275"><code>e735cfc</code></a>
release 46.0.1 (<a
href="https://redirect.github.com/pyca/cryptography/issues/13450">#13450</a>)</li>
<li><a
href="4e457ffba4"><code>4e457ff</code></a>
Explicitly specify python in mac uv build invocation (<a
href="https://redirect.github.com/pyca/cryptography/issues/13447">#13447</a>)</li>
<li><a
href="2726efdb6d"><code>2726efd</code></a>
Depend on CFFI 2.0.0 or newer on Python &gt; 3.8 (<a
href="https://redirect.github.com/pyca/cryptography/issues/13448">#13448</a>)</li>
<li><a
href="62230623d1"><code>6223062</code></a>
release 46.0.0 (<a
href="https://redirect.github.com/pyca/cryptography/issues/13446">#13446</a>)</li>
<li><a
href="563c4915b0"><code>563c491</code></a>
Update comment for pyopenssl-release tag (<a
href="https://redirect.github.com/pyca/cryptography/issues/13445">#13445</a>)</li>
<li><a
href="d2f6f7face"><code>d2f6f7f</code></a>
Bump downstream dependencies in CI (<a
href="https://redirect.github.com/pyca/cryptography/issues/13439">#13439</a>)</li>
<li><a
href="e7ab02bd67"><code>e7ab02b</code></a>
we'll ship this with 3.5.3 why not (<a
href="https://redirect.github.com/pyca/cryptography/issues/13442">#13442</a>)</li>
<li><a
href="0b68a4bffb"><code>0b68a4b</code></a>
Another pair of bump dependencies fix (<a
href="https://redirect.github.com/pyca/cryptography/issues/13444">#13444</a>)</li>
<li><a
href="e076d08ee4"><code>e076d08</code></a>
Attempt to fix commit message for bump downstreams (<a
href="https://redirect.github.com/pyca/cryptography/issues/13440">#13440</a>)</li>
<li><a
href="6835ce899e"><code>6835ce8</code></a>
Put correct version bounds for pyenchant in pins (<a
href="https://redirect.github.com/pyca/cryptography/issues/13441">#13441</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pyca/cryptography/compare/45.0.6...46.0.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=45.0.6&new-version=46.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-08 23:40:15 +00:00
dependabot[bot]
9c7c598c7d chore(deps): bump peter-evans/create-pull-request from 7 to 8 (#11663)
Bumps
[peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request)
from 7 to 8.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/peter-evans/create-pull-request/releases">peter-evans/create-pull-request's
releases</a>.</em></p>
<blockquote>
<h2>Create Pull Request v8.0.0</h2>
<h2>What's new in v8</h2>
<ul>
<li>Requires <a
href="https://github.com/actions/runner/releases/tag/v2.327.1">Actions
Runner v2.327.1</a> or later if you are using a self-hosted runner for
Node 24 support.</li>
</ul>
<h2>What's Changed</h2>
<ul>
<li>chore: Update checkout action version to v6 by <a
href="https://github.com/yonas"><code>@​yonas</code></a> in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4258">peter-evans/create-pull-request#4258</a></li>
<li>Update actions/checkout references to <a
href="https://github.com/v6"><code>@​v6</code></a> in docs by <a
href="https://github.com/Copilot"><code>@​Copilot</code></a> in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4259">peter-evans/create-pull-request#4259</a></li>
<li>feat: v8 by <a
href="https://github.com/peter-evans"><code>@​peter-evans</code></a> in
<a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4260">peter-evans/create-pull-request#4260</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/yonas"><code>@​yonas</code></a> made
their first contribution in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4258">peter-evans/create-pull-request#4258</a></li>
<li><a href="https://github.com/Copilot"><code>@​Copilot</code></a> made
their first contribution in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4259">peter-evans/create-pull-request#4259</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/peter-evans/create-pull-request/compare/v7.0.11...v8.0.0">https://github.com/peter-evans/create-pull-request/compare/v7.0.11...v8.0.0</a></p>
<h2>Create Pull Request v7.0.11</h2>
<h2>What's Changed</h2>
<ul>
<li>fix: restrict remote prune to self-hosted runners by <a
href="https://github.com/peter-evans"><code>@​peter-evans</code></a> in
<a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4250">peter-evans/create-pull-request#4250</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/peter-evans/create-pull-request/compare/v7.0.10...v7.0.11">https://github.com/peter-evans/create-pull-request/compare/v7.0.10...v7.0.11</a></p>
<h2>Create Pull Request v7.0.10</h2>
<p>⚙️ Fixes an issue where updating a pull request failed when targeting
a forked repository with the same owner as its parent.</p>
<h2>What's Changed</h2>
<ul>
<li>build(deps): bump the github-actions group with 2 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4235">peter-evans/create-pull-request#4235</a></li>
<li>build(deps-dev): bump prettier from 3.6.2 to 3.7.3 in the npm group
by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4240">peter-evans/create-pull-request#4240</a></li>
<li>fix: provider list pulls fallback for multi fork same owner by <a
href="https://github.com/peter-evans"><code>@​peter-evans</code></a> in
<a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4245">peter-evans/create-pull-request#4245</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/obnyis"><code>@​obnyis</code></a> made
their first contribution in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4064">peter-evans/create-pull-request#4064</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/peter-evans/create-pull-request/compare/v7.0.9...v7.0.10">https://github.com/peter-evans/create-pull-request/compare/v7.0.9...v7.0.10</a></p>
<h2>Create Pull Request v7.0.9</h2>
<p>⚙️ Fixes an <a
href="https://redirect.github.com/peter-evans/create-pull-request/issues/4228">incompatibility</a>
with the recently released <code>actions/checkout@v6</code>.</p>
<h2>What's Changed</h2>
<ul>
<li>~70 dependency updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a></li>
<li>docs: fix workaround description about <code>ready_for_review</code>
by <a href="https://github.com/ybiquitous"><code>@​ybiquitous</code></a>
in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/3939">peter-evans/create-pull-request#3939</a></li>
<li>Docs: <code>add-paths</code> default behavior by <a
href="https://github.com/joeflack4"><code>@​joeflack4</code></a> in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/3928">peter-evans/create-pull-request#3928</a></li>
<li>docs: update to create-github-app-token v2 by <a
href="https://github.com/Goooler"><code>@​Goooler</code></a> in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4063">peter-evans/create-pull-request#4063</a></li>
<li>Fix compatibility with actions/checkout@v6 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4230">peter-evans/create-pull-request#4230</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/joeflack4"><code>@​joeflack4</code></a>
made their first contribution in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/3928">peter-evans/create-pull-request#3928</a></li>
<li><a href="https://github.com/Goooler"><code>@​Goooler</code></a> made
their first contribution in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4063">peter-evans/create-pull-request#4063</a></li>
<li><a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> made
their first contribution in <a
href="https://redirect.github.com/peter-evans/create-pull-request/pull/4230">peter-evans/create-pull-request#4230</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="98357b18bf"><code>98357b1</code></a>
feat: v8 (<a
href="https://redirect.github.com/peter-evans/create-pull-request/issues/4260">#4260</a>)</li>
<li><a
href="41c0e4b789"><code>41c0e4b</code></a>
Update actions/checkout references to <a
href="https://github.com/v6"><code>@​v6</code></a> in docs (<a
href="https://redirect.github.com/peter-evans/create-pull-request/issues/4259">#4259</a>)</li>
<li><a
href="994332de4c"><code>994332d</code></a>
chore: Update checkout action version to v6 (<a
href="https://redirect.github.com/peter-evans/create-pull-request/issues/4258">#4258</a>)</li>
<li>See full diff in <a
href="https://github.com/peter-evans/create-pull-request/compare/v7...v8">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=peter-evans/create-pull-request&package-manager=github_actions&previous-version=7&new-version=8)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-08 23:06:40 +00:00
Nikhil Bhagat
728c40def5 fix(backend): replace multiprocessing queue with thread safe queue in ExecutionQueue (#11618)

The `ExecutionQueue` class was using `multiprocessing.Manager().Queue()`,
which spawns a subprocess for inter-process communication. However,
analysis showed that `ExecutionQueue` is only accessed from threads
within the same process, not across processes. This caused:
- Unnecessary subprocess spawning per graph execution
- IPC overhead for every queue operation
- Potential resource leaks if Manager processes weren't properly cleaned
up
- Limited scalability when many graphs execute concurrently

### Changes 🏗️


- Replaced `multiprocessing.Manager().Queue()` with `queue.Queue()` in
`ExecutionQueue` class
- Updated imports: removed `from multiprocessing import Manager` and
`from queue import Empty`, added `import queue`
- Updated exception handling from `except Empty:` to `except
queue.Empty:`
- Added comprehensive docstring explaining the bug and fix

**File changed:** `autogpt_platform/backend/backend/data/execution.py`
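
For reference, a minimal sketch of the swap, limited to the four operations named in the test plan below; the real class in `backend/data/execution.py` may carry additional behavior:

```python
# A minimal sketch, not the actual backend/data/execution.py class.
# queue.Queue is already thread-safe for same-process producers and
# consumers, so no Manager subprocess or IPC proxying is needed.
import queue
from typing import Generic, Optional, TypeVar

T = TypeVar("T")


class ExecutionQueue(Generic[T]):
    """Queue for cross-thread (not cross-process) execution handoff."""

    def __init__(self) -> None:
        self._queue: "queue.Queue[T]" = queue.Queue()

    def add(self, item: T) -> T:
        self._queue.put(item)
        return item

    def get(self) -> T:
        # Blocks until an item is available
        return self._queue.get()

    def empty(self) -> bool:
        return self._queue.empty()

    def get_or_none(self) -> Optional[T]:
        # Non-blocking variant: returns None instead of raising queue.Empty
        try:
            return self._queue.get_nowait()
        except queue.Empty:
            return None
```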

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified `ExecutionQueue` uses `queue.Queue` (not
`multiprocessing.Manager().Queue()`)
  - [x] Tested all queue operations: `add()`, `get()`, `empty()`,
`get_or_none()`
  - [x] Verified thread-safety with concurrent producer/consumer threads
(100 items)
  - [x] Verified multi-producer/consumer scenario (3 producers, 2
consumers, 150 items)
  - [x] Confirmed no subprocess spawning when creating multiple queues
  - [x] Code passes Black formatting check

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

> No configuration changes required - this is a code-only fix with no
external API changes.

---------

Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: Zamil Majdy <majdyz@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2026-02-08 16:28:04 +00:00
dependabot[bot]
cd64562e1b chore(libs/deps): bump the production-dependencies group across 1 directory with 8 updates (#11934)
Bumps the production-dependencies group with 8 updates in the
/autogpt_platform/autogpt_libs directory:

| Package | From | To |
| --- | --- | --- |
| [fastapi](https://github.com/fastapi/fastapi) | `0.116.1` | `0.128.0` |
| [google-cloud-logging](https://github.com/googleapis/python-logging) | `3.12.1` | `3.13.0` |
| [launchdarkly-server-sdk](https://github.com/launchdarkly/python-server-sdk) | `9.12.0` | `9.14.1` |
| [pydantic](https://github.com/pydantic/pydantic) | `2.11.7` | `2.12.5` |
| [pydantic-settings](https://github.com/pydantic/pydantic-settings) | `2.10.1` | `2.12.0` |
| [pyjwt](https://github.com/jpadilla/pyjwt) | `2.10.1` | `2.11.0` |
| [supabase](https://github.com/supabase/supabase-py) | `2.16.0` | `2.27.2` |
| [uvicorn](https://github.com/Kludex/uvicorn) | `0.35.0` | `0.40.0` |


Updates `fastapi` from 0.116.1 to 0.128.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/fastapi/fastapi/releases">fastapi's
releases</a>.</em></p>
<blockquote>
<h2>0.128.0</h2>
<h3>Breaking Changes</h3>
<ul>
<li> Drop support for <code>pydantic.v1</code>. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14609">#14609</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Internal</h3>
<ul>
<li> Run performance tests only on Pydantic v2. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14608">#14608</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h2>0.127.1</h2>
<h3>Refactors</h3>
<ul>
<li>🔊 Add a custom <code>FastAPIDeprecationWarning</code>. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14605">#14605</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Docs</h3>
<ul>
<li>📝 Add documentary to website. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14600">#14600</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Translations</h3>
<ul>
<li>🌐 Update translations for de (update-outdated). PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14602">#14602</a>
by <a
href="https://github.com/nilslindemann"><code>@​nilslindemann</code></a>.</li>
<li>🌐 Update translations for de (update-outdated). PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14581">#14581</a>
by <a
href="https://github.com/nilslindemann"><code>@​nilslindemann</code></a>.</li>
</ul>
<h3>Internal</h3>
<ul>
<li>🔧 Update pre-commit to use local Ruff instead of hook. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14604">#14604</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
<li> Add missing tests for code examples. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14569">#14569</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
<li>👷 Remove <code>lint</code> job from <code>test</code> CI workflow.
PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14593">#14593</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
<li>👷 Update secrets check. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14592">#14592</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
<li>👷 Run CodSpeed tests in parallel to other tests to speed up CI. PR
<a
href="https://redirect.github.com/fastapi/fastapi/pull/14586">#14586</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
<li>🔨 Update scripts and pre-commit to autofix files. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14585">#14585</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h2>0.127.0</h2>
<h3>Breaking Changes</h3>
<ul>
<li>🔊 Add deprecation warnings when using <code>pydantic.v1</code>. PR
<a
href="https://redirect.github.com/fastapi/fastapi/pull/14583">#14583</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Translations</h3>
<ul>
<li>🔧 Add LLM prompt file for Korean, generated from the existing
translations. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14546">#14546</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
<li>🔧 Add LLM prompt file for Japanese, generated from the existing
translations. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14545">#14545</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Internal</h3>
<ul>
<li>⬆️ Upgrade OpenAI model for translations to gpt-5.2. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14579">#14579</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h2>0.126.0</h2>
<h3>Upgrades</h3>
<ul>
<li> Drop support for Pydantic v1, keeping short temporary support for
Pydantic v2's <code>pydantic.v1</code>. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14575">#14575</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8322a4445a"><code>8322a44</code></a>
🔖 Release version 0.128.0</li>
<li><a
href="4b2cfcfd34"><code>4b2cfcf</code></a>
📝 Update release notes</li>
<li><a
href="e300630551"><code>e300630</code></a>
 Drop support for <code>pydantic.v1</code> (<a
href="https://redirect.github.com/fastapi/fastapi/issues/14609">#14609</a>)</li>
<li><a
href="1b3bea8b6b"><code>1b3bea8</code></a>
📝 Update release notes</li>
<li><a
href="34e884156f"><code>34e8841</code></a>
 Run performance tests only on Pydantic v2 (<a
href="https://redirect.github.com/fastapi/fastapi/issues/14608">#14608</a>)</li>
<li><a
href="cd90c78391"><code>cd90c78</code></a>
🔖 Release version 0.127.1</li>
<li><a
href="93f4dfd88b"><code>93f4dfd</code></a>
📝 Update release notes</li>
<li><a
href="535b5daa31"><code>535b5da</code></a>
🔊 Add a custom <code>FastAPIDeprecationWarning</code> (<a
href="https://redirect.github.com/fastapi/fastapi/issues/14605">#14605</a>)</li>
<li><a
href="6b53786f62"><code>6b53786</code></a>
📝 Update release notes</li>
<li><a
href="d98f4eb56e"><code>d98f4eb</code></a>
🔧 Update pre-commit to use local Ruff instead of hook (<a
href="https://redirect.github.com/fastapi/fastapi/issues/14604">#14604</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/fastapi/fastapi/compare/0.116.1...0.128.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `google-cloud-logging` from 3.12.1 to 3.13.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/python-logging/releases">google-cloud-logging's
releases</a>.</em></p>
<blockquote>
<h2>google-cloud-logging 3.13.0</h2>
<h2><a
href="https://github.com/googleapis/python-logging/compare/v3.12.1...v3.13.0">3.13.0</a>
(2025-12-15)</h2>
<h3>Features</h3>
<ul>
<li>Add support for python 3.14 (<a
href="https://redirect.github.com/googleapis/python-logging/issues/1065">#1065</a>)
(<a
href="https://github.com/googleapis/python-logging/commit/6be3df6a">6be3df6a</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>remove setup.cfg configuration for creating universal wheels (<a
href="https://redirect.github.com/googleapis/python-logging/issues/981">#981</a>)
(<a
href="https://github.com/googleapis/python-logging/commit/70f612c3">70f612c3</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/python-logging/blob/main/CHANGELOG.md">google-cloud-logging's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/googleapis/python-logging/compare/v3.12.1...v3.13.0">3.13.0</a>
(2025-12-15)</h2>
<h3>Features</h3>
<ul>
<li>Add support for python 3.14 (<a
href="https://redirect.github.com/googleapis/python-logging/issues/1065">#1065</a>)
(<a
href="6be3df6aa9">6be3df6aa94539cd2ab22a4fac55b343862228b2</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>remove setup.cfg configuration for creating universal wheels (<a
href="https://redirect.github.com/googleapis/python-logging/issues/981">#981</a>)
(<a
href="70f612c328">70f612c3281f1df13f3aba6b19bc4e9397297f3d</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1415883be0"><code>1415883</code></a>
chore: librarian release pull request: 20251215T134006Z (<a
href="https://redirect.github.com/googleapis/python-logging/issues/1066">#1066</a>)</li>
<li><a
href="6be3df6aa9"><code>6be3df6</code></a>
feat: Add support for python 3.14 (<a
href="https://redirect.github.com/googleapis/python-logging/issues/1065">#1065</a>)</li>
<li><a
href="36fb4270b3"><code>36fb427</code></a>
chore(librarian): onboard to librarian (<a
href="https://redirect.github.com/googleapis/python-logging/issues/1061">#1061</a>)</li>
<li><a
href="eb189bf712"><code>eb189bf</code></a>
chore: update Python generator version to 1.25.1 (<a
href="https://redirect.github.com/googleapis/python-logging/issues/1003">#1003</a>)</li>
<li><a
href="a7a28d1b93"><code>a7a28d1</code></a>
test: ignore DeprecationWarning for <code>credentials_file</code>
argument and Python ve...</li>
<li><a
href="70f612c328"><code>70f612c</code></a>
fix: remove setup.cfg configuration for creating universal wheels (<a
href="https://redirect.github.com/googleapis/python-logging/issues/981">#981</a>)</li>
<li><a
href="e4c445a856"><code>e4c445a</code></a>
chore: Update gapic-generator-python to 1.25.0 (<a
href="https://redirect.github.com/googleapis/python-logging/issues/985">#985</a>)</li>
<li><a
href="14364a534a"><code>14364a5</code></a>
test: Added cleanup of old sink storage buckets (<a
href="https://redirect.github.com/googleapis/python-logging/issues/991">#991</a>)</li>
<li>See full diff in <a
href="https://github.com/googleapis/python-logging/compare/v3.12.1...v3.13.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `launchdarkly-server-sdk` from 9.12.0 to 9.14.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/launchdarkly/python-server-sdk/releases">launchdarkly-server-sdk's
releases</a>.</em></p>
<blockquote>
<h2>v9.14.1</h2>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.14.0...9.14.1">9.14.1</a>
(2025-12-15)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Remove all synchronizers in daemon mode (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/388">#388</a>)
(<a
href="441a5ecb3d">441a5ec</a>)</li>
</ul>
<hr />
<p>This PR was generated with <a
href="https://github.com/googleapis/release-please">Release Please</a>.
See <a
href="https://github.com/googleapis/release-please#release-please">documentation</a>.</p>
<!-- raw HTML omitted -->
<h2>v9.14.0</h2>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.13.1...9.14.0">9.14.0</a>
(2025-12-04)</h2>
<h3>Features</h3>
<ul>
<li>adding data system option to create file datasource intializer (<a
href="e5b121f92a">e5b121f</a>)</li>
<li>adding file data source as an intializer (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/381">#381</a>)
(<a
href="3700d1ddd9">3700d1d</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>Add warning if relying on Redis <code>max_connections</code>
parameter (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/387">#387</a>)
(<a
href="e6395fa531">e6395fa</a>),
closes <a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/386">#386</a></li>
<li>modified initializer behavior to spec (<a
href="064f65c761">064f65c</a>)</li>
</ul>
<hr />
<p>This PR was generated with <a
href="https://github.com/googleapis/release-please">Release Please</a>.
See <a
href="https://github.com/googleapis/release-please#release-please">documentation</a>.</p>
<!-- raw HTML omitted -->
<h2>v9.13.1</h2>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.13.0...9.13.1">9.13.1</a>
(2025-11-19)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Include ldclient.datasystem in docs (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/379">#379</a>)
(<a
href="318c6fea07">318c6fe</a>)</li>
</ul>
<hr />
<p>This PR was generated with <a
href="https://github.com/googleapis/release-please">Release Please</a>.
See <a
href="https://github.com/googleapis/release-please#release-please">documentation</a>.</p>
<!-- raw HTML omitted -->
<h2>v9.13.0</h2>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.12.3...9.13.0">9.13.0</a>
(2025-11-19)</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/launchdarkly/python-server-sdk/blob/main/CHANGELOG.md">launchdarkly-server-sdk's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.14.0...9.14.1">9.14.1</a>
(2025-12-15)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Remove all synchronizers in daemon mode (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/388">#388</a>)
(<a
href="441a5ecb3d">441a5ec</a>)</li>
</ul>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.13.1...9.14.0">9.14.0</a>
(2025-12-04)</h2>
<h3>Features</h3>
<ul>
<li>adding data system option to create file datasource intializer (<a
href="e5b121f92a">e5b121f</a>)</li>
<li>adding file data source as an intializer (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/381">#381</a>)
(<a
href="3700d1ddd9">3700d1d</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>Add warning if relying on Redis <code>max_connections</code>
parameter (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/387">#387</a>)
(<a
href="e6395fa531">e6395fa</a>),
closes <a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/386">#386</a></li>
<li>modified initializer behavior to spec (<a
href="064f65c761">064f65c</a>)</li>
</ul>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.13.0...9.13.1">9.13.1</a>
(2025-11-19)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Include ldclient.datasystem in docs (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/379">#379</a>)
(<a
href="318c6fea07">318c6fe</a>)</li>
</ul>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.12.3...9.13.0">9.13.0</a>
(2025-11-19)</h2>
<h3>Features</h3>
<ul>
<li><strong>experimental:</strong> Release EAP support for FDv2 data
system (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/376">#376</a>)
(<a
href="0e7c32b4df">0e7c32b</a>)</li>
</ul>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.12.2...9.12.3">9.12.3</a>
(2025-10-30)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Fix overly generic type hint on File data source (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/365">#365</a>)
(<a
href="52a7499f7c">52a7499</a>),
closes <a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/364">#364</a></li>
</ul>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.12.1...9.12.2">9.12.2</a>
(2025-10-27)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Fix incorrect event count in failure message (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/359">#359</a>)
(<a
href="91f416329b">91f4163</a>)</li>
</ul>
<h2><a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.12.0...9.12.1">9.12.1</a>
(2025-09-30)</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="54e62cc706"><code>54e62cc</code></a>
chore(main): release 9.14.1 (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/389">#389</a>)</li>
<li><a
href="441a5ecb3d"><code>441a5ec</code></a>
fix: Remove all synchronizers in daemon mode (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/388">#388</a>)</li>
<li><a
href="7bb537827f"><code>7bb5378</code></a>
chore(main): release 9.14.0 (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/382">#382</a>)</li>
<li><a
href="e6395fa531"><code>e6395fa</code></a>
fix: Add warning if relying on Redis <code>max_connections</code>
parameter (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/387">#387</a>)</li>
<li><a
href="45786a9a7e"><code>45786a9</code></a>
chore: Expose flag change listeners from data system (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/384">#384</a>)</li>
<li><a
href="2b7eedc836"><code>2b7eedc</code></a>
chore: Clean up unused _data_availability (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/383">#383</a>)</li>
<li><a
href="3700d1ddd9"><code>3700d1d</code></a>
feat: adding file data source as an intializer (<a
href="https://redirect.github.com/launchdarkly/python-server-sdk/issues/381">#381</a>)</li>
<li><a
href="04a2c538e5"><code>04a2c53</code></a>
chore: PR comments</li>
<li><a
href="064f65c761"><code>064f65c</code></a>
fix: modified initializer behavior to spec</li>
<li><a
href="e5b121f92a"><code>e5b121f</code></a>
feat: adding data system option to create file datasource
intializer</li>
<li>Additional commits viewable in <a
href="https://github.com/launchdarkly/python-server-sdk/compare/9.12.0...9.14.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `pydantic` from 2.11.7 to 2.12.5
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pydantic/pydantic/releases">pydantic's
releases</a>.</em></p>
<blockquote>
<h2>v2.12.5 2025-11-26</h2>
<h2>v2.12.5 (2025-11-26)</h2>
<p>This is the fifth 2.12 patch release, addressing an issue with the
<code>MISSING</code> sentinel and providing several documentation
improvements.</p>
<p>The next 2.13 minor release will be published in a couple weeks, and
will include a new <em>polymorphic serialization</em> feature addressing
the remaining unexpected changes to the <em>serialize as any</em>
behavior.</p>
<ul>
<li>Fix pickle error when using <code>model_construct()</code> on a
model with <code>MISSING</code> as a default value by <a
href="https://github.com/ornariece"><code>@​ornariece</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12522">#12522</a>.</li>
<li>Several updates to the documentation by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a>.</li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/pydantic/pydantic/compare/v2.12.4...v2.12.5">https://github.com/pydantic/pydantic/compare/v2.12.4...v2.12.5</a></p>
<h2>v2.12.4 2025-11-05</h2>
<h2>v2.12.4 (2025-11-05)</h2>
<p>This is the fourth 2.12 patch release, fixing more regressions, and
reverting a change in the <code>build()</code> method
of the <a
href="https://docs.pydantic.dev/latest/api/networks/"><code>AnyUrl</code>
and Dsn types</a>.</p>
<p>This patch release also fixes an issue with the serialization of IP
address types, when <code>serialize_as_any</code> is used. The next
patch release
will try to address the remaining issues with <em>serialize as any</em>
behavior by introducing a new <em>polymorphic serialization</em>
feature, that
should be used in most cases in place of <em>serialize as any</em>.</p>
<ul>
<li>
<p>Fix issue with forward references in parent <code>TypedDict</code>
classes by <a href="https://github.com/Viicos"><code>@​Viicos</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12427">#12427</a>.</p>
<p>This issue is only relevant on Python 3.14 and greater.</p>
</li>
<li>
<p>Exclude fields with <code>exclude_if</code> from JSON Schema required
fields by <a href="https://github.com/Viicos"><code>@​Viicos</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12430">#12430</a></p>
</li>
<li>
<p>Revert URL percent-encoding of credentials in the
<code>build()</code> method of the <a
href="https://docs.pydantic.dev/latest/api/networks/"><code>AnyUrl</code>
and Dsn types</a> by <a
href="https://github.com/davidhewitt"><code>@​davidhewitt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic-core/pull/1833">pydantic-core#1833</a>.</p>
<p>This was initially considered as a bugfix, but caused regressions and
as such was fully reverted. The next release will include
an opt-in option to percent-encode components of the URL.</p>
</li>
<li>
<p>Add type inference for IP address types by <a
href="https://github.com/davidhewitt"><code>@​davidhewitt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic-core/pull/1868">pydantic-core#1868</a>.</p>
<p>The 2.12 changes to the <code>serialize_as_any</code> behavior made
it so that IP address types could not properly serialize to JSON.</p>
</li>
<li>
<p>Avoid getting default values from defaultdict by <a
href="https://github.com/davidhewitt"><code>@​davidhewitt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic-core/pull/1853">pydantic-core#1853</a>.</p>
<p>This fixes a subtle regression in the validation behavior of the <a
href="https://docs.python.org/3/library/collections.html#collections.defaultdict"><code>collections.defaultdict</code></a>
type.</p>
</li>
<li>
<p>Fix issue with field serializers on nested typed dictionaries by <a
href="https://github.com/davidhewitt"><code>@​davidhewitt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic-core/pull/1879">pydantic-core#1879</a>.</p>
</li>
<li>
<p>Add more <code>pydantic-core</code> builds for the free-threaded
version of Python 3.14 by <a
href="https://github.com/davidhewitt"><code>@​davidhewitt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic-core/pull/1864">pydantic-core#1864</a>.</p>
</li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/pydantic/pydantic/compare/v2.12.3...v2.12.4">https://github.com/pydantic/pydantic/compare/v2.12.3...v2.12.4</a></p>
<h2>v2.12.3 2025-10-17</h2>
<h2>v2.12.3 (2025-10-17)</h2>
<h3>What's Changed</h3>
<p>This is the third 2.12 patch release, fixing issues related to the
<code>FieldInfo</code> class, and reverting a change to the supported <a
href="https://docs.pydantic.dev/latest/concepts/validators/#model-validators"><em>after</em>
model validator</a> function signatures.</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pydantic/pydantic/blob/main/HISTORY.md">pydantic's
changelog</a>.</em></p>
<blockquote>
<h2>v2.12.5 (2025-11-26)</h2>
<p><a
href="https://github.com/pydantic/pydantic/releases/tag/v2.12.5">GitHub
release</a></p>
<p>This is the fifth 2.12 patch release, addressing an issue with the
<code>MISSING</code> sentinel and providing several documentation
improvements.</p>
<p>The next 2.13 minor release will be published in a couple weeks, and
will include a new <em>polymorphic serialization</em> feature addressing
the remaining unexpected changes to the <em>serialize as any</em>
behavior.</p>
<ul>
<li>Fix pickle error when using <code>model_construct()</code> on a
model with <code>MISSING</code> as a default value by <a
href="https://github.com/ornariece"><code>@​ornariece</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12522">#12522</a>.</li>
<li>Several updates to the documentation by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a>.</li>
</ul>
<h2>v2.12.4 (2025-11-05)</h2>
<p><a
href="https://github.com/pydantic/pydantic/releases/tag/v2.12.4">GitHub
release</a></p>
<p>This is the fourth 2.12 patch release, fixing more regressions, and
reverting a change in the <code>build()</code> method
of the <a
href="https://docs.pydantic.dev/latest/api/networks/"><code>AnyUrl</code>
and Dsn types</a>.</p>
<p>This patch release also fixes an issue with the serialization of IP
address types, when <code>serialize_as_any</code> is used. The next
patch release
will try to address the remaining issues with <em>serialize as any</em>
behavior by introducing a new <em>polymorphic serialization</em>
feature, that
should be used in most cases in place of <em>serialize as any</em>.</p>
<ul>
<li>
<p>Fix issue with forward references in parent <code>TypedDict</code>
classes by <a href="https://github.com/Viicos"><code>@​Viicos</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12427">#12427</a>.</p>
<p>This issue is only relevant on Python 3.14 and greater.</p>
</li>
<li>
<p>Exclude fields with <code>exclude_if</code> from JSON Schema required
fields by <a href="https://github.com/Viicos"><code>@​Viicos</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic/pull/12430">#12430</a></p>
</li>
<li>
<p>Revert URL percent-encoding of credentials in the
<code>build()</code> method
of the <a
href="https://docs.pydantic.dev/latest/api/networks/"><code>AnyUrl</code>
and Dsn types</a> by <a
href="https://github.com/davidhewitt"><code>@​davidhewitt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic-core/pull/1833">pydantic-core#1833</a>.</p>
<p>This was initially considered as a bugfix, but caused regressions and
as such was fully reverted. The next release will include
an opt-in option to percent-encode components of the URL.</p>
</li>
<li>
<p>Add type inference for IP address types by <a
href="https://github.com/davidhewitt"><code>@​davidhewitt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic-core/pull/1868">pydantic-core#1868</a>.</p>
<p>The 2.12 changes to the <code>serialize_as_any</code> behavior made
it so that IP address types could not properly serialize to JSON.</p>
</li>
<li>
<p>Avoid getting default values from defaultdict by <a
href="https://github.com/davidhewitt"><code>@​davidhewitt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic-core/pull/1853">pydantic-core#1853</a>.</p>
<p>This fixes a subtle regression in the validation behavior of the <a
href="https://docs.python.org/3/library/collections.html#collections.defaultdict"><code>collections.defaultdict</code></a>
type.</p>
</li>
<li>
<p>Fix issue with field serializers on nested typed dictionaries by <a
href="https://github.com/davidhewitt"><code>@​davidhewitt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic-core/pull/1879">pydantic-core#1879</a>.</p>
</li>
<li>
<p>Add more <code>pydantic-core</code> builds for the free-threaded
version of Python 3.14 by <a
href="https://github.com/davidhewitt"><code>@​davidhewitt</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic-core/pull/1864">pydantic-core#1864</a>.</p>
</li>
</ul>
<h2>v2.12.3 (2025-10-17)</h2>
<p><a
href="https://github.com/pydantic/pydantic/releases/tag/v2.12.3">GitHub
release</a></p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="bd2d0dd013"><code>bd2d0dd</code></a>
Prepare release v2.12.5</li>
<li><a
href="7d0302ec7e"><code>7d0302e</code></a>
Document security implications when using
<code>create_model()</code></li>
<li><a
href="e9ef980def"><code>e9ef980</code></a>
Fix typo in Standard Library Types documentation</li>
<li><a
href="f2c20c00c2"><code>f2c20c0</code></a>
Add <code>pydantic-docs</code> dev dependency, make use of versioning
blocks</li>
<li><a
href="a76c1aa26f"><code>a76c1aa</code></a>
Update documentation about JSON Schema</li>
<li><a
href="8cbc72ca48"><code>8cbc72c</code></a>
Add documentation about custom <code>__init__()</code></li>
<li><a
href="99eba59906"><code>99eba59</code></a>
Add additional test for <code>FieldInfo.get_default()</code></li>
<li><a
href="c71076988e"><code>c710769</code></a>
Special case <code>MISSING</code> sentinel in
<code>smart_deepcopy()</code></li>
<li><a
href="20a9d771c2"><code>20a9d77</code></a>
Do not delete mock validator/serializer in
<code>rebuild_dataclass()</code></li>
<li><a
href="c86515a3a8"><code>c86515a</code></a>
Update parts of the model and <code>revalidate_instances</code>
documentation</li>
<li>Additional commits viewable in <a
href="https://github.com/pydantic/pydantic/compare/v2.11.7...v2.12.5">compare
view</a></li>
</ul>
</details>
<br />

Updates `pydantic-settings` from 2.10.1 to 2.12.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pydantic/pydantic-settings/releases">pydantic-settings's
releases</a>.</em></p>
<blockquote>
<h2>v2.12.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Support for enum kebab case. by <a
href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/686">pydantic/pydantic-settings#686</a></li>
<li>Apply source order: init &gt; env &gt; dotenv &gt; secrets &gt;
defaults and pres… by <a
href="https://github.com/chbndrhnns"><code>@​chbndrhnns</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/688">pydantic/pydantic-settings#688</a></li>
<li>Add NestedSecretsSettings source by <a
href="https://github.com/makukha"><code>@​makukha</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/690">pydantic/pydantic-settings#690</a></li>
<li>Strip non-explicit default values. by <a
href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/692">pydantic/pydantic-settings#692</a></li>
<li>Coerce env vars if strict is True. by <a
href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/693">pydantic/pydantic-settings#693</a></li>
<li>Restore init kwarg names before returning final state dictionary. by
<a href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/700">pydantic/pydantic-settings#700</a></li>
<li>Drop Python3.9 support by <a
href="https://github.com/hramezani"><code>@​hramezani</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/699">pydantic/pydantic-settings#699</a></li>
<li>Adapt test_protected_namespace_defaults for dev. Pydantic by <a
href="https://github.com/musicinmybrain"><code>@​musicinmybrain</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/637">pydantic/pydantic-settings#637</a></li>
<li>Add Python 3.14 by <a
href="https://github.com/hramezani"><code>@​hramezani</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/704">pydantic/pydantic-settings#704</a></li>
<li>Prepare release 2.12 by <a
href="https://github.com/hramezani"><code>@​hramezani</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/705">pydantic/pydantic-settings#705</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/chbndrhnns"><code>@​chbndrhnns</code></a> made
their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/688">pydantic/pydantic-settings#688</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/pydantic/pydantic-settings/compare/v2.11.0...v2.12.0">https://github.com/pydantic/pydantic-settings/compare/v2.11.0...v2.12.0</a></p>
<h2>v2.11.0</h2>
<h2>What's Changed</h2>
<ul>
<li>CLI Serialize Support by <a
href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/643">pydantic/pydantic-settings#643</a></li>
<li>Inspect type aliases to determine if an annotation is complex by <a
href="https://github.com/tselepakis"><code>@​tselepakis</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/644">pydantic/pydantic-settings#644</a></li>
<li>Revert &quot;fix: Respect 'cli_parse_args' from model_config with
settings_customise_sources (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/611">#611</a>)&quot;
by <a href="https://github.com/hramezani"><code>@​hramezani</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/655">pydantic/pydantic-settings#655</a></li>
<li>Remove parsing of command line arguments from
<code>CliSettingsSource.__init__</code>. by <a
href="https://github.com/trygve-baerland"><code>@​trygve-baerland</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/656">pydantic/pydantic-settings#656</a></li>
<li>turn off allow_abbrev on subparsers by <a
href="https://github.com/mroch"><code>@​mroch</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/658">pydantic/pydantic-settings#658</a></li>
<li>CLI Serialization Fixes by <a
href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/649">pydantic/pydantic-settings#649</a></li>
<li>Fix PydanticModel type checking. by <a
href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/659">pydantic/pydantic-settings#659</a></li>
<li>Avoid env_prefix falling back to env vars without prefix by <a
href="https://github.com/tselepakis"><code>@​tselepakis</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/648">pydantic/pydantic-settings#648</a></li>
<li>Warn if model_config sets unused keys for missing settings sources
by <a href="https://github.com/HomerusJa"><code>@​HomerusJa</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/663">pydantic/pydantic-settings#663</a></li>
<li>Included endpoint_url kwarg in AWSSecretsManagerSettingsSource class
by <a href="https://github.com/adrianohrl"><code>@​adrianohrl</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/664">pydantic/pydantic-settings#664</a></li>
<li>Fix typo (&quot;Accesing&quot;) in the &quot;Adding sources&quot;
docs by <a
href="https://github.com/deepyaman"><code>@​deepyaman</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/668">pydantic/pydantic-settings#668</a></li>
<li>CLI Windows Path Fix by <a
href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/669">pydantic/pydantic-settings#669</a></li>
<li>Cli root model support by <a
href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/677">pydantic/pydantic-settings#677</a></li>
<li>Snake case conversion in Azure Key Vault by <a
href="https://github.com/AndreuCodina"><code>@​AndreuCodina</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/680">pydantic/pydantic-settings#680</a></li>
<li>Make <code>InitSettingsSource</code> resolution deterministic by <a
href="https://github.com/enrico-stauss"><code>@​enrico-stauss</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/681">pydantic/pydantic-settings#681</a></li>
<li>Update deps by <a
href="https://github.com/hramezani"><code>@​hramezani</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/683">pydantic/pydantic-settings#683</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/tselepakis"><code>@​tselepakis</code></a> made
their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/644">pydantic/pydantic-settings#644</a></li>
<li><a
href="https://github.com/trygve-baerland"><code>@​trygve-baerland</code></a>
made their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/656">pydantic/pydantic-settings#656</a></li>
<li><a href="https://github.com/mroch"><code>@​mroch</code></a> made
their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/658">pydantic/pydantic-settings#658</a></li>
<li><a href="https://github.com/HomerusJa"><code>@​HomerusJa</code></a>
made their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/663">pydantic/pydantic-settings#663</a></li>
<li><a
href="https://github.com/adrianohrl"><code>@​adrianohrl</code></a> made
their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/664">pydantic/pydantic-settings#664</a></li>
<li><a href="https://github.com/deepyaman"><code>@​deepyaman</code></a>
made their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/668">pydantic/pydantic-settings#668</a></li>
<li><a
href="https://github.com/enrico-stauss"><code>@​enrico-stauss</code></a>
made their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/681">pydantic/pydantic-settings#681</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/pydantic/pydantic-settings/compare/2.10.1...v2.11.0">https://github.com/pydantic/pydantic-settings/compare/2.10.1...v2.11.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="584983d253"><code>584983d</code></a>
Prepare release 2.12 (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/705">#705</a>)</li>
<li><a
href="6b4d87e776"><code>6b4d87e</code></a>
Add Python 3.14 (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/704">#704</a>)</li>
<li><a
href="02de5b622b"><code>02de5b6</code></a>
Adapt test_protected_namespace_defaults for dev. Pydantic (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/637">#637</a>)</li>
<li><a
href="4239ea460a"><code>4239ea4</code></a>
Drop Python3.9 support (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/699">#699</a>)</li>
<li><a
href="5008c694f6"><code>5008c69</code></a>
Restore init kwarg names before returning final state dictionary. (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/700">#700</a>)</li>
<li><a
href="4433101fef"><code>4433101</code></a>
Coerce env vars if strict is True. (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/693">#693</a>)</li>
<li><a
href="4d2ebfd543"><code>4d2ebfd</code></a>
Strip non-explicit default values. (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/692">#692</a>)</li>
<li><a
href="4a6ffcaeae"><code>4a6ffca</code></a>
Add NestedSecretsSettings source (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/690">#690</a>)</li>
<li><a
href="7a6e96ebfc"><code>7a6e96e</code></a>
Apply source order: init &gt; env &gt; dotenv &gt; secrets &gt; defaults
and pres… (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/688">#688</a>)</li>
<li><a
href="68563eddc0"><code>68563ed</code></a>
Support for enum kebab case. (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/686">#686</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pydantic/pydantic-settings/compare/2.10.1...v2.12.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pyjwt` from 2.10.1 to 2.11.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/jpadilla/pyjwt/releases">pyjwt's
releases</a>.</em></p>
<blockquote>
<h2>2.11.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Fixed type error in comment by <a
href="https://github.com/shuhaib-aot"><code>@​shuhaib-aot</code></a> in
<a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1026">jpadilla/pyjwt#1026</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1018">jpadilla/pyjwt#1018</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1033">jpadilla/pyjwt#1033</a></li>
<li>Make note of use of leeway with nbf by <a
href="https://github.com/djw8605"><code>@​djw8605</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1034">jpadilla/pyjwt#1034</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1035">jpadilla/pyjwt#1035</a></li>
<li>Fixes <a
href="https://redirect.github.com/jpadilla/pyjwt/issues/964">#964</a>:
Validate key against allowed types for Algorithm family by <a
href="https://github.com/pachewise"><code>@​pachewise</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/985">jpadilla/pyjwt#985</a></li>
<li>Feat <a
href="https://redirect.github.com/jpadilla/pyjwt/issues/1024">#1024</a>:
Add iterator for PyJWKSet by <a
href="https://github.com/pachewise"><code>@​pachewise</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1041">jpadilla/pyjwt#1041</a></li>
<li>Fixes <a
href="https://redirect.github.com/jpadilla/pyjwt/issues/1039">#1039</a>:
Add iss, issuer type checks by <a
href="https://github.com/pachewise"><code>@​pachewise</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1040">jpadilla/pyjwt#1040</a></li>
<li>Fixes <a
href="https://redirect.github.com/jpadilla/pyjwt/issues/660">#660</a>:
Improve typing/logic for <code>options</code> in decode,
decode_complete; Improve docs by <a
href="https://github.com/pachewise"><code>@​pachewise</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1045">jpadilla/pyjwt#1045</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1042">jpadilla/pyjwt#1042</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1052">jpadilla/pyjwt#1052</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1053">jpadilla/pyjwt#1053</a></li>
<li>Fix <a
href="https://redirect.github.com/jpadilla/pyjwt/issues/1022">#1022</a>:
Map <code>algorithm=None</code> to &quot;none&quot; by <a
href="https://github.com/qqii"><code>@​qqii</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1056">jpadilla/pyjwt#1056</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1055">jpadilla/pyjwt#1055</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1058">jpadilla/pyjwt#1058</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1060">jpadilla/pyjwt#1060</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1061">jpadilla/pyjwt#1061</a></li>
<li>Fixes <a
href="https://redirect.github.com/jpadilla/pyjwt/issues/1047">#1047</a>:
Correct <code>PyJWKClient.get_signing_key_from_jwt</code> annotation by
<a href="https://github.com/khvn26"><code>@​khvn26</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1048">jpadilla/pyjwt#1048</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1062">jpadilla/pyjwt#1062</a></li>
<li>Fixed doc string typo in _validate_jti() function <a
href="https://redirect.github.com/jpadilla/pyjwt/issues/1063">#1063</a>
by <a
href="https://github.com/kuldeepkhatke"><code>@​kuldeepkhatke</code></a>
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1064">jpadilla/pyjwt#1064</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1065">jpadilla/pyjwt#1065</a></li>
<li>Update SECURITY.md by <a
href="https://github.com/auvipy"><code>@​auvipy</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1057">jpadilla/pyjwt#1057</a></li>
<li>Typing fix: use <code>float</code> instead of <code>int</code> for
<code>lifespan</code> and <code>timeout</code> by <a
href="https://github.com/nikitagashkov"><code>@​nikitagashkov</code></a>
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1068">jpadilla/pyjwt#1068</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1067">jpadilla/pyjwt#1067</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1071">jpadilla/pyjwt#1071</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1076">jpadilla/pyjwt#1076</a></li>
<li>Fix TYP header documentation by <a
href="https://github.com/fobiasmog"><code>@​fobiasmog</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1046">jpadilla/pyjwt#1046</a></li>
<li>doc: Document claims sub and jti by <a
href="https://github.com/cleder"><code>@​cleder</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1088">jpadilla/pyjwt#1088</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1077">jpadilla/pyjwt#1077</a></li>
<li>Bump actions/setup-python from 5 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1089">jpadilla/pyjwt#1089</a></li>
<li>Bump actions/stale from 8 to 10 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1090">jpadilla/pyjwt#1090</a></li>
<li>Bump actions/checkout from 4 to 5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1083">jpadilla/pyjwt#1083</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1091">jpadilla/pyjwt#1091</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1093">jpadilla/pyjwt#1093</a></li>
<li>[pre-commit.ci] pre-commit autoupdate by <a
href="https://github.com/pre-commit-ci"><code>@​pre-commit-ci</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1096">jpadilla/pyjwt#1096</a></li>
<li>Resolve package build warnings by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1105">jpadilla/pyjwt#1105</a></li>
<li>Support Python 3.14, and test against PyPy 3.10+ by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1104">jpadilla/pyjwt#1104</a></li>
<li>Fix a <code>SyntaxWarning</code> caused by invalid escape sequences
by <a href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a>
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1103">jpadilla/pyjwt#1103</a></li>
<li>Standardize CHANGELOG links to PRs by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1110">jpadilla/pyjwt#1110</a></li>
<li>Migrate from <code>pep517</code>, which is deprecated, to
<code>build</code> by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1108">jpadilla/pyjwt#1108</a></li>
<li>Fix incorrectly-named test suite function by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1116">jpadilla/pyjwt#1116</a></li>
<li>Fix Read the Docs builds by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1111">jpadilla/pyjwt#1111</a></li>
<li>Bump actions/download-artifact from 4 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1118">jpadilla/pyjwt#1118</a></li>
<li>Escalate test suite warnings to errors by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1107">jpadilla/pyjwt#1107</a></li>
<li>Add pyupgrade as a pre-commit hook by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1109">jpadilla/pyjwt#1109</a></li>
<li>Simplify the test suite decorators by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1113">jpadilla/pyjwt#1113</a></li>
<li>Improve coverage config and eliminate unused test suite code by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1115">jpadilla/pyjwt#1115</a></li>
<li>Build a shared wheel once in the test suite by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in <a
href="https://redirect.github.com/jpadilla/pyjwt/pull/1114">jpadilla/pyjwt#1114</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/jpadilla/pyjwt/blob/master/CHANGELOG.rst">pyjwt's
changelog</a>.</em></p>
<blockquote>
<h2><a href="https://github.com/jpadilla/pyjwt/compare/2.10.1...2.11.0">v2.11.0</a></h2>
<p>Fixed</p>
<pre><code>
- Enforce ECDSA curve validation per RFC 7518 Section 3.4.
- Fix build system warnings by @kurtmckee in
`[#1105](https://github.com/jpadilla/pyjwt/issues/1105)
&lt;https://github.com/jpadilla/pyjwt/pull/1105&gt;`__
- Validate key against allowed types for Algorithm family in
`[#964](https://github.com/jpadilla/pyjwt/issues/964)
&lt;https://github.com/jpadilla/pyjwt/pull/964&gt;`__
- Add iterator for JWKSet in
`[#1041](https://github.com/jpadilla/pyjwt/issues/1041)
&lt;https://github.com/jpadilla/pyjwt/pull/1041&gt;`__
- Validate `iss` claim is a string during encoding and decoding by
@pachewise in `[#1040](https://github.com/jpadilla/pyjwt/issues/1040)
&lt;https://github.com/jpadilla/pyjwt/pull/1040&gt;`__
- Improve typing/logic for `options` in decode, decode_complete by
@pachewise in `[#1045](https://github.com/jpadilla/pyjwt/issues/1045)
&lt;https://github.com/jpadilla/pyjwt/pull/1045&gt;`__
- Declare float supported type for lifespan and timeout by
@nikitagashkov in
`[#1068](https://github.com/jpadilla/pyjwt/issues/1068)
&lt;https://github.com/jpadilla/pyjwt/pull/1068&gt;`__
- Fix ``SyntaxWarning``\s/``DeprecationWarning``\s caused by invalid
escape sequences by @kurtmckee in
`[#1103](https://github.com/jpadilla/pyjwt/issues/1103)
&lt;https://github.com/jpadilla/pyjwt/pull/1103&gt;`__
- Development: Build a shared wheel once to speed up test suite setup
times by @kurtmckee in
`[#1114](https://github.com/jpadilla/pyjwt/issues/1114)
&lt;https://github.com/jpadilla/pyjwt/pull/1114&gt;`__
- Development: Test type annotations across all supported Python
versions,
increase the strictness of the type checking, and remove the mypy
pre-commit hook
by @kurtmckee in `[#1112](https://github.com/jpadilla/pyjwt/issues/1112)
&lt;https://github.com/jpadilla/pyjwt/pull/1112&gt;`__
</code></pre>
<p>Added</p>
<ul>
<li>Support Python 3.14, and test against PyPy 3.10 and 3.11 by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in
<code>[#1104](https://github.com/jpadilla/pyjwt/issues/1104)
&lt;https://github.com/jpadilla/pyjwt/pull/1104&gt;</code>__</li>
<li>Development: Migrate to <code>build</code> to test package building
in CI by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in
<code>[#1108](https://github.com/jpadilla/pyjwt/issues/1108)
&lt;https://github.com/jpadilla/pyjwt/pull/1108&gt;</code>__</li>
<li>Development: Improve coverage config and eliminate unused test suite
code by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in
<code>[#1115](https://github.com/jpadilla/pyjwt/issues/1115)
&lt;https://github.com/jpadilla/pyjwt/pull/1115&gt;</code>__</li>
<li>Docs: Standardize CHANGELOG links to PRs by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in
<code>[#1110](https://github.com/jpadilla/pyjwt/issues/1110)
&lt;https://github.com/jpadilla/pyjwt/pull/1110&gt;</code>__</li>
<li>Docs: Fix Read the Docs builds by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in
<code>[#1111](https://github.com/jpadilla/pyjwt/issues/1111)
&lt;https://github.com/jpadilla/pyjwt/pull/1111&gt;</code>__</li>
<li>Docs: Add example of using leeway with nbf by <a
href="https://github.com/djw8605"><code>@​djw8605</code></a> in
<code>[#1034](https://github.com/jpadilla/pyjwt/issues/1034)
&lt;https://github.com/jpadilla/pyjwt/pull/1034&gt;</code>__</li>
<li>Docs: Refactored docs with <code>autodoc</code>; added
<code>PyJWS</code> and <code>jwt.algorithms</code> docs by <a
href="https://github.com/pachewise"><code>@​pachewise</code></a> in
<code>[#1045](https://github.com/jpadilla/pyjwt/issues/1045)
&lt;https://github.com/jpadilla/pyjwt/pull/1045&gt;</code>__</li>
<li>Docs: Documentation improvements for &quot;sub&quot; and
&quot;jti&quot; claims by <a
href="https://github.com/cleder"><code>@​cleder</code></a> in
<code>[#1088](https://github.com/jpadilla/pyjwt/issues/1088)
&lt;https://github.com/jpadilla/pyjwt/pull/1088&gt;</code>__</li>
<li>Development: Add pyupgrade as a pre-commit hook by <a
href="https://github.com/kurtmckee"><code>@​kurtmckee</code></a> in
<code>[#1109](https://github.com/jpadilla/pyjwt/issues/1109)
&lt;https://github.com/jpadilla/pyjwt/pull/1109&gt;</code>__</li>
<li>Add minimum key length validation for HMAC and RSA keys (CWE-326).
Warns by default via <code>InsecureKeyLengthWarning</code> when keys are
below
minimum recommended lengths per RFC 7518 Section 3.2 (HMAC) and
NIST SP 800-131A (RSA). Pass
<code>enforce_minimum_key_length=True</code> in
options to <code>PyJWT</code> or <code>PyJWS</code> to raise
<code>InvalidKeyError</code> instead.</li>
<li>Refactor <code>PyJWT</code> to own an internal <code>PyJWS</code>
instance instead of
calling global <code>api_jws</code> functions.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="697344d259"><code>697344d</code></a>
bump up version</li>
<li><a
href="e4d0aec024"><code>e4d0aec</code></a>
fix: pre-commit</li>
<li><a
href="df9a6a0c44"><code>df9a6a0</code></a>
fix: failing test</li>
<li><a
href="2b2e53cd23"><code>2b2e53c</code></a>
fix: docs</li>
<li><a
href="635c8d89dd"><code>635c8d8</code></a>
fix: failing mypy</li>
<li><a
href="96ae3563b9"><code>96ae356</code></a>
feat: add minimum key length validation for HMAC and RSA</li>
<li><a
href="5b86227733"><code>5b86227</code></a>
fix: enforce ECDSA curve validation per RFC 7518 Section 3.4</li>
<li><a
href="04947d75dc"><code>04947d7</code></a>
Bump actions/download-artifact from 6 to 7 (<a
href="https://redirect.github.com/jpadilla/pyjwt/issues/1125">#1125</a>)</li>
<li><a
href="dd448344c3"><code>dd44834</code></a>
Fix leeway value in usage documentation (<a
href="https://redirect.github.com/jpadilla/pyjwt/issues/1124">#1124</a>)</li>
<li><a
href="407f0bde99"><code>407f0bd</code></a>
Thoroughly test type annotations, and resolve errors (<a
href="https://redirect.github.com/jpadilla/pyjwt/issues/1112">#1112</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/jpadilla/pyjwt/compare/2.10.1...2.11.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `supabase` from 2.16.0 to 2.27.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-py/releases">supabase's
releases</a>.</em></p>
<blockquote>
<h2>v2.27.2</h2>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.27.1...v2.27.2">2.27.2</a>
(2026-01-14)</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>ci:</strong> generate new token for release-please (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1348">#1348</a>)
(<a
href="c2ad37f9dc">c2ad37f</a>)</li>
<li><strong>ci:</strong> run CI when .github files change (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1349">#1349</a>)
(<a
href="a221aac029">a221aac</a>)</li>
<li><strong>realtime:</strong> amend reconnect logic to not unsubscribe
(<a
href="https://redirect.github.com/supabase/supabase-py/issues/1346">#1346</a>)
(<a
href="cfbe5943cb">cfbe594</a>)</li>
</ul>
<h2>v2.27.1</h2>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.27.0...v2.27.1">2.27.1</a>
(2026-01-06)</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>realtime:</strong> use 'event' instead of 'events' in
postgres_changes protocol (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1339">#1339</a>)
(<a
href="c1e7986c5e">c1e7986</a>)</li>
<li><strong>storage:</strong> catch bad responses from server (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1344">#1344</a>)
(<a
href="ddb50547db">ddb5054</a>)</li>
</ul>
<h2>v2.27.0</h2>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.26.0...v2.27.0">2.27.0</a>
(2025-12-16)</h2>
<h3>Features</h3>
<ul>
<li><strong>auth:</strong> add X (OAuth 2.0) provider (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1335">#1335</a>)
(<a
href="f600f96b52">f600f96</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>storage:</strong> replace deprecated pydantic Extra with
literal values (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1334">#1334</a>)
(<a
href="6df3545785">6df3545</a>)</li>
</ul>
<h2>v2.26....

_Description has been truncated_

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-07 02:17:38 +00:00
Reinier van der Leer
8fddc9d71f fix(backend): Reduce GET /api/graphs expense + latency (#11986)
[SECRT-1896: Fix crazy `GET /api/graphs` latency (P95 =
107s)](https://linear.app/autogpt/issue/SECRT-1896)

These changes should decrease latency of this endpoint by ~~60-65%~~ a
lot.

### Changes 🏗️

- Make `Graph.credentials_input_schema` cheaper by avoiding constructing
a new `BlockSchema` subclass
- Strip down `GraphMeta` - drop all computed fields
- Replace with either `GraphModel` or `GraphModelWithoutNodes` wherever
those computed fields are used
- Simplify usage in `list_graphs_paginated` and
`fetch_graph_from_store_slug`
- Refactor and clarify relationships between the different graph models
  - Split `BaseGraph` into `GraphBaseMeta` + `BaseGraph`
- Strip down `Graph` - move `credentials_input_schema` and
`aggregate_credentials_inputs` to `GraphModel`
- Refactor to eliminate double `aggregate_credentials_inputs()` call in
`credentials_input_schema` call tree
  - Add `GraphModelWithoutNodes` (similar to current `GraphMeta`)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] `GET /api/graphs` works as it should
  - [x] Running a graph succeeds
  - [x] Adding a sub-agent in the Builder works as it should
2026-02-06 19:13:21 +00:00
Ubbe
3d1cd03fc8 ci(frontend): disable chromatic for this month (#11994)
### Changes 🏗️

- we reached the max snapshots quota and don't want to upgrade
- make it run (when re-enabled) on `src/components` changes only to
reduce snapshots

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] CI: hope for the best
2026-02-06 19:17:25 +07:00
Swifty
e7ebe42306 fix(frontend): Revert ThinkingMessage progress bar delay to original values (#11993) 2026-02-06 12:23:32 +01:00
Otto
e0fab7e34e fix(frontend): Improve clarification answer message formatting (#11985)
## Summary

Improves the auto-generated message format when users submit
clarification answers in the agent generator.

## Before

```
I have the answers to your questions:

keyword_1: User answer 1
keyword_2: User answer 2

Please proceed with creating the agent.
```
<img width="748" height="153" alt="image"
src="https://github.com/user-attachments/assets/7231aaab-8ea4-406b-ba31-fa2b6055b82d"
/>

## After

```
**Here are my answers:**

> What is the primary purpose?

User answer 1

> What is the target audience?

User answer 2

Please proceed with creating the agent.
```
<img width="619" height="352" alt="image"
src="https://github.com/user-attachments/assets/ef8c1fbf-fb60-4488-b51f-407c1b9e3e44"
/>


## Changes

- Use human-readable question text instead of machine-readable keywords
- Use blockquote format for questions (natural "quote and reply"
pattern)
- Use double newlines for proper Markdown paragraph breaks
- Iterate over `message.questions` array to preserve original question
order
- Move handler inside conditional block for proper TypeScript type
narrowing

## Why

- The old format was ugly and hard to read (raw keywords, no line
breaks)
- The new format uses a natural "quoting and replying" pattern
- Better readability for both users and the LLM (verified: backend does
NOT parse keywords)

## Linear Ticket

Fixes [SECRT-1822](https://linear.app/autogpt/issue/SECRT-1822)

## Testing

- [ ] Trigger agent creation that requires clarifying questions
- [ ] Fill out the form and submit
- [ ] Verify message appears with new blockquote format
- [ ] Verify questions appear in original order
- [ ] Verify agent generation proceeds correctly

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2026-02-06 08:41:06 +00:00
Nicholas Tindle
29ee85c86f fix: add virus scanning to WorkspaceManager.write_file() (#11990)
## Summary

Adds virus scanning at the `WorkspaceManager.write_file()` layer for
defense in depth.

## Problem

Previously, virus scanning was only performed at entry points:
- `store_media_file()` in `backend/util/file.py`
- `WriteWorkspaceFileTool` in
`backend/api/features/chat/tools/workspace_files.py`

This created a trust boundary where any new caller of
`WorkspaceManager.write_file()` would need to remember to scan first.

## Solution

Add `scan_content_safe()` call directly in
`WorkspaceManager.write_file()` before persisting to storage. This
ensures all content is scanned regardless of the caller.
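
A minimal self-contained sketch of the resulting write path (everything except the `scan_content_safe` call site is an illustrative stub, not the actual implementation):

```python
class VirusDetectedError(Exception):
    pass

async def scan_content_safe(content: bytes) -> None:
    # Stand-in for backend.util.virus_scanner.scan_content_safe; the
    # real scanner no-ops when ClamAV isn't running (e.g. in tests).
    if b"EICAR" in content:
        raise VirusDetectedError("malicious content detected")

class WorkspaceManager:
    MAX_FILE_SIZE = 10 * 1024 * 1024  # illustrative size limit

    async def write_file(self, path: str, content: bytes) -> None:
        if len(content) > self.MAX_FILE_SIZE:
            raise ValueError("file exceeds size limit")
        await scan_content_safe(content)  # new: scan before persisting
        # ... persist to storage / database ...
```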

## Changes

- Added import for `scan_content_safe` from `backend.util.virus_scanner`
- Added virus scan call after file size validation, before storage

## Testing

Existing tests should pass. The scan is a no-op in test environments
where ClamAV isn't running.

Closes https://linear.app/autogpt/issue/OPEN-2993

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Medium Risk**
> Introduces a new required async scan step in the workspace write path,
which can add latency or cause new failures if the scanner/ClamAV is
misconfigured or unavailable.
> 
> **Overview**
> Adds a **defense-in-depth** virus scan to
`WorkspaceManager.write_file()` by invoking `scan_content_safe()` after
file-size validation and before any storage/database persistence.
> 
> This centralizes scanning so any caller writing workspace files gets
the same malware check without relying on upstream entry points to
remember to scan.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
0f5ac68b92. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
2026-02-06 04:38:32 +00:00
Nicholas Tindle
85b6520710 feat(blocks): Add video editing blocks (#11796)
<!-- Clearly explain the need for these changes: -->
This PR adds general-purpose video editing blocks for the AutoGPT
Platform, enabling automated video production workflows like documentary
creation, marketing videos, tutorial assembly, and content repurposing.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

**New blocks added in `backend/blocks/video/`:**
- `VideoDownloadBlock` - Download videos from URLs (YouTube, Vimeo, news
sites, direct links) using yt-dlp
- `VideoClipBlock` - Extract time segments from videos with start/end
time validation
- `VideoConcatBlock` - Merge multiple video clips with optional
transitions (none, crossfade, fade_black)
- `VideoTextOverlayBlock` - Add text overlays/captions with positioning
and timing options
- `VideoNarrationBlock` - Generate AI narration via ElevenLabs and mix
with video audio (replace, mix, or ducking modes)

**Dependencies required:**
- `yt-dlp` - For video downloading
- `moviepy` - For video editing operations

**Implementation details:**
- All blocks follow the SDK pattern with proper error handling and
exception chaining
- Proper resource cleanup in `finally` blocks to prevent memory leaks
- Input validation (e.g., end_time > start_time)
- Test mocks included for CI

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Blocks follow the SDK pattern with
`BlockSchemaInput`/`BlockSchemaOutput`
  - [x] Resource cleanup is implemented in `finally` blocks
  - [x] Exception chaining is properly implemented
  - [x] Input validation is in place
  - [x] Test mocks are provided for CI environments

#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

N/A - No configuration changes required.


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Medium Risk**
> Adds new multimedia blocks that invoke ffmpeg/MoviePy and introduces
new external dependencies (plus container packages), which can impact
runtime stability and resource usage; download/overlay blocks are
present but disabled due to sandbox/policy concerns.
> 
> **Overview**
> Adds a new `backend.blocks.video` module with general-purpose video
workflow blocks (download, clip, concat w/ transitions, loop, add-audio,
text overlay, and ElevenLabs-powered narration), including shared
utilities for codec selection, filename cleanup, and an ffmpeg-based
chapter-strip workaround for MoviePy.
> 
> Extends credentials/config to support ElevenLabs
(`ELEVENLABS_API_KEY`, provider enum, system credentials, and cost
config) and adds new dependencies (`elevenlabs`, `yt-dlp`) plus Docker
runtime packages (`ffmpeg`, `imagemagick`).
> 
> Improves file/reference handling end-to-end by embedding MIME types in
`workspace://...#mime` outputs and updating frontend rendering to detect
video vs image from MIME fragments (and broaden supported audio/video
extensions), with optional enhanced output rendering behind a feature
flag in the legacy builder UI.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
da7a44d794. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-05 22:22:33 +00:00
Bently
bfa942e032 feat(platform): Add Claude Opus 4.6 model support (#11983)
## Summary
Adds support for Anthropic's newly released Claude Opus 4.6 model.

## Changes
- Added `claude-opus-4-6` to the `LlmModel` enum
- Added model metadata: 200K context window (1M beta), **128K max output
tokens**
- Added block cost config (same pricing tier as Opus 4.5: $5/MTok input,
$25/MTok output)
- Updated chat config default model to Claude Opus 4.6

## Model Details
From [Anthropic's
docs](https://docs.anthropic.com/en/docs/about-claude/models):
- **API ID:** `claude-opus-4-6`
- **Context window:** 200K tokens (1M beta)
- **Max output:** 128K tokens (up from 64K on Opus 4.5)
- **Extended thinking:** Yes
- **Adaptive thinking:** Yes (new, Opus 4.6 exclusive)
- **Knowledge cutoff:** May 2025 (reliable), Aug 2025 (training)
- **Pricing:** $5/MTok input, $25/MTok output (same as Opus 4.5)

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2026-02-05 19:19:51 +00:00
Otto
11256076d8 fix(frontend): Rename "Tasks" tab to "Agents" in navbar (#11982)
## Summary
Renames the "Tasks" tab in the navbar to "Agents" per the Figma design.

## Changes
- `Navbar.tsx`: Changed label from "Tasks" to "Agents"

<img width="1069" height="153" alt="image"
src="https://github.com/user-attachments/assets/3869d2a2-9bd9-4346-b650-15dabbdb46c4"
/>


## Why
- "Tasks" was incorrectly named and confusing for users trying to find
their agent builds
- Matches the Figma design

## Linear Ticket
Fixes [SECRT-1894](https://linear.app/autogpt/issue/SECRT-1894)

## Related
- [SECRT-1865](https://linear.app/autogpt/issue/SECRT-1865) - Find and
Manage Existing/Unpublished or Recent Agent Builds Is Unintuitive
2026-02-05 17:54:39 +00:00
Bently
3ca2387631 feat(blocks): Implement Text Encode block (#11857)
## Summary
Implements a `TextEncoderBlock` that encodes plain text into escape
sequences (the reverse of `TextDecoderBlock`).

## Changes

### Block Implementation
- Added `encoder_block.py` with `TextEncoderBlock` in
`autogpt_platform/backend/backend/blocks/`
- Uses `codecs.encode(text, "unicode_escape").decode("utf-8")` for
encoding
- Mirrors the structure and patterns of the existing `TextDecoderBlock`
- Categorised as `BlockCategory.TEXT`
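
For reference, a minimal sketch of the encode/decode round-trip the two blocks mirror (plain `codecs`, ASCII input):

```python
import codecs

text = "Line one\nLine two\tTabbed"

# TextEncoderBlock direction: real control characters -> escape sequences.
encoded = codecs.encode(text, "unicode_escape").decode("utf-8")
print(encoded)  # Line one\nLine two\tTabbed  (literal backslash sequences)

# TextDecoderBlock direction: escape sequences -> real characters.
assert codecs.decode(encoded, "unicode_escape") == text
```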

### Documentation
- Added Text Encoder section to
`docs/integrations/block-integrations/text.md` (the auto-generated docs
file for TEXT category blocks)
- Expanded "How it works" with technical details on the encoding method,
validation, and edge cases
- Added 3 structured use cases per docs guidelines: JSON payload
preparation, Config/ENV generation, Snapshot fixtures
- Added Text Encoder to the overview table in
`docs/integrations/README.md`
- Removed standalone `encoder_block.md` (TEXT category blocks belong in
`text.md` per `CATEGORY_FILE_MAP` in `generate_block_docs.py`)

### Documentation Formatting (CodeRabbit feedback)
- Added blank lines around markdown tables (MD058)
- Added `text` language tags to fenced code blocks (MD040)
- Restructured use case section with bold headings per coding guidelines

## How Docs Were Synced
The `check-docs-sync` CI job runs `poetry run python
scripts/generate_block_docs.py --check` which expects blocks to be
documented in category-grouped files. Since `TextEncoderBlock` uses
`BlockCategory.TEXT`, the `CATEGORY_FILE_MAP` maps it to `text.md` — not
a standalone file. The block entry was added to `text.md` following the
exact format used by the generator (with `<!-- MANUAL -->` markers for
hand-written sections).

## Related Issue
Fixes #11111

---------

Co-authored-by: Otto <otto@agpt.co>
Co-authored-by: lif <19658300+majiayu000@users.noreply.github.com>
Co-authored-by: Aryan Kaul <134673289+aryancodes1@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-05 17:31:02 +00:00
Otto
ed07f02738 fix(copilot): edit_agent updates existing agent instead of creating duplicate (#11981)
## Summary

When editing an agent via CoPilot's `edit_agent` tool, the code was
always creating a new `LibraryAgent` entry instead of updating the
existing one to point to the new graph version. This caused duplicate
agents to appear in the user's library.

## Changes

In `save_agent_to_library()`:
- When `is_update=True`, now checks if there's an existing library agent
for the graph using `get_library_agent_by_graph_id()`
- If found, uses `update_agent_version_in_library()` to update the
existing library agent to point to the new version
- Falls back to creating a new library agent if no existing one is found
(e.g., if editing a graph that wasn't added to library yet)
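
A sketch of the resulting control flow (the two lookup/update helper names come from this PR; signatures and the fallback helper are illustrative):

```python
async def get_library_agent_by_graph_id(user_id: str, graph_id: str): ...
async def update_agent_version_in_library(user_id: str, graph_id: str, version: int): ...
async def create_library_agent(user_id: str, graph_id: str, version: int): ...

async def save_agent_to_library(user_id: str, graph_id: str, version: int, is_update: bool):
    if is_update:
        existing = await get_library_agent_by_graph_id(user_id, graph_id)
        if existing:
            # Repoint the existing entry at the new version: no duplicate.
            return await update_agent_version_in_library(user_id, graph_id, version)
    # Fallback: the graph wasn't in the library yet.
    return await create_library_agent(user_id, graph_id, version)
```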

## Testing

- Verified lint/format checks pass
- Plan reviewed and approved by Staff Engineer Plan Reviewer agent

## Related

Fixes [SECRT-1857](https://linear.app/autogpt/issue/SECRT-1857)

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2026-02-05 15:02:26 +00:00
Swifty
b121030c94 feat(frontend): Add progress indicator during agent generation [SECRT-1883] (#11974)
## Summary
- Add asymptotic progress bar that appears during long-running chat
tasks
- Progress bar shows after 10 seconds with "Working on it..." label and
percentage
- Uses half-life formula: ~50% at 30s, ~75% at 60s, ~87.5% at 90s, etc.
- Creates the classic "game loading bar" effect that never reaches 100%
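
Concretely, the described timing matches `progress(t) = 1 - 0.5^(t / 30)` with `t` in seconds (an inference from the numbers above, not a quote of the source): the remaining distance to 100% halves every 30 seconds.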



https://github.com/user-attachments/assets/3c59289e-793c-4a08-b3fc-69e1eef28b1f



## Test plan
- [x] Start a chat that triggers agent generation
- [x] Wait 10+ seconds for the progress bar to appear
- [x] Verify progress bar is centered with label and percentage
- [x] Verify progress follows expected timing (~50% at 30s)
- [x] Verify progress bar disappears when task completes

---------

Co-authored-by: Otto <otto@agpt.co>
2026-02-05 15:37:51 +01:00
Swifty
c22c18374d feat(frontend): Add ready-to-test prompt after agent creation [SECRT-1882] (#11975)
## Summary
- Add special UI prompt when agent is successfully created in chat
- Show "Agent Created Successfully" with agent name
- Provide two action buttons:
- **Run with example values**: Sends chat message asking AI to run with
placeholders
- **Run with my inputs**: Opens RunAgentModal for custom input
configuration
- After run/schedule, automatically send chat message with execution
details for AI monitoring



https://github.com/user-attachments/assets/b11e118c-de59-4b79-a629-8bd0d52d9161



## Test plan
- [x] Create an agent through chat
- [x] Verify "Agent Created Successfully" prompt appears
- [x] Click "Run with example values" - verify chat message is sent
- [x] Click "Run with my inputs" - verify RunAgentModal opens
- [x] Fill inputs and run - verify chat message with execution ID is
sent
- [x] Fill inputs and schedule - verify chat message with schedule
details is sent

---------

Co-authored-by: Otto <otto@agpt.co>
2026-02-05 15:37:31 +01:00
Swifty
e40233a3ac fix(backend/chat): Guide find_agent users toward action with CTAs (#11976)
When users search for agents, guide them toward creating custom agents
if no results are found or after showing results. This improves user
engagement by offering a clear next step.

### Changes 🏗️

- Updated `agent_search.py` to add CTAs in search responses
- Added messaging to inform users they can create custom agents based on
their needs
- Applied to both "no results found" and "agents found" scenarios

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for agents in marketplace with matching results
  - [x] Search for agents in marketplace with no results
  - [x] Search for agents in library with matching results  
  - [x] Search for agents in library with no results
  - [x] Verify CTA message appears in all cases

---------

Co-authored-by: Otto <otto@agpt.co>
2026-02-05 15:36:55 +01:00
Swifty
3ae5eabf9d fix(backend/chat): Use latest prompt label in non-production environments (#11977)
In non-production environments, the chat service now fetches prompts
with the `latest` label instead of the default production-labeled
prompt. This makes it easier to test and iterate on prompt changes in
dev/staging without needing to promote them to production first.

### Changes 🏗️

- Updated `_get_system_prompt_template()` in chat service to pass
`label="latest"` when `app_env` is not `PRODUCTION`
- Production environments continue using the default behavior
(production-labeled prompts)
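
In effect (a sketch based on this description; names are not the exact source):

```python
def prompt_label(app_env: str) -> str | None:
    # None falls through to the provider's default, production-labeled prompt.
    return "latest" if app_env != "PRODUCTION" else None
```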

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that in non-production environments, prompts with
`latest` label are fetched
- [x] Verified that production environments still use the default
(production) labeled prompts

Co-authored-by: Otto <otto@agpt.co>
2026-02-05 14:54:39 +01:00
Otto
a077ba9f03 fix(platform): YouTube block yields only error on failure (#11980)
## Summary

Fixes [SECRT-1889](https://linear.app/autogpt/issue/SECRT-1889): The
YouTube transcription block was yielding both `video_id` and `error`
when the transcript fetch failed.

## Problem

The block yielded `video_id` immediately upon extracting it from the
URL, before attempting to fetch the transcript. If the transcript fetch
failed, both outputs were present.

```python
# Before
video_id = self.extract_video_id(input_data.youtube_url)
yield "video_id", video_id  # ← Yielded before transcript attempt

transcript = self.get_transcript(video_id, credentials)  # ← Could fail here
```

## Solution

Wrap the entire operation in try/except and only yield outputs after all
operations succeed:

```python
# After
try:
    video_id = self.extract_video_id(input_data.youtube_url)
    transcript = self.get_transcript(video_id, credentials)
    transcript_text = self.format_transcript(transcript=transcript)

    # Only yield after all operations succeed
    yield "video_id", video_id
    yield "transcript", transcript_text
except Exception as e:
    yield "error", str(e)
```

This follows the established pattern in other blocks (e.g.,
`ai_image_generator_block.py`).

## Testing

- All 10 unit tests pass (`test/blocks/test_youtube.py`)
- Lint/format checks pass

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2026-02-05 11:51:32 +00:00
Bently
5401d54eaa fix(backend): Handle StreamHeartbeat in CoPilot stream handler (#11928)
### Changes 🏗️

Fixes **AUTOGPT-SERVER-7JA** (123 events since Jan 27, 2026).

#### Problem

`StreamHeartbeat` was added to keep SSE connections alive during
long-running tool executions (yielded every 15s while waiting). However,
the main `stream_chat_completion` handler's `elif` chain didn't have a
case for it:

```
StreamTextStart →  handled
StreamTextDelta →  handled
StreamTextEnd →  handled
StreamToolInputStart →  handled
StreamToolInputAvailable →  handled
StreamToolOutputAvailable →  handled
StreamFinish →  handled
StreamError →  handled
StreamUsage →  handled
StreamHeartbeat →  fell through to 'Unknown chunk type' error
```

This meant every heartbeat during tool execution generated a Sentry
error instead of keeping the connection alive.

#### Fix

Add `StreamHeartbeat` to the `elif` chain and yield it through. The
route handler already calls `to_sse()` on all yielded chunks, and
`StreamHeartbeat.to_sse()` correctly returns `: heartbeat\n\n` (SSE
comment format, ignored by clients but keeps proxies/load balancers
happy).
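
For illustration, a minimal self-contained sketch of the added branch (class and handler shapes are assumptions based on this description, not the exact source):

```python
from dataclasses import dataclass

@dataclass
class StreamHeartbeat:
    def to_sse(self) -> str:
        # SSE comment frame: clients ignore it, proxies see live traffic.
        return ": heartbeat\n\n"

def handle_chunk(chunk):
    # ... existing elif cases for text/tool/finish/error/usage chunks ...
    if isinstance(chunk, StreamHeartbeat):
        return chunk  # new case: pass through; the route calls to_sse() later
    raise ValueError(f"Unknown chunk type: {type(chunk).__name__}")
```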

**1 file changed, 3 insertions.**
2026-02-05 12:04:46 +01:00
Otto
5ac89d7c0b fix(test): fix timing bug in test_block_credit_reset (#11978)
## Summary
Fixes the flaky `test_block_credit_reset` test that was failing on
multiple PRs with `assert 0 == 1000`.

## Root Cause
The test calls `disable_test_user_transactions()` which sets `updatedAt`
to 35 days ago from the **actual current time**. It then mocks
`time_now` to January 1st.

**The bug**: If the test runs in early February, 35 days ago is January
— the **same month** as the mocked `time_now`. The credit refill logic
only triggers when the balance snapshot is from a *different* month, so
no refill happens and the balance stays at 0.

## Fix
After calling `disable_test_user_transactions()`, explicitly set
`updatedAt` to December of the previous year. This ensures it's always
in a different month than the mocked `month1` (January), regardless of
when the test runs.
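
The month arithmetic at the heart of the fix, as a self-contained sketch:

```python
from datetime import datetime, timezone

month1 = datetime(2024, 1, 1, tzinfo=timezone.utc)   # mocked time_now
# Pin the snapshot to December of the previous year: always a different
# month than month1, no matter when the test actually runs.
snapshot = datetime(month1.year - 1, 12, 31, tzinfo=timezone.utc)
assert (snapshot.year, snapshot.month) != (month1.year, month1.month)
```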

## Testing
CI will verify the fix.
2026-02-05 11:56:26 +01:00
Otto
4f908d5cb3 fix(platform): Improve Linear Search Block [SECRT-1880] (#11967)
## Summary

Implements [SECRT-1880](https://linear.app/autogpt/issue/SECRT-1880) -
Improve Linear Search Block

## Changes

### Models (`models.py`)
- Added `State` model with `id`, `name`, and `type` fields for workflow
state information
- Added `state: State | None` field to `Issue` model

### API Client (`_api.py`)
- Updated `try_search_issues()` to:
- Add `max_results` parameter (default 10, was ~50) to reduce token
usage
  - Add `team_id` parameter for team filtering
- Return `createdAt`, `state`, `project`, and `assignee` fields in
results
- Fixed `try_get_team_by_name()` to return descriptive error message
when team not found instead of crashing with `IndexError`

### Block (`issues.py`)
- Added `max_results` input parameter (1-100, default 10)
- Added `team_name` input parameter for optional team filtering
- Added `error` output field for graceful error handling
- Added categories (`PRODUCTIVITY`, `ISSUE_TRACKING`)
- Updated test fixtures to include new fields

## Breaking Changes

| Change | Before | After | Mitigation |
|--------|--------|-------|------------|
| Default result count | ~50 | 10 | Users can set `max_results` up to
100 if needed |

## Non-Breaking Changes

- `state` field added to `Issue` (optional, defaults to `None`)
- `max_results` param added (has default value)
- `team_name` param added (optional, defaults to `None`)
- `error` output added (follows established pattern from GitHub blocks)

## Testing

- [x] Format/lint checks pass
- [x] Unit test fixtures updated

Resolves SECRT-1880

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
2026-02-04 22:54:46 +00:00
Reinier van der Leer
c1aa684743 fix(platform/chat): Filter host-scoped credentials for run_agent tool (#11905)
- Fixes [SECRT-1851: \[Copilot\] `run_agent` tool doesn't filter
host-scoped credentials](https://linear.app/autogpt/issue/SECRT-1851)
- Follow-up to #11881

### Changes 🏗️

- Filter host-scoped credentials for `run_agent` tool
- Tighten validation on host input field in `HostScopedCredentialsModal`
- Use netloc (w/ port) rather than just hostname (w/o port) as host
scope
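
For reference, the hostname-vs-netloc distinction from the standard library:

```python
from urllib.parse import urlparse

u = urlparse("https://api.example.com:8443/v1/run")
print(u.hostname)  # api.example.com       (port stripped)
print(u.netloc)    # api.example.com:8443  (port kept -> stricter scope)
```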

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Create graph that requires host-scoped credentials to work
  - Create host-scoped credentials with a *different* host
  - Try to have Copilot run the graph
  - [x] -> no matching credentials available
  - Create new credentials
  - [x] -> works

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-02-04 16:27:14 +00:00
Otto
7e5b84cc5c fix(copilot): update homepage copy to focus on problem discovery (#11956)
## Summary
Update the CoPilot homepage to shift from "what do you want to
automate?" to "tell me about your problems." This lowers the barrier to
engagement by letting users describe their work frustrations instead of
requiring them to identify automations themselves.

## Changes
| Element | Before | After |
|---------|--------|-------|
| Headline | "What do you want to automate?" | "Tell me about your work
— I'll find what to automate." |
| Placeholder | "You can search or just ask - e.g. 'create a blog post
outline'" | "What's your role and what eats up most of your day? e.g.
'I'm a real estate agent and I hate...'" |
| Button 1 | "Show me what I can automate" | "I don't know where to
start, just ask me stuff" |
| Button 2 | "Design a custom workflow" | "I do the same thing every
week and it's killing me" |
| Button 3 | "Help me with content creation" | "Help me find where I'm
wasting my time" |
| Container | max-w-2xl | max-w-3xl |

> **Note on container width:** The `max-w-2xl` → `max-w-3xl` change is
just to keep the longer headline on one line. This works but may not be
the ideal solution — @lluis-xai should advise on the proper approach.

## Why This Matters
The current UX assumes users know what they want to automate. In
reality, most users know what frustrates them but can't identify
automations. The current screen blocks Otto from starting the discovery
conversation that leads to useful recommendations.

## Files Changed
- `autogpt_platform/frontend/src/app/(platform)/copilot/page.tsx` —
headline, placeholder, container width
- `autogpt_platform/frontend/src/app/(platform)/copilot/helpers.ts` —
quick action button text

Resolves: [SECRT-1876](https://linear.app/autogpt/issue/SECRT-1876)

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
2026-02-04 17:38:58 +07:00
Swifty
09cb313211 fix(frontend): Prevent reflected XSS in OAuth callback route (#11963)
## Summary

Fixes a reflected cross-site scripting (XSS) vulnerability in the OAuth
callback route.

**Security Issue:**
https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/202

### Vulnerability

The OAuth callback route at
`frontend/src/app/(platform)/auth/integrations/oauth_callback/route.ts`
was writing user-controlled data directly into an HTML response without
proper sanitization. This allowed potential attackers to inject
malicious scripts via OAuth callback parameters.

### Fix

Added a `safeJsonStringify()` function that escapes characters that
could break out of the script context:
- `<` → `\u003c`
- `>` → `\u003e`  
- `&` → `\u0026`

This prevents any user-provided values from being interpreted as
HTML/script content when embedded in the response.

### References

- [OWASP XSS Prevention Cheat
Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html)
- [CWE-79: Improper Neutralization of Input During Web Page
Generation](https://cwe.mitre.org/data/definitions/79.html)

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified the OAuth callback still functions correctly
- [x] Confirmed special characters in OAuth responses are properly
escaped
2026-02-04 10:53:17 +01:00
Krzysztof Czerwinski
c026485023 feat(frontend): Disable auto-opening wallet (#11961)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

- Disable auto-opening Wallet for first time user and on credit increase
- Remove no longer needed `lastSeenCredits` state and storage

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Wallet doesn't open automatically
2026-02-04 06:11:41 +00:00
Nicholas Tindle
1eabc60484 Merge commit from fork
Fixes GHSA-rc89-6g7g-v5v7 / CVE-2026-22038

The logger.info() calls were explicitly logging API keys via
get_secret_value(), exposing credentials in plaintext logs.

Changes:
- Replace info-level credential logging with debug-level provider logging
- Remove all explicit secret value logging from observe/act/extract blocks

Co-authored-by: Otto <otto@agpt.co>
2026-02-03 11:16:57 -06:00
Swifty
f4bf492f24 feat(platform): Add Redis-based SSE reconnection for long-running CoPilot operations (#11877)
## Changes 🏗️

Adds Redis-based SSE reconnection support for long-running CoPilot
operations (like Agent Generator), enabling clients to reconnect and
resume receiving updates after disconnection.

### What this does:
- **Stream Registry** - Redis-backed task tracking with message
persistence via Redis Streams
- **SSE Reconnection** - Clients can reconnect to active tasks using
`task_id` and `last_message_id`
- **Duplicate Message Fix** - Filters out in-progress assistant messages
from session response when active stream exists
- **Completion Consumer** - Handles background task completion
notifications via Redis Streams

### Architecture:
```
1. User sends message → Backend creates task in Redis
2. SSE chunks written to Redis Stream for persistence
3. Client receives chunks via SSE subscription
4. If client disconnects → Task continues in background
5. Client reconnects → GET /sessions/{id} returns active_stream info
6. Client subscribes to /tasks/{task_id}/stream with last_message_id
7. Missed messages replayed from Redis Stream
```
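
A minimal sketch of the replay mechanics with redis-py's asyncio client (the `chat:stream:` key prefix and chunk field are assumptions, not the actual schema):

```python
import redis.asyncio as redis

r = redis.Redis()

async def append_chunk(task_id: str, sse_chunk: str):
    # Producer: persist each SSE chunk; XADD returns the stream ID that
    # a client can later send back as last_message_id.
    return await r.xadd(f"chat:stream:{task_id}", {"chunk": sse_chunk})

async def replay_from(task_id: str, last_message_id: str = "0-0"):
    # Consumer: on reconnect, read every entry after last_message_id,
    # then keep blocking for new ones (15s poll, mirroring heartbeats).
    while True:
        entries = await r.xread(
            {f"chat:stream:{task_id}": last_message_id}, block=15_000
        )
        for _stream, messages in entries:
            for message_id, fields in messages:
                yield fields[b"chunk"].decode()
                last_message_id = message_id
```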

### Key endpoints:
- `GET /sessions/{session_id}` - Returns `active_stream` info if task is
running
- `GET /tasks/{task_id}/stream?last_message_id=X` - SSE endpoint for
reconnection
- `GET /tasks/{task_id}` - Get task status
- `POST /operations/{op_id}/complete` - Webhook for external service
completion

### Duplicate message fix:
When `GET /sessions/{id}` detects an active stream:
1. Filters out the in-progress assistant message from response
2. Returns `last_message_id="0-0"` so client replays stream from
beginning
3. Client receives complete response only through SSE (single source of
truth)

### Frontend changes:
- Task persistence in localStorage for cross-tab reconnection
- Stream event dispatcher handles reconnection flow
- Deduplication logic prevents duplicate messages

### Testing:
- Manual testing of reconnection scenarios
- Verified duplicate message fix works correctly

## Related
- Resolves SSE timeout issues for Agent Generator
- Fixes duplicate message bug on reconnection
2026-02-03 16:52:06 +01:00
Zamil Majdy
81e48c00a4 feat(copilot): add customize_agent tool for marketplace templates (#11943)
## Summary

Adds a new copilot tool that allows users to customize
marketplace/template agents using natural language before adding them to
their library.

This exposes the Agent Generator's `/api/template-modification` endpoint
to the copilot, which was previously not available.

## Changes

- **service.py**: Add `customize_template_external` to call Agent
Generator's template modification endpoint
- **core.py**: 
  - Add `customize_template` wrapper function
- Extract `graph_to_json` as a reusable function (was previously inline
in `get_agent_as_json`)
- **customize_agent.py**: New tool that:
  - Takes marketplace agent ID (format: `creator/slug`)
  - Fetches template from store via `store_db.get_agent()`
  - Calls Agent Generator for customization
  - Handles clarifying questions from the generator
  - Saves customized agent to user's library
- **__init__.py**: Register the tool in `TOOL_REGISTRY` for
auto-discovery

## Usage Flow

1. User searches marketplace: *"Find me a newsletter agent"*
2. Copilot calls `find_agent` → returns `autogpt/newsletter-writer`
3. User: *"Customize that agent to post to Discord instead of email"*
4. Copilot calls:
   ```
   customize_agent(
       agent_id="autogpt/newsletter-writer",
       modifications="Post to Discord instead of sending email"
   )
   ```
5. Agent Generator may ask clarifying questions (e.g., "What Discord
channel?")
6. Customized agent is saved to user's library

## Test plan

- [x] Verified tool imports correctly
- [x] Verified tool is registered in `TOOL_REGISTRY`
- [x] Verified OpenAI function schema is valid
- [x] Ran existing tests (`pytest backend/api/features/chat/tools/`) -
all pass
- [x] Type checker (`pyright`) passes with 0 errors
- [ ] Manual testing with copilot (requires Agent Generator service)
2026-02-03 14:59:25 +00:00
Otto
7dc53071e8 fix(backend): Add retry and error handling to block initialization (#11946)
## Summary
Adds retry logic and graceful error handling to `initialize_blocks()` to
prevent transient DB errors from crashing server startup.

## Problem
When a transient database error occurs during block initialization
(e.g., Prisma P1017 "Server has closed the connection"), the entire
server fails to start. This is overly aggressive since:
1. Blocks are already registered in memory
2. The DB sync is primarily for tracking/schema storage
3. One flaky connection shouldn't prevent the server from starting

**Triggered by:** [Sentry
AUTOGPT-SERVER-7PW](https://significant-gravitas.sentry.io/issues/7238733543/)

## Solution
- Add retry decorator (3 attempts with exponential backoff) for DB
operations
- On failure after retries, log a warning and continue to the next block
- Blocks remain available in memory even if DB sync fails
- Log summary of any failed blocks at the end
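
A sketch of the described behavior, here using `tenacity` for the retry decorator (the actual decorator and names may differ):

```python
import logging
from tenacity import retry, stop_after_attempt, wait_exponential

logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(3), wait=wait_exponential(min=1, max=8))
async def sync_block_to_db(block) -> None:
    ...  # upsert block metadata/schema into the DB

async def initialize_blocks(blocks) -> None:
    failed = []
    for block in blocks:
        try:
            await sync_block_to_db(block)
        except Exception as e:
            # Block stays registered in memory; DB sync is best-effort.
            logger.warning("DB sync failed for %s: %s", block, e)
            failed.append(block)
    if failed:
        logger.warning("%d block(s) failed DB sync: %s", len(failed), failed)
```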

## Changes
- `autogpt_platform/backend/backend/data/block.py`: Wrap block DB sync
in retry logic with graceful fallback

## Testing
- Existing block initialization behavior unchanged on success
- On transient DB errors: retries up to 3 times, then continues with
warning
2026-02-03 12:43:30 +00:00
Zamil Majdy
4878665c66 Merge branch 'master' into dev 2026-02-03 16:01:23 +04:00
Zamil Majdy
678ddde751 refactor(backend): unify context compression into compress_context() (#11937)
## Background

This PR consolidates and unifies context window management for the
CoPilot backend.

### Problem
The CoPilot backend had **two separate implementations** of context
window management:

1. **`service.py` → `_manage_context_window()`** - Chat service
streaming/continuation
2. **`prompt.py` → `compress_prompt()`** - Sync LLM blocks

This duplication led to inconsistent behavior and a double maintenance burden.

---

## Solution: Unified `compress_context()`

A single async function that handles both use cases:

| Caller | Usage | Behavior |
|--------|-------|----------|
| **Chat service** | `compress_context(msgs, client=openai_client)` | Summarization → Truncation |
| **LLM blocks** | `compress_context(msgs, client=None)` | Truncation only (no API call) |

---

## Strategy Order

| Step | Description | Runs When |
|------|-------------|-----------|
| **1. LLM Summarization** | Summarize old messages into single context message, keep recent 15 | Only if `client` provided |
| **2. Content Truncation** | Progressively truncate message content (8192→4096→...→128 tokens) | If still over limit |
| **3. Middle-out Deletion** | Delete messages one at a time from center outward | If still over limit |
| **4. First/Last Trim** | Truncate system prompt and last message content | Last resort |

### Why This Order?

1. **Summarization first** (if available) - Preserves semantic meaning
of old messages
2. **Content truncation before deletion** - Keeps all conversation
turns, just shorter
3. **Middle-out deletion** - More granular than dropping all old
messages at once
4. **First/last trim** - Only touch system prompt as last resort

---

## Key Fixes

| Issue | Before | After |
|-------|--------|-------|
| **Socket leak** | `AsyncOpenAI` client never closed | `async with` context manager |
| **Timeout ignored** | `timeout=30` passed to `create()` (invalid) | `client.with_options(timeout=30)` |
| **OpenAI tool messages** | Not truncated | Properly truncated |
| **Tool pair integrity** | OpenAI format only | Both OpenAI + Anthropic formats |

---

## Tool Format Support

`_ensure_tool_pairs_intact()` now supports both formats:

### OpenAI Format
```python
# Assistant with tool_calls
{"role": "assistant", "tool_calls": [{"id": "call_1", ...}]}
# Tool response
{"role": "tool", "tool_call_id": "call_1", "content": "result"}
```

### Anthropic Format
```python
# Assistant with tool_use
{"role": "assistant", "content": [{"type": "tool_use", "id": "toolu_1", ...}]}
# Tool result
{"role": "user", "content": [{"type": "tool_result", "tool_use_id": "toolu_1", ...}]}
```
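
A simplified, self-contained illustration of the pair-integrity idea (a sketch of the technique, not the actual `_ensure_tool_pairs_intact()` code):

```python
def drop_orphan_tool_results(messages: list[dict]) -> list[dict]:
    """Drop tool responses whose initiating assistant message is missing."""
    seen_ids: set[str] = set()
    kept: list[dict] = []
    for msg in messages:
        # Collect ids issued by assistant messages in either format
        for tc in msg.get("tool_calls") or []:  # OpenAI tool_calls
            seen_ids.add(tc["id"])
        content = msg.get("content")
        if isinstance(content, list):  # Anthropic tool_use blocks
            for part in content:
                if isinstance(part, dict) and part.get("type") == "tool_use":
                    seen_ids.add(part["id"])
        # OpenAI: a tool response with no matching call is an orphan
        if msg.get("role") == "tool" and msg.get("tool_call_id") not in seen_ids:
            continue
        # Anthropic: filter orphan tool_result blocks out of the content list
        if isinstance(content, list):
            msg = {**msg, "content": [
                part for part in content
                if not (isinstance(part, dict)
                        and part.get("type") == "tool_result"
                        and part.get("tool_use_id") not in seen_ids)
            ]}
        kept.append(msg)
    return kept
```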

---

## Files Changed

| File | Change |
|------|--------|
| `backend/util/prompt.py` | +450 lines: Add `CompressResult`, `compress_context()`, helpers |
| `backend/api/features/chat/service.py` | -380 lines: Remove duplicate, use thin wrapper |
| `backend/blocks/llm.py` | Migrate `llm_call()` to use `compress_context(client=None)` |
| `backend/util/prompt_test.py` | +400 lines: Comprehensive tests (OpenAI + Anthropic) |

### Removed
- `compress_prompt()` - Replaced by `compress_context(client=None)`
- `_manage_context_window()` - Replaced by
`compress_context(client=openai_client)`

---

## API

```python
async def compress_context(
    messages: list[dict],
    target_tokens: int = 120_000,
    *,
    model: str = "gpt-4o",
    client: AsyncOpenAI | None = None,  # None = truncation only
    keep_recent: int = 15,
    reserve: int = 2_048,
    start_cap: int = 8_192,
    floor_cap: int = 128,
) -> CompressResult:
    ...

@dataclass
class CompressResult:
    messages: list[dict]
    token_count: int
    was_compacted: bool
    error: str | None = None
    original_token_count: int = 0
    messages_summarized: int = 0
    messages_dropped: int = 0
```
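
A usage sketch based on the signature above (the call site is illustrative; import path per the Files Changed table):

```python
from backend.util.prompt import compress_context

async def prepare_messages(messages: list[dict]) -> list[dict]:
    # Truncation-only path, as used by LLM blocks (client=None -> no API calls)
    result = await compress_context(messages, target_tokens=120_000, client=None)
    if result.was_compacted:
        print(f"{result.original_token_count} -> {result.token_count} tokens "
              f"({result.messages_summarized} summarized, "
              f"{result.messages_dropped} dropped)")
    return result.messages
```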

---

## Tests Added

| Test Class | Coverage |
|------------|----------|
| `TestMsgTokens` | Token counting for regular messages, OpenAI tool calls, Anthropic tool_use |
| `TestTruncateToolMessageContent` | OpenAI + Anthropic tool message truncation |
| `TestEnsureToolPairsIntact` | OpenAI format (3 tests), Anthropic format (3 tests), edge cases (3 tests) |
| `TestCompressContext` | No compression, truncation-only, tool pair preservation, error handling |

---

## Checklist

- [x] Code follows project conventions
- [x] Linting passes (`poetry run format`)
- [x] Type checking passes (`pyright`)
- [x] Tests added for all new functions
- [x] Both OpenAI and Anthropic tool formats supported
- [x] Backward compatible behavior preserved
- [x] All review comments addressed
2026-02-03 10:36:10 +00:00
Otto
aef6f57cfd fix(scheduler): route db calls through DatabaseManager (#11941)
## Summary

Routes `increment_onboarding_runs` and `cleanup_expired_oauth_tokens`
through the DatabaseManager RPC client instead of calling Prisma
directly.

## Problem

The Scheduler service never connects its Prisma client. While
`add_graph_execution()` in `utils.py` has a fallback that routes through
DatabaseManager when Prisma isn't connected, subsequent calls in the
scheduler were hitting Prisma directly:

- `increment_onboarding_runs()` after successful graph execution
- `cleanup_expired_oauth_tokens()` in the scheduled job

These threw `ClientNotConnectedError`, caught by generic exception
handlers but spamming Sentry (~696K events since December per the
original analysis in #11926).

## Solution

Follow the same pattern as `utils.py`:
1. Add `cleanup_expired_oauth_tokens` to `DatabaseManager` and
`DatabaseManagerAsyncClient`
2. Update scheduler to use `get_database_manager_async_client()` for
both calls

## Changes

- **database.py**: Import and expose `cleanup_expired_oauth_tokens` in
both manager classes
- **scheduler.py**: Use `db.increment_onboarding_runs()` and
`db.cleanup_expired_oauth_tokens()` via the async client
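
The resulting call pattern in the scheduler looks roughly like this (a sketch; the import path and argument shapes are assumptions):

```python
from backend.executor.database import get_database_manager_async_client  # illustrative path

async def after_scheduled_run(user_id: str) -> None:
    db = get_database_manager_async_client()
    # Routed over RPC instead of hitting the (never-connected) Prisma client
    await db.increment_onboarding_runs(user_id)

async def oauth_cleanup_job() -> None:
    db = get_database_manager_async_client()
    await db.cleanup_expired_oauth_tokens()
```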

## Impact

- Eliminates Sentry error spam from scheduler
- Onboarding run counters now actually increment for scheduled
executions
- OAuth token cleanup now actually runs

## Testing

Deploy to staging with scheduled graphs and verify:
1. No more `ClientNotConnectedError` in scheduler logs
2. `UserOnboarding.agentRuns` increments on scheduled runs
3. Expired OAuth tokens get cleaned up

Refs: #11926 (original fix that was closed)
2026-02-03 09:54:49 +00:00
Krzysztof Czerwinski
14cee1670a fix(backend): Prevent leaking Redis connections in ws_api (#11869)
Fixing
https://github.com/Significant-Gravitas/AutoGPT/pull/11297#discussion_r2496833421

### Changes 🏗️

1. `event_bus.py` - Added `close()` method to `AsyncRedisEventBus`
   - Added `__init__` method to track the `_pubsub` instance attribute
   - Added `async def close()` method that closes the PubSub connection safely
   - Modified `listen_events()` to store the pubsub reference in `self._pubsub`

2. `ws_api.py` - Added cleanup in `event_broadcaster`
   - Wrapped the worker coroutines in a try/finally block
   - The finally block calls `close()` on both event buses to ensure cleanup happens on any exit (including exceptions before retry)
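
A condensed sketch of the pattern (built on `redis.asyncio`; the real classes carry more concerns, and the bus names in the usage example are illustrative):

```python
from redis.asyncio import Redis
from redis.asyncio.client import PubSub

class AsyncRedisEventBus:
    def __init__(self, client: Redis, channel: str):
        self._client = client
        self._channel = channel
        self._pubsub: PubSub | None = None  # tracked so close() can find it

    async def listen_events(self):
        self._pubsub = self._client.pubsub()
        await self._pubsub.subscribe(self._channel)
        async for message in self._pubsub.listen():
            if message["type"] == "message":
                yield message["data"]

    async def close(self) -> None:
        """Close the PubSub connection safely; safe to call more than once."""
        if self._pubsub is not None:
            await self._pubsub.close()
            self._pubsub = None

async def event_broadcaster(execution_bus: AsyncRedisEventBus,
                            notification_bus: AsyncRedisEventBus) -> None:
    try:
        ...  # worker coroutines
    finally:
        # Runs on any exit, including exceptions raised before a retry
        await execution_bus.close()
        await notification_bus.close()
```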
2026-02-03 08:07:48 +00:00
Zamil Majdy
d81d1ce024 refactor(backend): extract context window management and fix LLM continuation (#11936)
## Summary

Fixes CoPilot becoming unresponsive after long-running tools complete,
and refactors context window management into a reusable function.

## Problem

After `create_agent` completes, `_generate_llm_continuation()` was
sending ALL messages to OpenRouter without any context compaction. When
conversations exceeded ~50 messages, OpenRouter rejected requests with
`provider_name: 'unknown'` (no provider would accept).

**Evidence:** Langfuse session
[44fbb803-092e-4ebd-b288-852959f4faf5](https://cloud.langfuse.com/project/cmk5qhf210003ad079sd8utjt/sessions/44fbb803-092e-4ebd-b288-852959f4faf5)
showed:
- Successful calls: 32-50 messages, known providers
- Failed calls: 52+ messages, `provider: unknown`, `completion: null`

## Changes

### Refactor: Extract reusable `_manage_context_window()`
- Counts tokens and checks against 120k threshold
- Summarizes old messages while keeping recent 15
- Ensures tool_call/tool_response pairs stay intact
- Progressive truncation if still over limit
- Returns `ContextWindowResult` dataclass with messages, token count,
compaction status, and errors
- Helper `_messages_to_dicts()` reduces code duplication

### Fix: Update `_generate_llm_continuation()`
- Now calls `_manage_context_window()` before making LLM calls
- Adds retry logic with exponential backoff (matching
`_stream_chat_chunks` behavior)

### Cleanup: Update `_stream_chat_chunks()`
- Replaced inline context management with call to
`_manage_context_window()`
- Eliminates code duplication between the two functions

## Testing

- Syntax check: ✅
- Ruff lint: ✅
- Import verification: ✅

## Checklist

- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] My changes generate no new warnings
- [x] I have checked that my changes do not break existing functionality

---------

Co-authored-by: Otto <otto@agpt.co>
2026-02-03 04:41:43 +00:00
Zamil Majdy
2dd341c369 refactor: enrich description with context before calling Agent Generator (#11932)
## Summary
Updates the Agent Generator client to enrich the description with
context before calling, instead of sending `user_instruction` as a
separate parameter.

## Context
Companion PR to Significant-Gravitas/AutoGPT-Agent-Generator#105 which
removes unused parameters from the decompose API.

## Changes
- Enrich `description` with `context` (e.g., clarifying question
answers) before sending
- Remove `user_instruction` from request payload

## How it works
Both the input boxes and the chat box work the same way: the frontend
constructs a formatted message with the answers and sends it as a user
message. The backend then enriches the description with this context
before calling the external Agent Generator service.
2026-02-03 02:31:07 +00:00
Otto
f7350c797a fix(copilot): use messages_dict in fallback context compaction (#11922)
## Summary

Fixes a bug where the fallback path in context compaction passes
`recent_messages` (already sliced) instead of `messages_dict` (full
conversation) to `_ensure_tool_pairs_intact`.

This caused the function to fail to find assistant messages that exist
in the original conversation but were outside the sliced window,
resulting in orphan tool_results being sent to Anthropic and rejected
with:

```
messages.66.content.0: unexpected tool_use_id found in tool_result blocks: toolu_vrtx_019bi1PDvEn7o5ByAxcS3VdA
```

## Changes

- Pass `messages_dict` and `slice_start` (relative to full conversation)
instead of `recent_messages` and `reduced_slice_start` (relative to
already-sliced list)
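
In code terms, the fallback call changes roughly like this (argument names follow the description above; the surrounding call site and return handling are omitted):

```python
# Before: searches only the already-sliced window, so an assistant message
# outside it cannot be found -> its tool_results are treated as orphans
recent_messages = _ensure_tool_pairs_intact(recent_messages, reduced_slice_start)

# After: searches the full conversation, with the slice offset expressed
# relative to it
recent_messages = _ensure_tool_pairs_intact(messages_dict, slice_start)
```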

## Testing

This is a targeted fix for the fallback path. The bug only manifests
when:
1. Token count > 120k (triggers compaction)
2. Initial compaction + summary still exceeds limit (triggers fallback)
3. A tool_result's corresponding assistant is in `messages_dict` but not
in `recent_messages`

## Related

- Fixes SECRT-1861
- Related: SECRT-1839 (original fix that missed this code path)
2026-02-02 13:01:05 +00:00
Guofang.Tang
1081590384 feat(backend): cover webhook ingress URL route (#11747)
### Changes 🏗️

- Add a unit test to verify webhook ingress URL generation matches the
FastAPI route.
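
A sketch of what such a test can look like using FastAPI's route reflection (the route path and names here are illustrative, not the actual ones):

```python
from fastapi import FastAPI

app = FastAPI()

# Illustrative route shape; the real path comes from the platform's webhook router.
@app.post("/webhooks/{provider_name}/ingress")
async def webhook_ingress(provider_name: str):
    return {"ok": True}

def test_ingress_url_matches_route():
    # In the real test, `generated` comes from the URL-generation util under test.
    generated = "/webhooks/github/ingress"
    assert generated == app.url_path_for("webhook_ingress", provider_name="github")
```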

  ### Checklist 📋

  #### For code changes:

  - [x] I have clearly listed my changes in the PR description
  - [x] I have made a test plan
  - [x] I have tested my changes according to the test plan:
- [x] poetry run pytest backend/integrations/webhooks/utils_test.py
--confcutdir=backend/integrations/webhooks

  #### For configuration changes:

  - [x] .env.default is updated or already compatible with my changes
- [x] docker-compose.yml is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under Changes)



## Summary by CodeRabbit

* **Tests**
* Added a unit test that validates webhook ingress URL generation
matches the application's resolved route (scheme, host, and path) for
provider-specific webhook endpoints, improving confidence in routing
behavior and helping prevent regressions.


---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2026-02-01 20:29:15 +00:00
Otto
7e37de8e30 fix: Include graph schemas for marketplace agents in Agent Generator (#11920)
## Problem

When marketplace agents are included in the `library_agents` payload
sent to the Agent Generator service, they were missing required fields
(`graph_id`, `graph_version`, `input_schema`, `output_schema`). This
caused Pydantic validation to fail with HTTP 422 Unprocessable Entity.

**Root cause:** The `MarketplaceAgentSummary` TypedDict had a different
shape than `LibraryAgentInfo` expected by the Agent Generator:
- Agent Generator expects: `graph_id`, `graph_version`, `name`,
`description`, `input_schema`, `output_schema`
- MarketplaceAgentSummary had: `name`, `description`, `sub_heading`,
`creator`, `is_marketplace_agent`

## Solution

1. **Add `agent_graph_id` to `StoreAgent` model** - The field was
already in the database view but not exposed
2. **Include `agentGraphId` in hybrid search SQL query** - Carry the
field through the search CTEs
3. **Update `search_marketplace_agents_for_generation()`** - Now fetches
full graph schemas using `get_graph()` and returns `LibraryAgentSummary`
(same type as library agents)
4. **Update deduplication logic** - Use `graph_id` instead of name for
more accurate deduplication

## Changes

- `backend/api/features/store/model.py`: Add optional `agent_graph_id`
field to `StoreAgent`
- `backend/api/features/store/hybrid_search.py`: Include `agentGraphId`
in SQL query columns
- `backend/api/features/store/db.py`: Map `agentGraphId` when creating
`StoreAgent` objects
- `backend/api/features/chat/tools/agent_generator/core.py`: Update
`search_marketplace_agents_for_generation()` to fetch and include full
graph schemas

## Testing

- [ ] Agent creation on dev with marketplace agents in context
- [ ] Verify no 422 errors from Agent Generator
- [ ] Verify marketplace agents can be used as sub-agents

Fixes: SECRT-1817

---------

Co-authored-by: majdyz <majdyz@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2026-01-31 19:17:36 +00:00
Otto
7ee94d986c docs: add credentials prerequisites to create-basic-agent guide (#11913)
## Summary
Addresses #11785 - users were encountering `openai_api_key_credentials`
errors when following the create-basic-agent guide because it didn't
mention the need to configure API credentials before using AI blocks.

## Changes
Added a **Prerequisites** section to
`docs/platform/create-basic-agent.md` explaining:
- **Cloud users:** Go to Profile → Integrations to add API keys
- **Self-hosted (Docker):** Add keys to `autogpt_platform/backend/.env`
and restart services

Also added a note that the Calculator example doesn't need credentials,
making it a good first test.

## Related
- Issue: #11785
2026-01-31 03:05:31 +00:00
Zamil Majdy
18a1661fa3 feat: add library agent fetching with two-phase search for sub-agent support (#11889)
## Context

When users ask the chat to create agents, they may want to compose
workflows that reuse their existing agents as sub-agents. For this to
work, the Agent Generator service needs to know what agents the user has
available.

**Challenge:** Users can have large libraries with many agents. Fetching
all of them would be slow and provide too much context to the LLM.

## Solution

This PR implements **search-based library agent fetching** with a
**two-phase search** strategy:

1. **Phase 1 (Initial Search):** When the user describes their goal, we
search for relevant library agents using the goal as the search query
2. **Phase 2 (Step-Based Enrichment):** After the goal is decomposed
into steps, we extract keywords from those steps and search for
additional relevant agents

This ensures we find agents that are relevant to both the high-level
goal AND the specific steps identified.

### Example Flow

```
User goal: "Create an agent that fetches weather and sends a summary email"

Phase 1: Search for "weather email summary" → finds "Weather Fetcher" agent
Phase 2: After decomposition identifies steps like "send email notification"
         → searches "send email notification" → finds "Gmail Sender" agent
```

### Changes

**Library Agent Fetching:**
- `get_library_agents_for_generation()` - Search-based fetching from
user's library
- `search_marketplace_agents_for_generation()` - Search public
marketplace
- `get_all_relevant_agents_for_generation()` - Combines both with
deduplication

**Two-Phase Search:**
- `extract_search_terms_from_steps()` - Extracts keywords from
decomposed steps
- `enrich_library_agents_from_steps()` - Searches for additional agents
based on steps
- Integrated into `create_agent.py` as "Step 1.5" after goal
decomposition
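
A sketch of how the two phases can be combined with graph_id-based deduplication (illustrative; not the actual helper):

```python
def merge_agent_results(phase1: list[dict], phase2: list[dict]) -> list[dict]:
    """Combine goal-based and step-based search results, keeping the first
    occurrence of each graph_id so the same agent is only sent once."""
    merged: dict[str, dict] = {}
    for agent in phase1 + phase2:  # phase 1 first, so initial-search hits win ties
        merged.setdefault(agent["graph_id"], agent)
    return list(merged.values())
```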

**Type Safety:**
- Added `TypedDict` definitions: `LibraryAgentSummary`,
`MarketplaceAgentSummary`, `DecompositionStep`, `DecompositionResult`

### Design Decisions

- **Search-based, not fetch-all:** Scalable for large libraries
- **Library agents prioritized:** They have full schemas; marketplace
agents have basic info only
- **Deduplication by name and graph_id:** Prevents duplicates across
searches
- **Graceful degradation:** Failures don't block agent generation
- **Limited to 3 search terms:** Avoids excessive API calls during
enrichment

## Related PR
- Agent Generator:
https://github.com/Significant-Gravitas/AutoGPT-Agent-Generator/pull/103

## Test plan
- [x] `test_library_agents.py` - 19 tests covering all new functions
- [x] `test_service.py` - 4 tests for library_agents passthrough
- [ ] Integration test: Create agent with library sub-agent composition
2026-01-31 00:18:21 +00:00
Otto
b72521daa9 fix(readme): update broken self-hosting docs link (#11911)
## Summary
The self-hosting guide link in README.md was broken.

**Old link:** `https://docs.agpt.co/platform/getting-started/`
- Redirects to `https://agpt.co/docs/platform/getting-started`
- Returns HTTP 400 

**New link:**
`https://agpt.co/docs/platform/getting-started/getting-started`
- Works correctly 

## Changes
- Updated the self-hosting guide URL in README.md

Fixes #OPEN-2973
2026-01-30 22:59:45 +00:00
Reinier van der Leer
350ad3591b fix(backend/chat): Filter credentials for graph execution by scopes (#11881)
[SECRT-1842: run_agent tool does not correctly use credentials - agents
fail with insufficient auth
scopes](https://linear.app/autogpt/issue/SECRT-1842)

### Changes 🏗️

- Include scopes in credentials filter in
`backend.api.features.chat.tools.utils.match_user_credentials_to_graph`
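
The fix amounts to also requiring a scope match when pairing stored credentials with a graph's credential inputs, roughly (a sketch, not the actual filter):

```python
def credential_matches(credential, required) -> bool:
    # A provider match alone is not enough: the credential must also carry
    # every scope the graph's blocks need, otherwise execution fails
    # downstream with insufficient auth scopes.
    return (
        credential.provider == required.provider
        and set(required.scopes or []) <= set(credential.scopes or [])
    )
```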

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI must pass
- It's broken now and a simple change so we'll test in the dev
deployment
2026-01-30 11:01:51 +00:00
Bently
de0ec3d388 chore(llm): remove deprecated Claude 3.7 Sonnet model with migration and defensive handling (#11841)
## Summary
Remove `claude-3-7-sonnet-20250219` from LLM model definitions ahead of
Anthropic's API retirement, with comprehensive migration and defensive
error handling.

## Background
Anthropic is retiring Claude 3.7 Sonnet (`claude-3-7-sonnet-20250219`)
on **February 19, 2026 at 9:00 AM PT**. This PR removes the model from
the platform and migrates existing users to prevent service
interruptions.

## Changes

### Code Changes
- Remove `CLAUDE_3_7_SONNET` enum member from `LlmModel` in `llm.py`
- Remove corresponding `ModelMetadata` entry
- Remove `CLAUDE_3_7_SONNET` from `StagehandRecommendedLlmModel` enum
- Remove `CLAUDE_3_7_SONNET` from block cost config
- Add `CLAUDE_4_5_SONNET` to `StagehandRecommendedLlmModel` enum
- Update Stagehand block defaults from `CLAUDE_3_7_SONNET` to
`CLAUDE_4_5_SONNET` (staying in Claude family)
- Add defensive error handling in `CredentialsFieldInfo.discriminate()`
for deprecated model values
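
The defensive handling has roughly this shape (a sketch; the real code lives in `CredentialsFieldInfo.discriminate()`, and `MODEL_METADATA` is an illustrative name):

```python
def lookup_model_metadata(model_value: str):
    try:
        return MODEL_METADATA[LlmModel(model_value)]
    except (KeyError, ValueError) as e:
        # Previously a removed model surfaced as a bare KeyError
        raise ValueError(
            f"Model '{model_value}' has been deprecated or removed; "
            "please select a currently supported model."
        ) from e
```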

### Database Migration
- Adds migration `20260126120000_migrate_claude_3_7_to_4_5_sonnet`
- Migrates `AgentNode.constantInput` model references
- Migrates `AgentNodeExecutionInputOutput.data` preset overrides

### Documentation
- Updated `docs/integrations/block-integrations/llm.md` to remove
deprecated model
- Updated `docs/integrations/block-integrations/stagehand/blocks.md` to
remove deprecated model and add Claude 4.5 Sonnet

## Notes
- Agent JSON files in `autogpt_platform/backend/agents/` still reference
this model in their provider mappings. These are auto-generated and
should be regenerated separately.

## Testing
- [ ] Verify LLM block still functions with remaining models
- [ ] Confirm no import errors in affected files
- [ ] Verify migration runs successfully
- [ ] Verify deprecated model gives helpful error message instead of
KeyError
2026-01-30 08:40:55 +00:00
Otto
7cb1e588b0 fix(frontend): Refocus ChatInput after voice transcription completes (#11893)
## Summary
Refocuses the chat input textarea after voice transcription finishes,
allowing users to immediately use `spacebar+enter` to record and send
their prompt.

## Changes
- Added `inputId` parameter to `useVoiceRecording` hook
- After transcription completes, the input is automatically focused
- This improves the voice input UX flow

## Testing
1. Click mic button or press spacebar to record voice
2. Record a message and stop
3. After transcription completes, the input should be focused
4. User can now press Enter to send or spacebar to record again

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
2026-01-30 14:49:05 +07:00
Otto
582c6cad36 fix(e2e): Make E2E test data deterministic and fix flaky tests (#11890)
## Summary
Fixes flaky E2E marketplace and library tests that were causing PRs to
be removed from the merge queue.

## Root Cause
1. **Test data was probabilistic** - `e2e_test_data.py` used random
chances (40% approve, then 20-50% feature), which could result in 0
featured agents
2. **Library pagination threshold wrong** - Checked `>= 10`, but page
size is 20
3. **Fixed timeouts** - Used `waitForTimeout(2000)` /
`waitForTimeout(10000)` instead of proper waits

## Changes

### Backend (`e2e_test_data.py`)
- Add guaranteed minimums: 8 featured agents, 5 featured creators, 10
top agents
- First N submissions are deterministically approved and featured
- Increase agents per user from 15 → 25 (for pagination with
page_size=20)
- Fix library agent creation to use constants instead of hardcoded `10`

### Frontend Tests
- `library.spec.ts`: Fix pagination threshold to `PAGE_SIZE` (20)
- `library.page.ts`: Replace 2s timeout with `networkidle` +
`waitForFunction`
- `marketplace.page.ts`: Add `networkidle` wait, 30s waits in
`getFirst*` methods
- `marketplace.spec.ts`: Replace 10s timeout with `waitForFunction`
- `marketplace-creator.spec.ts`: Add `networkidle` + element waits

## Related
- Closes SECRT-1848, SECRT-1849
- Should unblock #11841 and other PRs in merge queue

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
2026-01-30 05:12:35 +00:00
Nicholas Tindle
3b822cdaf7 chore(branchlet): Remove docs pip install from postCreateCmd (#11883)
### Changes 🏗️

- Removed `cd docs && pip install -r requirements.txt` from
`postCreateCmd` in `.branchlet.json`
- Docs dependencies will no longer be auto-installed during branchlet
worktree creation

### Rationale

The docs setup step was adding unnecessary overhead to the worktree
creation process. Developers who need to work on documentation can
manually install the docs requirements when needed.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified branchlet worktree creation still works without the docs
pip install step

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2026-01-30 00:31:34 +00:00
Zamil Majdy
b2eb4831bd feat(chat): improve agent generator error propagation (#11884)
## Summary
- Add helper functions in `service.py` to create standardized error
responses with `error_type` classification
- Update service functions to return error dicts instead of `None`,
preserving error details from the Agent Generator microservice
- Update `core.py` to pass through error responses properly
- Update `create_agent.py` to handle error responses with user-friendly
messages based on error type

## Error Types Now Propagated
| Error Type | Description | User Message |
|------------|-------------|--------------|
| `llm_parse_error` | LLM returned unparseable response | "The AI had trouble understanding this request" |
| `llm_timeout` / `timeout` | Request timed out | "The request took too long" |
| `llm_rate_limit` / `rate_limit` | Rate limited | "The service is currently busy" |
| `validation_error` | Agent validation failed | "The generated agent failed validation" |
| `connection_error` | Could not connect to Agent Generator | Generic error message |
| `http_error` | HTTP error from Agent Generator | Generic error message |
| `unknown` | Unclassified error | Generic error message |
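
The user-facing mapping boils down to something like this (a sketch mirroring the table above; the generic fallback wording is illustrative):

```python
USER_MESSAGES = {
    "llm_parse_error": "The AI had trouble understanding this request",
    "llm_timeout": "The request took too long",
    "timeout": "The request took too long",
    "llm_rate_limit": "The service is currently busy",
    "rate_limit": "The service is currently busy",
    "validation_error": "The generated agent failed validation",
}

def user_message(error_type: str) -> str:
    # connection_error, http_error, and unknown all fall back to a generic message
    return USER_MESSAGES.get(error_type, "Something went wrong while generating your agent")
```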

## Motivation
This enables better debugging for issues like SECRT-1817 where
decomposition failed due to transient LLM errors but the root cause was
unclear in the logs. Now:
1. Error details from the Agent Generator microservice are preserved
2. Users get more helpful error messages based on error type
3. Debugging is easier with `error_type` in response details

## Related PR
- Agent Generator side:
https://github.com/Significant-Gravitas/AutoGPT-Agent-Generator/pull/102

## Test Plan
- [ ] Test decomposition with various error scenarios (timeout, parse
error)
- [ ] Verify user-friendly messages are shown based on error type
- [ ] Check that error details are logged properly
2026-01-29 19:53:40 +00:00
Reinier van der Leer
4cd5da678d refactor(claude): Split autogpt_platform/CLAUDE.md into project-specific files (#11788)
Split `autogpt_platform/CLAUDE.md` into project-specific files, to make
the scope of the instructions clearer.

Also, some minor improvements:

- Change references to other Markdown files to @file/path.md syntax that
Claude recognizes
- Update ambiguous/incorrect/outdated instructions
- Remove trailing slashes
- Fix broken file path references in other docs (including comments)
2026-01-29 17:33:02 +00:00
Ubbe
b94c83aacc feat(frontend): Copilot speech to text via Whisper model (#11871)
## Changes 🏗️


https://github.com/user-attachments/assets/d9c12ac0-625c-4b38-8834-e494b5eda9c0

Add a "speech to text" feature in the Chat input fox of Copilot, similar
as what you have in ChatGPT.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run locally and try the speech to text feature as part of the chat
input box

### For configuration changes:

We need to add `OPENAI_API_KEY=` to Vercel (used in the frontend), both
in Dev and Prod.

- [x] `.env.default` is updated or already compatible with my changes

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 17:46:36 +07:00
Nicholas Tindle
7668c17d9c feat(platform): add User Workspace for persistent CoPilot file storage (#11867)
Implements persistent User Workspace storage for CoPilot, enabling
blocks to save and retrieve files across sessions. Files are stored in
session-scoped virtual paths (`/sessions/{session_id}/`).

Fixes SECRT-1833

### Changes 🏗️

**Database & Storage:**
- Add `UserWorkspace` and `UserWorkspaceFile` Prisma models
- Implement `WorkspaceStorageBackend` abstraction (GCS for cloud, local
filesystem for self-hosted)
- Add `workspace_id` and `session_id` fields to `ExecutionContext`
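
The storage abstraction is roughly this shape (a sketch of the described design; method names are assumptions):

```python
from abc import ABC, abstractmethod
from pathlib import Path

class WorkspaceStorageBackend(ABC):
    """Stores file contents; metadata lives in the UserWorkspaceFile table."""

    @abstractmethod
    async def write(self, workspace_id: str, path: str, data: bytes) -> None: ...

    @abstractmethod
    async def read(self, workspace_id: str, path: str) -> bytes: ...

class LocalWorkspaceStorage(WorkspaceStorageBackend):
    """Local filesystem backend for self-hosted deployments (cloud uses GCS)."""

    def __init__(self, root: Path):
        self._root = root

    async def write(self, workspace_id: str, path: str, data: bytes) -> None:
        # Virtual paths look like /sessions/{session_id}/file.png
        target = self._root / workspace_id / path.lstrip("/")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(data)

    async def read(self, workspace_id: str, path: str) -> bytes:
        return (self._root / workspace_id / path.lstrip("/")).read_bytes()
```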

**Backend API:**
- Add REST endpoints: `GET/POST /api/workspace/files`, `GET/DELETE
/api/workspace/files/{id}`, `GET /api/workspace/files/{id}/download`
- Add CoPilot tools: `list_workspace_files`, `read_workspace_file`,
`write_workspace_file`
- Integrate workspace storage into `store_media_file()` - returns
`workspace://file-id` references

**Block Updates:**
- Refactor all file-handling blocks to use unified `ExecutionContext`
parameter
- Update media-generating blocks to persist outputs to workspace
(AIImageGenerator, AIImageCustomizer, FluxKontext, TalkingHead, FAL
video, Bannerbear, etc.)

**Frontend:**
- Render `workspace://` image references in chat via proxy endpoint
- Add "AI cannot see this image" overlay indicator

**CoPilot Context Mapping:**
- Session = Agent (graph_id) = Run (graph_exec_id)
- Files scoped to `/sessions/{session_id}/`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Create CoPilot session, generate image with AIImageGeneratorBlock
  - [ ] Verify image returns `workspace://file-id` (not base64)
  - [ ] Verify image renders in chat with visibility indicator
  - [ ] Verify workspace files persist across sessions
  - [ ] Test list/read/write workspace files via CoPilot tools
  - [ ] Test local storage backend for self-hosted deployments

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.ai/code)

---

> [!NOTE]
> **Medium Risk**
> Introduces a new persistent file-storage surface area (DB tables,
storage backends, download API, and chat tools) and rewires
`store_media_file()`/block execution context across many blocks, so
regressions could impact file handling, access control, or storage
costs.
> 
> **Overview**
> Adds a **persistent per-user Workspace** (new
`UserWorkspace`/`UserWorkspaceFile` models plus `WorkspaceManager` +
`WorkspaceStorageBackend` with GCS/local implementations) and wires it
into the API via a new `/api/workspace/files/{file_id}/download` route
(including header-sanitized `Content-Disposition`) and shutdown
lifecycle hooks.
> 
> Extends `ExecutionContext` to carry execution identity +
`workspace_id`/`session_id`, updates executor tooling to clone
node-specific contexts, and updates `run_block` (CoPilot) to create a
session-scoped workspace and synthetic graph/run/node IDs.
> 
> Refactors `store_media_file()` to require `execution_context` +
`return_format` and to support `workspace://` references; migrates many
media/file-handling blocks and related tests to the new API and to
persist generated media as `workspace://...` (or fall back to data URIs
outside CoPilot), and adds CoPilot chat tools for
listing/reading/writing/deleting workspace files with safeguards against
context bloat.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2026-01-29 05:49:47 +00:00
664 changed files with 56537 additions and 19928 deletions

View File

@@ -29,8 +29,7 @@
"postCreateCmd": [
"cd autogpt_platform/autogpt_libs && poetry install",
"cd autogpt_platform/backend && poetry install && poetry run prisma generate",
"cd autogpt_platform/frontend && pnpm install",
"cd docs && pip install -r requirements.txt"
"cd autogpt_platform/frontend && pnpm install"
],
"terminalCommand": "code .",
"deleteBranchWithWorktree": false

View File

@@ -5,42 +5,13 @@
!docs/
# Platform - Libs
!autogpt_platform/autogpt_libs/autogpt_libs/
!autogpt_platform/autogpt_libs/pyproject.toml
!autogpt_platform/autogpt_libs/poetry.lock
!autogpt_platform/autogpt_libs/README.md
!autogpt_platform/autogpt_libs/
# Platform - Backend
!autogpt_platform/backend/backend/
!autogpt_platform/backend/test/e2e_test_data.py
!autogpt_platform/backend/migrations/
!autogpt_platform/backend/schema.prisma
!autogpt_platform/backend/pyproject.toml
!autogpt_platform/backend/poetry.lock
!autogpt_platform/backend/README.md
!autogpt_platform/backend/.env
!autogpt_platform/backend/gen_prisma_types_stub.py
# Platform - Market
!autogpt_platform/market/market/
!autogpt_platform/market/scripts.py
!autogpt_platform/market/schema.prisma
!autogpt_platform/market/pyproject.toml
!autogpt_platform/market/poetry.lock
!autogpt_platform/market/README.md
!autogpt_platform/backend/
# Platform - Frontend
!autogpt_platform/frontend/src/
!autogpt_platform/frontend/public/
!autogpt_platform/frontend/scripts/
!autogpt_platform/frontend/package.json
!autogpt_platform/frontend/pnpm-lock.yaml
!autogpt_platform/frontend/tsconfig.json
!autogpt_platform/frontend/README.md
## config
!autogpt_platform/frontend/*.config.*
!autogpt_platform/frontend/.env.*
!autogpt_platform/frontend/.env
!autogpt_platform/frontend/
# Classic - AutoGPT
!classic/original_autogpt/autogpt/
@@ -64,6 +35,38 @@
# Classic - Frontend
!classic/frontend/build/web/
# Explicitly re-ignore some folders
.*
**/__pycache__
# Explicitly re-ignore unwanted files from whitelisted directories
# Note: These patterns MUST come after the whitelist rules to take effect
# Hidden files and directories (but keep frontend .env files needed for build)
**/.*
!autogpt_platform/frontend/.env
!autogpt_platform/frontend/.env.default
!autogpt_platform/frontend/.env.production
# Python artifacts
**/__pycache__/
**/*.pyc
**/*.pyo
**/.venv/
**/.ruff_cache/
**/.pytest_cache/
**/.coverage
**/htmlcov/
# Node artifacts
**/node_modules/
**/.next/
**/storybook-static/
**/playwright-report/
**/test-results/
# Build artifacts
**/dist/
**/build/
!autogpt_platform/frontend/src/**/build/
**/target/
# Logs and temp files
**/*.log
**/*.tmp

View File

@@ -160,7 +160,7 @@ pnpm storybook # Start component development server
**Backend Entry Points:**
- `backend/backend/server/server.py` - FastAPI application setup
- `backend/backend/api/rest_api.py` - FastAPI application setup
- `backend/backend/data/` - Database models and user management
- `backend/blocks/` - Agent execution blocks and logic
@@ -219,7 +219,7 @@ Agents are built using a visual block-based system where each block performs a s
### API Development
1. Update routes in `/backend/backend/server/routers/`
1. Update routes in `/backend/backend/api/features/`
2. Add/update Pydantic models in same directory
3. Write tests alongside route files
4. For `data/*.py` changes, validate user ID checks
@@ -285,7 +285,7 @@ Agents are built using a visual block-based system where each block performs a s
### Security Guidelines
**Cache Protection Middleware** (`/backend/backend/server/middleware/security.py`):
**Cache Protection Middleware** (`/backend/backend/api/middleware/security.py`):
- Default: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses allow list approach for cacheable paths (static assets, health checks, public pages)

1229
.github/scripts/detect_overlaps.py vendored Normal file

File diff suppressed because it is too large.

View File

@@ -49,7 +49,7 @@ jobs:
- name: Create PR ${{ env.BUILD_BRANCH }} -> ${{ github.ref_name }}
if: github.event_name == 'push'
uses: peter-evans/create-pull-request@v7
uses: peter-evans/create-pull-request@v8
with:
add-paths: classic/frontend/build/web
base: ${{ github.ref_name }}

View File

@@ -22,7 +22,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
ref: ${{ github.event.workflow_run.head_branch }}
fetch-depth: 0
@@ -40,9 +40,51 @@ jobs:
git checkout -b "$BRANCH_NAME"
echo "branch_name=$BRANCH_NAME" >> $GITHUB_OUTPUT
# Backend Python/Poetry setup (so Claude can run linting/tests)
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v5
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install Python dependencies
working-directory: autogpt_platform/backend
run: poetry install
- name: Generate Prisma Client
working-directory: autogpt_platform/backend
run: poetry run prisma generate && poetry run gen-prisma-stub
# Frontend Node.js/pnpm setup (so Claude can run linting/tests)
- name: Enable corepack
run: corepack enable
- name: Set up Node.js
uses: actions/setup-node@v6
with:
node-version: "22"
cache: "pnpm"
cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
run: pnpm install --frozen-lockfile
- name: Get CI failure details
id: failure_details
uses: actions/github-script@v7
uses: actions/github-script@v8
with:
script: |
const run = await github.rest.actions.getWorkflowRun({

View File

@@ -30,7 +30,7 @@ jobs:
actions: read # Required for CI access
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 1
@@ -41,7 +41,7 @@ jobs:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
@@ -77,27 +77,15 @@ jobs:
run: poetry run prisma generate && poetry run gen-prisma-stub
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "22"
- name: Enable corepack
run: corepack enable
- name: Set pnpm store directory
run: |
pnpm config set store-dir ~/.pnpm-store
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
- name: Set up Node.js
uses: actions/setup-node@v6
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
node-version: "22"
cache: "pnpm"
cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
@@ -124,7 +112,7 @@ jobs:
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes
@@ -309,6 +297,7 @@ jobs:
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
allowed_bots: "dependabot[bot]"
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*)"
prompt: |

View File

@@ -40,7 +40,7 @@ jobs:
actions: read # Required for CI access
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 1
@@ -57,7 +57,7 @@ jobs:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
@@ -93,27 +93,15 @@ jobs:
run: poetry run prisma generate && poetry run gen-prisma-stub
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "22"
- name: Enable corepack
run: corepack enable
- name: Set pnpm store directory
run: |
pnpm config set store-dir ~/.pnpm-store
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
- name: Set up Node.js
uses: actions/setup-node@v6
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
node-version: "22"
cache: "pnpm"
cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml
- name: Install JavaScript dependencies
working-directory: autogpt_platform/frontend
@@ -140,7 +128,7 @@ jobs:
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes

View File

@@ -58,11 +58,11 @@ jobs:
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
uses: github/codeql-action/init@v4
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
@@ -93,6 +93,6 @@ jobs:
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
uses: github/codeql-action/analyze@v4
with:
category: "/language:${{matrix.language}}"

View File

@@ -27,7 +27,7 @@ jobs:
# If you do not check out your code, Copilot will do this for you.
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 0
submodules: true
@@ -39,7 +39,7 @@ jobs:
python-version: "3.11" # Use standard version matching CI
- name: Set up Python dependency cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
@@ -76,7 +76,7 @@ jobs:
# Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
- name: Set up Node.js
uses: actions/setup-node@v4
uses: actions/setup-node@v6
with:
node-version: "22"
@@ -89,7 +89,7 @@ jobs:
echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
- name: Cache frontend dependencies
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.pnpm-store
key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
@@ -132,7 +132,7 @@ jobs:
# Phase 1: Cache and load Docker images for faster setup
- name: Set up Docker image cache
id: docker-cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/docker-cache
# Use a versioned key for cache invalidation when image list changes

View File

@@ -23,7 +23,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 1
@@ -33,7 +33,7 @@ jobs:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

View File

@@ -7,6 +7,10 @@ on:
- "docs/integrations/**"
- "autogpt_platform/backend/backend/blocks/**"
concurrency:
group: claude-docs-review-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
claude-review:
# Only run for PRs from members/collaborators
@@ -23,7 +27,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 0
@@ -33,7 +37,7 @@ jobs:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
@@ -91,5 +95,35 @@ jobs:
3. Read corresponding documentation files to verify accuracy
4. Provide your feedback as a PR comment
## IMPORTANT: Comment Marker
Start your PR comment with exactly this HTML comment marker on its own line:
<!-- CLAUDE_DOCS_REVIEW -->
This marker is used to identify and replace your comment on subsequent runs.
Be constructive and specific. If everything looks good, say so!
If there are issues, explain what's wrong and suggest how to fix it.
- name: Delete old Claude review comments
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Get all comment IDs with our marker, sorted by creation date (oldest first)
COMMENT_IDS=$(gh api \
repos/${{ github.repository }}/issues/${{ github.event.pull_request.number }}/comments \
--jq '[.[] | select(.body | contains("<!-- CLAUDE_DOCS_REVIEW -->"))] | sort_by(.created_at) | .[].id')
# Count comments
COMMENT_COUNT=$(echo "$COMMENT_IDS" | grep -c . || true)
if [ "$COMMENT_COUNT" -gt 1 ]; then
# Delete all but the last (newest) comment
echo "$COMMENT_IDS" | head -n -1 | while read -r COMMENT_ID; do
if [ -n "$COMMENT_ID" ]; then
echo "Deleting old review comment: $COMMENT_ID"
gh api -X DELETE repos/${{ github.repository }}/issues/comments/$COMMENT_ID
fi
done
else
echo "No old review comments to clean up"
fi

View File

@@ -28,7 +28,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 1
@@ -38,7 +38,7 @@ jobs:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

View File

@@ -25,7 +25,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
ref: ${{ github.event.inputs.git_ref || github.ref_name }}
@@ -52,7 +52,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Trigger deploy workflow
uses: peter-evans/repository-dispatch@v3
uses: peter-evans/repository-dispatch@v4
with:
token: ${{ secrets.DEPLOY_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure

View File

@@ -17,7 +17,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
ref: ${{ github.ref_name || 'master' }}
@@ -45,7 +45,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Trigger deploy workflow
uses: peter-evans/repository-dispatch@v3
uses: peter-evans/repository-dispatch@v4
with:
token: ${{ secrets.DEPLOY_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure

View File

@@ -41,13 +41,18 @@ jobs:
ports:
- 6379:6379
rabbitmq:
image: rabbitmq:3.12-management
image: rabbitmq:4.1.4
ports:
- 5672:5672
- 15672:15672
env:
RABBITMQ_DEFAULT_USER: ${{ env.RABBITMQ_DEFAULT_USER }}
RABBITMQ_DEFAULT_PASS: ${{ env.RABBITMQ_DEFAULT_PASS }}
options: >-
--health-cmd "rabbitmq-diagnostics -q ping"
--health-interval 30s
--health-timeout 10s
--health-retries 5
--health-start-period 10s
clamav:
image: clamav/clamav-debian:latest
ports:
@@ -68,7 +73,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 0
submodules: true
@@ -88,7 +93,7 @@ jobs:
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
- name: Set up Python dependency cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

View File

@@ -17,7 +17,7 @@ jobs:
- name: Check comment permissions and deployment status
id: check_status
if: github.event_name == 'issue_comment' && github.event.issue.pull_request
uses: actions/github-script@v7
uses: actions/github-script@v8
with:
script: |
const commentBody = context.payload.comment.body.trim();
@@ -55,7 +55,7 @@ jobs:
- name: Post permission denied comment
if: steps.check_status.outputs.permission_denied == 'true'
uses: actions/github-script@v7
uses: actions/github-script@v8
with:
script: |
await github.rest.issues.createComment({
@@ -68,7 +68,7 @@ jobs:
- name: Get PR details for deployment
id: pr_details
if: steps.check_status.outputs.should_deploy == 'true' || steps.check_status.outputs.should_undeploy == 'true'
uses: actions/github-script@v7
uses: actions/github-script@v8
with:
script: |
const pr = await github.rest.pulls.get({
@@ -82,7 +82,7 @@ jobs:
- name: Dispatch Deploy Event
if: steps.check_status.outputs.should_deploy == 'true'
uses: peter-evans/repository-dispatch@v3
uses: peter-evans/repository-dispatch@v4
with:
token: ${{ secrets.DISPATCH_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
@@ -98,7 +98,7 @@ jobs:
- name: Post deploy success comment
if: steps.check_status.outputs.should_deploy == 'true'
uses: actions/github-script@v7
uses: actions/github-script@v8
with:
script: |
await github.rest.issues.createComment({
@@ -110,7 +110,7 @@ jobs:
- name: Dispatch Undeploy Event (from comment)
if: steps.check_status.outputs.should_undeploy == 'true'
uses: peter-evans/repository-dispatch@v3
uses: peter-evans/repository-dispatch@v4
with:
token: ${{ secrets.DISPATCH_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
@@ -126,7 +126,7 @@ jobs:
- name: Post undeploy success comment
if: steps.check_status.outputs.should_undeploy == 'true'
uses: actions/github-script@v7
uses: actions/github-script@v8
with:
script: |
await github.rest.issues.createComment({
@@ -139,7 +139,7 @@ jobs:
- name: Check deployment status on PR close
id: check_pr_close
if: github.event_name == 'pull_request' && github.event.action == 'closed'
uses: actions/github-script@v7
uses: actions/github-script@v8
with:
script: |
const comments = await github.rest.issues.listComments({
@@ -168,7 +168,7 @@ jobs:
github.event_name == 'pull_request' &&
github.event.action == 'closed' &&
steps.check_pr_close.outputs.should_undeploy == 'true'
uses: peter-evans/repository-dispatch@v3
uses: peter-evans/repository-dispatch@v4
with:
token: ${{ secrets.DISPATCH_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
@@ -187,7 +187,7 @@ jobs:
github.event_name == 'pull_request' &&
github.event.action == 'closed' &&
steps.check_pr_close.outputs.should_undeploy == 'true'
uses: actions/github-script@v7
uses: actions/github-script@v8
with:
script: |
await github.rest.issues.createComment({

View File

@@ -6,10 +6,16 @@ on:
paths:
- ".github/workflows/platform-frontend-ci.yml"
- "autogpt_platform/frontend/**"
- "autogpt_platform/backend/Dockerfile"
- "autogpt_platform/docker-compose.yml"
- "autogpt_platform/docker-compose.platform.yml"
pull_request:
paths:
- ".github/workflows/platform-frontend-ci.yml"
- "autogpt_platform/frontend/**"
- "autogpt_platform/backend/Dockerfile"
- "autogpt_platform/docker-compose.yml"
- "autogpt_platform/docker-compose.platform.yml"
merge_group:
workflow_dispatch:
@@ -26,34 +32,31 @@ jobs:
setup:
runs-on: ubuntu-latest
outputs:
cache-key: ${{ steps.cache-key.outputs.key }}
components-changed: ${{ steps.filter.outputs.components }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Set up Node.js
uses: actions/setup-node@v4
- name: Check for component changes
uses: dorny/paths-filter@v3
id: filter
with:
node-version: "22.18.0"
filters: |
components:
- 'autogpt_platform/frontend/src/components/**'
- name: Enable corepack
run: corepack enable
- name: Generate cache key
id: cache-key
run: echo "key=${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}" >> $GITHUB_OUTPUT
- name: Cache dependencies
uses: actions/cache@v4
- name: Set up Node
uses: actions/setup-node@v6
with:
path: ~/.pnpm-store
key: ${{ steps.cache-key.outputs.key }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
node-version: "22.18.0"
cache: "pnpm"
cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml
- name: Install dependencies
- name: Install dependencies to populate cache
run: pnpm install --frozen-lockfile
lint:
@@ -62,24 +65,17 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "22.18.0"
uses: actions/checkout@v6
- name: Enable corepack
run: corepack enable
- name: Restore dependencies cache
uses: actions/cache@v4
- name: Set up Node
uses: actions/setup-node@v6
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
node-version: "22.18.0"
cache: "pnpm"
cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml
- name: Install dependencies
run: pnpm install --frozen-lockfile
@@ -90,31 +86,27 @@ jobs:
chromatic:
runs-on: ubuntu-latest
needs: setup
# Only run on dev branch pushes or PRs targeting dev
if: github.ref == 'refs/heads/dev' || github.base_ref == 'dev'
# Disabled: to re-enable, remove 'false &&' from the condition below
if: >-
false
&& (github.ref == 'refs/heads/dev' || github.base_ref == 'dev')
&& needs.setup.outputs.components-changed == 'true'
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable
- name: Restore dependencies cache
uses: actions/cache@v4
- name: Set up Node
uses: actions/setup-node@v6
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
node-version: "22.18.0"
cache: "pnpm"
cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml
- name: Install dependencies
run: pnpm install --frozen-lockfile
@@ -129,30 +121,20 @@ jobs:
exitOnceUploaded: true
e2e_test:
name: end-to-end tests
runs-on: big-boi
needs: setup
strategy:
fail-fast: false
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable
- name: Copy default supabase .env
- name: Set up Platform - Copy default supabase .env
run: |
cp ../.env.default ../.env
- name: Copy backend .env and set OpenAI API key
- name: Set up Platform - Copy backend .env and set OpenAI API key
run: |
cp ../backend/.env.default ../backend/.env
echo "OPENAI_INTERNAL_API_KEY=${{ secrets.OPENAI_API_KEY }}" >> ../backend/.env
@@ -160,77 +142,125 @@ jobs:
# Used by E2E test data script to generate embeddings for approved store agents
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
- name: Set up Docker Buildx
- name: Set up Platform - Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Cache Docker layers
uses: actions/cache@v4
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-frontend-test-${{ hashFiles('autogpt_platform/docker-compose.yml', 'autogpt_platform/backend/Dockerfile', 'autogpt_platform/backend/pyproject.toml', 'autogpt_platform/backend/poetry.lock') }}
restore-keys: |
${{ runner.os }}-buildx-frontend-test-
driver: docker-container
driver-opts: network=host
- name: Run docker compose
- name: Set up Platform - Expose GHA cache to docker buildx CLI
uses: crazy-max/ghaction-github-runtime@v3
- name: Set up Platform - Build Docker images (with cache)
working-directory: autogpt_platform
run: |
NEXT_PUBLIC_PW_TEST=true docker compose -f ../docker-compose.yml up -d
pip install pyyaml
# Resolve extends and generate a flat compose file that bake can understand
docker compose -f docker-compose.yml config > docker-compose.resolved.yml
# Add cache configuration to the resolved compose file
python ../.github/workflows/scripts/docker-ci-fix-compose-build-cache.py \
--source docker-compose.resolved.yml \
--cache-from "type=gha" \
--cache-to "type=gha,mode=max" \
--backend-hash "${{ hashFiles('autogpt_platform/backend/Dockerfile', 'autogpt_platform/backend/poetry.lock', 'autogpt_platform/backend/backend') }}" \
--frontend-hash "${{ hashFiles('autogpt_platform/frontend/Dockerfile', 'autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/src') }}" \
--git-ref "${{ github.ref }}"
# Build with bake using the resolved compose file (now includes cache config)
docker buildx bake --allow=fs.read=.. -f docker-compose.resolved.yml --load
env:
DOCKER_BUILDKIT: 1
BUILDX_CACHE_FROM: type=local,src=/tmp/.buildx-cache
BUILDX_CACHE_TO: type=local,dest=/tmp/.buildx-cache-new,mode=max
NEXT_PUBLIC_PW_TEST: true
- name: Move cache
run: |
rm -rf /tmp/.buildx-cache
if [ -d "/tmp/.buildx-cache-new" ]; then
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
fi
- name: Set up tests - Cache E2E test data
id: e2e-data-cache
uses: actions/cache@v5
with:
path: /tmp/e2e_test_data.sql
key: e2e-test-data-${{ hashFiles('autogpt_platform/backend/test/e2e_test_data.py', 'autogpt_platform/backend/migrations/**', '.github/workflows/platform-frontend-ci.yml') }}
- name: Wait for services to be ready
- name: Set up Platform - Start Supabase DB + Auth
run: |
docker compose -f ../docker-compose.resolved.yml up -d db auth --no-build
echo "Waiting for database to be ready..."
timeout 60 sh -c 'until docker compose -f ../docker-compose.resolved.yml exec -T db pg_isready -U postgres 2>/dev/null; do sleep 2; done'
echo "Waiting for auth service to be ready..."
timeout 60 sh -c 'until docker compose -f ../docker-compose.resolved.yml exec -T db psql -U postgres -d postgres -c "SELECT 1 FROM auth.users LIMIT 1" 2>/dev/null; do sleep 2; done' || echo "Auth schema check timeout, continuing..."
- name: Set up Platform - Run migrations
run: |
echo "Running migrations..."
docker compose -f ../docker-compose.resolved.yml run --rm migrate
echo "✅ Migrations completed"
env:
NEXT_PUBLIC_PW_TEST: true
- name: Set up tests - Load cached E2E test data
if: steps.e2e-data-cache.outputs.cache-hit == 'true'
run: |
echo "✅ Found cached E2E test data, restoring..."
{
echo "SET session_replication_role = 'replica';"
cat /tmp/e2e_test_data.sql
echo "SET session_replication_role = 'origin';"
} | docker compose -f ../docker-compose.resolved.yml exec -T db psql -U postgres -d postgres -b
# Refresh materialized views after restore
docker compose -f ../docker-compose.resolved.yml exec -T db \
psql -U postgres -d postgres -b -c "SET search_path TO platform; SELECT refresh_store_materialized_views();" || true
echo "✅ E2E test data restored from cache"
- name: Set up Platform - Start (all other services)
run: |
docker compose -f ../docker-compose.resolved.yml up -d --no-build
echo "Waiting for rest_server to be ready..."
timeout 60 sh -c 'until curl -f http://localhost:8006/health 2>/dev/null; do sleep 2; done' || echo "Rest server health check timeout, continuing..."
echo "Waiting for database to be ready..."
timeout 60 sh -c 'until docker compose -f ../docker-compose.yml exec -T db pg_isready -U postgres 2>/dev/null; do sleep 2; done' || echo "Database ready check timeout, continuing..."
env:
NEXT_PUBLIC_PW_TEST: true
- name: Create E2E test data
- name: Set up tests - Create E2E test data
if: steps.e2e-data-cache.outputs.cache-hit != 'true'
run: |
echo "Creating E2E test data..."
# First try to run the script from inside the container
if docker compose -f ../docker-compose.yml exec -T rest_server test -f /app/autogpt_platform/backend/test/e2e_test_data.py; then
echo "✅ Found e2e_test_data.py in container, running it..."
docker compose -f ../docker-compose.yml exec -T rest_server sh -c "cd /app/autogpt_platform && python backend/test/e2e_test_data.py" || {
echo "❌ E2E test data creation failed!"
docker compose -f ../docker-compose.yml logs --tail=50 rest_server
exit 1
}
else
echo "⚠️ e2e_test_data.py not found in container, copying and running..."
# Copy the script into the container and run it
docker cp ../backend/test/e2e_test_data.py $(docker compose -f ../docker-compose.yml ps -q rest_server):/tmp/e2e_test_data.py || {
echo "❌ Failed to copy script to container"
exit 1
}
docker compose -f ../docker-compose.yml exec -T rest_server sh -c "cd /app/autogpt_platform && python /tmp/e2e_test_data.py" || {
echo "❌ E2E test data creation failed!"
docker compose -f ../docker-compose.yml logs --tail=50 rest_server
exit 1
}
fi
docker cp ../backend/test/e2e_test_data.py $(docker compose -f ../docker-compose.resolved.yml ps -q rest_server):/tmp/e2e_test_data.py
docker compose -f ../docker-compose.resolved.yml exec -T rest_server sh -c "cd /app/autogpt_platform && python /tmp/e2e_test_data.py" || {
echo "❌ E2E test data creation failed!"
docker compose -f ../docker-compose.resolved.yml logs --tail=50 rest_server
exit 1
}
- name: Restore dependencies cache
uses: actions/cache@v4
# Dump auth.users + platform schema for cache (two separate dumps)
echo "Dumping database for cache..."
{
docker compose -f ../docker-compose.resolved.yml exec -T db \
pg_dump -U postgres --data-only --column-inserts \
--table='auth.users' postgres
docker compose -f ../docker-compose.resolved.yml exec -T db \
pg_dump -U postgres --data-only --column-inserts \
--schema=platform \
--exclude-table='platform._prisma_migrations' \
--exclude-table='platform.apscheduler_jobs' \
--exclude-table='platform.apscheduler_jobs_batched_notifications' \
postgres
} > /tmp/e2e_test_data.sql
echo "✅ Database dump created for caching ($(wc -l < /tmp/e2e_test_data.sql) lines)"
- name: Set up tests - Enable corepack
run: corepack enable
- name: Set up tests - Set up Node
uses: actions/setup-node@v6
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
node-version: "22.18.0"
cache: "pnpm"
cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml
- name: Install dependencies
- name: Set up tests - Install dependencies
run: pnpm install --frozen-lockfile
- name: Install Browser 'chromium'
- name: Set up tests - Install browser 'chromium'
run: pnpm playwright install --with-deps chromium
- name: Run Playwright tests
@@ -257,7 +287,7 @@ jobs:
- name: Print Final Docker Compose logs
if: always()
run: docker compose -f ../docker-compose.yml logs
run: docker compose -f ../docker-compose.resolved.yml logs
integration_test:
runs-on: ubuntu-latest
@@ -265,26 +295,19 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "22.18.0"
- name: Enable corepack
run: corepack enable
- name: Restore dependencies cache
uses: actions/cache@v4
- name: Set up Node
uses: actions/setup-node@v6
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }}
${{ runner.os }}-pnpm-
node-version: "22.18.0"
cache: "pnpm"
cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml
- name: Install dependencies
run: pnpm install --frozen-lockfile


@@ -29,10 +29,10 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Set up Node.js
uses: actions/setup-node@v4
uses: actions/setup-node@v6
with:
node-version: "22.18.0"
@@ -44,7 +44,7 @@ jobs:
run: echo "key=${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}" >> $GITHUB_OUTPUT
- name: Cache dependencies
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.pnpm-store
key: ${{ steps.cache-key.outputs.key }}
@@ -56,19 +56,19 @@ jobs:
run: pnpm install --frozen-lockfile
types:
runs-on: ubuntu-latest
runs-on: big-boi
needs: setup
strategy:
fail-fast: false
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
submodules: recursive
- name: Set up Node.js
uses: actions/setup-node@v4
uses: actions/setup-node@v6
with:
node-version: "22.18.0"
@@ -85,10 +85,10 @@ jobs:
- name: Run docker compose
run: |
docker compose -f ../docker-compose.yml --profile local --profile deps_backend up -d
docker compose -f ../docker-compose.yml --profile local up -d deps_backend
- name: Restore dependencies cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}

.github/workflows/pr-overlap-check.yml (new file)

@@ -0,0 +1,39 @@
name: PR Overlap Detection
on:
pull_request:
types: [opened, synchronize, reopened]
branches:
- dev
- master
permissions:
contents: read
pull-requests: write
jobs:
check-overlaps:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0 # Need full history for merge testing
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Configure git
run: |
git config user.email "github-actions[bot]@users.noreply.github.com"
git config user.name "github-actions[bot]"
- name: Run overlap detection
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Always succeed - this check informs contributors, it shouldn't block merging
continue-on-error: true
run: |
python .github/scripts/detect_overlaps.py ${{ github.event.pull_request.number }}
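The overlap check itself lives in `.github/scripts/detect_overlaps.py`, which is not included in this diff. As a minimal sketch of what such a script could do, assuming it merge-tests this PR's branch against other open PRs via the `gh` CLI (the flags, merge strategy, and output format here are assumptions, not the real script's implementation):

```python
#!/usr/bin/env python3
"""Hypothetical sketch: flag open PRs whose branches conflict with this one."""
import json
import subprocess
import sys


def sh(*args: str) -> str:
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout


def main(pr_number: str) -> None:
    # Look up this PR's head branch via the gh CLI.
    head = json.loads(sh("gh", "pr", "view", pr_number, "--json", "headRefName"))["headRefName"]
    sh("git", "fetch", "origin", head)
    # List the other open PRs targeting dev.
    prs = json.loads(
        sh("gh", "pr", "list", "--base", "dev", "--state", "open",
           "--json", "number,headRefName")
    )
    for pr in prs:
        if str(pr["number"]) == pr_number:
            continue
        subprocess.run(["git", "fetch", "origin", pr["headRefName"]], check=False)
        # `git merge-tree --write-tree` (git >= 2.38) exits non-zero when the
        # two branches would conflict on merge.
        result = subprocess.run(
            ["git", "merge-tree", "--write-tree",
             f"origin/{head}", f"origin/{pr['headRefName']}"],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            print(f"Potential overlap with PR #{pr['number']} ({pr['headRefName']})")


if __name__ == "__main__":
    main(sys.argv[1])
```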


@@ -11,7 +11,7 @@ jobs:
steps:
# - name: Wait some time for all actions to start
# run: sleep 30
- uses: actions/checkout@v4
- uses: actions/checkout@v6
# with:
# fetch-depth: 0
- name: Set up Python


@@ -0,0 +1,195 @@
#!/usr/bin/env python3
"""
Add cache configuration to a resolved docker-compose file for all services
that have a build key, and ensure image names match what docker compose expects.
"""
import argparse
import yaml
DEFAULT_BRANCH = "dev"
CACHE_BUILDS_FOR_COMPONENTS = ["backend", "frontend"]
def main():
parser = argparse.ArgumentParser(
description="Add cache config to a resolved compose file"
)
parser.add_argument(
"--source",
required=True,
help="Source compose file to read (should be output of `docker compose config`)",
)
parser.add_argument(
"--cache-from",
default="type=gha",
help="Cache source configuration",
)
parser.add_argument(
"--cache-to",
default="type=gha,mode=max",
help="Cache destination configuration",
)
for component in CACHE_BUILDS_FOR_COMPONENTS:
parser.add_argument(
f"--{component}-hash",
default="",
help=f"Hash for {component} cache scope (e.g., from hashFiles())",
)
parser.add_argument(
"--git-ref",
default="",
help="Git ref for branch-based cache scope (e.g., refs/heads/master)",
)
args = parser.parse_args()
# Normalize git ref to a safe scope name (e.g., refs/heads/master -> master)
git_ref_scope = ""
if args.git_ref:
git_ref_scope = args.git_ref.replace("refs/heads/", "").replace("/", "-")
with open(args.source, "r") as f:
compose = yaml.safe_load(f)
# Get project name from compose file or default
project_name = compose.get("name", "autogpt_platform")
def get_image_name(dockerfile: str, target: str) -> str:
"""Generate image name based on Dockerfile folder and build target."""
dockerfile_parts = dockerfile.replace("\\", "/").split("/")
if len(dockerfile_parts) >= 2:
folder_name = dockerfile_parts[-2] # e.g., "backend" or "frontend"
else:
folder_name = "app"
return f"{project_name}-{folder_name}:{target}"
def get_build_key(dockerfile: str, target: str) -> str:
"""Generate a unique key for a Dockerfile+target combination."""
return f"{dockerfile}:{target}"
def get_component(dockerfile: str) -> str | None:
"""Get component name (frontend/backend) from dockerfile path."""
for component in CACHE_BUILDS_FOR_COMPONENTS:
if component in dockerfile:
return component
return None
# First pass: collect all services with build configs and identify duplicates
# Track which (dockerfile, target) combinations we've seen
build_key_to_first_service: dict[str, str] = {}
services_to_build: list[str] = []
services_to_dedupe: list[str] = []
for service_name, service_config in compose.get("services", {}).items():
if "build" not in service_config:
continue
build_config = service_config["build"]
dockerfile = build_config.get("dockerfile", "Dockerfile")
target = build_config.get("target", "default")
build_key = get_build_key(dockerfile, target)
if build_key not in build_key_to_first_service:
# First service with this build config - it will do the actual build
build_key_to_first_service[build_key] = service_name
services_to_build.append(service_name)
else:
# Duplicate - will just use the image from the first service
services_to_dedupe.append(service_name)
# Second pass: configure builds and deduplicate
modified_services = []
for service_name, service_config in compose.get("services", {}).items():
if "build" not in service_config:
continue
build_config = service_config["build"]
dockerfile = build_config.get("dockerfile", "Dockerfile")
target = build_config.get("target", "latest")
image_name = get_image_name(dockerfile, target)
# Set image name for all services (needed for both builders and deduped)
service_config["image"] = image_name
if service_name in services_to_dedupe:
# Remove build config - this service will use the pre-built image
del service_config["build"]
continue
# This service will do the actual build - add cache config
cache_from_list = []
cache_to_list = []
component = get_component(dockerfile)
if not component:
# Skip services that don't clearly match frontend/backend
continue
# Get the hash for this component
component_hash = getattr(args, f"{component}_hash")
# Scope format: platform-{component}-{target}-{hash|ref}
# Example: platform-backend-server-abc123
if "type=gha" in args.cache_from:
# 1. Primary: exact hash match (most specific)
if component_hash:
hash_scope = f"platform-{component}-{target}-{component_hash}"
cache_from_list.append(f"{args.cache_from},scope={hash_scope}")
# 2. Fallback: branch-based cache
if git_ref_scope:
ref_scope = f"platform-{component}-{target}-{git_ref_scope}"
cache_from_list.append(f"{args.cache_from},scope={ref_scope}")
# 3. Fallback: dev branch cache (for PRs/feature branches)
if git_ref_scope and git_ref_scope != DEFAULT_BRANCH:
master_scope = f"platform-{component}-{target}-{DEFAULT_BRANCH}"
cache_from_list.append(f"{args.cache_from},scope={master_scope}")
if "type=gha" in args.cache_to:
# Write to both hash-based and branch-based scopes
if component_hash:
hash_scope = f"platform-{component}-{target}-{component_hash}"
cache_to_list.append(f"{args.cache_to},scope={hash_scope}")
if git_ref_scope:
ref_scope = f"platform-{component}-{target}-{git_ref_scope}"
cache_to_list.append(f"{args.cache_to},scope={ref_scope}")
# Ensure we have at least one cache source/target
if not cache_from_list:
cache_from_list.append(args.cache_from)
if not cache_to_list:
cache_to_list.append(args.cache_to)
build_config["cache_from"] = cache_from_list
build_config["cache_to"] = cache_to_list
modified_services.append(service_name)
# Write back to the same file
with open(args.source, "w") as f:
yaml.dump(compose, f, default_flow_style=False, sort_keys=False)
print(f"Added cache config to {len(modified_services)} services in {args.source}:")
for svc in modified_services:
svc_config = compose["services"][svc]
build_cfg = svc_config.get("build", {})
cache_from_list = build_cfg.get("cache_from", ["none"])
cache_to_list = build_cfg.get("cache_to", ["none"])
print(f" - {svc}")
print(f" image: {svc_config.get('image', 'N/A')}")
print(f" cache_from: {cache_from_list}")
print(f" cache_to: {cache_to_list}")
if services_to_dedupe:
print(
f"Deduplicated {len(services_to_dedupe)} services (will use pre-built images):"
)
for svc in services_to_dedupe:
print(f" - {svc} -> {compose['services'][svc].get('image', 'N/A')}")
if __name__ == "__main__":
main()

.gitignore

@@ -178,5 +178,6 @@ autogpt_platform/backend/settings.py
*.ign.*
.test-contents
.claude/settings.local.json
CLAUDE.local.md
/autogpt_platform/backend/logs
.next


@@ -16,7 +16,6 @@ See `docs/content/platform/getting-started.md` for setup instructions.
- Format Python code with `poetry run format`.
- Format frontend code using `pnpm format`.
## Frontend guidelines:
See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
@@ -33,14 +32,17 @@ See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
4. **Styling**: Tailwind CSS only, use design tokens, Phosphor Icons only
5. **Testing**: Add Storybook stories for new components, Playwright for E2E
6. **Code conventions**: Function declarations (not arrow functions) for components/handlers
- Component props should be `interface Props { ... }` (not exported) unless the interface needs to be used outside the component
- Separate render logic from business logic (component.tsx + useComponent.ts + helpers.ts)
- Colocate state when possible and avoid creating large components, use sub-components ( local `/components` folder next to the parent component ) when sensible
- Avoid large hooks, abstract logic into `helpers.ts` files when sensible
- Use function declarations for components, arrow functions only for callbacks
- No barrel files or `index.ts` re-exports
- Do not use `useCallback` or `useMemo` unless strictly needed
- Avoid comments at all times unless the code is very complex
- Do not use `useCallback` or `useMemo` unless asked to optimise a given function
- Do not type hook returns, let Typescript infer as much as possible
- Never type with `any`, if not types available use `unknown`
## Testing
@@ -49,22 +51,8 @@ See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
Always run the relevant linters and tests before committing.
Use conventional commit messages for all commits (e.g. `feat(backend): add API`).
Types:
- feat
- fix
- refactor
- ci
- dx (developer experience)
Scopes:
- platform
- platform/library
- platform/marketplace
- backend
- backend/executor
- frontend
- frontend/library
- frontend/marketplace
- blocks
Types: - feat - fix - refactor - ci - dx (developer experience)
Scopes: - platform - platform/library - platform/marketplace - backend - backend/executor - frontend - frontend/library - frontend/marketplace - blocks
## Pull requests


@@ -54,7 +54,7 @@ Before proceeding with the installation, ensure your system meets the following
### Updated Setup Instructions:
We've moved to a fully maintained and regularly updated documentation site.
👉 [Follow the official self-hosting guide here](https://docs.agpt.co/platform/getting-started/)
👉 [Follow the official self-hosting guide here](https://agpt.co/docs/platform/getting-started/getting-started)
This tutorial assumes you have Docker, VSCode, git and npm installed.


@@ -6,152 +6,30 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
AutoGPT Platform is a monorepo containing:
- **Backend** (`/backend`): Python FastAPI server with async support
- **Frontend** (`/frontend`): Next.js React application
- **Shared Libraries** (`/autogpt_libs`): Common Python utilities
- **Backend** (`backend`): Python FastAPI server with async support
- **Frontend** (`frontend`): Next.js React application
- **Shared Libraries** (`autogpt_libs`): Common Python utilities
## Essential Commands
## Component Documentation
### Backend Development
- **Backend**: See @backend/CLAUDE.md for backend-specific commands, architecture, and development tasks
- **Frontend**: See @frontend/CLAUDE.md for frontend-specific commands, architecture, and development patterns
```bash
# Install dependencies
cd backend && poetry install
# Run database migrations
poetry run prisma migrate dev
# Start all services (database, redis, rabbitmq, clamav)
docker compose up -d
# Run the backend server
poetry run serve
# Run tests
poetry run test
# Run specific test
poetry run pytest path/to/test_file.py::test_function_name
# Run block tests (tests that validate all blocks work correctly)
poetry run pytest backend/blocks/test/test_block.py -xvs
# Run tests for a specific block (e.g., GetCurrentTimeBlock)
poetry run pytest 'backend/blocks/test/test_block.py::test_available_blocks[GetCurrentTimeBlock]' -xvs
# Lint and format
# prefer format if you want to just "fix" it and only get the errors that can't be autofixed
poetry run format # Black + isort
poetry run lint # ruff
```
More details can be found in TESTING.md
#### Creating/Updating Snapshots
When you first write a test or when the expected output changes:
```bash
poetry run pytest path/to/test.py --snapshot-update
```
⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
### Frontend Development
```bash
# Install dependencies
cd frontend && pnpm i
# Generate API client from OpenAPI spec
pnpm generate:api
# Start development server
pnpm dev
# Run E2E tests
pnpm test
# Run Storybook for component development
pnpm storybook
# Build production
pnpm build
# Format and lint
pnpm format
# Type checking
pnpm types
```
**📖 Complete Guide**: See `/frontend/CONTRIBUTING.md` and `/frontend/.cursorrules` for comprehensive frontend patterns.
**Key Frontend Conventions:**
- Separate render logic from data/behavior in components
- Use generated API hooks from `@/app/api/__generated__/endpoints/`
- Use function declarations (not arrow functions) for components/handlers
- Use design system components from `src/components/` (atoms, molecules, organisms)
- Only use Phosphor Icons
- Never use `src/components/__legacy__/*` or deprecated `BackendAPI`
## Architecture Overview
### Backend Architecture
- **API Layer**: FastAPI with REST and WebSocket endpoints
- **Database**: PostgreSQL with Prisma ORM, includes pgvector for embeddings
- **Queue System**: RabbitMQ for async task processing
- **Execution Engine**: Separate executor service processes agent workflows
- **Authentication**: JWT-based with Supabase integration
- **Security**: Cache protection middleware prevents sensitive data caching in browsers/proxies
### Frontend Architecture
- **Framework**: Next.js 15 App Router (client-first approach)
- **Data Fetching**: Type-safe generated API hooks via Orval + React Query
- **State Management**: React Query for server state, co-located UI state in components/hooks
- **Component Structure**: Separate render logic (`.tsx`) from business logic (`use*.ts` hooks)
- **Workflow Builder**: Visual graph editor using @xyflow/react
- **UI Components**: shadcn/ui (Radix UI primitives) with Tailwind CSS styling
- **Icons**: Phosphor Icons only
- **Feature Flags**: LaunchDarkly integration
- **Error Handling**: ErrorCard for render errors, toast for mutations, Sentry for exceptions
- **Testing**: Playwright for E2E, Storybook for component development
### Key Concepts
## Key Concepts
1. **Agent Graphs**: Workflow definitions stored as JSON, executed by the backend
2. **Blocks**: Reusable components in `/backend/blocks/` that perform specific tasks
2. **Blocks**: Reusable components in `backend/backend/blocks/` that perform specific tasks
3. **Integrations**: OAuth and API connections stored per user
4. **Store**: Marketplace for sharing agent templates
5. **Virus Scanning**: ClamAV integration for file upload security
### Testing Approach
- Backend uses pytest with snapshot testing for API responses
- Test files are colocated with source files (`*_test.py`)
- Frontend uses Playwright for E2E tests
- Component testing via Storybook
### Database Schema
Key models (defined in `/backend/schema.prisma`):
- `User`: Authentication and profile data
- `AgentGraph`: Workflow definitions with version control
- `AgentGraphExecution`: Execution history and results
- `AgentNode`: Individual nodes in a workflow
- `StoreListing`: Marketplace listings for sharing agents
### Environment Configuration
#### Configuration Files
- **Backend**: `/backend/.env.default` (defaults) → `/backend/.env` (user overrides)
- **Frontend**: `/frontend/.env.default` (defaults) → `/frontend/.env` (user overrides)
- **Platform**: `/.env.default` (Supabase/shared defaults) → `/.env` (user overrides)
- **Backend**: `backend/.env.default` (defaults) → `backend/.env` (user overrides)
- **Frontend**: `frontend/.env.default` (defaults) → `frontend/.env` (user overrides)
- **Platform**: `.env.default` (Supabase/shared defaults) → `.env` (user overrides)
#### Docker Environment Loading Order
@@ -167,83 +45,17 @@ Key models (defined in `/backend/schema.prisma`):
- Backend/Frontend services use YAML anchors for consistent configuration
- Supabase services (`db/docker/docker-compose.yml`) follow the same pattern
### Common Development Tasks
### Branching Strategy
**Adding a new block:**
Follow the comprehensive [Block SDK Guide](../../../docs/content/platform/block-sdk-guide.md) which covers:
- Provider configuration with `ProviderBuilder`
- Block schema definition
- Authentication (API keys, OAuth, webhooks)
- Testing and validation
- File organization
Quick steps:
1. Create new file in `/backend/backend/blocks/`
2. Configure provider using `ProviderBuilder` in `_config.py`
3. Inherit from `Block` base class
4. Define input/output schemas using `BlockSchema`
5. Implement async `run` method
6. Generate unique block ID using `uuid.uuid4()`
7. Test with `poetry run pytest backend/blocks/test/test_block.py`
Note: when making many new blocks analyze the interfaces for each of these blocks and picture if they would go well together in a graph based editor or would they struggle to connect productively?
ex: do the inputs and outputs tie well together?
If you get any pushback or hit complex block conditions check the new_blocks guide in the docs.
**Modifying the API:**
1. Update route in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside the route file
4. Run `poetry run test` to verify
### Frontend guidelines:
See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
1. **Pages**: Create in `src/app/(platform)/feature-name/page.tsx`
- Add `usePageName.ts` hook for logic
- Put sub-components in local `components/` folder
2. **Components**: Structure as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`
- Use design system components from `src/components/` (atoms, molecules, organisms)
- Never use `src/components/__legacy__/*`
3. **Data fetching**: Use generated API hooks from `@/app/api/__generated__/endpoints/`
- Regenerate with `pnpm generate:api`
- Pattern: `use{Method}{Version}{OperationName}`
4. **Styling**: Tailwind CSS only, use design tokens, Phosphor Icons only
5. **Testing**: Add Storybook stories for new components, Playwright for E2E
6. **Code conventions**: Function declarations (not arrow functions) for components/handlers
- Component props should be `interface Props { ... }` (not exported) unless the interface needs to be used outside the component
- Separate render logic from business logic (component.tsx + useComponent.ts + helpers.ts)
- Colocate state when possible and avoid creating large components, use sub-components ( local `/components` folder next to the parent component ) when sensible
- Avoid large hooks, abstract logic into `helpers.ts` files when sensible
- Use function declarations for components, arrow functions only for callbacks
- No barrel files or `index.ts` re-exports
- Do not use `useCallback` or `useMemo` unless strictly needed
- Avoid comments at all times unless the code is very complex
### Security Implementation
**Cache Protection Middleware:**
- Located in `/backend/backend/server/middleware/security.py`
- Default behavior: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses an allow list approach - only explicitly permitted paths can be cached
- Cacheable paths include: static assets (`/static/*`, `/_next/static/*`), health checks, public store pages, documentation
- Prevents sensitive data (auth tokens, API keys, user data) from being cached by browsers/proxies
- To allow caching for a new endpoint, add it to `CACHEABLE_PATHS` in the middleware
- Applied to both main API server and external API applications
- **`dev`** is the main development branch. All PRs should target `dev`.
- **`master`** is the production branch. Only used for production releases.
### Creating Pull Requests
- Create the PR aginst the `dev` branch of the repository.
- Ensure the branch name is descriptive (e.g., `feature/add-new-block`)/
- Use conventional commit messages (see below)/
- Fill out the .github/PULL_REQUEST_TEMPLATE.md template as the PR description/
- Create the PR against the `dev` branch of the repository.
- Ensure the branch name is descriptive (e.g., `feature/add-new-block`)
- Use conventional commit messages (see below)
- Fill out the .github/PULL_REQUEST_TEMPLATE.md template as the PR description
- Run the github pre-commit hooks to ensure code quality.
### Reviewing/Revising Pull Requests

(file diff suppressed: too large to display)


@@ -9,25 +9,25 @@ packages = [{ include = "autogpt_libs" }]
[tool.poetry.dependencies]
python = ">=3.10,<4.0"
colorama = "^0.4.6"
cryptography = "^45.0"
cryptography = "^46.0"
expiringdict = "^1.2.2"
fastapi = "^0.116.1"
google-cloud-logging = "^3.12.1"
launchdarkly-server-sdk = "^9.12.0"
pydantic = "^2.11.7"
pydantic-settings = "^2.10.1"
pyjwt = { version = "^2.10.1", extras = ["crypto"] }
fastapi = "^0.128.7"
google-cloud-logging = "^3.13.0"
launchdarkly-server-sdk = "^9.15.0"
pydantic = "^2.12.5"
pydantic-settings = "^2.12.0"
pyjwt = { version = "^2.11.0", extras = ["crypto"] }
redis = "^6.2.0"
supabase = "^2.16.0"
uvicorn = "^0.35.0"
supabase = "^2.28.0"
uvicorn = "^0.40.0"
[tool.poetry.group.dev.dependencies]
pyright = "^1.1.404"
pyright = "^1.1.408"
pytest = "^8.4.1"
pytest-asyncio = "^1.1.0"
pytest-mock = "^3.14.1"
pytest-cov = "^6.2.1"
ruff = "^0.12.11"
pytest-asyncio = "^1.3.0"
pytest-mock = "^3.15.1"
pytest-cov = "^7.0.0"
ruff = "^0.15.0"
[build-system]
requires = ["poetry-core"]


@@ -104,6 +104,12 @@ TWITTER_CLIENT_SECRET=
# Make a new workspace for your OAuth APP -- trust me
# https://linear.app/settings/api/applications/new
# Callback URL: http://localhost:3000/auth/integrations/oauth_callback
LINEAR_API_KEY=
# Linear project and team IDs for the feature request tracker.
# Find these in your Linear workspace URL: linear.app/<workspace>/project/<project-id>
# and in team settings. Used by the chat copilot to file and search feature requests.
LINEAR_FEATURE_REQUEST_PROJECT_ID=
LINEAR_FEATURE_REQUEST_TEAM_ID=
LINEAR_CLIENT_ID=
LINEAR_CLIENT_SECRET=
@@ -152,6 +158,7 @@ REPLICATE_API_KEY=
REVID_API_KEY=
SCREENSHOTONE_API_KEY=
UNREAL_SPEECH_API_KEY=
ELEVENLABS_API_KEY=
# Data & Search Services
E2B_API_KEY=


@@ -19,3 +19,6 @@ load-tests/*.json
load-tests/*.log
load-tests/node_modules/*
migrations/*/rollback*.sql
# Workspace files
workspaces/


@@ -0,0 +1,170 @@
# CLAUDE.md - Backend
This file provides guidance to Claude Code when working with the backend.
## Essential Commands
To run something with Python package dependencies you MUST use `poetry run ...`.
```bash
# Install dependencies
poetry install
# Run database migrations
poetry run prisma migrate dev
# Start all services (database, redis, rabbitmq, clamav)
docker compose up -d
# Run the backend as a whole
poetry run app
# Run tests
poetry run test
# Run specific test
poetry run pytest path/to/test_file.py::test_function_name
# Run block tests (tests that validate all blocks work correctly)
poetry run pytest backend/blocks/test/test_block.py -xvs
# Run tests for a specific block (e.g., GetCurrentTimeBlock)
poetry run pytest 'backend/blocks/test/test_block.py::test_available_blocks[GetCurrentTimeBlock]' -xvs
# Lint and format
# prefer format if you want to just "fix" it and only get the errors that can't be autofixed
poetry run format # Black + isort
poetry run lint # ruff
```
More details can be found in @TESTING.md
### Creating/Updating Snapshots
When you first write a test or when the expected output changes:
```bash
poetry run pytest path/to/test.py --snapshot-update
```
⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
## Architecture
- **API Layer**: FastAPI with REST and WebSocket endpoints
- **Database**: PostgreSQL with Prisma ORM, includes pgvector for embeddings
- **Queue System**: RabbitMQ for async task processing
- **Execution Engine**: Separate executor service processes agent workflows
- **Authentication**: JWT-based with Supabase integration
- **Security**: Cache protection middleware prevents sensitive data caching in browsers/proxies
## Testing Approach
- Uses pytest with snapshot testing for API responses
- Test files are colocated with source files (`*_test.py`)
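For instance, a snapshot test against a JSON endpoint might look like this (the endpoint path and app import are illustrative assumptions; `configured_snapshot` is the shared fixture from `conftest.py`):

```python
from fastapi.testclient import TestClient
from pytest_snapshot.plugin import Snapshot

from backend.api.rest_app import app  # hypothetical import path for the FastAPI app


def test_list_blocks_matches_snapshot(configured_snapshot: Snapshot):
    client = TestClient(app)
    response = client.get("/api/blocks")  # illustrative endpoint
    assert response.status_code == 200
    # Compares against the stored snapshot file; regenerate with --snapshot-update.
    configured_snapshot.assert_match(response.text, "list_blocks_response.json")
```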
## Database Schema
Key models (defined in `schema.prisma`):
- `User`: Authentication and profile data
- `AgentGraph`: Workflow definitions with version control
- `AgentGraphExecution`: Execution history and results
- `AgentNode`: Individual nodes in a workflow
- `StoreListing`: Marketplace listings for sharing agents
## Environment Configuration
- **Backend**: `.env.default` (defaults) → `.env` (user overrides)
## Common Development Tasks
### Adding a new block
Follow the comprehensive [Block SDK Guide](@../../docs/content/platform/block-sdk-guide.md) which covers:
- Provider configuration with `ProviderBuilder`
- Block schema definition
- Authentication (API keys, OAuth, webhooks)
- Testing and validation
- File organization
Quick steps:
1. Create new file in `backend/blocks/`
2. Configure provider using `ProviderBuilder` in `_config.py`
3. Inherit from `Block` base class
4. Define input/output schemas using `BlockSchema`
5. Implement async `run` method
6. Generate unique block ID using `uuid.uuid4()`
7. Test with `poetry run pytest backend/blocks/test/test_block.py`
Note: when creating many new blocks, analyze each block's interface and consider whether they would fit well together in a graph-based editor, or whether they would struggle to connect productively (e.g., do the inputs and outputs tie together well?).
If you get any pushback or hit complex block conditions, check the new_blocks guide in the docs.
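Put together, a minimal block following the steps above might look roughly like this (the import locations and exact `Block` constructor arguments are assumptions based on the modules referenced in this diff, not verbatim platform APIs):

```python
import uuid

# Assumed import locations, based on the modules referenced in this diff
from backend.blocks._base import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField


class ReverseTextBlock(Block):
    """Illustrative block that reverses its input text."""

    class Input(BlockSchema):
        text: str = SchemaField(description="Text to reverse")

    class Output(BlockSchema):
        reversed_text: str = SchemaField(description="The input text, reversed")

    def __init__(self):
        super().__init__(
            id=str(uuid.uuid4()),  # in practice, hardcode the UUID you generate once
            description="Reverses the input text",
            categories={BlockCategory.TEXT},
            input_schema=ReverseTextBlock.Input,
            output_schema=ReverseTextBlock.Output,
        )

    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
        # Blocks yield (output_name, value) pairs
        yield "reversed_text", input_data.text[::-1]
```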
#### Handling files in blocks with `store_media_file()`
When blocks need to work with files (images, videos, documents), use `store_media_file()` from `backend.util.file`. The `return_format` parameter determines what you get back:
| Format | Use When | Returns |
|--------|----------|---------|
| `"for_local_processing"` | Processing with local tools (ffmpeg, MoviePy, PIL) | Local file path (e.g., `"image.png"`) |
| `"for_external_api"` | Sending content to external APIs (Replicate, OpenAI) | Data URI (e.g., `"data:image/png;base64,..."`) |
| `"for_block_output"` | Returning output from your block | Smart: `workspace://` in CoPilot, data URI in graphs |
**Examples:**
```python
# INPUT: Need to process file locally with ffmpeg
local_path = await store_media_file(
file=input_data.video,
execution_context=execution_context,
return_format="for_local_processing",
)
# local_path = "video.mp4" - use with Path/ffmpeg/etc
# INPUT: Need to send to external API like Replicate
image_b64 = await store_media_file(
file=input_data.image,
execution_context=execution_context,
return_format="for_external_api",
)
# image_b64 = "data:image/png;base64,iVBORw0..." - send to API
# OUTPUT: Returning result from block
result_url = await store_media_file(
file=generated_image_url,
execution_context=execution_context,
return_format="for_block_output",
)
yield "image_url", result_url
# In CoPilot: result_url = "workspace://abc123"
# In graphs: result_url = "data:image/png;base64,..."
```
**Key points:**
- `for_block_output` is the ONLY format that auto-adapts to execution context
- Always use `for_block_output` for block outputs unless you have a specific reason not to
- Never hardcode workspace checks - let `for_block_output` handle it
### Modifying the API
1. Update route in `backend/api/features/`
2. Add/update Pydantic models in same directory
3. Write tests alongside the route file
4. Run `poetry run test` to verify
## Security Implementation
### Cache Protection Middleware
- Located in `backend/api/middleware/security.py`
- Default behavior: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses an allow list approach - only explicitly permitted paths can be cached
- Cacheable paths include: static assets (`static/*`, `_next/static/*`), health checks, public store pages, documentation
- Prevents sensitive data (auth tokens, API keys, user data) from being cached by browsers/proxies
- To allow caching for a new endpoint, add it to `CACHEABLE_PATHS` in the middleware
- Applied to both main API server and external API applications
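A condensed sketch of that allow-list pattern (the class name and paths here are illustrative; the real implementation lives in `backend/api/middleware/security.py`):

```python
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

# Illustrative subset; the real allow list is CACHEABLE_PATHS in the middleware module
CACHEABLE_PATHS = ("/static/", "/_next/static/", "/health")


class CacheProtectionMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        response = await call_next(request)
        # Default-deny: anything not explicitly allow-listed must not be cached
        if not any(request.url.path.startswith(p) for p in CACHEABLE_PATHS):
            response.headers["Cache-Control"] = (
                "no-store, no-cache, must-revalidate, private"
            )
        return response
```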


@@ -1,3 +1,5 @@
# ============================ DEPENDENCY BUILDER ============================ #
FROM debian:13-slim AS builder
# Set environment variables
@@ -51,25 +53,62 @@ COPY autogpt_platform/backend/backend/data/partial_types.py ./backend/data/parti
COPY autogpt_platform/backend/gen_prisma_types_stub.py ./
RUN poetry run prisma generate && poetry run gen-prisma-stub
FROM debian:13-slim AS server_dependencies
# =============================== DB MIGRATOR =============================== #
# Lightweight migrate stage - only needs Prisma CLI, not full Python environment
FROM debian:13-slim AS migrate
WORKDIR /app/autogpt_platform/backend
ENV DEBIAN_FRONTEND=noninteractive
# Install only what's needed for prisma migrate: Node.js and minimal Python for prisma-python
RUN apt-get update && apt-get install -y --no-install-recommends \
python3.13 \
python3-pip \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Copy Node.js from builder (needed for Prisma CLI)
COPY --from=builder /usr/bin/node /usr/bin/node
COPY --from=builder /usr/lib/node_modules /usr/lib/node_modules
COPY --from=builder /usr/bin/npm /usr/bin/npm
# Copy Prisma binaries
COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-python/binaries
# Install prisma-client-py directly (much smaller than copying full venv)
# Quote the version spec so the shell doesn't treat '>=' as an output redirect
RUN pip3 install "prisma>=0.15.0" --break-system-packages
COPY autogpt_platform/backend/schema.prisma ./
COPY autogpt_platform/backend/backend/data/partial_types.py ./backend/data/partial_types.py
COPY autogpt_platform/backend/gen_prisma_types_stub.py ./
COPY autogpt_platform/backend/migrations ./migrations
# ============================== BACKEND SERVER ============================== #
FROM debian:13-slim AS server
WORKDIR /app
ENV POETRY_HOME=/opt/poetry \
POETRY_NO_INTERACTION=1 \
POETRY_VIRTUALENVS_CREATE=true \
POETRY_VIRTUALENVS_IN_PROJECT=true \
DEBIAN_FRONTEND=noninteractive
ENV PATH=/opt/poetry/bin:$PATH
ENV DEBIAN_FRONTEND=noninteractive
# Install Python without upgrading system-managed packages
RUN apt-get update && apt-get install -y \
# Install Python, FFmpeg, ImageMagick, and CLI tools for agent use.
# bubblewrap provides OS-level sandbox (whitelist-only FS + no network)
# for the bash_exec MCP tool.
# Using --no-install-recommends saves ~650MB by skipping unnecessary deps like llvm, mesa, etc.
RUN apt-get update && apt-get install -y --no-install-recommends \
python3.13 \
python3-pip \
ffmpeg \
imagemagick \
jq \
ripgrep \
tree \
bubblewrap \
&& rm -rf /var/lib/apt/lists/*
# Copy only necessary files from builder
COPY --from=builder /app /app
# Copy poetry (build-time only, for `poetry install --only-root` to create entry points)
COPY --from=builder /usr/local/lib/python3* /usr/local/lib/python3*
COPY --from=builder /usr/local/bin/poetry /usr/local/bin/poetry
# Copy Node.js installation for Prisma
@@ -79,30 +118,25 @@ COPY --from=builder /usr/bin/npm /usr/bin/npm
COPY --from=builder /usr/bin/npx /usr/bin/npx
COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-python/binaries
ENV PATH="/app/autogpt_platform/backend/.venv/bin:$PATH"
RUN mkdir -p /app/autogpt_platform/autogpt_libs
RUN mkdir -p /app/autogpt_platform/backend
COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs
COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.toml /app/autogpt_platform/backend/
WORKDIR /app/autogpt_platform/backend
FROM server_dependencies AS migrate
# Copy only the .venv from builder (not the entire /app directory)
# The .venv includes the generated Prisma client
COPY --from=builder /app/autogpt_platform/backend/.venv ./.venv
ENV PATH="/app/autogpt_platform/backend/.venv/bin:$PATH"
# Migration stage only needs schema and migrations - much lighter than full backend
COPY autogpt_platform/backend/schema.prisma /app/autogpt_platform/backend/
COPY autogpt_platform/backend/backend/data/partial_types.py /app/autogpt_platform/backend/backend/data/partial_types.py
COPY autogpt_platform/backend/migrations /app/autogpt_platform/backend/migrations
# Copy dependency files + autogpt_libs (path dependency)
COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs
COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.toml ./
FROM server_dependencies AS server
COPY autogpt_platform/backend /app/autogpt_platform/backend
# Copy backend code + docs (for Copilot docs search)
COPY autogpt_platform/backend ./
COPY docs /app/docs
RUN poetry install --no-ansi --only-root
# Install the project package to create entry point scripts in .venv/bin/
# (e.g., rest, executor, ws, db, scheduler, notification - see [tool.poetry.scripts])
RUN POETRY_VIRTUALENVS_CREATE=true POETRY_VIRTUALENVS_IN_PROJECT=true \
poetry install --no-ansi --only-root
ENV PORT=8000
CMD ["poetry", "run", "rest"]
CMD ["rest"]


@@ -138,7 +138,7 @@ If the test doesn't need the `user_id` specifically, mocking is not necessary as
#### Using Global Auth Fixtures
Two global auth fixtures are provided by `backend/server/conftest.py`:
Two global auth fixtures are provided by `backend/api/conftest.py`:
- `mock_jwt_user` - Regular user with `test_user_id` ("test-user-id")
- `mock_jwt_admin` - Admin user with `admin_user_id` ("admin-user-id")


@@ -1,4 +1,9 @@
"""Common test fixtures for server tests."""
"""Common test fixtures for server tests.
Note: Common fixtures like test_user_id, admin_user_id, target_user_id,
setup_test_user, and setup_admin_user are defined in the parent conftest.py
(backend/conftest.py) and are available here automatically.
"""
import pytest
from pytest_snapshot.plugin import Snapshot
@@ -11,54 +16,6 @@ def configured_snapshot(snapshot: Snapshot) -> Snapshot:
return snapshot
@pytest.fixture
def test_user_id() -> str:
"""Test user ID fixture."""
return "3e53486c-cf57-477e-ba2a-cb02dc828e1a"
@pytest.fixture
def admin_user_id() -> str:
"""Admin user ID fixture."""
return "4e53486c-cf57-477e-ba2a-cb02dc828e1b"
@pytest.fixture
def target_user_id() -> str:
"""Target user ID fixture."""
return "5e53486c-cf57-477e-ba2a-cb02dc828e1c"
@pytest.fixture
async def setup_test_user(test_user_id):
"""Create test user in database before tests."""
from backend.data.user import get_or_create_user
# Create the test user in the database using JWT token format
user_data = {
"sub": test_user_id,
"email": "test@example.com",
"user_metadata": {"name": "Test User"},
}
await get_or_create_user(user_data)
return test_user_id
@pytest.fixture
async def setup_admin_user(admin_user_id):
"""Create admin user in database before tests."""
from backend.data.user import get_or_create_user
# Create the admin user in the database using JWT token format
user_data = {
"sub": admin_user_id,
"email": "test-admin@example.com",
"user_metadata": {"name": "Test Admin"},
}
await get_or_create_user(user_data)
return admin_user_id
@pytest.fixture
def mock_jwt_user(test_user_id):
"""Provide mock JWT payload for regular user testing."""


@@ -10,7 +10,7 @@ from typing_extensions import TypedDict
import backend.api.features.store.cache as store_cache
import backend.api.features.store.model as store_model
import backend.data.block
import backend.blocks
from backend.api.external.middleware import require_permission
from backend.data import execution as execution_db
from backend.data import graph as graph_db
@@ -67,7 +67,7 @@ async def get_user_info(
dependencies=[Security(require_permission(APIKeyPermission.READ_BLOCK))],
)
async def get_graph_blocks() -> Sequence[dict[Any, Any]]:
blocks = [block() for block in backend.data.block.get_blocks().values()]
blocks = [block() for block in backend.blocks.get_blocks().values()]
return [b.to_dict() for b in blocks if not b.disabled]
@@ -83,7 +83,7 @@ async def execute_graph_block(
require_permission(APIKeyPermission.EXECUTE_BLOCK)
),
) -> CompletedBlockOutput:
obj = backend.data.block.get_block(block_id)
obj = backend.blocks.get_block(block_id)
if not obj:
raise HTTPException(status_code=404, detail=f"Block #{block_id} not found.")
if obj.disabled:


@@ -15,9 +15,9 @@ from prisma.enums import APIKeyPermission
from pydantic import BaseModel, Field
from backend.api.external.middleware import require_permission
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.tools import find_agent_tool, run_agent_tool
from backend.api.features.chat.tools.models import ToolResponseBase
from backend.copilot.model import ChatSession
from backend.copilot.tools import find_agent_tool, run_agent_tool
from backend.copilot.tools.models import ToolResponseBase
from backend.data.auth.base import APIAuthorizationInfo
logger = logging.getLogger(__name__)


@@ -10,10 +10,15 @@ import backend.api.features.library.db as library_db
import backend.api.features.library.model as library_model
import backend.api.features.store.db as store_db
import backend.api.features.store.model as store_model
import backend.data.block
from backend.blocks import load_all_blocks
from backend.blocks._base import (
AnyBlockSchema,
BlockCategory,
BlockInfo,
BlockSchema,
BlockType,
)
from backend.blocks.llm import LlmModel
from backend.data.block import AnyBlockSchema, BlockCategory, BlockInfo, BlockSchema
from backend.data.db import query_raw_with_schema
from backend.integrations.providers import ProviderName
from backend.util.cache import cached
@@ -22,7 +27,7 @@ from backend.util.models import Pagination
from .model import (
BlockCategoryResponse,
BlockResponse,
BlockType,
BlockTypeFilter,
CountResponse,
FilterType,
Provider,
@@ -88,7 +93,7 @@ def get_block_categories(category_blocks: int = 3) -> list[BlockCategoryResponse
def get_blocks(
*,
category: str | None = None,
type: BlockType | None = None,
type: BlockTypeFilter | None = None,
provider: ProviderName | None = None,
page: int = 1,
page_size: int = 50,
@@ -669,9 +674,9 @@ async def get_suggested_blocks(count: int = 5) -> list[BlockInfo]:
for block_type in load_all_blocks().values():
block: AnyBlockSchema = block_type()
if block.disabled or block.block_type in (
backend.data.block.BlockType.INPUT,
backend.data.block.BlockType.OUTPUT,
backend.data.block.BlockType.AGENT,
BlockType.INPUT,
BlockType.OUTPUT,
BlockType.AGENT,
):
continue
# Find the execution count for this block


@@ -4,7 +4,7 @@ from pydantic import BaseModel
import backend.api.features.library.model as library_model
import backend.api.features.store.model as store_model
from backend.data.block import BlockInfo
from backend.blocks._base import BlockInfo
from backend.integrations.providers import ProviderName
from backend.util.models import Pagination
@@ -15,7 +15,7 @@ FilterType = Literal[
"my_agents",
]
BlockType = Literal["all", "input", "action", "output"]
BlockTypeFilter = Literal["all", "input", "action", "output"]
class SearchEntry(BaseModel):


@@ -17,7 +17,7 @@ router = fastapi.APIRouter(
)
# Taken from backend/server/v2/store/db.py
# Taken from backend/api/features/store/db.py
def sanitize_query(query: str | None) -> str | None:
if query is None:
return query
@@ -88,7 +88,7 @@ async def get_block_categories(
)
async def get_blocks(
category: Annotated[str | None, fastapi.Query()] = None,
type: Annotated[builder_model.BlockType | None, fastapi.Query()] = None,
type: Annotated[builder_model.BlockTypeFilter | None, fastapi.Query()] = None,
provider: Annotated[ProviderName | None, fastapi.Query()] = None,
page: Annotated[int, fastapi.Query()] = 1,
page_size: Annotated[int, fastapi.Query()] = 50,


@@ -1,96 +0,0 @@
"""Configuration management for chat system."""
import os
from pydantic import Field, field_validator
from pydantic_settings import BaseSettings
class ChatConfig(BaseSettings):
"""Configuration for the chat system."""
# OpenAI API Configuration
model: str = Field(
default="anthropic/claude-opus-4.5", description="Default model to use"
)
title_model: str = Field(
default="openai/gpt-4o-mini",
description="Model to use for generating session titles (should be fast/cheap)",
)
api_key: str | None = Field(default=None, description="OpenAI API key")
base_url: str | None = Field(
default="https://openrouter.ai/api/v1",
description="Base URL for API (e.g., for OpenRouter)",
)
# Session TTL Configuration - 12 hours
session_ttl: int = Field(default=43200, description="Session TTL in seconds")
# Streaming Configuration
max_context_messages: int = Field(
default=50, ge=1, le=200, description="Maximum context messages"
)
stream_timeout: int = Field(default=300, description="Stream timeout in seconds")
max_retries: int = Field(default=3, description="Maximum number of retries")
max_agent_runs: int = Field(default=30, description="Maximum number of agent runs")
max_agent_schedules: int = Field(
default=30, description="Maximum number of agent schedules"
)
# Long-running operation configuration
long_running_operation_ttl: int = Field(
default=600,
description="TTL in seconds for long-running operation tracking in Redis (safety net if pod dies)",
)
# Langfuse Prompt Management Configuration
# Note: Langfuse credentials are in Settings().secrets (settings.py)
langfuse_prompt_name: str = Field(
default="CoPilot Prompt",
description="Name of the prompt in Langfuse to fetch",
)
@field_validator("api_key", mode="before")
@classmethod
def get_api_key(cls, v):
"""Get API key from environment if not provided."""
if v is None:
# Try to get from environment variables
# First check for CHAT_API_KEY (Pydantic prefix)
v = os.getenv("CHAT_API_KEY")
if not v:
# Fall back to OPEN_ROUTER_API_KEY
v = os.getenv("OPEN_ROUTER_API_KEY")
if not v:
# Fall back to OPENAI_API_KEY
v = os.getenv("OPENAI_API_KEY")
return v
@field_validator("base_url", mode="before")
@classmethod
def get_base_url(cls, v):
"""Get base URL from environment if not provided."""
if v is None:
# Check for OpenRouter or custom base URL
v = os.getenv("CHAT_BASE_URL")
if not v:
v = os.getenv("OPENROUTER_BASE_URL")
if not v:
v = os.getenv("OPENAI_BASE_URL")
if not v:
v = "https://openrouter.ai/api/v1"
return v
# Prompt paths for different contexts
PROMPT_PATHS: dict[str, str] = {
"default": "prompts/chat_system.md",
"onboarding": "prompts/onboarding_system.md",
}
class Config:
"""Pydantic config."""
env_file = ".env"
env_file_encoding = "utf-8"
extra = "ignore" # Ignore extra environment variables


@@ -1,119 +0,0 @@
import pytest
from .model import (
ChatMessage,
ChatSession,
Usage,
get_chat_session,
upsert_chat_session,
)
messages = [
ChatMessage(content="Hello, how are you?", role="user"),
ChatMessage(
content="I'm fine, thank you!",
role="assistant",
tool_calls=[
{
"id": "t123",
"type": "function",
"function": {
"name": "get_weather",
"arguments": '{"city": "New York"}',
},
}
],
),
ChatMessage(
content="I'm using the tool to get the weather",
role="tool",
tool_call_id="t123",
),
]
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_serialization_deserialization():
s = ChatSession.new(user_id="abc123")
s.messages = messages
s.usage = [Usage(prompt_tokens=100, completion_tokens=200, total_tokens=300)]
serialized = s.model_dump_json()
s2 = ChatSession.model_validate_json(serialized)
assert s2.model_dump() == s.model_dump()
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_redis_storage(setup_test_user, test_user_id):
s = ChatSession.new(user_id=test_user_id)
s.messages = messages
s = await upsert_chat_session(s)
s2 = await get_chat_session(
session_id=s.session_id,
user_id=s.user_id,
)
assert s2 == s
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_redis_storage_user_id_mismatch(
setup_test_user, test_user_id
):
s = ChatSession.new(user_id=test_user_id)
s.messages = messages
s = await upsert_chat_session(s)
s2 = await get_chat_session(s.session_id, "different_user_id")
assert s2 is None
@pytest.mark.asyncio(loop_scope="session")
async def test_chatsession_db_storage(setup_test_user, test_user_id):
"""Test that messages are correctly saved to and loaded from DB (not cache)."""
from backend.data.redis_client import get_redis_async
# Create session with messages including assistant message
s = ChatSession.new(user_id=test_user_id)
s.messages = messages # Contains user, assistant, and tool messages
assert s.session_id is not None, "Session id is not set"
# Upsert to save to both cache and DB
s = await upsert_chat_session(s)
# Clear the Redis cache to force DB load
redis_key = f"chat:session:{s.session_id}"
async_redis = await get_redis_async()
await async_redis.delete(redis_key)
# Load from DB (cache was cleared)
s2 = await get_chat_session(
session_id=s.session_id,
user_id=s.user_id,
)
assert s2 is not None, "Session not found after loading from DB"
assert len(s2.messages) == len(
s.messages
), f"Message count mismatch: expected {len(s.messages)}, got {len(s2.messages)}"
# Verify all roles are present
roles = [m.role for m in s2.messages]
assert "user" in roles, f"User message missing. Roles found: {roles}"
assert "assistant" in roles, f"Assistant message missing. Roles found: {roles}"
assert "tool" in roles, f"Tool message missing. Roles found: {roles}"
# Verify message content
for orig, loaded in zip(s.messages, s2.messages):
assert orig.role == loaded.role, f"Role mismatch: {orig.role} != {loaded.role}"
assert (
orig.content == loaded.content
), f"Content mismatch for {orig.role}: {orig.content} != {loaded.content}"
if orig.tool_calls:
assert (
loaded.tool_calls is not None
), f"Tool calls missing for {orig.role} message"
assert len(orig.tool_calls) == len(loaded.tool_calls)


@@ -1,20 +1,60 @@
"""Chat API routes for chat session management and streaming via SSE."""
import asyncio
import logging
import uuid as uuid_module
from collections.abc import AsyncGenerator
from typing import Annotated
from autogpt_libs import auth
from fastapi import APIRouter, Depends, Query, Security
from fastapi import APIRouter, Depends, Header, HTTPException, Query, Response, Security
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from backend.copilot import service as chat_service
from backend.copilot import stream_registry
from backend.copilot.completion_handler import (
process_operation_failure,
process_operation_success,
)
from backend.copilot.config import ChatConfig
from backend.copilot.executor.utils import enqueue_copilot_task
from backend.copilot.model import (
ChatMessage,
ChatSession,
append_and_save_message,
create_chat_session,
delete_chat_session,
get_chat_session,
get_user_sessions,
)
from backend.copilot.response_model import StreamError, StreamFinish, StreamHeartbeat
from backend.copilot.tools.models import (
AgentDetailsResponse,
AgentOutputResponse,
AgentPreviewResponse,
AgentSavedResponse,
AgentsFoundResponse,
BlockDetailsResponse,
BlockListResponse,
BlockOutputResponse,
ClarificationNeededResponse,
DocPageResponse,
DocSearchResultsResponse,
ErrorResponse,
ExecutionStartedResponse,
InputValidationErrorResponse,
NeedLoginResponse,
NoResultsResponse,
OperationInProgressResponse,
OperationPendingResponse,
OperationStartedResponse,
SetupRequirementsResponse,
UnderstandingUpdatedResponse,
)
from backend.copilot.tracking import track_user_message
from backend.util.exceptions import NotFoundError
from . import service as chat_service
from .config import ChatConfig
from .model import ChatSession, create_chat_session, get_chat_session, get_user_sessions
config = ChatConfig()
@@ -55,6 +95,15 @@ class CreateSessionResponse(BaseModel):
user_id: str | None
class ActiveStreamInfo(BaseModel):
"""Information about an active stream for reconnection."""
task_id: str
last_message_id: str # Redis Stream message ID for resumption
operation_id: str # Operation ID for completion tracking
tool_name: str # Name of the tool being executed
class SessionDetailResponse(BaseModel):
"""Response model providing complete details for a chat session, including messages."""
@@ -63,6 +112,7 @@ class SessionDetailResponse(BaseModel):
updated_at: str
user_id: str | None
messages: list[dict]
active_stream: ActiveStreamInfo | None = None # Present if stream is still active
class SessionSummaryResponse(BaseModel):
@@ -81,6 +131,14 @@ class ListSessionsResponse(BaseModel):
total: int
class OperationCompleteRequest(BaseModel):
"""Request model for external completion webhook."""
success: bool
result: dict | str | None = None
error: str | None = None
# ========== Routes ==========
@@ -155,6 +213,43 @@ async def create_session(
)
@router.delete(
"/sessions/{session_id}",
dependencies=[Security(auth.requires_user)],
status_code=204,
responses={404: {"description": "Session not found or access denied"}},
)
async def delete_session(
session_id: str,
user_id: Annotated[str, Security(auth.get_user_id)],
) -> Response:
"""
Delete a chat session.
Permanently removes a chat session and all its messages.
Only the owner can delete their sessions.
Args:
session_id: The session ID to delete.
user_id: The authenticated user's ID.
Returns:
204 No Content on success.
Raises:
HTTPException: 404 if session not found or not owned by user.
"""
deleted = await delete_chat_session(session_id, user_id)
if not deleted:
raise HTTPException(
status_code=404,
detail=f"Session {session_id} not found or access denied",
)
return Response(status_code=204)
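For illustration, a minimal client-side sketch of this delete endpoint, assuming an httpx client and bearer-token auth (both illustrative; not part of this diff):
import httpx
async def delete_session_remote(base_url: str, token: str, session_id: str) -> bool:
    # True on 204; False when the session is missing or owned by someone else (404).
    async with httpx.AsyncClient(base_url=base_url) as client:
        resp = await client.delete(
            f"/sessions/{session_id}",
            headers={"Authorization": f"Bearer {token}"},
        )
        return resp.status_code == 204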
@router.get(
"/sessions/{session_id}",
)
@@ -166,13 +261,14 @@ async def get_session(
Retrieve the details of a specific chat session.
Looks up a chat session by ID for the given user (if authenticated) and returns all session data including messages.
If there's an active stream for this session, returns the task_id for reconnection.
Args:
session_id: The unique identifier for the desired chat session.
user_id: The optional authenticated user ID, or None for anonymous access.
Returns:
SessionDetailResponse: Details for the requested session, including active_stream info if applicable.
"""
session = await get_chat_session(session_id, user_id)
@@ -180,11 +276,32 @@ async def get_session(
raise NotFoundError(f"Session {session_id} not found.")
messages = [message.model_dump() for message in session.messages]
# Check if there's an active stream for this session
active_stream_info = None
active_task, last_message_id = await stream_registry.get_active_task_for_session(
session_id, user_id
)
logger.info(
f"[GET_SESSION] session={session_id}, active_task={active_task is not None}, "
f"msg_count={len(messages)}, last_role={messages[-1].get('role') if messages else 'none'}"
)
if active_task:
# Filter out the in-progress assistant message from the session response.
# The client will receive the complete assistant response through the SSE
# stream replay instead, preventing duplicate content.
if messages and messages[-1].get("role") == "assistant":
messages = messages[:-1]
# Use "0-0" as last_message_id to replay the stream from the beginning.
# Since we filtered out the cached assistant message, the client needs
# the full stream to reconstruct the response.
active_stream_info = ActiveStreamInfo(
task_id=active_task.task_id,
last_message_id="0-0",
operation_id=active_task.operation_id,
tool_name=active_task.tool_name,
)
return SessionDetailResponse(
id=session.session_id,
@@ -192,6 +309,7 @@ async def get_session(
updated_at=session.updated_at.isoformat(),
user_id=session.user_id or None,
messages=messages,
active_stream=active_stream_info,
)
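A sketch of how a client might use the new active_stream field, assuming an httpx client and leaving the SSE parsing elided; the paths follow the routes in this file:
import httpx
async def resume_if_active(client: httpx.AsyncClient, session_id: str) -> None:
    detail = (await client.get(f"/sessions/{session_id}")).json()
    stream = detail.get("active_stream")
    if not stream:
        return
    # last_message_id is "0-0": the in-progress assistant message was filtered
    # out of the session payload, so a full replay reconstructs it client-side.
    async with client.stream(
        "GET",
        f"/tasks/{stream['task_id']}/stream",
        params={"last_message_id": stream["last_message_id"]},
    ) as resp:
        async for line in resp.aiter_lines():
            ...  # feed SSE lines to the chat UI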
@@ -211,49 +329,225 @@ async def stream_chat_post(
- Tool call UI elements (if invoked)
- Tool execution results
The AI generation runs in a background task that continues even if the client disconnects.
All chunks are written to Redis for reconnection support. If the client disconnects,
they can reconnect using GET /tasks/{task_id}/stream to resume from where they left off.
Args:
session_id: The chat session identifier to associate with the streamed messages.
request: Request body containing message, is_user_message, and optional context.
user_id: Optional authenticated user ID.
Returns:
StreamingResponse: SSE-formatted response chunks. First chunk is a "start" event
containing the task_id for reconnection.
"""
import time
stream_start_time = time.perf_counter()
log_meta = {"component": "ChatStream", "session_id": session_id}
if user_id:
log_meta["user_id"] = user_id
logger.info(
f"[TIMING] stream_chat_post STARTED, session={session_id}, "
f"user={user_id}, message_len={len(request.message)}",
extra={"json_fields": log_meta},
)
await _validate_and_get_session(session_id, user_id)
logger.info(
f"[TIMING] session validated in {(time.perf_counter() - stream_start_time) * 1000:.1f}ms",
extra={
"json_fields": {
**log_meta,
"duration_ms": (time.perf_counter() - stream_start_time) * 1000,
}
},
)
# Atomically append user message to session BEFORE creating task to avoid
# race condition where GET_SESSION sees task as "running" but message isn't
# saved yet. append_and_save_message re-fetches inside a lock to prevent
# message loss from concurrent requests.
if request.message:
message = ChatMessage(
role="user" if request.is_user_message else "assistant",
content=request.message,
)
if request.is_user_message:
track_user_message(
user_id=user_id,
session_id=session_id,
message_length=len(request.message),
)
logger.info(f"[STREAM] Saving user message to session {session_id}")
await append_and_save_message(session_id, message)
logger.info(f"[STREAM] User message saved for session {session_id}")
# Create a task in the stream registry for reconnection support
task_id = str(uuid_module.uuid4())
operation_id = str(uuid_module.uuid4())
log_meta["task_id"] = task_id
task_create_start = time.perf_counter()
await stream_registry.create_task(
task_id=task_id,
session_id=session_id,
user_id=user_id,
tool_call_id="chat_stream", # Not a tool call, but needed for the model
tool_name="chat",
operation_id=operation_id,
)
logger.info(
f"[TIMING] create_task completed in {(time.perf_counter() - task_create_start) * 1000:.1f}ms",
extra={
"json_fields": {
**log_meta,
"duration_ms": (time.perf_counter() - task_create_start) * 1000,
}
},
)
await enqueue_copilot_task(
task_id=task_id,
session_id=session_id,
user_id=user_id,
operation_id=operation_id,
message=request.message,
is_user_message=request.is_user_message,
context=request.context,
)
setup_time = (time.perf_counter() - stream_start_time) * 1000
logger.info(
f"[TIMING] Task enqueued to RabbitMQ, setup={setup_time:.1f}ms",
extra={"json_fields": {**log_meta, "setup_time_ms": setup_time}},
)
# SSE endpoint that subscribes to the task's stream
async def event_generator() -> AsyncGenerator[str, None]:
import time as time_module
event_gen_start = time_module.perf_counter()
logger.info(
f"[TIMING] event_generator STARTED, task={task_id}, session={session_id}, "
f"user={user_id}",
extra={"json_fields": log_meta},
)
subscriber_queue = None
first_chunk_yielded = False
chunks_yielded = 0
try:
# Subscribe to the task stream (this replays existing messages + live updates)
subscriber_queue = await stream_registry.subscribe_to_task(
task_id=task_id,
user_id=user_id,
last_message_id="0-0", # Get all messages from the beginning
)
if subscriber_queue is None:
yield StreamFinish().to_sse()
yield "data: [DONE]\n\n"
return
# Read from the subscriber queue and yield to SSE
logger.info(
"[TIMING] Starting to read from subscriber_queue",
extra={"json_fields": log_meta},
)
while True:
try:
chunk = await asyncio.wait_for(subscriber_queue.get(), timeout=30.0)
chunks_yielded += 1
if not first_chunk_yielded:
first_chunk_yielded = True
elapsed = time_module.perf_counter() - event_gen_start
logger.info(
f"[TIMING] FIRST CHUNK from queue at {elapsed:.2f}s, "
f"type={type(chunk).__name__}",
extra={
"json_fields": {
**log_meta,
"chunk_type": type(chunk).__name__,
"elapsed_ms": elapsed * 1000,
}
},
)
yield chunk.to_sse()
# Check for finish signal
if isinstance(chunk, StreamFinish):
total_time = time_module.perf_counter() - event_gen_start
logger.info(
f"[TIMING] StreamFinish received in {total_time:.2f}s; "
f"n_chunks={chunks_yielded}",
extra={
"json_fields": {
**log_meta,
"chunks_yielded": chunks_yielded,
"total_time_ms": total_time * 1000,
}
},
)
break
except asyncio.TimeoutError:
yield StreamHeartbeat().to_sse()
except GeneratorExit:
logger.info(
f"[TIMING] GeneratorExit (client disconnected), chunks={chunks_yielded}",
extra={
"json_fields": {
**log_meta,
"chunks_yielded": chunks_yielded,
"reason": "client_disconnect",
}
},
)
pass # Client disconnected - background task continues
except Exception as e:
elapsed = (time_module.perf_counter() - event_gen_start) * 1000
logger.error(
f"[TIMING] event_generator ERROR after {elapsed:.1f}ms: {e}",
extra={
"json_fields": {**log_meta, "elapsed_ms": elapsed, "error": str(e)}
},
)
# Surface error to frontend so it doesn't appear stuck
yield StreamError(
errorText="An error occurred. Please try again.",
code="stream_error",
).to_sse()
yield StreamFinish().to_sse()
finally:
# Unsubscribe when client disconnects or stream ends
if subscriber_queue is not None:
try:
await stream_registry.unsubscribe_from_task(
task_id, subscriber_queue
)
except Exception as unsub_err:
logger.error(
f"Error unsubscribing from task {task_id}: {unsub_err}",
exc_info=True,
)
# AI SDK protocol termination - always yield even if unsubscribe fails
total_time = time_module.perf_counter() - event_gen_start
logger.info(
f"[TIMING] event_generator FINISHED in {total_time:.2f}s; "
f"task={task_id}, session={session_id}, n_chunks={chunks_yielded}",
extra={
"json_fields": {
**log_meta,
"total_time_ms": total_time * 1000,
"chunks_yielded": chunks_yielded,
}
},
)
yield "data: [DONE]\n\n"
return StreamingResponse(
event_generator(),
@@ -270,63 +564,90 @@ async def stream_chat_post(
@router.get(
"/sessions/{session_id}/stream",
)
async def resume_session_stream(
session_id: str,
user_id: str | None = Depends(auth.get_user_id),
):
"""
Resume an active stream for a session.
Called by the AI SDK's ``useChat(resume: true)`` on page load.
Checks for an active (in-progress) task on the session and either replays
the full SSE stream or returns 204 No Content if nothing is running.
Args:
session_id: The chat session identifier.
user_id: Optional authenticated user ID.
Returns:
StreamingResponse (SSE) when an active stream exists,
or 204 No Content when there is nothing to resume.
"""
active_task, _last_id = await stream_registry.get_active_task_for_session(
session_id, user_id
)
if not active_task:
return Response(status_code=204)
subscriber_queue = await stream_registry.subscribe_to_task(
task_id=active_task.task_id,
user_id=user_id,
last_message_id="0-0", # Full replay so useChat rebuilds the message
)
if subscriber_queue is None:
return Response(status_code=204)
async def event_generator() -> AsyncGenerator[str, None]:
chunk_count = 0
first_chunk_type: str | None = None
try:
while True:
try:
chunk = await asyncio.wait_for(subscriber_queue.get(), timeout=30.0)
if chunk_count < 3:
logger.info(
"Resume stream chunk",
extra={
"session_id": session_id,
"chunk_type": str(chunk.type),
},
)
if not first_chunk_type:
first_chunk_type = str(chunk.type)
chunk_count += 1
yield chunk.to_sse()
if isinstance(chunk, StreamFinish):
break
except asyncio.TimeoutError:
yield StreamHeartbeat().to_sse()
except GeneratorExit:
pass
except Exception as e:
logger.error(f"Error in resume stream for session {session_id}: {e}")
finally:
try:
await stream_registry.unsubscribe_from_task(
active_task.task_id, subscriber_queue
)
except Exception as unsub_err:
logger.error(
f"Error unsubscribing from task {active_task.task_id}: {unsub_err}",
exc_info=True,
)
logger.info(
"Resume stream completed",
extra={
"session_id": session_id,
"n_chunks": chunk_count,
"first_chunk_type": first_chunk_type,
},
)
yield "data: [DONE]\n\n"
return StreamingResponse(
event_generator(),
@@ -334,8 +655,8 @@ async def stream_chat_get(
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no", # Disable nginx buffering
"x-vercel-ai-ui-message-stream": "v1", # AI SDK protocol header
"X-Accel-Buffering": "no",
"x-vercel-ai-ui-message-stream": "v1",
},
)
@@ -366,6 +687,249 @@ async def session_assign_user(
return {"status": "ok"}
# ========== Task Streaming (SSE Reconnection) ==========
@router.get(
"/tasks/{task_id}/stream",
)
async def stream_task(
task_id: str,
user_id: str | None = Depends(auth.get_user_id),
last_message_id: str = Query(
default="0-0",
description="Last Redis Stream message ID received (e.g., '1706540123456-0'). Use '0-0' for full replay.",
),
):
"""
Reconnect to a long-running task's SSE stream.
When a long-running operation (like agent generation) starts, the client
receives a task_id. If the connection drops, the client can reconnect
using this endpoint to resume receiving updates.
Args:
task_id: The task ID from the operation_started response.
user_id: Authenticated user ID for ownership validation.
last_message_id: Last Redis Stream message ID received ("0-0" for full replay).
Returns:
StreamingResponse: SSE-formatted response chunks starting after last_message_id.
Raises:
HTTPException: 404 if task not found, 410 if task expired, 403 if access denied.
"""
# Check task existence and expiry before subscribing
task, error_code = await stream_registry.get_task_with_expiry_info(task_id)
if error_code == "TASK_EXPIRED":
raise HTTPException(
status_code=410,
detail={
"code": "TASK_EXPIRED",
"message": "This operation has expired. Please try again.",
},
)
if error_code == "TASK_NOT_FOUND":
raise HTTPException(
status_code=404,
detail={
"code": "TASK_NOT_FOUND",
"message": f"Task {task_id} not found.",
},
)
# Validate ownership if task has an owner
if task and task.user_id and user_id != task.user_id:
raise HTTPException(
status_code=403,
detail={
"code": "ACCESS_DENIED",
"message": "You do not have access to this task.",
},
)
# Get subscriber queue from stream registry
subscriber_queue = await stream_registry.subscribe_to_task(
task_id=task_id,
user_id=user_id,
last_message_id=last_message_id,
)
if subscriber_queue is None:
raise HTTPException(
status_code=404,
detail={
"code": "TASK_NOT_FOUND",
"message": f"Task {task_id} not found or access denied.",
},
)
async def event_generator() -> AsyncGenerator[str, None]:
heartbeat_interval = 15.0 # Send heartbeat every 15 seconds
try:
while True:
try:
# Wait for next chunk with timeout for heartbeats
chunk = await asyncio.wait_for(
subscriber_queue.get(), timeout=heartbeat_interval
)
yield chunk.to_sse()
# Check for finish signal
if isinstance(chunk, StreamFinish):
break
except asyncio.TimeoutError:
# Send heartbeat to keep connection alive
yield StreamHeartbeat().to_sse()
except Exception as e:
logger.error(f"Error in task stream {task_id}: {e}", exc_info=True)
finally:
# Unsubscribe when client disconnects or stream ends
try:
await stream_registry.unsubscribe_from_task(task_id, subscriber_queue)
except Exception as unsub_err:
logger.error(
f"Error unsubscribing from task {task_id}: {unsub_err}",
exc_info=True,
)
# AI SDK protocol termination - always yield even if unsubscribe fails
yield "data: [DONE]\n\n"
return StreamingResponse(
event_generator(),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no",
"x-vercel-ai-ui-message-stream": "v1",
},
)
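A reconnect-loop sketch for this endpoint, assuming an httpx client and assuming the chunks carry standard SSE ``id:`` lines that map to Redis Stream IDs (the wire format is not shown in this diff); the last_message_id bookkeeping is the point here:
import httpx
async def stream_with_resume(client: httpx.AsyncClient, task_id: str) -> None:
    last_id = "0-0"  # full replay on first connect
    while True:
        try:
            async with client.stream(
                "GET",
                f"/tasks/{task_id}/stream",
                params={"last_message_id": last_id},
            ) as resp:
                async for line in resp.aiter_lines():
                    if line.startswith("id:"):  # assumed SSE id field
                        last_id = line[3:].strip()
                    elif line == "data: [DONE]":
                        return
        except httpx.TransportError:
            continue  # dropped connection: reconnect and resume after last_id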
@router.get(
"/tasks/{task_id}",
)
async def get_task_status(
task_id: str,
user_id: str | None = Depends(auth.get_user_id),
) -> dict:
"""
Get the status of a long-running task.
Args:
task_id: The task ID to check.
user_id: Authenticated user ID for ownership validation.
Returns:
dict: Task status including task_id, status, tool_name, and operation_id.
Raises:
NotFoundError: If task_id is not found or user doesn't have access.
"""
task = await stream_registry.get_task(task_id)
if task is None:
raise NotFoundError(f"Task {task_id} not found.")
# Validate ownership - if task has an owner, requester must match
if task.user_id and user_id != task.user_id:
raise NotFoundError(f"Task {task_id} not found.")
return {
"task_id": task.task_id,
"session_id": task.session_id,
"status": task.status,
"tool_name": task.tool_name,
"operation_id": task.operation_id,
"created_at": task.created_at.isoformat(),
}
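A polling sketch against this status endpoint; the terminal status names are assumptions, since the diff does not enumerate them:
import asyncio
import httpx
async def wait_for_task(client: httpx.AsyncClient, task_id: str) -> dict:
    while True:
        task = (await client.get(f"/tasks/{task_id}")).json()
        if task["status"] not in ("pending", "running"):  # assumed status values
            return task
        await asyncio.sleep(2.0)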
# ========== External Completion Webhook ==========
@router.post(
"/operations/{operation_id}/complete",
status_code=200,
)
async def complete_operation(
operation_id: str,
request: OperationCompleteRequest,
x_api_key: str | None = Header(default=None),
) -> dict:
"""
External completion webhook for long-running operations.
Called by Agent Generator (or other services) when an operation completes.
This triggers the stream registry to publish completion and continue LLM generation.
Args:
operation_id: The operation ID to complete.
request: Completion payload with success status and result/error.
x_api_key: Internal API key for authentication.
Returns:
dict: Status of the completion.
Raises:
HTTPException: If API key is invalid or operation not found.
"""
# Validate internal API key - reject if not configured or invalid
if not config.internal_api_key:
logger.error(
"Operation complete webhook rejected: CHAT_INTERNAL_API_KEY not configured"
)
raise HTTPException(
status_code=503,
detail="Webhook not available: internal API key not configured",
)
if x_api_key != config.internal_api_key:
raise HTTPException(status_code=401, detail="Invalid API key")
# Find task by operation_id
task = await stream_registry.find_task_by_operation_id(operation_id)
if task is None:
raise HTTPException(
status_code=404,
detail=f"Operation {operation_id} not found",
)
logger.info(
f"Received completion webhook for operation {operation_id} "
f"(task_id={task.task_id}, success={request.success})"
)
if request.success:
await process_operation_success(task, request.result)
else:
await process_operation_failure(task, request.error)
return {"status": "ok", "task_id": task.task_id}
# ========== Configuration ==========
@router.get("/config/ttl", status_code=200)
async def get_ttl_config() -> dict:
"""
Get the stream TTL configuration.
Returns the Time-To-Live settings for chat streams, which determines
how long clients can reconnect to an active stream.
Returns:
dict: TTL configuration with seconds and milliseconds values.
"""
return {
"stream_ttl_seconds": config.stream_ttl,
"stream_ttl_ms": config.stream_ttl * 1000,
}
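Clients can turn this into a reconnect deadline; a tiny sketch, assuming an httpx client:
import time
import httpx
async def reconnect_deadline(client: httpx.AsyncClient) -> float:
    ttl = (await client.get("/config/ttl")).json()
    # Past this monotonic instant, reconnecting to the stream is pointless.
    return time.monotonic() + ttl["stream_ttl_seconds"]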
# ========== Health Check ==========
@@ -402,3 +966,43 @@ async def health_check() -> dict:
"service": "chat",
"version": "0.1.0",
}
# ========== Schema Export (for OpenAPI / Orval codegen) ==========
ToolResponseUnion = (
AgentsFoundResponse
| NoResultsResponse
| AgentDetailsResponse
| SetupRequirementsResponse
| ExecutionStartedResponse
| NeedLoginResponse
| ErrorResponse
| InputValidationErrorResponse
| AgentOutputResponse
| UnderstandingUpdatedResponse
| AgentPreviewResponse
| AgentSavedResponse
| ClarificationNeededResponse
| BlockListResponse
| BlockDetailsResponse
| BlockOutputResponse
| DocSearchResultsResponse
| DocPageResponse
| OperationStartedResponse
| OperationPendingResponse
| OperationInProgressResponse
)
@router.get(
"/schema/tool-responses",
response_model=ToolResponseUnion,
include_in_schema=True,
summary="[Dummy] Tool response type export for codegen",
description="This endpoint is not meant to be called. It exists solely to "
"expose tool response models in the OpenAPI schema for frontend codegen.",
)
async def _tool_response_schema() -> ToolResponseUnion: # type: ignore[return]
"""Never called at runtime. Exists only so Orval generates TS types."""
raise HTTPException(status_code=501, detail="Schema-only endpoint")

View File

@@ -1,82 +0,0 @@
import logging
from os import getenv
import pytest
from . import service as chat_service
from .model import create_chat_session, get_chat_session, upsert_chat_session
from .response_model import (
StreamError,
StreamFinish,
StreamTextDelta,
StreamToolOutputAvailable,
)
logger = logging.getLogger(__name__)
@pytest.mark.asyncio(loop_scope="session")
async def test_stream_chat_completion(setup_test_user, test_user_id):
"""
Test the stream_chat_completion function.
"""
api_key: str | None = getenv("OPEN_ROUTER_API_KEY")
if not api_key:
return pytest.skip("OPEN_ROUTER_API_KEY is not set, skipping test")
session = await create_chat_session(test_user_id)
has_errors = False
has_ended = False
assistant_message = ""
async for chunk in chat_service.stream_chat_completion(
session.session_id, "Hello, how are you?", user_id=session.user_id
):
logger.info(chunk)
if isinstance(chunk, StreamError):
has_errors = True
if isinstance(chunk, StreamTextDelta):
assistant_message += chunk.delta
if isinstance(chunk, StreamFinish):
has_ended = True
assert has_ended, "Chat completion did not end"
assert not has_errors, "Error occurred while streaming chat completion"
assert assistant_message, "Assistant message is empty"
@pytest.mark.asyncio(loop_scope="session")
async def test_stream_chat_completion_with_tool_calls(setup_test_user, test_user_id):
"""
Test the stream_chat_completion function with tool calls.
"""
api_key: str | None = getenv("OPEN_ROUTER_API_KEY")
if not api_key:
return pytest.skip("OPEN_ROUTER_API_KEY is not set, skipping test")
session = await create_chat_session(test_user_id)
session = await upsert_chat_session(session)
has_errors = False
has_ended = False
had_tool_calls = False
async for chunk in chat_service.stream_chat_completion(
session.session_id,
"Please find me an agent that can help me with my business. Use the query 'moneny printing agent'",
user_id=session.user_id,
):
logger.info(chunk)
if isinstance(chunk, StreamError):
has_errors = True
if isinstance(chunk, StreamFinish):
has_ended = True
if isinstance(chunk, StreamToolOutputAvailable):
had_tool_calls = True
assert has_ended, "Chat completion did not end"
assert not has_errors, "Error occurred while streaming chat completion"
assert had_tool_calls, "Tool calls did not occur"
session = await get_chat_session(session.session_id)
assert session, "Session not found"
assert session.usage, "Usage is empty"

View File

@@ -1,28 +0,0 @@
"""Agent generator package - Creates agents from natural language."""
from .core import (
AgentGeneratorNotConfiguredError,
decompose_goal,
generate_agent,
generate_agent_patch,
get_agent_as_json,
json_to_graph,
save_agent_to_library,
)
from .service import health_check as check_external_service_health
from .service import is_external_service_configured
__all__ = [
# Core functions
"decompose_goal",
"generate_agent",
"generate_agent_patch",
"save_agent_to_library",
"get_agent_as_json",
"json_to_graph",
# Exceptions
"AgentGeneratorNotConfiguredError",
# Service
"is_external_service_configured",
"check_external_service_health",
]

View File

@@ -1,277 +0,0 @@
"""Core agent generation functions."""
import logging
import uuid
from typing import Any
from backend.api.features.library import db as library_db
from backend.data.graph import Graph, Link, Node, create_graph
from .service import (
decompose_goal_external,
generate_agent_external,
generate_agent_patch_external,
is_external_service_configured,
)
logger = logging.getLogger(__name__)
class AgentGeneratorNotConfiguredError(Exception):
"""Raised when the external Agent Generator service is not configured."""
pass
def _check_service_configured() -> None:
"""Check if the external Agent Generator service is configured.
Raises:
AgentGeneratorNotConfiguredError: If the service is not configured.
"""
if not is_external_service_configured():
raise AgentGeneratorNotConfiguredError(
"Agent Generator service is not configured. "
"Set AGENTGENERATOR_HOST environment variable to enable agent generation."
)
async def decompose_goal(description: str, context: str = "") -> dict[str, Any] | None:
"""Break down a goal into steps or return clarifying questions.
Args:
description: Natural language goal description
context: Additional context (e.g., answers to previous questions)
Returns:
Dict with either:
- {"type": "clarifying_questions", "questions": [...]}
- {"type": "instructions", "steps": [...]}
Or None on error
Raises:
AgentGeneratorNotConfiguredError: If the external service is not configured.
"""
_check_service_configured()
logger.info("Calling external Agent Generator service for decompose_goal")
return await decompose_goal_external(description, context)
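A sketch of how a caller might branch on the documented result shapes (the goal text is illustrative):
async def build_from_goal(description: str) -> dict | None:
    result = await decompose_goal(description)
    if result is None:
        return None  # external service error
    if result["type"] == "clarifying_questions":
        return result  # surface the questions to the user before generating
    return await generate_agent(result)  # "instructions" path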
async def generate_agent(instructions: dict[str, Any]) -> dict[str, Any] | None:
"""Generate agent JSON from instructions.
Args:
instructions: Structured instructions from decompose_goal
Returns:
Agent JSON dict or None on error
Raises:
AgentGeneratorNotConfiguredError: If the external service is not configured.
"""
_check_service_configured()
logger.info("Calling external Agent Generator service for generate_agent")
result = await generate_agent_external(instructions)
if result:
# Ensure required fields
if "id" not in result:
result["id"] = str(uuid.uuid4())
if "version" not in result:
result["version"] = 1
if "is_active" not in result:
result["is_active"] = True
return result
def json_to_graph(agent_json: dict[str, Any]) -> Graph:
"""Convert agent JSON dict to Graph model.
Args:
agent_json: Agent JSON with nodes and links
Returns:
Graph ready for saving
"""
nodes = []
for n in agent_json.get("nodes", []):
node = Node(
id=n.get("id", str(uuid.uuid4())),
block_id=n["block_id"],
input_default=n.get("input_default", {}),
metadata=n.get("metadata", {}),
)
nodes.append(node)
links = []
for link_data in agent_json.get("links", []):
link = Link(
id=link_data.get("id", str(uuid.uuid4())),
source_id=link_data["source_id"],
sink_id=link_data["sink_id"],
source_name=link_data["source_name"],
sink_name=link_data["sink_name"],
is_static=link_data.get("is_static", False),
)
links.append(link)
return Graph(
id=agent_json.get("id", str(uuid.uuid4())),
version=agent_json.get("version", 1),
is_active=agent_json.get("is_active", True),
name=agent_json.get("name", "Generated Agent"),
description=agent_json.get("description", ""),
nodes=nodes,
links=links,
)
def _reassign_node_ids(graph: Graph) -> None:
"""Reassign all node and link IDs to new UUIDs.
This is needed when creating a new version to avoid unique constraint violations.
"""
# Create mapping from old node IDs to new UUIDs
id_map = {node.id: str(uuid.uuid4()) for node in graph.nodes}
# Reassign node IDs
for node in graph.nodes:
node.id = id_map[node.id]
# Update link references to use new node IDs
for link in graph.links:
link.id = str(uuid.uuid4()) # Also give links new IDs
if link.source_id in id_map:
link.source_id = id_map[link.source_id]
if link.sink_id in id_map:
link.sink_id = id_map[link.sink_id]
async def save_agent_to_library(
agent_json: dict[str, Any], user_id: str, is_update: bool = False
) -> tuple[Graph, Any]:
"""Save agent to database and user's library.
Args:
agent_json: Agent JSON dict
user_id: User ID
is_update: Whether this is an update to an existing agent
Returns:
Tuple of (created Graph, LibraryAgent)
"""
from backend.data.graph import get_graph_all_versions
graph = json_to_graph(agent_json)
if is_update:
# For updates, keep the same graph ID but increment version
# and reassign node/link IDs to avoid conflicts
if graph.id:
existing_versions = await get_graph_all_versions(graph.id, user_id)
if existing_versions:
latest_version = max(v.version for v in existing_versions)
graph.version = latest_version + 1
# Reassign node IDs (but keep graph ID the same)
_reassign_node_ids(graph)
logger.info(f"Updating agent {graph.id} to version {graph.version}")
else:
# For new agents, always generate a fresh UUID to avoid collisions
graph.id = str(uuid.uuid4())
graph.version = 1
# Reassign all node IDs as well
_reassign_node_ids(graph)
logger.info(f"Creating new agent with ID {graph.id}")
# Save to database
created_graph = await create_graph(graph, user_id)
# Add to user's library (or update existing library agent)
library_agents = await library_db.create_library_agent(
graph=created_graph,
user_id=user_id,
sensitive_action_safe_mode=True,
create_library_agents_for_sub_graphs=False,
)
return created_graph, library_agents[0]
async def get_agent_as_json(
graph_id: str, user_id: str | None
) -> dict[str, Any] | None:
"""Fetch an agent and convert to JSON format for editing.
Args:
graph_id: Graph ID or library agent ID
user_id: User ID
Returns:
Agent as JSON dict or None if not found
"""
from backend.data.graph import get_graph
# Try to get the graph (version=None gets the active version)
graph = await get_graph(graph_id, version=None, user_id=user_id)
if not graph:
return None
# Convert to JSON format
nodes = []
for node in graph.nodes:
nodes.append(
{
"id": node.id,
"block_id": node.block_id,
"input_default": node.input_default,
"metadata": node.metadata,
}
)
links = []
for node in graph.nodes:
for link in node.output_links:
links.append(
{
"id": link.id,
"source_id": link.source_id,
"sink_id": link.sink_id,
"source_name": link.source_name,
"sink_name": link.sink_name,
"is_static": link.is_static,
}
)
return {
"id": graph.id,
"name": graph.name,
"description": graph.description,
"version": graph.version,
"is_active": graph.is_active,
"nodes": nodes,
"links": links,
}
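Together with save_agent_to_library above, this enables a fetch-edit-save round trip; a sketch with a hypothetical rename:
async def rename_agent(graph_id: str, user_id: str, new_name: str):
    agent_json = await get_agent_as_json(graph_id, user_id)
    if agent_json is None:
        return None
    agent_json["name"] = new_name
    # is_update=True keeps the graph ID, bumps the version, and reassigns
    # node/link IDs to avoid unique-constraint violations.
    return await save_agent_to_library(agent_json, user_id, is_update=True)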
async def generate_agent_patch(
update_request: str, current_agent: dict[str, Any]
) -> dict[str, Any] | None:
"""Update an existing agent using natural language.
The external Agent Generator service handles:
- Generating the patch
- Applying the patch
- Fixing and validating the result
Args:
update_request: Natural language description of changes
current_agent: Current agent JSON
Returns:
Updated agent JSON, clarifying questions dict, or None on error
Raises:
AgentGeneratorNotConfiguredError: If the external service is not configured.
"""
_check_service_configured()
logger.info("Calling external Agent Generator service for generate_agent_patch")
return await generate_agent_patch_external(update_request, current_agent)

View File

@@ -1,269 +0,0 @@
"""External Agent Generator service client.
This module provides a client for communicating with the external Agent Generator
microservice. When AGENTGENERATOR_HOST is configured, the agent generation functions
will delegate to the external service instead of using the built-in LLM-based implementation.
"""
import logging
from typing import Any
import httpx
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
_client: httpx.AsyncClient | None = None
_settings: Settings | None = None
def _get_settings() -> Settings:
"""Get or create settings singleton."""
global _settings
if _settings is None:
_settings = Settings()
return _settings
def is_external_service_configured() -> bool:
"""Check if external Agent Generator service is configured."""
settings = _get_settings()
return bool(settings.config.agentgenerator_host)
def _get_base_url() -> str:
"""Get the base URL for the external service."""
settings = _get_settings()
host = settings.config.agentgenerator_host
port = settings.config.agentgenerator_port
return f"http://{host}:{port}"
def _get_client() -> httpx.AsyncClient:
"""Get or create the HTTP client for the external service."""
global _client
if _client is None:
settings = _get_settings()
_client = httpx.AsyncClient(
base_url=_get_base_url(),
timeout=httpx.Timeout(settings.config.agentgenerator_timeout),
)
return _client
async def decompose_goal_external(
description: str, context: str = ""
) -> dict[str, Any] | None:
"""Call the external service to decompose a goal.
Args:
description: Natural language goal description
context: Additional context (e.g., answers to previous questions)
Returns:
Dict with either:
- {"type": "clarifying_questions", "questions": [...]}
- {"type": "instructions", "steps": [...]}
- {"type": "unachievable_goal", ...}
- {"type": "vague_goal", ...}
Or None on error
"""
client = _get_client()
# Build the request payload
payload: dict[str, Any] = {"description": description}
if context:
# The external service uses user_instruction for additional context
payload["user_instruction"] = context
try:
response = await client.post("/api/decompose-description", json=payload)
response.raise_for_status()
data = response.json()
if not data.get("success"):
logger.error(f"External service returned error: {data.get('error')}")
return None
# Map the response to the expected format
response_type = data.get("type")
if response_type == "instructions":
return {"type": "instructions", "steps": data.get("steps", [])}
elif response_type == "clarifying_questions":
return {
"type": "clarifying_questions",
"questions": data.get("questions", []),
}
elif response_type == "unachievable_goal":
return {
"type": "unachievable_goal",
"reason": data.get("reason"),
"suggested_goal": data.get("suggested_goal"),
}
elif response_type == "vague_goal":
return {
"type": "vague_goal",
"suggested_goal": data.get("suggested_goal"),
}
else:
logger.error(
f"Unknown response type from external service: {response_type}"
)
return None
except httpx.HTTPStatusError as e:
logger.error(f"HTTP error calling external agent generator: {e}")
return None
except httpx.RequestError as e:
logger.error(f"Request error calling external agent generator: {e}")
return None
except Exception as e:
logger.error(f"Unexpected error calling external agent generator: {e}")
return None
async def generate_agent_external(
instructions: dict[str, Any]
) -> dict[str, Any] | None:
"""Call the external service to generate an agent from instructions.
Args:
instructions: Structured instructions from decompose_goal
Returns:
Agent JSON dict or None on error
"""
client = _get_client()
try:
response = await client.post(
"/api/generate-agent", json={"instructions": instructions}
)
response.raise_for_status()
data = response.json()
if not data.get("success"):
logger.error(f"External service returned error: {data.get('error')}")
return None
return data.get("agent_json")
except httpx.HTTPStatusError as e:
logger.error(f"HTTP error calling external agent generator: {e}")
return None
except httpx.RequestError as e:
logger.error(f"Request error calling external agent generator: {e}")
return None
except Exception as e:
logger.error(f"Unexpected error calling external agent generator: {e}")
return None
async def generate_agent_patch_external(
update_request: str, current_agent: dict[str, Any]
) -> dict[str, Any] | None:
"""Call the external service to generate a patch for an existing agent.
Args:
update_request: Natural language description of changes
current_agent: Current agent JSON
Returns:
Updated agent JSON, clarifying questions dict, or None on error
"""
client = _get_client()
try:
response = await client.post(
"/api/update-agent",
json={
"update_request": update_request,
"current_agent_json": current_agent,
},
)
response.raise_for_status()
data = response.json()
if not data.get("success"):
logger.error(f"External service returned error: {data.get('error')}")
return None
# Check if it's clarifying questions
if data.get("type") == "clarifying_questions":
return {
"type": "clarifying_questions",
"questions": data.get("questions", []),
}
# Otherwise return the updated agent JSON
return data.get("agent_json")
except httpx.HTTPStatusError as e:
logger.error(f"HTTP error calling external agent generator: {e}")
return None
except httpx.RequestError as e:
logger.error(f"Request error calling external agent generator: {e}")
return None
except Exception as e:
logger.error(f"Unexpected error calling external agent generator: {e}")
return None
async def get_blocks_external() -> list[dict[str, Any]] | None:
"""Get available blocks from the external service.
Returns:
List of block info dicts or None on error
"""
client = _get_client()
try:
response = await client.get("/api/blocks")
response.raise_for_status()
data = response.json()
if not data.get("success"):
logger.error("External service returned error getting blocks")
return None
return data.get("blocks", [])
except httpx.HTTPStatusError as e:
logger.error(f"HTTP error getting blocks from external service: {e}")
return None
except httpx.RequestError as e:
logger.error(f"Request error getting blocks from external service: {e}")
return None
except Exception as e:
logger.error(f"Unexpected error getting blocks from external service: {e}")
return None
async def health_check() -> bool:
"""Check if the external service is healthy.
Returns:
True if healthy, False otherwise
"""
if not is_external_service_configured():
return False
client = _get_client()
try:
response = await client.get("/health")
response.raise_for_status()
data = response.json()
return data.get("status") == "healthy" and data.get("blocks_loaded", False)
except Exception as e:
logger.warning(f"External agent generator health check failed: {e}")
return False
async def close_client() -> None:
"""Close the HTTP client."""
global _client
if _client is not None:
await _client.aclose()
_client = None
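close_client is presumably wired into service shutdown; a minimal sketch, assuming a FastAPI lifespan hook (the actual wiring is not shown in this diff):
from contextlib import asynccontextmanager
from fastapi import FastAPI
@asynccontextmanager
async def lifespan(app: FastAPI):
    yield
    await close_client()  # release pooled HTTP connections on shutdown
app = FastAPI(lifespan=lifespan)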

View File

@@ -1,203 +0,0 @@
"""Shared agent search functionality for find_agent and find_library_agent tools."""
import asyncio
import logging
from typing import Literal
from backend.api.features.library import db as library_db
from backend.api.features.store import db as store_db
from backend.data import graph as graph_db
from backend.data.graph import GraphModel
from backend.util.exceptions import DatabaseError, NotFoundError
from .models import (
AgentInfo,
AgentsFoundResponse,
ErrorResponse,
NoResultsResponse,
ToolResponseBase,
)
from .utils import fetch_graph_from_store_slug
logger = logging.getLogger(__name__)
SearchSource = Literal["marketplace", "library"]
async def search_agents(
query: str,
source: SearchSource,
session_id: str | None,
user_id: str | None = None,
) -> ToolResponseBase:
"""
Search for agents in marketplace or user library.
Args:
query: Search query string
source: "marketplace" or "library"
session_id: Chat session ID
user_id: User ID (required for library search)
Returns:
AgentsFoundResponse, NoResultsResponse, or ErrorResponse
"""
if not query:
return ErrorResponse(
message="Please provide a search query", session_id=session_id
)
if source == "library" and not user_id:
return ErrorResponse(
message="User authentication required to search library",
session_id=session_id,
)
agents: list[AgentInfo] = []
try:
if source == "marketplace":
logger.info(f"Searching marketplace for: {query}")
results = await store_db.get_store_agents(search_query=query, page_size=5)
# Fetch all graphs in parallel for better performance
async def fetch_marketplace_graph(
creator: str, slug: str
) -> GraphModel | None:
try:
graph, _ = await fetch_graph_from_store_slug(creator, slug)
return graph
except Exception as e:
logger.warning(
f"Failed to fetch input schema for {creator}/{slug}: {e}"
)
return None
graphs = await asyncio.gather(
*(
fetch_marketplace_graph(agent.creator, agent.slug)
for agent in results.agents
)
)
for agent, graph in zip(results.agents, graphs):
agents.append(
AgentInfo(
id=f"{agent.creator}/{agent.slug}",
name=agent.agent_name,
description=agent.description or "",
source="marketplace",
in_library=False,
creator=agent.creator,
category="general",
rating=agent.rating,
runs=agent.runs,
is_featured=False,
inputs=graph.input_schema if graph else None,
)
)
else: # library
logger.info(f"Searching user library for: {query}")
results = await library_db.list_library_agents(
user_id=user_id, # type: ignore[arg-type]
search_term=query,
page_size=10,
)
# Fetch all graphs in parallel for better performance
# (list_library_agents doesn't include nodes for performance)
async def fetch_library_graph(
graph_id: str, graph_version: int
) -> GraphModel | None:
try:
return await graph_db.get_graph(
graph_id=graph_id,
version=graph_version,
user_id=user_id,
)
except Exception as e:
logger.warning(
f"Failed to fetch input schema for graph {graph_id}: {e}"
)
return None
graphs = await asyncio.gather(
*(
fetch_library_graph(agent.graph_id, agent.graph_version)
for agent in results.agents
)
)
for agent, graph in zip(results.agents, graphs):
agents.append(
AgentInfo(
id=agent.id,
name=agent.name,
description=agent.description or "",
source="library",
in_library=True,
creator=agent.creator_name,
status=agent.status.value,
can_access_graph=agent.can_access_graph,
has_external_trigger=agent.has_external_trigger,
new_output=agent.new_output,
graph_id=agent.graph_id,
inputs=graph.input_schema if graph else None,
)
)
logger.info(f"Found {len(agents)} agents in {source}")
except NotFoundError:
pass
except DatabaseError as e:
logger.error(f"Error searching {source}: {e}", exc_info=True)
return ErrorResponse(
message=f"Failed to search {source}. Please try again.",
error=str(e),
session_id=session_id,
)
if not agents:
suggestions = (
[
"Try more general terms",
"Browse categories in the marketplace",
"Check spelling",
]
if source == "marketplace"
else [
"Try different keywords",
"Use find_agent to search the marketplace",
"Check your library at /library",
]
)
no_results_msg = (
f"No agents found matching '{query}'. Try different keywords or browse the marketplace."
if source == "marketplace"
else f"No agents matching '{query}' found in your library."
)
return NoResultsResponse(
message=no_results_msg, session_id=session_id, suggestions=suggestions
)
title = f"Found {len(agents)} agent{'s' if len(agents) != 1 else ''} "
title += (
f"for '{query}'"
if source == "marketplace"
else f"in your library for '{query}'"
)
message = (
"Now you have found some options for the user to choose from. "
"You can add a link to a recommended agent at: /marketplace/agent/agent_id "
"Please ask the user if they would like to use any of these agents."
if source == "marketplace"
else "Found agents in the user's library. You can provide a link to view an agent at: "
"/library/agents/{agent_id}. Use agent_output to get execution results, or run_agent to execute."
)
return AgentsFoundResponse(
message=message,
title=title,
agents=agents,
count=len(agents),
session_id=session_id,
)
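A usage sketch for the shared helper; the query string and user ID are illustrative:
async def demo(user_id: str) -> None:
    result = await search_agents(
        query="email summarizer",
        source="library",
        session_id=None,
        user_id=user_id,  # required when source="library"
    )
    if isinstance(result, AgentsFoundResponse):
        for agent in result.agents:
            print(agent.name, agent.id)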

View File

@@ -1,7 +1,7 @@
import asyncio
import logging
from datetime import datetime, timedelta, timezone
from typing import TYPE_CHECKING, Annotated, List, Literal
from typing import TYPE_CHECKING, Annotated, Any, List, Literal
from autogpt_libs.auth import get_user_id
from fastapi import (
@@ -14,7 +14,7 @@ from fastapi import (
Security,
status,
)
from pydantic import BaseModel, Field, SecretStr
from pydantic import BaseModel, Field, SecretStr, model_validator
from starlette.status import HTTP_500_INTERNAL_SERVER_ERROR, HTTP_502_BAD_GATEWAY
from backend.api.features.library.db import set_preset_webhook, update_preset
@@ -39,7 +39,11 @@ from backend.data.onboarding import OnboardingStep, complete_onboarding_step
from backend.data.user import get_user_integrations
from backend.executor.utils import add_graph_execution
from backend.integrations.ayrshare import AyrshareClient, SocialPlatform
from backend.integrations.credentials_store import provider_matches
from backend.integrations.creds_manager import (
IntegrationCredentialsManager,
create_mcp_oauth_handler,
)
from backend.integrations.oauth import CREDENTIALS_BY_PROVIDER, HANDLERS_BY_NAME
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks import get_webhook_manager
@@ -102,9 +106,37 @@ class CredentialsMetaResponse(BaseModel):
scopes: list[str] | None
username: str | None
host: str | None = Field(
default=None, description="Host pattern for host-scoped credentials"
default=None,
description="Host pattern for host-scoped or MCP server URL for MCP credentials",
)
@model_validator(mode="before")
@classmethod
def _normalize_provider(cls, data: Any) -> Any:
"""Fix ``ProviderName.X`` format from Python 3.13 ``str(Enum)`` bug."""
if isinstance(data, dict):
prov = data.get("provider", "")
if isinstance(prov, str) and prov.startswith("ProviderName."):
member = prov.removeprefix("ProviderName.")
try:
data = {**data, "provider": ProviderName[member].value}
except KeyError:
pass
return data
@staticmethod
def get_host(cred: Credentials) -> str | None:
"""Extract host from credential: HostScoped host or MCP server URL."""
if isinstance(cred, HostScopedCredentials):
return cred.host
if isinstance(cred, OAuth2Credentials) and cred.provider in (
ProviderName.MCP,
ProviderName.MCP.value,
"ProviderName.MCP",
):
return (cred.metadata or {}).get("mcp_server_url")
return None
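The failure mode the validator above guards against, sketched with a stand-in enum (the real ProviderName lives in backend.integrations.providers):
from enum import Enum
class Provider(str, Enum):  # stand-in for the real ProviderName
    MCP = "mcp"
# On affected Python versions, str() of a mixed-in enum member yields
# "Provider.MCP" instead of "mcp", so serialized payloads can carry the
# member name rather than the value.
raw = {"provider": "Provider.MCP"}
member = raw["provider"].removeprefix("Provider.")
raw["provider"] = Provider[member].value  # normalized back to "mcp"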
@router.post("/{provider}/callback", summary="Exchange OAuth code for tokens")
async def callback(
@@ -179,9 +211,7 @@ async def callback(
title=credentials.title,
scopes=credentials.scopes,
username=credentials.username,
host=CredentialsMetaResponse.get_host(credentials),
)
@@ -199,7 +229,7 @@ async def list_credentials(
title=cred.title,
scopes=cred.scopes if isinstance(cred, OAuth2Credentials) else None,
username=cred.username if isinstance(cred, OAuth2Credentials) else None,
host=CredentialsMetaResponse.get_host(cred),
)
for cred in credentials
]
@@ -222,7 +252,7 @@ async def list_credentials_by_provider(
title=cred.title,
scopes=cred.scopes if isinstance(cred, OAuth2Credentials) else None,
username=cred.username if isinstance(cred, OAuth2Credentials) else None,
host=CredentialsMetaResponse.get_host(cred),
)
for cred in credentials
]
@@ -322,7 +352,11 @@ async def delete_credentials(
tokens_revoked = None
if isinstance(creds, OAuth2Credentials):
if provider_matches(provider.value, ProviderName.MCP.value):
# MCP uses dynamic per-server OAuth — create handler from metadata
handler = create_mcp_oauth_handler(creds)
else:
handler = _get_provider_oauth_handler(request, provider)
tokens_revoked = await handler.revoke_tokens(creds)
return CredentialsDeletionResponse(revoked=tokens_revoked)

View File

@@ -12,14 +12,16 @@ import backend.api.features.store.image_gen as store_image_gen
import backend.api.features.store.media as store_media
import backend.data.graph as graph_db
import backend.data.integrations as integrations_db
from backend.data.block import BlockInput
from backend.data.db import transaction
from backend.data.execution import get_graph_execution
from backend.data.graph import GraphSettings
from backend.data.includes import AGENT_PRESET_INCLUDE, library_agent_include
from backend.data.model import CredentialsMetaInput, GraphInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.webhooks.graph_lifecycle_hooks import (
on_graph_activate,
on_graph_deactivate,
)
from backend.util.clients import get_scheduler_client
from backend.util.exceptions import DatabaseError, InvalidInputError, NotFoundError
from backend.util.json import SafeJson
@@ -39,6 +41,7 @@ async def list_library_agents(
sort_by: library_model.LibraryAgentSort = library_model.LibraryAgentSort.UPDATED_AT,
page: int = 1,
page_size: int = 50,
include_executions: bool = False,
) -> library_model.LibraryAgentResponse:
"""
Retrieves a paginated list of LibraryAgent records for a given user.
@@ -49,6 +52,9 @@ async def list_library_agents(
sort_by: Sorting field (createdAt, updatedAt, isFavorite, isCreatedByUser).
page: Current page (1-indexed).
page_size: Number of items per page.
include_executions: Whether to include execution data for status calculation.
Defaults to False for performance (UI fetches status separately).
Set to True when accurate status/metrics are needed (e.g., agent generator).
Returns:
A LibraryAgentResponse containing the list of agents and pagination details.
@@ -76,7 +82,6 @@ async def list_library_agents(
"isArchived": False,
}
# Build search filter if applicable
if search_term:
where_clause["OR"] = [
{
@@ -93,7 +98,6 @@ async def list_library_agents(
},
]
# Determine sorting
order_by: prisma.types.LibraryAgentOrderByInput | None = None
if sort_by == library_model.LibraryAgentSort.CREATED_AT:
@@ -105,7 +109,7 @@ async def list_library_agents(
library_agents = await prisma.models.LibraryAgent.prisma().find_many(
where=where_clause,
include=library_agent_include(
user_id, include_nodes=False, include_executions=include_executions
),
order=order_by,
skip=(page - 1) * page_size,
@@ -369,7 +373,7 @@ async def get_library_agent_by_graph_id(
async def add_generated_agent_image(
graph: graph_db.GraphBaseMeta,
user_id: str,
library_agent_id: str,
) -> Optional[prisma.models.LibraryAgent]:
@@ -535,6 +539,92 @@ async def update_agent_version_in_library(
return library_model.LibraryAgent.from_db(lib)
async def create_graph_in_library(
graph: graph_db.Graph,
user_id: str,
) -> tuple[graph_db.GraphModel, library_model.LibraryAgent]:
"""Create a new graph and add it to the user's library."""
graph.version = 1
graph_model = graph_db.make_graph_model(graph, user_id)
graph_model.reassign_ids(user_id=user_id, reassign_graph_id=True)
created_graph = await graph_db.create_graph(graph_model, user_id)
library_agents = await create_library_agent(
graph=created_graph,
user_id=user_id,
sensitive_action_safe_mode=True,
create_library_agents_for_sub_graphs=False,
)
if created_graph.is_active:
created_graph = await on_graph_activate(created_graph, user_id=user_id)
return created_graph, library_agents[0]
async def update_graph_in_library(
graph: graph_db.Graph,
user_id: str,
) -> tuple[graph_db.GraphModel, library_model.LibraryAgent]:
"""Create a new version of an existing graph and update the library entry."""
existing_versions = await graph_db.get_graph_all_versions(graph.id, user_id)
current_active_version = (
next((v for v in existing_versions if v.is_active), None)
if existing_versions
else None
)
graph.version = (
max(v.version for v in existing_versions) + 1 if existing_versions else 1
)
graph_model = graph_db.make_graph_model(graph, user_id)
graph_model.reassign_ids(user_id=user_id, reassign_graph_id=False)
created_graph = await graph_db.create_graph(graph_model, user_id)
library_agent = await get_library_agent_by_graph_id(user_id, created_graph.id)
if not library_agent:
raise NotFoundError(f"Library agent not found for graph {created_graph.id}")
library_agent = await update_library_agent_version_and_settings(
user_id, created_graph
)
if created_graph.is_active:
created_graph = await on_graph_activate(created_graph, user_id=user_id)
await graph_db.set_graph_active_version(
graph_id=created_graph.id,
version=created_graph.version,
user_id=user_id,
)
if current_active_version:
await on_graph_deactivate(current_active_version, user_id=user_id)
return created_graph, library_agent
async def update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_model.LibraryAgent:
"""Update library agent to point to new graph version and sync settings."""
library = await update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
updated_settings = GraphSettings.from_graph(
graph=agent_graph,
hitl_safe_mode=library.settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=library.settings.sensitive_action_safe_mode,
)
if updated_settings != library.settings:
library = await update_library_agent(
library_agent_id=library.id,
user_id=user_id,
settings=updated_settings,
)
return library
async def update_library_agent(
library_agent_id: str,
user_id: str,
@@ -1039,7 +1129,7 @@ async def create_preset_from_graph_execution(
async def update_preset(
user_id: str,
preset_id: str,
inputs: Optional[GraphInput] = None,
credentials: Optional[dict[str, CredentialsMetaInput]] = None,
name: Optional[str] = None,
description: Optional[str] = None,

View File

@@ -6,9 +6,13 @@ import prisma.enums
import prisma.models
import pydantic
from backend.data.block import BlockInput
from backend.data.graph import GraphModel, GraphSettings, GraphTriggerInfo
from backend.data.model import (
CredentialsMetaInput,
GraphInput,
is_credentials_field_name,
)
from backend.util.json import loads as json_loads
from backend.util.models import Pagination
if TYPE_CHECKING:
@@ -16,10 +20,10 @@ if TYPE_CHECKING:
class LibraryAgentStatus(str, Enum):
COMPLETED = "COMPLETED" # All runs completed
HEALTHY = "HEALTHY" # Agent is running (not all runs have completed)
WAITING = "WAITING" # Agent is queued or waiting to start
ERROR = "ERROR" # Agent is in an error state
COMPLETED = "COMPLETED"
HEALTHY = "HEALTHY"
WAITING = "WAITING"
ERROR = "ERROR"
class MarketplaceListingCreator(pydantic.BaseModel):
@@ -39,6 +43,30 @@ class MarketplaceListing(pydantic.BaseModel):
creator: MarketplaceListingCreator
class RecentExecution(pydantic.BaseModel):
"""Summary of a recent execution for quality assessment.
Used by the LLM to understand the agent's recent performance with specific examples
rather than just aggregate statistics.
"""
status: str
correctness_score: float | None = None
activity_summary: str | None = None
def _parse_settings(settings: dict | str | None) -> GraphSettings:
"""Parse settings from database, handling both dict and string formats."""
if settings is None:
return GraphSettings()
try:
if isinstance(settings, str):
settings = json_loads(settings)
return GraphSettings.model_validate(settings)
except Exception:
return GraphSettings()
class LibraryAgent(pydantic.BaseModel):
"""
Represents an agent in the library, including metadata for display and
@@ -48,7 +76,7 @@ class LibraryAgent(pydantic.BaseModel):
id: str
graph_id: str
graph_version: int
owner_user_id: str  # ID of user who owns/created this agent graph
image_url: str | None
@@ -64,7 +92,7 @@ class LibraryAgent(pydantic.BaseModel):
description: str
instructions: str | None = None
input_schema: dict[str, Any]  # Should be BlockIOObjectSubSchema in frontend
output_schema: dict[str, Any]
credentials_input_schema: dict[str, Any] | None = pydantic.Field(
description="Input schema for credentials required by the agent",
@@ -81,25 +109,19 @@ class LibraryAgent(pydantic.BaseModel):
)
trigger_setup_info: Optional[GraphTriggerInfo] = None
# Indicates whether there's a new output (based on recent runs)
new_output: bool
execution_count: int = 0
success_rate: float | None = None
avg_correctness_score: float | None = None
recent_executions: list[RecentExecution] = pydantic.Field(
default_factory=list,
description="List of recent executions with status, score, and summary",
)
# Whether the user can access the underlying graph
can_access_graph: bool
# Indicates if this agent is the latest version
is_latest_version: bool
# Whether the agent is marked as favorite by the user
is_favorite: bool
# Recommended schedule cron (from marketplace agents)
recommended_schedule_cron: str | None = None
# User-specific settings for this library agent
settings: GraphSettings = pydantic.Field(default_factory=GraphSettings)
# Marketplace listing information if the agent has been published
marketplace_listing: Optional["MarketplaceListing"] = None
@staticmethod
@@ -123,7 +145,6 @@ class LibraryAgent(pydantic.BaseModel):
agent_updated_at = agent.AgentGraph.updatedAt
lib_agent_updated_at = agent.updatedAt
# Compute updated_at as the latest between library agent and graph
updated_at = (
max(agent_updated_at, lib_agent_updated_at)
if agent_updated_at
@@ -136,7 +157,6 @@ class LibraryAgent(pydantic.BaseModel):
creator_name = agent.Creator.name or "Unknown"
creator_image_url = agent.Creator.avatarUrl or ""
# Logic to calculate status and new_output
week_ago = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(
days=7
)
@@ -145,13 +165,55 @@ class LibraryAgent(pydantic.BaseModel):
status = status_result.status
new_output = status_result.new_output
# Check if user can access the graph
can_access_graph = agent.AgentGraph.userId == agent.userId
execution_count = len(executions)
success_rate: float | None = None
avg_correctness_score: float | None = None
if execution_count > 0:
success_count = sum(
1
for e in executions
if e.executionStatus == prisma.enums.AgentExecutionStatus.COMPLETED
)
success_rate = (success_count / execution_count) * 100
# Hard-coded to True until a method to check is implemented
correctness_scores = []
for e in executions:
if e.stats and isinstance(e.stats, dict):
score = e.stats.get("correctness_score")
if score is not None and isinstance(score, (int, float)):
correctness_scores.append(float(score))
if correctness_scores:
avg_correctness_score = sum(correctness_scores) / len(
correctness_scores
)
recent_executions: list[RecentExecution] = []
for e in executions:
exec_score: float | None = None
exec_summary: str | None = None
if e.stats and isinstance(e.stats, dict):
score = e.stats.get("correctness_score")
if score is not None and isinstance(score, (int, float)):
exec_score = float(score)
summary = e.stats.get("activity_status")
if summary is not None and isinstance(summary, str):
exec_summary = summary
exec_status = (
e.executionStatus.value
if hasattr(e.executionStatus, "value")
else str(e.executionStatus)
)
recent_executions.append(
RecentExecution(
status=exec_status,
correctness_score=exec_score,
activity_summary=exec_summary,
)
)
can_access_graph = agent.AgentGraph.userId == agent.userId
is_latest_version = True
# Build marketplace_listing if available
marketplace_listing_data = None
if store_listing and store_listing.ActiveVersion and profile:
creator_data = MarketplaceListingCreator(
@@ -190,11 +252,15 @@ class LibraryAgent(pydantic.BaseModel):
has_sensitive_action=graph.has_sensitive_action,
trigger_setup_info=graph.trigger_setup_info,
new_output=new_output,
execution_count=execution_count,
success_rate=success_rate,
avg_correctness_score=avg_correctness_score,
recent_executions=recent_executions,
can_access_graph=can_access_graph,
is_latest_version=is_latest_version,
is_favorite=agent.isFavorite,
recommended_schedule_cron=agent.AgentGraph.recommendedScheduleCron,
settings=GraphSettings.model_validate(agent.settings),
settings=_parse_settings(agent.settings),
marketplace_listing=marketplace_listing_data,
)
@@ -220,18 +286,15 @@ def _calculate_agent_status(
if not executions:
return AgentStatusResult(status=LibraryAgentStatus.COMPLETED, new_output=False)
# Track how many times each execution status appears
status_counts = {status: 0 for status in prisma.enums.AgentExecutionStatus}
new_output = False
for execution in executions:
# Check if there's a completed run more recent than `recent_threshold`
if execution.createdAt >= recent_threshold:
if execution.executionStatus == prisma.enums.AgentExecutionStatus.COMPLETED:
new_output = True
status_counts[execution.executionStatus] += 1
# Determine the final status based on counts
if status_counts[prisma.enums.AgentExecutionStatus.FAILED] > 0:
return AgentStatusResult(status=LibraryAgentStatus.ERROR, new_output=new_output)
elif status_counts[prisma.enums.AgentExecutionStatus.QUEUED] > 0:
@@ -263,7 +326,7 @@ class LibraryAgentPresetCreatable(pydantic.BaseModel):
graph_id: str
graph_version: int
inputs: BlockInput
inputs: GraphInput
credentials: dict[str, CredentialsMetaInput]
name: str
@@ -292,7 +355,7 @@ class LibraryAgentPresetUpdatable(pydantic.BaseModel):
Request model used when updating a preset for a library agent.
"""
inputs: Optional[BlockInput] = None
inputs: Optional[GraphInput] = None
credentials: Optional[dict[str, CredentialsMetaInput]] = None
name: Optional[str] = None
@@ -335,7 +398,7 @@ class LibraryAgentPreset(LibraryAgentPresetCreatable):
"Webhook must be included in AgentPreset query when webhookId is set"
)
input_data: BlockInput = {}
input_data: GraphInput = {}
input_credentials: dict[str, CredentialsMetaInput] = {}
for preset_input in preset.InputPresets:

View File

@@ -0,0 +1,404 @@
"""
MCP (Model Context Protocol) API routes.
Provides endpoints for MCP tool discovery and OAuth authentication so the
frontend can list available tools on an MCP server before placing a block.
"""
import logging
from typing import Annotated, Any
from urllib.parse import urlparse
import fastapi
from autogpt_libs.auth import get_user_id
from fastapi import Security
from pydantic import BaseModel, Field
from backend.api.features.integrations.router import CredentialsMetaResponse
from backend.blocks.mcp.client import MCPClient, MCPClientError
from backend.blocks.mcp.oauth import MCPOAuthHandler
from backend.data.model import OAuth2Credentials
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.providers import ProviderName
from backend.util.request import HTTPClientError, Requests
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
settings = Settings()
router = fastapi.APIRouter(tags=["mcp"])
creds_manager = IntegrationCredentialsManager()
# ====================== Tool Discovery ====================== #
class DiscoverToolsRequest(BaseModel):
"""Request to discover tools on an MCP server."""
server_url: str = Field(description="URL of the MCP server")
auth_token: str | None = Field(
default=None,
description="Optional Bearer token for authenticated MCP servers",
)
class MCPToolResponse(BaseModel):
"""A single MCP tool returned by discovery."""
name: str
description: str
input_schema: dict[str, Any]
class DiscoverToolsResponse(BaseModel):
"""Response containing the list of tools available on an MCP server."""
tools: list[MCPToolResponse]
server_name: str | None = None
protocol_version: str | None = None
@router.post(
"/discover-tools",
summary="Discover available tools on an MCP server",
response_model=DiscoverToolsResponse,
)
async def discover_tools(
request: DiscoverToolsRequest,
user_id: Annotated[str, Security(get_user_id)],
) -> DiscoverToolsResponse:
"""
Connect to an MCP server and return its available tools.
If the user has a stored MCP credential for this server URL, it will be
used automatically — no need to pass an explicit auth token.
"""
auth_token = request.auth_token
# Auto-use stored MCP credential when no explicit token is provided.
if not auth_token:
mcp_creds = await creds_manager.store.get_creds_by_provider(
user_id, ProviderName.MCP.value
)
# Find the freshest credential for this server URL
best_cred: OAuth2Credentials | None = None
for cred in mcp_creds:
if (
isinstance(cred, OAuth2Credentials)
and (cred.metadata or {}).get("mcp_server_url") == request.server_url
):
if best_cred is None or (
(cred.access_token_expires_at or 0)
> (best_cred.access_token_expires_at or 0)
):
best_cred = cred
if best_cred:
# Refresh the token if expired before using it
best_cred = await creds_manager.refresh_if_needed(user_id, best_cred)
logger.info(
f"Using MCP credential {best_cred.id} for {request.server_url}, "
f"expires_at={best_cred.access_token_expires_at}"
)
auth_token = best_cred.access_token.get_secret_value()
client = MCPClient(request.server_url, auth_token=auth_token)
try:
init_result = await client.initialize()
tools = await client.list_tools()
except HTTPClientError as e:
if e.status_code in (401, 403):
raise fastapi.HTTPException(
status_code=401,
detail="This MCP server requires authentication. "
"Please provide a valid auth token.",
)
raise fastapi.HTTPException(status_code=502, detail=str(e))
except MCPClientError as e:
raise fastapi.HTTPException(status_code=502, detail=str(e))
except Exception as e:
raise fastapi.HTTPException(
status_code=502,
detail=f"Failed to connect to MCP server: {e}",
)
return DiscoverToolsResponse(
tools=[
MCPToolResponse(
name=t.name,
description=t.description,
input_schema=t.input_schema,
)
for t in tools
],
server_name=(
init_result.get("serverInfo", {}).get("name")
or urlparse(request.server_url).hostname
or "MCP"
),
protocol_version=init_result.get("protocolVersion"),
)
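A hedged sketch of how a client might exercise this endpoint; the /api/mcp prefix matches the router registration later in this changeset, but the host, port, and auth header format are assumptions about the local setup:

import asyncio

import httpx

async def list_mcp_tools() -> None:
    # Hypothetical client call against a locally running API server.
    async with httpx.AsyncClient(base_url="http://localhost:8006") as client:
        resp = await client.post(
            "/api/mcp/discover-tools",
            json={"server_url": "https://mcp.example.com/mcp"},
            headers={"Authorization": "Bearer <platform-jwt>"},
        )
        resp.raise_for_status()
        for tool in resp.json()["tools"]:
            print(f"{tool['name']}: {tool['description']}")

asyncio.run(list_mcp_tools())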
# ======================== OAuth Flow ======================== #
class MCPOAuthLoginRequest(BaseModel):
"""Request to start an OAuth flow for an MCP server."""
server_url: str = Field(description="URL of the MCP server that requires OAuth")
class MCPOAuthLoginResponse(BaseModel):
"""Response with the OAuth login URL for the user to authenticate."""
login_url: str
state_token: str
@router.post(
"/oauth/login",
summary="Initiate OAuth login for an MCP server",
)
async def mcp_oauth_login(
request: MCPOAuthLoginRequest,
user_id: Annotated[str, Security(get_user_id)],
) -> MCPOAuthLoginResponse:
"""
Discover OAuth metadata from the MCP server and return a login URL.
1. Discovers the protected-resource metadata (RFC 9728)
2. Fetches the authorization server metadata (RFC 8414)
3. Performs Dynamic Client Registration (RFC 7591) if available
4. Returns the authorization URL for the frontend to open in a popup
"""
client = MCPClient(request.server_url)
# Step 1: Discover protected-resource metadata (RFC 9728)
protected_resource = await client.discover_auth()
metadata: dict[str, Any] | None = None
if protected_resource and protected_resource.get("authorization_servers"):
auth_server_url = protected_resource["authorization_servers"][0]
resource_url = protected_resource.get("resource", request.server_url)
# Step 2a: Discover auth-server metadata (RFC 8414)
metadata = await client.discover_auth_server_metadata(auth_server_url)
else:
# Fallback: Some MCP servers (e.g. Linear) are their own auth server
# and serve OAuth metadata directly without protected-resource metadata.
# Don't assume a resource_url — omitting it lets the auth server choose
# the correct audience for the token (RFC 8707 resource is optional).
resource_url = None
metadata = await client.discover_auth_server_metadata(request.server_url)
if (
not metadata
or "authorization_endpoint" not in metadata
or "token_endpoint" not in metadata
):
raise fastapi.HTTPException(
status_code=400,
detail="This MCP server does not advertise OAuth support. "
"You may need to provide an auth token manually.",
)
authorize_url = metadata["authorization_endpoint"]
token_url = metadata["token_endpoint"]
registration_endpoint = metadata.get("registration_endpoint")
revoke_url = metadata.get("revocation_endpoint")
# Step 3: Dynamic Client Registration (RFC 7591) if available
frontend_base_url = settings.config.frontend_base_url
if not frontend_base_url:
raise fastapi.HTTPException(
status_code=500,
detail="Frontend base URL is not configured.",
)
redirect_uri = f"{frontend_base_url}/auth/integrations/mcp_callback"
client_id = ""
client_secret = ""
if registration_endpoint:
reg_result = await _register_mcp_client(
registration_endpoint, redirect_uri, request.server_url
)
if reg_result:
client_id = reg_result.get("client_id", "")
client_secret = reg_result.get("client_secret", "")
if not client_id:
client_id = "autogpt-platform"
# Step 4: Store state token with OAuth metadata for the callback
scopes = (protected_resource or {}).get("scopes_supported") or metadata.get(
"scopes_supported", []
)
state_token, code_challenge = await creds_manager.store.store_state_token(
user_id,
ProviderName.MCP.value,
scopes,
state_metadata={
"authorize_url": authorize_url,
"token_url": token_url,
"revoke_url": revoke_url,
"resource_url": resource_url,
"server_url": request.server_url,
"client_id": client_id,
"client_secret": client_secret,
},
)
# Step 5: Build and return the login URL
handler = MCPOAuthHandler(
client_id=client_id,
client_secret=client_secret,
redirect_uri=redirect_uri,
authorize_url=authorize_url,
token_url=token_url,
resource_url=resource_url,
)
login_url = handler.get_login_url(
scopes, state_token, code_challenge=code_challenge
)
return MCPOAuthLoginResponse(login_url=login_url, state_token=state_token)
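For orientation, an illustrative helper (not part of this PR) showing the standard well-known locations the two discovery steps correspond to, in their simplest origin-only form:

from urllib.parse import urlparse

def wellknown_urls(server_url: str) -> tuple[str, str]:
    # Simplified sketch: real clients may also probe path-aware variants.
    parts = urlparse(server_url)
    origin = f"{parts.scheme}://{parts.netloc}"
    protected_resource = f"{origin}/.well-known/oauth-protected-resource"  # RFC 9728
    auth_server_meta = f"{origin}/.well-known/oauth-authorization-server"  # RFC 8414
    return protected_resource, auth_server_meta

print(wellknown_urls("https://mcp.example.com/mcp"))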
class MCPOAuthCallbackRequest(BaseModel):
"""Request to exchange an OAuth code for tokens."""
code: str = Field(description="Authorization code from OAuth callback")
state_token: str = Field(description="State token for CSRF verification")
class MCPOAuthCallbackResponse(BaseModel):
"""Response after successfully storing OAuth credentials."""
credential_id: str
@router.post(
"/oauth/callback",
summary="Exchange OAuth code for MCP tokens",
)
async def mcp_oauth_callback(
request: MCPOAuthCallbackRequest,
user_id: Annotated[str, Security(get_user_id)],
) -> CredentialsMetaResponse:
"""
Exchange the authorization code for tokens and store the credential.
The frontend calls this after receiving the OAuth code from the popup.
On success, subsequent ``/discover-tools`` calls for the same server URL
will automatically use the stored credential.
"""
valid_state = await creds_manager.store.verify_state_token(
user_id, request.state_token, ProviderName.MCP.value
)
if not valid_state:
raise fastapi.HTTPException(
status_code=400,
detail="Invalid or expired state token.",
)
meta = valid_state.state_metadata
frontend_base_url = settings.config.frontend_base_url
if not frontend_base_url:
raise fastapi.HTTPException(
status_code=500,
detail="Frontend base URL is not configured.",
)
redirect_uri = f"{frontend_base_url}/auth/integrations/mcp_callback"
handler = MCPOAuthHandler(
client_id=meta["client_id"],
client_secret=meta.get("client_secret", ""),
redirect_uri=redirect_uri,
authorize_url=meta["authorize_url"],
token_url=meta["token_url"],
revoke_url=meta.get("revoke_url"),
resource_url=meta.get("resource_url"),
)
try:
credentials = await handler.exchange_code_for_tokens(
request.code, valid_state.scopes, valid_state.code_verifier
)
except Exception as e:
raise fastapi.HTTPException(
status_code=400,
detail=f"OAuth token exchange failed: {e}",
)
# Enrich credential metadata for future lookup and token refresh
if credentials.metadata is None:
credentials.metadata = {}
credentials.metadata["mcp_server_url"] = meta["server_url"]
credentials.metadata["mcp_client_id"] = meta["client_id"]
credentials.metadata["mcp_client_secret"] = meta.get("client_secret", "")
credentials.metadata["mcp_token_url"] = meta["token_url"]
credentials.metadata["mcp_resource_url"] = meta.get("resource_url", "")
hostname = urlparse(meta["server_url"]).hostname or meta["server_url"]
credentials.title = f"MCP: {hostname}"
# Remove old MCP credentials for the same server to prevent stale token buildup.
try:
old_creds = await creds_manager.store.get_creds_by_provider(
user_id, ProviderName.MCP.value
)
for old in old_creds:
if (
isinstance(old, OAuth2Credentials)
and (old.metadata or {}).get("mcp_server_url") == meta["server_url"]
):
await creds_manager.store.delete_creds_by_id(user_id, old.id)
logger.info(
f"Removed old MCP credential {old.id} for {meta['server_url']}"
)
except Exception:
logger.debug("Could not clean up old MCP credentials", exc_info=True)
await creds_manager.create(user_id, credentials)
return CredentialsMetaResponse(
id=credentials.id,
provider=credentials.provider,
type=credentials.type,
title=credentials.title,
scopes=credentials.scopes,
username=credentials.username,
host=credentials.metadata.get("mcp_server_url"),
)
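A hedged end-to-end sketch of the handshake from a client's point of view; host, port, and auth header are assumptions, and the browser popup step is done manually here:

import httpx

with httpx.Client(
    base_url="http://localhost:8006/api/mcp",
    headers={"Authorization": "Bearer <platform-jwt>"},
) as client:
    login = client.post(
        "/oauth/login", json={"server_url": "https://mcp.example.com/mcp"}
    ).json()
    print("Open in a browser:", login["login_url"])
    code = input("Paste the ?code= value from the callback URL: ")
    cred = client.post(
        "/oauth/callback",
        json={"code": code, "state_token": login["state_token"]},
    ).json()
    print("Stored credential:", cred["id"])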
# ======================== Helpers ======================== #
async def _register_mcp_client(
registration_endpoint: str,
redirect_uri: str,
server_url: str,
) -> dict[str, Any] | None:
"""Attempt Dynamic Client Registration (RFC 7591) with an MCP auth server."""
try:
response = await Requests(raise_for_status=True).post(
registration_endpoint,
json={
"client_name": "AutoGPT Platform",
"redirect_uris": [redirect_uri],
"grant_types": ["authorization_code"],
"response_types": ["code"],
"token_endpoint_auth_method": "client_secret_post",
},
)
data = response.json()
if isinstance(data, dict) and "client_id" in data:
return data
return None
except Exception as e:
logger.warning(f"Dynamic client registration failed for {server_url}: {e}")
return None
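A usage sketch with hypothetical URLs, showing the fallback contract the login flow relies on:

import asyncio

async def demo_registration() -> None:
    # Hypothetical endpoint and URLs, for illustration only.
    reg = await _register_mcp_client(
        "https://auth.example.com/register",
        "http://localhost:3000/auth/integrations/mcp_callback",
        "https://mcp.example.com/mcp",
    )
    # On success `reg` is the RFC 7591 JSON (client_id, maybe client_secret);
    # on any failure it is None and the login flow falls back to the public
    # "autogpt-platform" client id.
    print(reg or "registration unavailable")

asyncio.run(demo_registration())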

View File

@@ -0,0 +1,436 @@
"""Tests for MCP API routes.
Uses httpx.AsyncClient with ASGITransport instead of fastapi.testclient.TestClient
to avoid creating blocking portals that can corrupt pytest-asyncio's session event loop.
"""
from unittest.mock import AsyncMock, patch
import fastapi
import httpx
import pytest
import pytest_asyncio
from autogpt_libs.auth import get_user_id
from backend.api.features.mcp.routes import router
from backend.blocks.mcp.client import MCPClientError, MCPTool
from backend.util.request import HTTPClientError
app = fastapi.FastAPI()
app.include_router(router)
app.dependency_overrides[get_user_id] = lambda: "test-user-id"
@pytest_asyncio.fixture(scope="module")
async def client():
transport = httpx.ASGITransport(app=app)
async with httpx.AsyncClient(transport=transport, base_url="http://test") as c:
yield c
class TestDiscoverTools:
@pytest.mark.asyncio(loop_scope="session")
async def test_discover_tools_success(self, client):
mock_tools = [
MCPTool(
name="get_weather",
description="Get weather for a city",
input_schema={
"type": "object",
"properties": {"city": {"type": "string"}},
"required": ["city"],
},
),
MCPTool(
name="add_numbers",
description="Add two numbers",
input_schema={
"type": "object",
"properties": {
"a": {"type": "number"},
"b": {"type": "number"},
},
},
),
]
with (
patch("backend.api.features.mcp.routes.MCPClient") as MockClient,
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
):
mock_cm.store.get_creds_by_provider = AsyncMock(return_value=[])
instance = MockClient.return_value
instance.initialize = AsyncMock(
return_value={
"protocolVersion": "2025-03-26",
"serverInfo": {"name": "test-server"},
}
)
instance.list_tools = AsyncMock(return_value=mock_tools)
response = await client.post(
"/discover-tools",
json={"server_url": "https://mcp.example.com/mcp"},
)
assert response.status_code == 200
data = response.json()
assert len(data["tools"]) == 2
assert data["tools"][0]["name"] == "get_weather"
assert data["tools"][1]["name"] == "add_numbers"
assert data["server_name"] == "test-server"
assert data["protocol_version"] == "2025-03-26"
@pytest.mark.asyncio(loop_scope="session")
async def test_discover_tools_with_auth_token(self, client):
with patch("backend.api.features.mcp.routes.MCPClient") as MockClient:
instance = MockClient.return_value
instance.initialize = AsyncMock(
return_value={"serverInfo": {}, "protocolVersion": "2025-03-26"}
)
instance.list_tools = AsyncMock(return_value=[])
response = await client.post(
"/discover-tools",
json={
"server_url": "https://mcp.example.com/mcp",
"auth_token": "my-secret-token",
},
)
assert response.status_code == 200
MockClient.assert_called_once_with(
"https://mcp.example.com/mcp",
auth_token="my-secret-token",
)
@pytest.mark.asyncio(loop_scope="session")
async def test_discover_tools_auto_uses_stored_credential(self, client):
"""When no explicit token is given, stored MCP credentials are used."""
from pydantic import SecretStr
from backend.data.model import OAuth2Credentials
stored_cred = OAuth2Credentials(
provider="mcp",
title="MCP: example.com",
access_token=SecretStr("stored-token-123"),
refresh_token=None,
access_token_expires_at=None,
refresh_token_expires_at=None,
scopes=[],
metadata={"mcp_server_url": "https://mcp.example.com/mcp"},
)
with (
patch("backend.api.features.mcp.routes.MCPClient") as MockClient,
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
):
mock_cm.store.get_creds_by_provider = AsyncMock(return_value=[stored_cred])
mock_cm.refresh_if_needed = AsyncMock(return_value=stored_cred)
instance = MockClient.return_value
instance.initialize = AsyncMock(
return_value={"serverInfo": {}, "protocolVersion": "2025-03-26"}
)
instance.list_tools = AsyncMock(return_value=[])
response = await client.post(
"/discover-tools",
json={"server_url": "https://mcp.example.com/mcp"},
)
assert response.status_code == 200
MockClient.assert_called_once_with(
"https://mcp.example.com/mcp",
auth_token="stored-token-123",
)
@pytest.mark.asyncio(loop_scope="session")
async def test_discover_tools_mcp_error(self, client):
with (
patch("backend.api.features.mcp.routes.MCPClient") as MockClient,
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
):
mock_cm.store.get_creds_by_provider = AsyncMock(return_value=[])
instance = MockClient.return_value
instance.initialize = AsyncMock(
side_effect=MCPClientError("Connection refused")
)
response = await client.post(
"/discover-tools",
json={"server_url": "https://bad-server.example.com/mcp"},
)
assert response.status_code == 502
assert "Connection refused" in response.json()["detail"]
@pytest.mark.asyncio(loop_scope="session")
async def test_discover_tools_generic_error(self, client):
with (
patch("backend.api.features.mcp.routes.MCPClient") as MockClient,
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
):
mock_cm.store.get_creds_by_provider = AsyncMock(return_value=[])
instance = MockClient.return_value
instance.initialize = AsyncMock(side_effect=Exception("Network timeout"))
response = await client.post(
"/discover-tools",
json={"server_url": "https://timeout.example.com/mcp"},
)
assert response.status_code == 502
assert "Failed to connect" in response.json()["detail"]
@pytest.mark.asyncio(loop_scope="session")
async def test_discover_tools_auth_required(self, client):
with (
patch("backend.api.features.mcp.routes.MCPClient") as MockClient,
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
):
mock_cm.store.get_creds_by_provider = AsyncMock(return_value=[])
instance = MockClient.return_value
instance.initialize = AsyncMock(
side_effect=HTTPClientError("HTTP 401 Error: Unauthorized", 401)
)
response = await client.post(
"/discover-tools",
json={"server_url": "https://auth-server.example.com/mcp"},
)
assert response.status_code == 401
assert "requires authentication" in response.json()["detail"]
@pytest.mark.asyncio(loop_scope="session")
async def test_discover_tools_forbidden(self, client):
with (
patch("backend.api.features.mcp.routes.MCPClient") as MockClient,
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
):
mock_cm.store.get_creds_by_provider = AsyncMock(return_value=[])
instance = MockClient.return_value
instance.initialize = AsyncMock(
side_effect=HTTPClientError("HTTP 403 Error: Forbidden", 403)
)
response = await client.post(
"/discover-tools",
json={"server_url": "https://auth-server.example.com/mcp"},
)
assert response.status_code == 401
assert "requires authentication" in response.json()["detail"]
@pytest.mark.asyncio(loop_scope="session")
async def test_discover_tools_missing_url(self, client):
response = await client.post("/discover-tools", json={})
assert response.status_code == 422
class TestOAuthLogin:
@pytest.mark.asyncio(loop_scope="session")
async def test_oauth_login_success(self, client):
with (
patch("backend.api.features.mcp.routes.MCPClient") as MockClient,
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
patch("backend.api.features.mcp.routes.settings") as mock_settings,
patch(
"backend.api.features.mcp.routes._register_mcp_client"
) as mock_register,
):
instance = MockClient.return_value
instance.discover_auth = AsyncMock(
return_value={
"authorization_servers": ["https://auth.sentry.io"],
"resource": "https://mcp.sentry.dev/mcp",
"scopes_supported": ["openid"],
}
)
instance.discover_auth_server_metadata = AsyncMock(
return_value={
"authorization_endpoint": "https://auth.sentry.io/authorize",
"token_endpoint": "https://auth.sentry.io/token",
"registration_endpoint": "https://auth.sentry.io/register",
}
)
mock_register.return_value = {
"client_id": "registered-client-id",
"client_secret": "registered-secret",
}
mock_cm.store.store_state_token = AsyncMock(
return_value=("state-token-123", "code-challenge-abc")
)
mock_settings.config.frontend_base_url = "http://localhost:3000"
response = await client.post(
"/oauth/login",
json={"server_url": "https://mcp.sentry.dev/mcp"},
)
assert response.status_code == 200
data = response.json()
assert "login_url" in data
assert data["state_token"] == "state-token-123"
assert "auth.sentry.io/authorize" in data["login_url"]
assert "registered-client-id" in data["login_url"]
@pytest.mark.asyncio(loop_scope="session")
async def test_oauth_login_no_oauth_support(self, client):
with patch("backend.api.features.mcp.routes.MCPClient") as MockClient:
instance = MockClient.return_value
instance.discover_auth = AsyncMock(return_value=None)
instance.discover_auth_server_metadata = AsyncMock(return_value=None)
response = await client.post(
"/oauth/login",
json={"server_url": "https://simple-server.example.com/mcp"},
)
assert response.status_code == 400
assert "does not advertise OAuth" in response.json()["detail"]
@pytest.mark.asyncio(loop_scope="session")
async def test_oauth_login_fallback_to_public_client(self, client):
"""When DCR is unavailable, falls back to default public client ID."""
with (
patch("backend.api.features.mcp.routes.MCPClient") as MockClient,
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
patch("backend.api.features.mcp.routes.settings") as mock_settings,
):
instance = MockClient.return_value
instance.discover_auth = AsyncMock(
return_value={
"authorization_servers": ["https://auth.example.com"],
"resource": "https://mcp.example.com/mcp",
}
)
instance.discover_auth_server_metadata = AsyncMock(
return_value={
"authorization_endpoint": "https://auth.example.com/authorize",
"token_endpoint": "https://auth.example.com/token",
# No registration_endpoint
}
)
mock_cm.store.store_state_token = AsyncMock(
return_value=("state-abc", "challenge-xyz")
)
mock_settings.config.frontend_base_url = "http://localhost:3000"
response = await client.post(
"/oauth/login",
json={"server_url": "https://mcp.example.com/mcp"},
)
assert response.status_code == 200
data = response.json()
assert "autogpt-platform" in data["login_url"]
class TestOAuthCallback:
@pytest.mark.asyncio(loop_scope="session")
async def test_oauth_callback_success(self, client):
from pydantic import SecretStr
from backend.data.model import OAuth2Credentials
mock_creds = OAuth2Credentials(
provider="mcp",
title=None,
access_token=SecretStr("access-token-xyz"),
refresh_token=None,
access_token_expires_at=None,
refresh_token_expires_at=None,
scopes=[],
metadata={
"mcp_token_url": "https://auth.sentry.io/token",
"mcp_resource_url": "https://mcp.sentry.dev/mcp",
},
)
with (
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
patch("backend.api.features.mcp.routes.settings") as mock_settings,
patch("backend.api.features.mcp.routes.MCPOAuthHandler") as MockHandler,
):
mock_settings.config.frontend_base_url = "http://localhost:3000"
# Mock state verification
mock_state = AsyncMock()
mock_state.state_metadata = {
"authorize_url": "https://auth.sentry.io/authorize",
"token_url": "https://auth.sentry.io/token",
"client_id": "test-client-id",
"client_secret": "test-secret",
"server_url": "https://mcp.sentry.dev/mcp",
}
mock_state.scopes = ["openid"]
mock_state.code_verifier = "verifier-123"
mock_cm.store.verify_state_token = AsyncMock(return_value=mock_state)
mock_cm.create = AsyncMock()
handler_instance = MockHandler.return_value
handler_instance.exchange_code_for_tokens = AsyncMock(
return_value=mock_creds
)
# Mock old credential cleanup
mock_cm.store.get_creds_by_provider = AsyncMock(return_value=[])
response = await client.post(
"/oauth/callback",
json={"code": "auth-code-abc", "state_token": "state-token-123"},
)
assert response.status_code == 200
data = response.json()
assert "id" in data
assert data["provider"] == "mcp"
assert data["type"] == "oauth2"
mock_cm.create.assert_called_once()
@pytest.mark.asyncio(loop_scope="session")
async def test_oauth_callback_invalid_state(self, client):
with patch("backend.api.features.mcp.routes.creds_manager") as mock_cm:
mock_cm.store.verify_state_token = AsyncMock(return_value=None)
response = await client.post(
"/oauth/callback",
json={"code": "auth-code", "state_token": "bad-state"},
)
assert response.status_code == 400
assert "Invalid or expired" in response.json()["detail"]
@pytest.mark.asyncio(loop_scope="session")
async def test_oauth_callback_token_exchange_fails(self, client):
with (
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
patch("backend.api.features.mcp.routes.settings") as mock_settings,
patch("backend.api.features.mcp.routes.MCPOAuthHandler") as MockHandler,
):
mock_settings.config.frontend_base_url = "http://localhost:3000"
mock_state = AsyncMock()
mock_state.state_metadata = {
"authorize_url": "https://auth.example.com/authorize",
"token_url": "https://auth.example.com/token",
"client_id": "cid",
"server_url": "https://mcp.example.com/mcp",
}
mock_state.scopes = []
mock_state.code_verifier = "v"
mock_cm.store.verify_state_token = AsyncMock(return_value=mock_state)
handler_instance = MockHandler.return_value
handler_instance.exchange_code_for_tokens = AsyncMock(
side_effect=RuntimeError("Token exchange failed")
)
response = await client.post(
"/oauth/callback",
json={"code": "bad-code", "state_token": "state"},
)
assert response.status_code == 400
assert "token exchange failed" in response.json()["detail"].lower()

View File

@@ -5,8 +5,8 @@ from typing import Optional
import aiohttp
from fastapi import HTTPException
from backend.blocks import get_block
from backend.data import graph as graph_db
from backend.data.block import get_block
from backend.util.settings import Settings
from .models import ApiResponse, ChatRequest, GraphData

View File

@@ -152,7 +152,7 @@ class BlockHandler(ContentHandler):
async def get_missing_items(self, batch_size: int) -> list[ContentItem]:
"""Fetch blocks without embeddings."""
from backend.data.block import get_blocks
from backend.blocks import get_blocks
# Get all available blocks
all_blocks = get_blocks()
@@ -249,7 +249,7 @@ class BlockHandler(ContentHandler):
async def get_stats(self) -> dict[str, int]:
"""Get statistics about block embedding coverage."""
from backend.data.block import get_blocks
from backend.blocks import get_blocks
all_blocks = get_blocks()

View File

@@ -93,7 +93,7 @@ async def test_block_handler_get_missing_items(mocker):
mock_existing = []
with patch(
"backend.data.block.get_blocks",
"backend.blocks.get_blocks",
return_value=mock_blocks,
):
with patch(
@@ -135,7 +135,7 @@ async def test_block_handler_get_stats(mocker):
mock_embedded = [{"count": 2}]
with patch(
"backend.data.block.get_blocks",
"backend.blocks.get_blocks",
return_value=mock_blocks,
):
with patch(
@@ -327,7 +327,7 @@ async def test_block_handler_handles_missing_attributes():
mock_blocks = {"block-minimal": mock_block_class}
with patch(
"backend.data.block.get_blocks",
"backend.blocks.get_blocks",
return_value=mock_blocks,
):
with patch(
@@ -360,7 +360,7 @@ async def test_block_handler_skips_failed_blocks():
mock_blocks = {"good-block": good_block, "bad-block": bad_block}
with patch(
"backend.data.block.get_blocks",
"backend.blocks.get_blocks",
return_value=mock_blocks,
):
with patch(

View File

@@ -1,7 +1,7 @@
import asyncio
import logging
from datetime import datetime, timezone
from typing import Any, Literal
from typing import Any, Literal, overload
import fastapi
import prisma.enums
@@ -11,8 +11,8 @@ import prisma.types
from backend.data.db import transaction
from backend.data.graph import (
GraphMeta,
GraphModel,
GraphModelWithoutNodes,
get_graph,
get_graph_as_admin,
get_sub_graphs,
@@ -112,6 +112,7 @@ async def get_store_agents(
description=agent["description"],
runs=agent["runs"],
rating=agent["rating"],
agent_graph_id=agent.get("agentGraphId", ""),
)
store_agents.append(store_agent)
except Exception as e:
@@ -170,6 +171,7 @@ async def get_store_agents(
description=agent.description,
runs=agent.runs,
rating=agent.rating,
agent_graph_id=agent.agentGraphId,
)
# Add to the list only if creation was successful
store_agents.append(store_agent)
@@ -332,7 +334,22 @@ async def get_store_agent_details(
raise DatabaseError("Failed to fetch agent details") from e
async def get_available_graph(store_listing_version_id: str) -> GraphMeta:
@overload
async def get_available_graph(
store_listing_version_id: str, hide_nodes: Literal[False]
) -> GraphModel: ...
@overload
async def get_available_graph(
store_listing_version_id: str, hide_nodes: Literal[True] = True
) -> GraphModelWithoutNodes: ...
async def get_available_graph(
store_listing_version_id: str,
hide_nodes: bool = True,
) -> GraphModelWithoutNodes | GraphModel:
try:
# Get available, non-deleted store listing version
store_listing_version = (
@@ -342,7 +359,7 @@ async def get_available_graph(store_listing_version_id: str) -> GraphMeta:
"isAvailable": True,
"isDeleted": False,
},
include={"AgentGraph": {"include": {"Nodes": True}}},
include={"AgentGraph": {"include": AGENT_GRAPH_INCLUDE}},
)
)
@@ -352,7 +369,9 @@ async def get_available_graph(store_listing_version_id: str) -> GraphMeta:
detail=f"Store listing version {store_listing_version_id} not found",
)
return GraphModel.from_db(store_listing_version.AgentGraph).meta()
return (GraphModelWithoutNodes if hide_nodes else GraphModel).from_db(
store_listing_version.AgentGraph
)
except Exception as e:
logger.error(f"Error getting agent: {e}")

View File

@@ -662,7 +662,7 @@ async def cleanup_orphaned_embeddings() -> dict[str, Any]:
)
current_ids = {row["id"] for row in valid_agents}
elif content_type == ContentType.BLOCK:
from backend.data.block import get_blocks
from backend.blocks import get_blocks
current_ids = set(get_blocks().keys())
elif content_type == ContentType.DOCUMENTATION:

View File

@@ -454,6 +454,9 @@ async def test_unified_hybrid_search_pagination(
cleanup_embeddings: list,
):
"""Test unified search pagination works correctly."""
# Use a unique search term to avoid matching other test data
unique_term = f"xyzpagtest{uuid.uuid4().hex[:8]}"
# Create multiple items
content_ids = []
for i in range(5):
@@ -465,14 +468,14 @@ async def test_unified_hybrid_search_pagination(
content_type=ContentType.BLOCK,
content_id=content_id,
embedding=mock_embedding,
searchable_text=f"pagination test item number {i}",
searchable_text=f"{unique_term} item number {i}",
metadata={"index": i},
user_id=None,
)
# Get first page
page1_results, total1 = await unified_hybrid_search(
query="pagination test",
query=unique_term,
content_types=[ContentType.BLOCK],
page=1,
page_size=2,
@@ -480,7 +483,7 @@ async def test_unified_hybrid_search_pagination(
# Get second page
page2_results, total2 = await unified_hybrid_search(
query="pagination test",
query=unique_term,
content_types=[ContentType.BLOCK],
page=2,
page_size=2,

View File

@@ -8,6 +8,7 @@ Includes BM25 reranking for improved lexical relevance.
import logging
import re
import time
from dataclasses import dataclass
from typing import Any, Literal
@@ -362,7 +363,11 @@ async def unified_hybrid_search(
LIMIT {limit_param} OFFSET {offset_param}
"""
results = await query_raw_with_schema(sql_query, *params)
try:
results = await query_raw_with_schema(sql_query, *params)
except Exception as e:
await _log_vector_error_diagnostics(e)
raise
total = results[0]["total_count"] if results else 0
# Apply BM25 reranking
@@ -600,6 +605,7 @@ async def hybrid_search(
sa.featured,
sa.is_available,
sa.updated_at,
sa."agentGraphId",
-- Searchable text for BM25 reranking
COALESCE(sa.agent_name, '') || ' ' || COALESCE(sa.sub_heading, '') || ' ' || COALESCE(sa.description, '') as searchable_text,
-- Semantic score
@@ -659,6 +665,7 @@ async def hybrid_search(
featured,
is_available,
updated_at,
"agentGraphId",
searchable_text,
semantic_score,
lexical_score,
@@ -684,7 +691,11 @@ async def hybrid_search(
LIMIT {limit_param} OFFSET {offset_param}
"""
results = await query_raw_with_schema(sql_query, *params)
try:
results = await query_raw_with_schema(sql_query, *params)
except Exception as e:
await _log_vector_error_diagnostics(e)
raise
total = results[0]["total_count"] if results else 0
@@ -716,6 +727,87 @@ async def hybrid_search_simple(
return await hybrid_search(query=query, page=page, page_size=page_size)
# ============================================================================
# Diagnostics
# ============================================================================
# Rate limit: only log vector error diagnostics once per this interval
_VECTOR_DIAG_INTERVAL_SECONDS = 60
_last_vector_diag_time: float = 0
async def _log_vector_error_diagnostics(error: Exception) -> None:
"""Log diagnostic info when 'type vector does not exist' error occurs.
Note: Diagnostic queries use query_raw_with_schema, which may run on a different
pooled connection than the one that failed. Session-level search_path can differ,
so these diagnostics show cluster-wide state, not necessarily the failed session.
Includes rate limiting to avoid log spam: logs at most once per minute.
Caller should re-raise the error after calling this function.
"""
global _last_vector_diag_time
# Check if this is the vector type error
error_str = str(error).lower()
if not (
"type" in error_str and "vector" in error_str and "does not exist" in error_str
):
return
# Rate limit: only log once per interval
now = time.time()
if now - _last_vector_diag_time < _VECTOR_DIAG_INTERVAL_SECONDS:
return
_last_vector_diag_time = now
try:
diagnostics: dict[str, object] = {}
try:
search_path_result = await query_raw_with_schema("SHOW search_path")
diagnostics["search_path"] = search_path_result
except Exception as e:
diagnostics["search_path"] = f"Error: {e}"
try:
schema_result = await query_raw_with_schema("SELECT current_schema()")
diagnostics["current_schema"] = schema_result
except Exception as e:
diagnostics["current_schema"] = f"Error: {e}"
try:
user_result = await query_raw_with_schema(
"SELECT current_user, session_user, current_database()"
)
diagnostics["user_info"] = user_result
except Exception as e:
diagnostics["user_info"] = f"Error: {e}"
try:
# Check pgvector extension installation (cluster-wide, stable info)
ext_result = await query_raw_with_schema(
"SELECT extname, extversion, nspname as schema "
"FROM pg_extension e "
"JOIN pg_namespace n ON e.extnamespace = n.oid "
"WHERE extname = 'vector'"
)
diagnostics["pgvector_extension"] = ext_result
except Exception as e:
diagnostics["pgvector_extension"] = f"Error: {e}"
logger.error(
f"Vector type error diagnostics:\n"
f" Error: {error}\n"
f" search_path: {diagnostics.get('search_path')}\n"
f" current_schema: {diagnostics.get('current_schema')}\n"
f" user_info: {diagnostics.get('user_info')}\n"
f" pgvector_extension: {diagnostics.get('pgvector_extension')}"
)
except Exception as diag_error:
logger.error(f"Failed to collect vector error diagnostics: {diag_error}")
# Backward compatibility alias - HybridSearchWeights maps to StoreAgentSearchWeights
# for existing code that expects the popularity parameter
HybridSearchWeights = StoreAgentSearchWeights

View File

@@ -7,16 +7,7 @@ from replicate.client import Client as ReplicateClient
from replicate.exceptions import ReplicateError
from replicate.helpers import FileOutput
from backend.blocks.ideogram import (
AspectRatio,
ColorPalettePreset,
IdeogramModelBlock,
IdeogramModelName,
MagicPromptOption,
StyleType,
UpscaleOption,
)
from backend.data.graph import BaseGraph
from backend.data.graph import GraphBaseMeta
from backend.data.model import CredentialsMetaInput, ProviderName
from backend.integrations.credentials_store import ideogram_credentials
from backend.util.request import Requests
@@ -34,14 +25,14 @@ class ImageStyle(str, Enum):
DIGITAL_ART = "digital art"
async def generate_agent_image(agent: BaseGraph | AgentGraph) -> io.BytesIO:
async def generate_agent_image(agent: GraphBaseMeta | AgentGraph) -> io.BytesIO:
if settings.config.use_agent_image_generation_v2:
return await generate_agent_image_v2(graph=agent)
else:
return await generate_agent_image_v1(agent=agent)
async def generate_agent_image_v2(graph: BaseGraph | AgentGraph) -> io.BytesIO:
async def generate_agent_image_v2(graph: GraphBaseMeta | AgentGraph) -> io.BytesIO:
"""
Generate an image for an agent using Ideogram model.
Returns:
@@ -50,18 +41,31 @@ async def generate_agent_image_v2(graph: BaseGraph | AgentGraph) -> io.BytesIO:
if not ideogram_credentials.api_key:
raise ValueError("Missing Ideogram API key")
from backend.blocks.ideogram import (
AspectRatio,
ColorPalettePreset,
IdeogramModelBlock,
IdeogramModelName,
MagicPromptOption,
StyleType,
UpscaleOption,
)
name = graph.name
description = f"{name} ({graph.description})" if graph.description else name
prompt = (
f"Create a visually striking retro-futuristic vector pop art illustration prominently featuring "
f'"{name}" in bold typography. The image clearly and literally depicts a {description}, '
f"along with recognizable objects directly associated with the primary function of a {name}. "
f"Ensure the imagery is concrete, intuitive, and immediately understandable, clearly conveying the "
f"purpose of a {name}. Maintain vibrant, limited-palette colors, sharp vector lines, geometric "
f"shapes, flat illustration techniques, and solid colors without gradients or shading. Preserve a "
f"retro-futuristic aesthetic influenced by mid-century futurism and 1960s psychedelia, "
f"prioritizing clear visual storytelling and thematic clarity above all else."
"Create a visually striking retro-futuristic vector pop art illustration "
f'prominently featuring "{name}" in bold typography. The image clearly and '
f"literally depicts a {description}, along with recognizable objects directly "
f"associated with the primary function of a {name}. "
f"Ensure the imagery is concrete, intuitive, and immediately understandable, "
f"clearly conveying the purpose of a {name}. "
"Maintain vibrant, limited-palette colors, sharp vector lines, "
"geometric shapes, flat illustration techniques, and solid colors "
"without gradients or shading. Preserve a retro-futuristic aesthetic "
"influenced by mid-century futurism and 1960s psychedelia, "
"prioritizing clear visual storytelling and thematic clarity above all else."
)
custom_colors = [
@@ -99,12 +103,12 @@ async def generate_agent_image_v2(graph: BaseGraph | AgentGraph) -> io.BytesIO:
return io.BytesIO(response.content)
async def generate_agent_image_v1(agent: BaseGraph | AgentGraph) -> io.BytesIO:
async def generate_agent_image_v1(agent: GraphBaseMeta | AgentGraph) -> io.BytesIO:
"""
Generate an image for an agent using Flux model via Replicate API.
Args:
agent (Graph): The agent to generate an image for
agent (GraphBaseMeta | AgentGraph): The agent to generate an image for
Returns:
io.BytesIO: The generated image as bytes
@@ -114,7 +118,13 @@ async def generate_agent_image_v1(agent: BaseGraph | AgentGraph) -> io.BytesIO:
raise ValueError("Missing Replicate API key in settings")
# Construct prompt from agent details
prompt = f"Create a visually engaging app store thumbnail for the AI agent that highlights what it does in a clear and captivating way:\n- **Name**: {agent.name}\n- **Description**: {agent.description}\nFocus on showcasing its core functionality with an appealing design."
prompt = (
"Create a visually engaging app store thumbnail for the AI agent "
"that highlights what it does in a clear and captivating way:\n"
f"- **Name**: {agent.name}\n"
f"- **Description**: {agent.description}\n"
f"Focus on showcasing its core functionality with an appealing design."
)
# Set up Replicate client
client = ReplicateClient(api_token=settings.secrets.replicate_api_key)

View File

@@ -38,6 +38,7 @@ class StoreAgent(pydantic.BaseModel):
description: str
runs: int
rating: float
agent_graph_id: str
class StoreAgentsResponse(pydantic.BaseModel):

View File

@@ -26,11 +26,13 @@ def test_store_agent():
description="Test description",
runs=50,
rating=4.5,
agent_graph_id="test-graph-id",
)
assert agent.slug == "test-agent"
assert agent.agent_name == "Test Agent"
assert agent.runs == 50
assert agent.rating == 4.5
assert agent.agent_graph_id == "test-graph-id"
def test_store_agents_response():
@@ -46,6 +48,7 @@ def test_store_agents_response():
description="Test description",
runs=50,
rating=4.5,
agent_graph_id="test-graph-id",
)
],
pagination=store_model.Pagination(

View File

@@ -278,7 +278,7 @@ async def get_agent(
)
async def get_graph_meta_by_store_listing_version_id(
store_listing_version_id: str,
) -> backend.data.graph.GraphMeta:
) -> backend.data.graph.GraphModelWithoutNodes:
"""
Get Agent Graph from Store Listing Version ID.
"""

View File

@@ -82,6 +82,7 @@ def test_get_agents_featured(
description="Featured agent description",
runs=100,
rating=4.5,
agent_graph_id="test-graph-1",
)
],
pagination=store_model.Pagination(
@@ -127,6 +128,7 @@ def test_get_agents_by_creator(
description="Creator agent description",
runs=50,
rating=4.0,
agent_graph_id="test-graph-2",
)
],
pagination=store_model.Pagination(
@@ -172,6 +174,7 @@ def test_get_agents_sorted(
description="Top agent description",
runs=1000,
rating=5.0,
agent_graph_id="test-graph-3",
)
],
pagination=store_model.Pagination(
@@ -217,6 +220,7 @@ def test_get_agents_search(
description="Specific search term description",
runs=75,
rating=4.2,
agent_graph_id="test-graph-search",
)
],
pagination=store_model.Pagination(
@@ -262,6 +266,7 @@ def test_get_agents_category(
description="Category agent description",
runs=60,
rating=4.1,
agent_graph_id="test-graph-category",
)
],
pagination=store_model.Pagination(
@@ -306,6 +311,7 @@ def test_get_agents_pagination(
description=f"Agent {i} description",
runs=i * 10,
rating=4.0,
agent_graph_id="test-graph-2",
)
for i in range(5)
],

View File

@@ -33,6 +33,7 @@ class TestCacheDeletion:
description="Test description",
runs=100,
rating=4.5,
agent_graph_id="test-graph-id",
)
],
pagination=Pagination(

View File

@@ -40,10 +40,11 @@ from backend.api.model import (
UpdateTimezoneRequest,
UploadFileResponse,
)
from backend.blocks import get_block, get_blocks
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data.auth import api_key as api_key_db
from backend.data.block import BlockInput, CompletedBlockOutput, get_block, get_blocks
from backend.data.block import BlockInput, CompletedBlockOutput
from backend.data.credit import (
AutoTopUpConfig,
RefundRequest,
@@ -101,7 +102,6 @@ from backend.util.timezone_utils import (
from backend.util.virus_scanner import scan_content_safe
from .library import db as library_db
from .library import model as library_model
from .store.model import StoreAgentDetails
@@ -823,18 +823,16 @@ async def update_graph(
graph: graph_db.Graph,
user_id: Annotated[str, Security(get_user_id)],
) -> graph_db.GraphModel:
# Sanity check
if graph.id and graph.id != graph_id:
raise HTTPException(400, detail="Graph ID does not match ID in URI")
# Determine new version
existing_versions = await graph_db.get_graph_all_versions(graph_id, user_id=user_id)
if not existing_versions:
raise HTTPException(404, detail=f"Graph #{graph_id} not found")
latest_version_number = max(g.version for g in existing_versions)
graph.version = latest_version_number + 1
graph.version = max(g.version for g in existing_versions) + 1
current_active_version = next((v for v in existing_versions if v.is_active), None)
graph = graph_db.make_graph_model(graph, user_id)
graph.reassign_ids(user_id=user_id, reassign_graph_id=False)
graph.validate_graph(for_run=False)
@@ -842,27 +840,23 @@ async def update_graph(
new_graph_version = await graph_db.create_graph(graph, user_id=user_id)
if new_graph_version.is_active:
# Keep the library agent up to date with the new active version
await _update_library_agent_version_and_settings(user_id, new_graph_version)
# Handle activation of the new graph first to ensure continuity
await library_db.update_library_agent_version_and_settings(
user_id, new_graph_version
)
new_graph_version = await on_graph_activate(new_graph_version, user_id=user_id)
# Ensure new version is the only active version
await graph_db.set_graph_active_version(
graph_id=graph_id, version=new_graph_version.version, user_id=user_id
)
if current_active_version:
# Handle deactivation of the previously active version
await on_graph_deactivate(current_active_version, user_id=user_id)
# Fetch new graph version *with sub-graphs* (needed for credentials input schema)
new_graph_version_with_subgraphs = await graph_db.get_graph(
graph_id,
new_graph_version.version,
user_id=user_id,
include_subgraphs=True,
)
assert new_graph_version_with_subgraphs # make type checker happy
assert new_graph_version_with_subgraphs
return new_graph_version_with_subgraphs
@@ -900,33 +894,15 @@ async def set_graph_active_version(
)
# Keep the library agent up to date with the new active version
await _update_library_agent_version_and_settings(user_id, new_active_graph)
await library_db.update_library_agent_version_and_settings(
user_id, new_active_graph
)
if current_active_graph and current_active_graph.version != new_active_version:
# Handle deactivation of the previously active version
await on_graph_deactivate(current_active_graph, user_id=user_id)
async def _update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_model.LibraryAgent:
library = await library_db.update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
updated_settings = GraphSettings.from_graph(
graph=agent_graph,
hitl_safe_mode=library.settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=library.settings.sensitive_action_safe_mode,
)
if updated_settings != library.settings:
library = await library_db.update_library_agent(
library_agent_id=library.id,
user_id=user_id,
settings=updated_settings,
)
return library
@v1_router.patch(
path="/graphs/{graph_id}/settings",
summary="Update graph settings",

View File

@@ -0,0 +1 @@
# Workspace API feature module

View File

@@ -0,0 +1,122 @@
"""
Workspace API routes for managing user file storage.
"""
import logging
import re
from typing import Annotated
from urllib.parse import quote
import fastapi
from autogpt_libs.auth.dependencies import get_user_id, requires_user
from fastapi.responses import Response
from backend.data.workspace import WorkspaceFile, get_workspace, get_workspace_file
from backend.util.workspace_storage import get_workspace_storage
def _sanitize_filename_for_header(filename: str) -> str:
"""
Sanitize filename for Content-Disposition header to prevent header injection.
Removes/replaces characters that could break the header or inject new headers.
Uses RFC5987 encoding for non-ASCII characters.
"""
# Remove CR, LF, and null bytes (header injection prevention)
sanitized = re.sub(r"[\r\n\x00]", "", filename)
# Escape quotes
sanitized = sanitized.replace('"', '\\"')
# For non-ASCII, use RFC5987 filename* parameter
# Check if filename has non-ASCII characters
try:
sanitized.encode("ascii")
return f'attachment; filename="{sanitized}"'
except UnicodeEncodeError:
# Use RFC5987 encoding for UTF-8 filenames
encoded = quote(sanitized, safe="")
return f"attachment; filename*=UTF-8''{encoded}"
logger = logging.getLogger(__name__)
router = fastapi.APIRouter(
dependencies=[fastapi.Security(requires_user)],
)
def _create_streaming_response(content: bytes, file: WorkspaceFile) -> Response:
"""Create a streaming response for file content."""
return Response(
content=content,
media_type=file.mime_type,
headers={
"Content-Disposition": _sanitize_filename_for_header(file.name),
"Content-Length": str(len(content)),
},
)
async def _create_file_download_response(file: WorkspaceFile) -> Response:
"""
Create a download response for a workspace file.
Handles both local storage (direct streaming) and GCS (signed URL redirect
with fallback to streaming).
"""
storage = await get_workspace_storage()
# For local storage, stream the file directly
if file.storage_path.startswith("local://"):
content = await storage.retrieve(file.storage_path)
return _create_streaming_response(content, file)
# For GCS, try to redirect to signed URL, fall back to streaming
try:
url = await storage.get_download_url(file.storage_path, expires_in=300)
# If we got back an API path (fallback), stream directly instead
if url.startswith("/api/"):
content = await storage.retrieve(file.storage_path)
return _create_streaming_response(content, file)
return fastapi.responses.RedirectResponse(url=url, status_code=302)
except Exception as e:
# Log the signed URL failure with context
logger.error(
f"Failed to get signed URL for file {file.id} "
f"(storagePath={file.storage_path}): {e}",
exc_info=True,
)
# Fall back to streaming directly from GCS
try:
content = await storage.retrieve(file.storage_path)
return _create_streaming_response(content, file)
except Exception as fallback_error:
logger.error(
f"Fallback streaming also failed for file {file.id} "
f"(storagePath={file.storage_path}): {fallback_error}",
exc_info=True,
)
raise
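A sketch of a client download that tolerates both response shapes this handler can produce, a 302 redirect to a signed URL or the bytes streamed directly; host, port, and auth header are assumptions about the local setup:

import httpx

with httpx.Client(
    base_url="http://localhost:8006",
    headers={"Authorization": "Bearer <platform-jwt>"},
    follow_redirects=True,  # transparently follows the signed-URL redirect
) as client:
    resp = client.get("/api/workspace/files/<file-id>/download")
    resp.raise_for_status()
    with open("downloaded.bin", "wb") as f:
        f.write(resp.content)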
@router.get(
"/files/{file_id}/download",
summary="Download file by ID",
)
async def download_file(
user_id: Annotated[str, fastapi.Security(get_user_id)],
file_id: str,
) -> Response:
"""
Download a file by its ID.
Returns the file content directly or redirects to a signed URL for GCS.
"""
workspace = await get_workspace(user_id)
if workspace is None:
raise fastapi.HTTPException(status_code=404, detail="Workspace not found")
file = await get_workspace_file(file_id, workspace.id)
if file is None:
raise fastapi.HTTPException(status_code=404, detail="File not found")
return await _create_file_download_response(file)

View File

@@ -26,12 +26,14 @@ import backend.api.features.executions.review.routes
import backend.api.features.library.db
import backend.api.features.library.model
import backend.api.features.library.routes
import backend.api.features.mcp.routes as mcp_routes
import backend.api.features.oauth
import backend.api.features.otto.routes
import backend.api.features.postmark.postmark
import backend.api.features.store.model
import backend.api.features.store.routes
import backend.api.features.v1
import backend.api.features.workspace.routes as workspace_routes
import backend.data.block
import backend.data.db
import backend.data.graph
@@ -40,6 +42,10 @@ import backend.integrations.webhooks.utils
import backend.util.service
import backend.util.settings
from backend.blocks.llm import DEFAULT_LLM_MODEL
from backend.copilot.completion_consumer import (
start_completion_consumer,
stop_completion_consumer,
)
from backend.data.model import Credentials
from backend.integrations.providers import ProviderName
from backend.monitoring.instrumentation import instrument_fastapi
@@ -52,6 +58,7 @@ from backend.util.exceptions import (
)
from backend.util.feature_flag import initialize_launchdarkly, shutdown_launchdarkly
from backend.util.service import UnhealthyServiceError
from backend.util.workspace_storage import shutdown_workspace_storage
from .external.fastapi_app import external_api
from .features.analytics import router as analytics_router
@@ -116,14 +123,31 @@ async def lifespan_context(app: fastapi.FastAPI):
await backend.data.graph.migrate_llm_models(DEFAULT_LLM_MODEL)
await backend.integrations.webhooks.utils.migrate_legacy_triggered_graphs()
# Start chat completion consumer for Redis Streams notifications
try:
await start_completion_consumer()
except Exception as e:
logger.warning(f"Could not start chat completion consumer: {e}")
with launch_darkly_context():
yield
# Stop chat completion consumer
try:
await stop_completion_consumer()
except Exception as e:
logger.warning(f"Error stopping chat completion consumer: {e}")
try:
await shutdown_cloud_storage_handler()
except Exception as e:
logger.warning(f"Error shutting down cloud storage handler: {e}")
try:
await shutdown_workspace_storage()
except Exception as e:
logger.warning(f"Error shutting down workspace storage: {e}")
await backend.data.db.disconnect()
@@ -315,6 +339,16 @@ app.include_router(
tags=["v2", "chat"],
prefix="/api/chat",
)
app.include_router(
workspace_routes.router,
tags=["workspace"],
prefix="/api/workspace",
)
app.include_router(
mcp_routes.router,
tags=["v2", "mcp"],
prefix="/api/mcp",
)
app.include_router(
backend.api.features.oauth.router,
tags=["oauth"],

View File

@@ -66,18 +66,24 @@ async def event_broadcaster(manager: ConnectionManager):
execution_bus = AsyncRedisExecutionEventBus()
notification_bus = AsyncRedisNotificationEventBus()
async def execution_worker():
async for event in execution_bus.listen("*"):
await manager.send_execution_update(event)
try:
async def notification_worker():
async for notification in notification_bus.listen("*"):
await manager.send_notification(
user_id=notification.user_id,
payload=notification.payload,
)
async def execution_worker():
async for event in execution_bus.listen("*"):
await manager.send_execution_update(event)
await asyncio.gather(execution_worker(), notification_worker())
async def notification_worker():
async for notification in notification_bus.listen("*"):
await manager.send_notification(
user_id=notification.user_id,
payload=notification.payload,
)
await asyncio.gather(execution_worker(), notification_worker())
finally:
# Ensure PubSub connections are closed on any exit to prevent leaks
await execution_bus.close()
await notification_bus.close()
async def authenticate_websocket(websocket: WebSocket) -> str:
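The point of this change is that `finally` runs on normal exit, exception, or task cancellation, so the Redis PubSub connections behind both buses are always released. A self-contained reproduction of the pattern with fake buses (stand-ins, not the platform's classes):

```python
# Minimal reproduction of the try/finally cleanup pattern; FakeBus is a
# stand-in for the Redis-backed event buses.
import asyncio

class FakeBus:
    async def listen(self):
        while True:
            await asyncio.sleep(1)
            yield "event"

    async def close(self) -> None:
        print("connection closed")

async def broadcaster() -> None:
    bus_a, bus_b = FakeBus(), FakeBus()

    async def worker(bus: FakeBus) -> None:
        async for _event in bus.listen():
            pass

    try:
        await asyncio.gather(worker(bus_a), worker(bus_b))
    finally:
        # Runs even when the surrounding task is cancelled, so the
        # underlying connections never leak.
        await bus_a.close()
        await bus_b.close()
```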

View File

@@ -38,7 +38,9 @@ def main(**kwargs):
from backend.api.rest_api import AgentServer
from backend.api.ws_api import WebsocketServer
from backend.executor import DatabaseManager, ExecutionManager, Scheduler
from backend.copilot.executor.manager import CoPilotExecutor
from backend.data.db_manager import DatabaseManager
from backend.executor import ExecutionManager, Scheduler
from backend.notifications import NotificationManager
run_processes(
@@ -48,6 +50,7 @@ def main(**kwargs):
WebsocketServer(),
AgentServer(),
ExecutionManager(),
CoPilotExecutor(),
**kwargs,
)

View File

@@ -3,22 +3,19 @@ import logging
import os
import re
from pathlib import Path
from typing import TYPE_CHECKING, TypeVar
from typing import Sequence, Type, TypeVar
from backend.blocks._base import AnyBlockSchema, BlockType
from backend.util.cache import cached
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from backend.data.block import Block
T = TypeVar("T")
@cached(ttl_seconds=3600)
def load_all_blocks() -> dict[str, type["Block"]]:
from backend.data.block import Block
def load_all_blocks() -> dict[str, type["AnyBlockSchema"]]:
from backend.blocks._base import Block
from backend.util.settings import Config
# Check if example blocks should be loaded from settings
@@ -50,8 +47,8 @@ def load_all_blocks() -> dict[str, type["Block"]]:
importlib.import_module(f".{module}", package=__name__)
# Load all Block instances from the available modules
available_blocks: dict[str, type["Block"]] = {}
for block_cls in all_subclasses(Block):
available_blocks: dict[str, type["AnyBlockSchema"]] = {}
for block_cls in _all_subclasses(Block):
class_name = block_cls.__name__
if class_name.endswith("Base"):
@@ -64,7 +61,7 @@ def load_all_blocks() -> dict[str, type["Block"]]:
"please name the class with 'Base' at the end"
)
block = block_cls.create()
block = block_cls() # pyright: ignore[reportAbstractUsage]
if not isinstance(block.id, str) or len(block.id) != 36:
raise ValueError(
@@ -105,7 +102,7 @@ def load_all_blocks() -> dict[str, type["Block"]]:
available_blocks[block.id] = block_cls
# Filter out blocks with incomplete auth configs, e.g. missing OAuth server secrets
from backend.data.block import is_block_auth_configured
from ._utils import is_block_auth_configured
filtered_blocks = {}
for block_id, block_cls in available_blocks.items():
@@ -115,11 +112,48 @@ def load_all_blocks() -> dict[str, type["Block"]]:
return filtered_blocks
__all__ = ["load_all_blocks"]
def all_subclasses(cls: type[T]) -> list[type[T]]:
def _all_subclasses(cls: type[T]) -> list[type[T]]:
subclasses = cls.__subclasses__()
for subclass in subclasses:
subclasses += all_subclasses(subclass)
subclasses += _all_subclasses(subclass)
return subclasses
# ============== Block access helper functions ============== #
def get_blocks() -> dict[str, Type["AnyBlockSchema"]]:
return load_all_blocks()
# Note on the return type annotation: https://github.com/microsoft/pyright/issues/10281
def get_block(block_id: str) -> "AnyBlockSchema | None":
cls = get_blocks().get(block_id)
return cls() if cls else None
@cached(ttl_seconds=3600)
def get_webhook_block_ids() -> Sequence[str]:
return [
id
for id, B in get_blocks().items()
if B().block_type in (BlockType.WEBHOOK, BlockType.WEBHOOK_MANUAL)
]
@cached(ttl_seconds=3600)
def get_io_block_ids() -> Sequence[str]:
return [
id
for id, B in get_blocks().items()
if B().block_type in (BlockType.INPUT, BlockType.OUTPUT)
]
@cached(ttl_seconds=3600)
def get_human_in_the_loop_block_ids() -> Sequence[str]:
return [
id
for id, B in get_blocks().items()
if B().block_type == BlockType.HUMAN_IN_THE_LOOP
]
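The new module-level helpers give cached, instance-free access to the block registry. A usage sketch, assuming the backend package is importable (the block ID below is an illustrative placeholder):

```python
# Usage sketch; the UUID is a placeholder, not a real block ID.
from backend.blocks import get_block, get_blocks, get_webhook_block_ids

all_blocks = get_blocks()  # {block_id: block class}, cached
block = get_block("00000000-0000-0000-0000-000000000000")  # instance or None
if block is not None:
    print(block.name, block.block_type)

print(f"{len(get_webhook_block_ids())} webhook-triggered blocks registered")
```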

View File

@@ -0,0 +1,740 @@
import inspect
import logging
from abc import ABC, abstractmethod
from enum import Enum
from typing import (
TYPE_CHECKING,
Any,
Callable,
ClassVar,
Generic,
Optional,
Type,
TypeAlias,
TypeVar,
cast,
get_origin,
)
import jsonref
import jsonschema
from pydantic import BaseModel
from backend.data.block import BlockInput, BlockOutput, BlockOutputEntry
from backend.data.model import (
Credentials,
CredentialsFieldInfo,
CredentialsMetaInput,
SchemaField,
is_credentials_field_name,
)
from backend.integrations.providers import ProviderName
from backend.util import json
from backend.util.exceptions import (
BlockError,
BlockExecutionError,
BlockInputError,
BlockOutputError,
BlockUnknownError,
)
from backend.util.settings import Config
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from backend.data.execution import ExecutionContext
from backend.data.model import ContributorDetails, NodeExecutionStats
from ..data.graph import Link
app_config = Config()
BlockTestOutput = BlockOutputEntry | tuple[str, Callable[[Any], bool]]
class BlockType(Enum):
STANDARD = "Standard"
INPUT = "Input"
OUTPUT = "Output"
NOTE = "Note"
WEBHOOK = "Webhook"
WEBHOOK_MANUAL = "Webhook (manual)"
AGENT = "Agent"
AI = "AI"
AYRSHARE = "Ayrshare"
HUMAN_IN_THE_LOOP = "Human In The Loop"
MCP_TOOL = "MCP Tool"
class BlockCategory(Enum):
AI = "Block that leverages AI to perform a task."
SOCIAL = "Block that interacts with social media platforms."
TEXT = "Block that processes text data."
SEARCH = "Block that searches or extracts information from the internet."
BASIC = "Block that performs basic operations."
INPUT = "Block that interacts with input of the graph."
OUTPUT = "Block that interacts with output of the graph."
LOGIC = "Programming logic to control the flow of your agent"
COMMUNICATION = "Block that interacts with communication platforms."
DEVELOPER_TOOLS = "Developer tools such as GitHub blocks."
DATA = "Block that interacts with structured data."
HARDWARE = "Block that interacts with hardware."
AGENT = "Block that interacts with other agents."
CRM = "Block that interacts with CRM services."
SAFETY = (
"Block that provides AI safety mechanisms such as detecting harmful content"
)
PRODUCTIVITY = "Block that helps with productivity"
ISSUE_TRACKING = "Block that helps with issue tracking"
MULTIMEDIA = "Block that interacts with multimedia content"
MARKETING = "Block that helps with marketing"
def dict(self) -> dict[str, str]:
return {"category": self.name, "description": self.value}
class BlockCostType(str, Enum):
RUN = "run" # cost X credits per run
BYTE = "byte" # cost X credits per byte
SECOND = "second" # cost X credits per second
class BlockCost(BaseModel):
cost_amount: int
cost_filter: BlockInput
cost_type: BlockCostType
def __init__(
self,
cost_amount: int,
cost_type: BlockCostType = BlockCostType.RUN,
cost_filter: Optional[BlockInput] = None,
**data: Any,
) -> None:
super().__init__(
cost_amount=cost_amount,
cost_filter=cost_filter or {},
cost_type=cost_type,
**data,
)
class BlockInfo(BaseModel):
id: str
name: str
inputSchema: dict[str, Any]
outputSchema: dict[str, Any]
costs: list[BlockCost]
description: str
categories: list[dict[str, str]]
contributors: list[dict[str, Any]]
staticOutput: bool
uiType: str
class BlockSchema(BaseModel):
cached_jsonschema: ClassVar[dict[str, Any]]
@classmethod
def jsonschema(cls) -> dict[str, Any]:
if cls.cached_jsonschema:
return cls.cached_jsonschema
model = jsonref.replace_refs(cls.model_json_schema(), merge_props=True)
def ref_to_dict(obj):
if isinstance(obj, dict):
# OpenAPI <3.1 does not support sibling fields that have a $ref key
# So sometimes, the schema has an "allOf"/"anyOf"/"oneOf" with 1 item.
keys = {"allOf", "anyOf", "oneOf"}
one_key = next((k for k in keys if k in obj and len(obj[k]) == 1), None)
if one_key:
obj.update(obj[one_key][0])
return {
key: ref_to_dict(value)
for key, value in obj.items()
if not key.startswith("$") and key != one_key
}
elif isinstance(obj, list):
return [ref_to_dict(item) for item in obj]
return obj
cls.cached_jsonschema = cast(dict[str, Any], ref_to_dict(model))
return cls.cached_jsonschema
@classmethod
def validate_data(cls, data: BlockInput) -> str | None:
return json.validate_with_jsonschema(
schema=cls.jsonschema(),
data={k: v for k, v in data.items() if v is not None},
)
@classmethod
def get_mismatch_error(cls, data: BlockInput) -> str | None:
return cls.validate_data(data)
@classmethod
def get_field_schema(cls, field_name: str) -> dict[str, Any]:
model_schema = cls.jsonschema().get("properties", {})
if not model_schema:
raise ValueError(f"Invalid model schema {cls}")
property_schema = model_schema.get(field_name)
if not property_schema:
raise ValueError(f"Invalid property name {field_name}")
return property_schema
@classmethod
def validate_field(cls, field_name: str, data: BlockInput) -> str | None:
"""
Validate the data against a specific property (one of the input/output name).
Returns the validation error message if the data does not match the schema.
"""
try:
property_schema = cls.get_field_schema(field_name)
jsonschema.validate(json.to_dict(data), property_schema)
return None
except jsonschema.ValidationError as e:
return str(e)
@classmethod
def get_fields(cls) -> set[str]:
return set(cls.model_fields.keys())
@classmethod
def get_required_fields(cls) -> set[str]:
return {
field
for field, field_info in cls.model_fields.items()
if field_info.is_required()
}
@classmethod
def __pydantic_init_subclass__(cls, **kwargs):
"""Validates the schema definition. Rules:
- Fields with annotation `CredentialsMetaInput` MUST be
named `credentials` or `*_credentials`
- Fields named `credentials` or `*_credentials` MUST be
of type `CredentialsMetaInput`
"""
super().__pydantic_init_subclass__(**kwargs)
# Reset cached JSON schema to prevent inheriting it from parent class
cls.cached_jsonschema = {}
credentials_fields = cls.get_credentials_fields()
for field_name in cls.get_fields():
if is_credentials_field_name(field_name):
if field_name not in credentials_fields:
raise TypeError(
f"Credentials field '{field_name}' on {cls.__qualname__} "
f"is not of type {CredentialsMetaInput.__name__}"
)
CredentialsMetaInput.validate_credentials_field_schema(
cls.get_field_schema(field_name), field_name
)
elif field_name in credentials_fields:
raise KeyError(
f"Credentials field '{field_name}' on {cls.__qualname__} "
"has invalid name: must be 'credentials' or *_credentials"
)
@classmethod
def get_credentials_fields(cls) -> dict[str, type[CredentialsMetaInput]]:
return {
field_name: info.annotation
for field_name, info in cls.model_fields.items()
if (
inspect.isclass(info.annotation)
and issubclass(
get_origin(info.annotation) or info.annotation,
CredentialsMetaInput,
)
)
}
@classmethod
def get_auto_credentials_fields(cls) -> dict[str, dict[str, Any]]:
"""
Get fields that have auto_credentials metadata (e.g., GoogleDriveFileInput).
Returns a dict mapping kwarg_name -> {field_name, auto_credentials_config}
Raises:
ValueError: If multiple fields have the same kwarg_name, as this would
cause silent overwriting and only the last field would be processed.
"""
result: dict[str, dict[str, Any]] = {}
schema = cls.jsonschema()
properties = schema.get("properties", {})
for field_name, field_schema in properties.items():
auto_creds = field_schema.get("auto_credentials")
if auto_creds:
kwarg_name = auto_creds.get("kwarg_name", "credentials")
if kwarg_name in result:
raise ValueError(
f"Duplicate auto_credentials kwarg_name '{kwarg_name}' "
f"in fields '{result[kwarg_name]['field_name']}' and "
f"'{field_name}' on {cls.__qualname__}"
)
result[kwarg_name] = {
"field_name": field_name,
"config": auto_creds,
}
return result
@classmethod
def get_credentials_fields_info(cls) -> dict[str, CredentialsFieldInfo]:
result = {}
# Regular credentials fields
for field_name in cls.get_credentials_fields().keys():
result[field_name] = CredentialsFieldInfo.model_validate(
cls.get_field_schema(field_name), by_alias=True
)
# Auto-generated credentials fields (from GoogleDriveFileInput etc.)
for kwarg_name, info in cls.get_auto_credentials_fields().items():
config = info["config"]
# Build a schema-like dict that CredentialsFieldInfo can parse
auto_schema = {
"credentials_provider": [config.get("provider", "google")],
"credentials_types": [config.get("type", "oauth2")],
"credentials_scopes": config.get("scopes"),
}
result[kwarg_name] = CredentialsFieldInfo.model_validate(
auto_schema, by_alias=True
)
return result
@classmethod
def get_input_defaults(cls, data: BlockInput) -> BlockInput:
return data # Return as is, by default.
@classmethod
def get_missing_links(cls, data: BlockInput, links: list["Link"]) -> set[str]:
input_fields_from_nodes = {link.sink_name for link in links}
return input_fields_from_nodes - set(data)
@classmethod
def get_missing_input(cls, data: BlockInput) -> set[str]:
return cls.get_required_fields() - set(data)
class BlockSchemaInput(BlockSchema):
"""
Base schema class for block inputs.
All block input schemas should extend this class for consistency.
"""
pass
class BlockSchemaOutput(BlockSchema):
"""
Base schema class for block outputs that includes a standard error field.
All block output schemas should extend this class to ensure consistent error handling.
"""
error: str = SchemaField(
description="Error message if the operation failed", default=""
)
BlockSchemaInputType = TypeVar("BlockSchemaInputType", bound=BlockSchemaInput)
BlockSchemaOutputType = TypeVar("BlockSchemaOutputType", bound=BlockSchemaOutput)
class EmptyInputSchema(BlockSchemaInput):
pass
class EmptyOutputSchema(BlockSchemaOutput):
pass
# For backward compatibility - will be deprecated
EmptySchema = EmptyOutputSchema
# --8<-- [start:BlockWebhookConfig]
class BlockManualWebhookConfig(BaseModel):
"""
Configuration model for webhook-triggered blocks on which
the user has to manually set up the webhook at the provider.
"""
provider: ProviderName
"""The service provider that the webhook connects to"""
webhook_type: str
"""
Identifier for the webhook type. E.g. GitHub has repo and organization level hooks.
Only for use in the corresponding `WebhooksManager`.
"""
event_filter_input: str = ""
"""
Name of the block's event filter input.
Leave empty if the corresponding webhook doesn't have distinct event/payload types.
"""
event_format: str = "{event}"
"""
Template string for the event(s) that a block instance subscribes to.
Applied individually to each event selected in the event filter input.
Example: `"pull_request.{event}"` -> `"pull_request.opened"`
"""
class BlockWebhookConfig(BlockManualWebhookConfig):
"""
Configuration model for webhook-triggered blocks for which
the webhook can be automatically set up through the provider's API.
"""
resource_format: str
"""
Template string for the resource that a block instance subscribes to.
Fields will be filled from the block's inputs (except `payload`).
Example: `f"{repo}/pull_requests"` (note: not how it's actually implemented)
Only for use in the corresponding `WebhooksManager`.
"""
# --8<-- [end:BlockWebhookConfig]
class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
def __init__(
self,
id: str = "",
description: str = "",
contributors: list["ContributorDetails"] = [],
categories: set[BlockCategory] | None = None,
input_schema: Type[BlockSchemaInputType] = EmptyInputSchema,
output_schema: Type[BlockSchemaOutputType] = EmptyOutputSchema,
test_input: BlockInput | list[BlockInput] | None = None,
test_output: BlockTestOutput | list[BlockTestOutput] | None = None,
test_mock: dict[str, Any] | None = None,
test_credentials: Optional[Credentials | dict[str, Credentials]] = None,
disabled: bool = False,
static_output: bool = False,
block_type: BlockType = BlockType.STANDARD,
webhook_config: Optional[BlockWebhookConfig | BlockManualWebhookConfig] = None,
is_sensitive_action: bool = False,
):
"""
Initialize the block with the given schema.
Args:
id: The unique identifier for the block, this value will be persisted in the
DB, so it should be unique and constant across application runs.
Use the UUID format for the ID.
description: The description of the block, explaining what the block does.
contributors: The list of contributors who contributed to the block.
input_schema: The schema, defined as a Pydantic model, for the input data.
output_schema: The schema, defined as a Pydantic model, for the output data.
test_input: The list or single sample input data for the block, for testing.
test_output: The list or single expected output if the test_input is run.
test_mock: function names on the block implementation to mock on test run.
disabled: If the block is disabled, it will not be available for execution.
static_output: Whether the output links of the block are static by default.
"""
from backend.data.model import NodeExecutionStats
self.id = id
self.input_schema = input_schema
self.output_schema = output_schema
self.test_input = test_input
self.test_output = test_output
self.test_mock = test_mock
self.test_credentials = test_credentials
self.description = description
self.categories = categories or set()
self.contributors = contributors or set()
self.disabled = disabled
self.static_output = static_output
self.block_type = block_type
self.webhook_config = webhook_config
self.is_sensitive_action = is_sensitive_action
self.execution_stats: "NodeExecutionStats" = NodeExecutionStats()
if self.webhook_config:
if isinstance(self.webhook_config, BlockWebhookConfig):
# Enforce presence of credentials field on auto-setup webhook blocks
if not (cred_fields := self.input_schema.get_credentials_fields()):
raise TypeError(
"credentials field is required on auto-setup webhook blocks"
)
# Disallow multiple credentials inputs on webhook blocks
elif len(cred_fields) > 1:
raise ValueError(
"Multiple credentials inputs not supported on webhook blocks"
)
self.block_type = BlockType.WEBHOOK
else:
self.block_type = BlockType.WEBHOOK_MANUAL
# Enforce shape of webhook event filter, if present
if self.webhook_config.event_filter_input:
event_filter_field = self.input_schema.model_fields[
self.webhook_config.event_filter_input
]
if not (
isinstance(event_filter_field.annotation, type)
and issubclass(event_filter_field.annotation, BaseModel)
and all(
field.annotation is bool
for field in event_filter_field.annotation.model_fields.values()
)
):
raise NotImplementedError(
f"{self.name} has an invalid webhook event selector: "
"field must be a BaseModel and all its fields must be boolean"
)
# Enforce presence of 'payload' input
if "payload" not in self.input_schema.model_fields:
raise TypeError(
f"{self.name} is webhook-triggered but has no 'payload' input"
)
# Disable webhook-triggered block if webhook functionality not available
if not app_config.platform_base_url:
self.disabled = True
@abstractmethod
async def run(self, input_data: BlockSchemaInputType, **kwargs) -> BlockOutput:
"""
Run the block with the given input data.
Args:
input_data: The input data with the structure of input_schema.
Kwargs: currently (as of 14/02/2025) these include:
graph_id: The ID of the graph.
node_id: The ID of the node.
graph_exec_id: The ID of the graph execution.
node_exec_id: The ID of the node execution.
user_id: The ID of the user.
Returns:
A Generator that yields (output_name, output_data).
output_name: One of the output names defined in the Block's output_schema.
output_data: The data for the output_name, matching the defined schema.
"""
# --- satisfy the type checker, never executed -------------
if False: # noqa: SIM115
yield "name", "value" # pyright: ignore[reportMissingYield]
raise NotImplementedError(f"{self.name} does not implement the run method.")
async def run_once(
self, input_data: BlockSchemaInputType, output: str, **kwargs
) -> Any:
async for item in self.run(input_data, **kwargs):
name, data = item
if name == output:
return data
raise ValueError(f"{self.name} did not produce any output for {output}")
def merge_stats(self, stats: "NodeExecutionStats") -> "NodeExecutionStats":
self.execution_stats += stats
return self.execution_stats
@property
def name(self):
return self.__class__.__name__
def to_dict(self):
return {
"id": self.id,
"name": self.name,
"inputSchema": self.input_schema.jsonschema(),
"outputSchema": self.output_schema.jsonschema(),
"description": self.description,
"categories": [category.dict() for category in self.categories],
"contributors": [
contributor.model_dump() for contributor in self.contributors
],
"staticOutput": self.static_output,
"uiType": self.block_type.value,
}
def get_info(self) -> BlockInfo:
from backend.data.credit import get_block_cost
return BlockInfo(
id=self.id,
name=self.name,
inputSchema=self.input_schema.jsonschema(),
outputSchema=self.output_schema.jsonschema(),
costs=get_block_cost(self),
description=self.description,
categories=[category.dict() for category in self.categories],
contributors=[
contributor.model_dump() for contributor in self.contributors
],
staticOutput=self.static_output,
uiType=self.block_type.value,
)
async def execute(self, input_data: BlockInput, **kwargs) -> BlockOutput:
try:
async for output_name, output_data in self._execute(input_data, **kwargs):
yield output_name, output_data
except Exception as ex:
if isinstance(ex, BlockError):
raise ex
else:
raise (
BlockExecutionError
if isinstance(ex, ValueError)
else BlockUnknownError
)(
message=str(ex),
block_name=self.name,
block_id=self.id,
) from ex
async def is_block_exec_need_review(
self,
input_data: BlockInput,
*,
user_id: str,
node_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
graph_version: int,
execution_context: "ExecutionContext",
**kwargs,
) -> tuple[bool, BlockInput]:
"""
Check if this block execution needs human review and handle the review process.
Returns:
Tuple of (should_pause, input_data_to_use)
- should_pause: True if execution should be paused for review
- input_data_to_use: The input data to use (may be modified by reviewer)
"""
if not (
self.is_sensitive_action and execution_context.sensitive_action_safe_mode
):
return False, input_data
from backend.blocks.helpers.review import HITLReviewHelper
# Handle the review request and get decision
decision = await HITLReviewHelper.handle_review_decision(
input_data=input_data,
user_id=user_id,
node_id=node_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
block_name=self.name,
editable=True,
)
if decision is None:
# We're awaiting review - pause execution
return True, input_data
if not decision.should_proceed:
# Review was rejected, raise an error to stop execution
raise BlockExecutionError(
message=f"Block execution rejected by reviewer: {decision.message}",
block_name=self.name,
block_id=self.id,
)
# Review was approved - use the potentially modified data
# ReviewResult.data must be a dict for block inputs
reviewed_data = decision.review_result.data
if not isinstance(reviewed_data, dict):
raise BlockExecutionError(
message=f"Review data must be a dict for block input, got {type(reviewed_data).__name__}",
block_name=self.name,
block_id=self.id,
)
return False, reviewed_data
async def _execute(self, input_data: BlockInput, **kwargs) -> BlockOutput:
# Check for review requirement only if running within a graph execution context
# Direct block execution (e.g., from chat) skips the review process
has_graph_context = all(
key in kwargs
for key in (
"node_exec_id",
"graph_exec_id",
"graph_id",
"execution_context",
)
)
if has_graph_context:
should_pause, input_data = await self.is_block_exec_need_review(
input_data, **kwargs
)
if should_pause:
return
# Validate the input data (original or reviewer-modified) once
if error := self.input_schema.validate_data(input_data):
raise BlockInputError(
message=f"Unable to execute block with invalid input data: {error}",
block_name=self.name,
block_id=self.id,
)
# Use the validated input data
async for output_name, output_data in self.run(
self.input_schema(**{k: v for k, v in input_data.items() if v is not None}),
**kwargs,
):
if output_name == "error":
raise BlockExecutionError(
message=output_data, block_name=self.name, block_id=self.id
)
if self.block_type == BlockType.STANDARD and (
error := self.output_schema.validate_field(output_name, output_data)
):
raise BlockOutputError(
message=f"Block produced an invalid output data: {error}",
block_name=self.name,
block_id=self.id,
)
yield output_name, output_data
def is_triggered_by_event_type(
self, trigger_config: dict[str, Any], event_type: str
) -> bool:
if not self.webhook_config:
raise TypeError("This method can't be used on non-trigger blocks")
if not self.webhook_config.event_filter_input:
return True
event_filter = trigger_config.get(self.webhook_config.event_filter_input)
if not event_filter:
raise ValueError("Event filter is not configured on trigger")
return event_type in [
self.webhook_config.event_format.format(event=k)
for k in event_filter
if event_filter[k] is True
]
# Type alias for any block with standard input/output schemas
AnyBlockSchema: TypeAlias = Block[BlockSchemaInput, BlockSchemaOutput]
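To see how these pieces fit together, a hedged minimal block built on the new `backend.blocks._base` API (the UUID and field names are invented for illustration; this is not a block from the codebase):

```python
# Minimal block sketch against backend.blocks._base; ID and fields are made up.
from backend.blocks._base import (
    Block,
    BlockCategory,
    BlockOutput,
    BlockSchemaInput,
    BlockSchemaOutput,
)
from backend.data.model import SchemaField

class EchoBlock(Block):
    class Input(BlockSchemaInput):
        text: str = SchemaField(description="Text to echo back")

    class Output(BlockSchemaOutput):
        echoed: str = SchemaField(description="The same text, unchanged")

    def __init__(self):
        super().__init__(
            id="11111111-1111-1111-1111-111111111111",  # constant 36-char UUID
            description="Echoes its input.",
            categories={BlockCategory.BASIC},
            input_schema=EchoBlock.Input,
            output_schema=EchoBlock.Output,
        )

    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
        yield "echoed", input_data.text
```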

View File

@@ -0,0 +1,122 @@
import logging
import os
from backend.integrations.providers import ProviderName
from ._base import AnyBlockSchema
logger = logging.getLogger(__name__)
def is_block_auth_configured(
block_cls: type[AnyBlockSchema],
) -> bool:
"""
Check if a block has a valid authentication method configured at runtime.
For example, if a block is OAuth-only and the required env vars are not set,
it should not be shown in the UI.
"""
from backend.sdk.registry import AutoRegistry
# Create an instance to access input_schema
try:
block = block_cls()
except Exception as e:
# If we can't create a block instance, assume it's not OAuth-only
logger.error(f"Error creating block instance for {block_cls.__name__}: {e}")
return True
logger.debug(
f"Checking if block {block_cls.__name__} has a valid provider configured"
)
# Get all credential inputs from input schema
credential_inputs = block.input_schema.get_credentials_fields_info()
required_inputs = block.input_schema.get_required_fields()
if not credential_inputs:
logger.debug(
f"Block {block_cls.__name__} has no credential inputs - Treating as valid"
)
return True
# Check credential inputs
if len(required_inputs.intersection(credential_inputs.keys())) == 0:
logger.debug(
f"Block {block_cls.__name__} has only optional credential inputs"
" - will work without credentials configured"
)
# Check if the credential inputs for this block are correctly configured
for field_name, field_info in credential_inputs.items():
provider_names = field_info.provider
if not provider_names:
logger.warning(
f"Block {block_cls.__name__} "
f"has credential input '{field_name}' with no provider options"
" - Disabling"
)
return False
# If a field has multiple possible providers, each one needs to be usable to
# prevent breaking the UX
for _provider_name in provider_names:
provider_name = _provider_name.value
if provider_name in ProviderName.__members__.values():
logger.debug(
f"Block {block_cls.__name__} credential input '{field_name}' "
f"provider '{provider_name}' is part of the legacy provider system"
" - Treating as valid"
)
break
provider = AutoRegistry.get_provider(provider_name)
if not provider:
logger.warning(
f"Block {block_cls.__name__} credential input '{field_name}' "
f"refers to unknown provider '{provider_name}' - Disabling"
)
return False
# Check the provider's supported auth types
if field_info.supported_types != provider.supported_auth_types:
logger.warning(
f"Block {block_cls.__name__} credential input '{field_name}' "
f"has mismatched supported auth types (field <> Provider): "
f"{field_info.supported_types} != {provider.supported_auth_types}"
)
if not (supported_auth_types := provider.supported_auth_types):
# No auth methods have been configured for this provider
logger.warning(
f"Block {block_cls.__name__} credential input '{field_name}' "
f"provider '{provider_name}' "
"has no authentication methods configured - Disabling"
)
return False
# Check if provider supports OAuth
if "oauth2" in supported_auth_types:
# Check if OAuth environment variables are set
if (oauth_config := provider.oauth_config) and bool(
os.getenv(oauth_config.client_id_env_var)
and os.getenv(oauth_config.client_secret_env_var)
):
logger.debug(
f"Block {block_cls.__name__} credential input '{field_name}' "
f"provider '{provider_name}' is configured for OAuth"
)
else:
logger.error(
f"Block {block_cls.__name__} credential input '{field_name}' "
f"provider '{provider_name}' "
"is missing OAuth client ID or secret - Disabling"
)
return False
logger.debug(
f"Block {block_cls.__name__} credential input '{field_name}' is valid; "
f"supported credential types: {', '.join(field_info.supported_types)}"
)
return True
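This check is applied during block loading; a sketch of that filtering step (mirroring `load_all_blocks`, and assuming the helper lives in `backend.blocks._utils`, as the relative import suggests):

```python
# Sketch of the loader-side filtering; not a verbatim excerpt.
from backend.blocks import load_all_blocks
from backend.blocks._utils import is_block_auth_configured

usable = {
    block_id: block_cls
    for block_id, block_cls in load_all_blocks().items()
    # drops e.g. OAuth-only blocks whose client ID/secret env vars are unset
    if is_block_auth_configured(block_cls)
}
```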

View File

@@ -1,7 +1,7 @@
import logging
from typing import Any, Optional
from typing import TYPE_CHECKING, Any, Optional
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockInput,
@@ -9,13 +9,15 @@ from backend.data.block import (
BlockSchema,
BlockSchemaInput,
BlockType,
get_block,
)
from backend.data.execution import ExecutionContext, ExecutionStatus, NodesInputMasks
from backend.data.model import NodeExecutionStats, SchemaField
from backend.util.json import validate_with_jsonschema
from backend.util.retry import func_retry
if TYPE_CHECKING:
from backend.executor.utils import LogMetadata
_logger = logging.getLogger(__name__)
@@ -124,9 +126,10 @@ class AgentExecutorBlock(Block):
graph_version: int,
graph_exec_id: str,
user_id: str,
logger,
logger: "LogMetadata",
) -> BlockOutput:
from backend.blocks import get_block
from backend.data.execution import ExecutionEventType
from backend.executor import utils as execution_utils
@@ -198,7 +201,7 @@ class AgentExecutorBlock(Block):
self,
graph_exec_id: str,
user_id: str,
logger,
logger: "LogMetadata",
) -> None:
from backend.executor import utils as execution_utils

View File

@@ -1,5 +1,11 @@
from typing import Any
from backend.blocks._base import (
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.blocks.llm import (
DEFAULT_LLM_MODEL,
TEST_CREDENTIALS,
@@ -11,12 +17,6 @@ from backend.blocks.llm import (
LLMResponse,
llm_call,
)
from backend.data.block import (
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import APIKeyCredentials, NodeExecutionStats, SchemaField

View File

@@ -6,13 +6,14 @@ from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -117,11 +118,13 @@ class AIImageCustomizerBlock(Block):
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("image_url", "https://replicate.delivery/generated-image.jpg"),
# Output will be a workspace ref or data URI depending on context
("image_url", lambda x: x.startswith(("workspace://", "data:"))),
],
test_mock={
# Use data URI to avoid HTTP requests during tests
"run_model": lambda *args, **kwargs: MediaFileType(
"https://replicate.delivery/generated-image.jpg"
"data:image/jpeg;base64,/9j/4AAQSkZJRgABAgAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAABAAEDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD3+iiigD//2Q=="
),
},
test_credentials=TEST_CREDENTIALS,
@@ -132,8 +135,7 @@ class AIImageCustomizerBlock(Block):
input_data: Input,
*,
credentials: APIKeyCredentials,
graph_exec_id: str,
user_id: str,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
try:
@@ -141,10 +143,9 @@ class AIImageCustomizerBlock(Block):
processed_images = await asyncio.gather(
*(
store_media_file(
graph_exec_id=graph_exec_id,
file=img,
user_id=user_id,
return_content=True,
execution_context=execution_context,
return_format="for_external_api", # Get content for Replicate API
)
for img in input_data.images
)
@@ -158,7 +159,14 @@ class AIImageCustomizerBlock(Block):
aspect_ratio=input_data.aspect_ratio.value,
output_format=input_data.output_format.value,
)
yield "image_url", result
# Store the generated image to the user's workspace for persistence
stored_url = await store_media_file(
file=result,
execution_context=execution_context,
return_format="for_block_output",
)
yield "image_url", stored_url
except Exception as e:
yield "error", str(e)

View File

@@ -5,7 +5,13 @@ from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockSchemaInput, BlockSchemaOutput
from backend.blocks._base import (
Block,
BlockCategory,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -13,6 +19,8 @@ from backend.data.model import (
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.file import store_media_file
from backend.util.type import MediaFileType
class ImageSize(str, Enum):
@@ -165,11 +173,13 @@ class AIImageGeneratorBlock(Block):
test_output=[
(
"image_url",
"https://replicate.delivery/generated-image.webp",
# Test output is a data URI since we now store images
lambda x: x.startswith("data:image/"),
),
],
test_mock={
"_run_client": lambda *args, **kwargs: "https://replicate.delivery/generated-image.webp"
# Return a data URI directly so store_media_file doesn't need to download
"_run_client": lambda *args, **kwargs: "data:image/webp;base64,UklGRiQAAABXRUJQVlA4IBgAAAAwAQCdASoBAAEAAQAcJYgCdAEO"
},
)
@@ -318,11 +328,24 @@ class AIImageGeneratorBlock(Block):
style_text = style_map.get(style, "")
return f"{style_text} of" if style_text else ""
async def run(self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs):
async def run(
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
execution_context: ExecutionContext,
**kwargs,
):
try:
url = await self.generate_image(input_data, credentials)
if url:
yield "image_url", url
# Store the generated image to the user's workspace/execution folder
stored_url = await store_media_file(
file=MediaFileType(url),
execution_context=execution_context,
return_format="for_block_output",
)
yield "image_url", stored_url
else:
yield "error", "Image generation returned an empty result."
except Exception as e:

View File

@@ -6,7 +6,7 @@ from typing import Literal
from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,

View File

@@ -6,13 +6,14 @@ from typing import Literal
from pydantic import SecretStr
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -21,7 +22,9 @@ from backend.data.model import (
)
from backend.integrations.providers import ProviderName
from backend.util.exceptions import BlockExecutionError
from backend.util.file import store_media_file
from backend.util.request import Requests
from backend.util.type import MediaFileType
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
@@ -271,7 +274,10 @@ class AIShortformVideoCreatorBlock(Block):
"voice": Voice.LILY,
"video_style": VisualMediaType.STOCK_VIDEOS,
},
test_output=("video_url", "https://example.com/video.mp4"),
test_output=(
"video_url",
lambda x: x.startswith(("workspace://", "data:")),
),
test_mock={
"create_webhook": lambda *args, **kwargs: (
"test_uuid",
@@ -280,15 +286,21 @@ class AIShortformVideoCreatorBlock(Block):
"create_video": lambda *args, **kwargs: {"pid": "test_pid"},
"check_video_status": lambda *args, **kwargs: {
"status": "ready",
"videoUrl": "https://example.com/video.mp4",
"videoUrl": "data:video/mp4;base64,AAAA",
},
"wait_for_video": lambda *args, **kwargs: "https://example.com/video.mp4",
# Use data URI to avoid HTTP requests during tests
"wait_for_video": lambda *args, **kwargs: "data:video/mp4;base64,AAAA",
},
test_credentials=TEST_CREDENTIALS,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
# Create a new Webhook.site URL
webhook_token, webhook_url = await self.create_webhook()
@@ -340,7 +352,13 @@ class AIShortformVideoCreatorBlock(Block):
)
video_url = await self.wait_for_video(credentials.api_key, pid)
logger.debug(f"Video ready: {video_url}")
yield "video_url", video_url
# Store the generated video to the user's workspace for persistence
stored_url = await store_media_file(
file=MediaFileType(video_url),
execution_context=execution_context,
return_format="for_block_output",
)
yield "video_url", stored_url
class AIAdMakerVideoCreatorBlock(Block):
@@ -447,7 +465,10 @@ class AIAdMakerVideoCreatorBlock(Block):
"https://cdn.revid.ai/uploads/1747076315114-image.png",
],
},
test_output=("video_url", "https://example.com/ad.mp4"),
test_output=(
"video_url",
lambda x: x.startswith(("workspace://", "data:")),
),
test_mock={
"create_webhook": lambda *args, **kwargs: (
"test_uuid",
@@ -456,14 +477,21 @@ class AIAdMakerVideoCreatorBlock(Block):
"create_video": lambda *args, **kwargs: {"pid": "test_pid"},
"check_video_status": lambda *args, **kwargs: {
"status": "ready",
"videoUrl": "https://example.com/ad.mp4",
"videoUrl": "data:video/mp4;base64,AAAA",
},
"wait_for_video": lambda *args, **kwargs: "https://example.com/ad.mp4",
"wait_for_video": lambda *args, **kwargs: "data:video/mp4;base64,AAAA",
},
test_credentials=TEST_CREDENTIALS,
)
async def run(self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs):
async def run(
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
execution_context: ExecutionContext,
**kwargs,
):
webhook_token, webhook_url = await self.create_webhook()
payload = {
@@ -531,7 +559,13 @@ class AIAdMakerVideoCreatorBlock(Block):
raise RuntimeError("Failed to create video: No project ID returned")
video_url = await self.wait_for_video(credentials.api_key, pid)
yield "video_url", video_url
# Store the generated video to the user's workspace for persistence
stored_url = await store_media_file(
file=MediaFileType(video_url),
execution_context=execution_context,
return_format="for_block_output",
)
yield "video_url", stored_url
class AIScreenshotToVideoAdBlock(Block):
@@ -626,7 +660,10 @@ class AIScreenshotToVideoAdBlock(Block):
"script": "Amazing numbers!",
"screenshot_url": "https://cdn.revid.ai/uploads/1747080376028-image.png",
},
test_output=("video_url", "https://example.com/screenshot.mp4"),
test_output=(
"video_url",
lambda x: x.startswith(("workspace://", "data:")),
),
test_mock={
"create_webhook": lambda *args, **kwargs: (
"test_uuid",
@@ -635,14 +672,21 @@ class AIScreenshotToVideoAdBlock(Block):
"create_video": lambda *args, **kwargs: {"pid": "test_pid"},
"check_video_status": lambda *args, **kwargs: {
"status": "ready",
"videoUrl": "https://example.com/screenshot.mp4",
"videoUrl": "data:video/mp4;base64,AAAA",
},
"wait_for_video": lambda *args, **kwargs: "https://example.com/screenshot.mp4",
"wait_for_video": lambda *args, **kwargs: "data:video/mp4;base64,AAAA",
},
test_credentials=TEST_CREDENTIALS,
)
async def run(self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs):
async def run(
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
execution_context: ExecutionContext,
**kwargs,
):
webhook_token, webhook_url = await self.create_webhook()
payload = {
@@ -710,4 +754,10 @@ class AIScreenshotToVideoAdBlock(Block):
raise RuntimeError("Failed to create video: No project ID returned")
video_url = await self.wait_for_video(credentials.api_key, pid)
yield "video_url", video_url
# Store the generated video to the user's workspace for persistence
stored_url = await store_media_file(
file=MediaFileType(video_url),
execution_context=execution_context,
return_format="for_block_output",
)
yield "video_url", stored_url

View File

@@ -1,3 +1,10 @@
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.blocks.apollo._api import ApolloClient
from backend.blocks.apollo._auth import (
TEST_CREDENTIALS,
@@ -10,13 +17,6 @@ from backend.blocks.apollo.models import (
PrimaryPhone,
SearchOrganizationsRequest,
)
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import CredentialsField, SchemaField

View File

@@ -1,5 +1,12 @@
import asyncio
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.blocks.apollo._api import ApolloClient
from backend.blocks.apollo._auth import (
TEST_CREDENTIALS,
@@ -14,13 +21,6 @@ from backend.blocks.apollo.models import (
SearchPeopleRequest,
SenorityLevels,
)
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import CredentialsField, SchemaField

View File

@@ -1,3 +1,10 @@
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.blocks.apollo._api import ApolloClient
from backend.blocks.apollo._auth import (
TEST_CREDENTIALS,
@@ -6,13 +13,6 @@ from backend.blocks.apollo._auth import (
ApolloCredentialsInput,
)
from backend.blocks.apollo.models import Contact, EnrichPersonRequest
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import CredentialsField, SchemaField

View File

@@ -3,7 +3,7 @@ from typing import Optional
from pydantic import BaseModel, Field
from backend.data.block import BlockSchemaInput
from backend.blocks._base import BlockSchemaInput
from backend.data.model import SchemaField, UserIntegrations
from backend.integrations.ayrshare import AyrshareClient
from backend.util.clients import get_database_manager_async_client

View File

@@ -6,6 +6,7 @@ if TYPE_CHECKING:
from pydantic import SecretStr
from backend.data.execution import ExecutionContext
from backend.sdk import (
APIKeyCredentials,
Block,
@@ -17,6 +18,8 @@ from backend.sdk import (
Requests,
SchemaField,
)
from backend.util.file import store_media_file
from backend.util.type import MediaFileType
from ._config import bannerbear
@@ -135,15 +138,17 @@ class BannerbearTextOverlayBlock(Block):
},
test_output=[
("success", True),
("image_url", "https://cdn.bannerbear.com/test-image.jpg"),
# Output will be a workspace ref or data URI depending on context
("image_url", lambda x: x.startswith(("workspace://", "data:"))),
("uid", "test-uid-123"),
("status", "completed"),
],
test_mock={
# Use data URI to avoid HTTP requests during tests
"_make_api_request": lambda *args, **kwargs: {
"uid": "test-uid-123",
"status": "completed",
"image_url": "https://cdn.bannerbear.com/test-image.jpg",
"image_url": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/wAALCAABAAEBAREA/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/9oACAEBAAA/APn+v//Z",
}
},
test_credentials=TEST_CREDENTIALS,
@@ -177,7 +182,12 @@ class BannerbearTextOverlayBlock(Block):
raise Exception(error_msg)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
# Build the modifications array
modifications = []
@@ -234,6 +244,18 @@ class BannerbearTextOverlayBlock(Block):
# Synchronous request - image should be ready
yield "success", True
yield "image_url", data.get("image_url", "")
# Store the generated image to workspace for persistence
image_url = data.get("image_url", "")
if image_url:
stored_url = await store_media_file(
file=MediaFileType(image_url),
execution_context=execution_context,
return_format="for_block_output",
)
yield "image_url", stored_url
else:
yield "image_url", ""
yield "uid", data.get("uid", "")
yield "status", data.get("status", "completed")

View File

@@ -1,7 +1,7 @@
import enum
from typing import Any
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,
@@ -9,6 +9,7 @@ from backend.data.block import (
BlockSchemaOutput,
BlockType,
)
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
from backend.util.file import store_media_file
from backend.util.type import MediaFileType, convert
@@ -17,10 +18,10 @@ from backend.util.type import MediaFileType, convert
class FileStoreBlock(Block):
class Input(BlockSchemaInput):
file_in: MediaFileType = SchemaField(
description="The file to store in the temporary directory, it can be a URL, data URI, or local path."
description="The file to download and store. Can be a URL (https://...), data URI, or local path."
)
base_64: bool = SchemaField(
description="Whether produce an output in base64 format (not recommended, you can pass the string path just fine accross blocks).",
description="Whether to produce output in base64 format (not recommended, you can pass the file reference across blocks).",
default=False,
advanced=True,
title="Produce Base64 Output",
@@ -28,13 +29,18 @@ class FileStoreBlock(Block):
class Output(BlockSchemaOutput):
file_out: MediaFileType = SchemaField(
description="The relative path to the stored file in the temporary directory."
description="Reference to the stored file. In CoPilot: workspace:// URI (visible in list_workspace_files). In graphs: data URI for passing to other blocks."
)
def __init__(self):
super().__init__(
id="cbb50872-625b-42f0-8203-a2ae78242d8a",
description="Stores the input file in the temporary directory.",
description=(
"Downloads and stores a file from a URL, data URI, or local path. "
"Use this to fetch images, documents, or other files for processing. "
"In CoPilot: saves to workspace (use list_workspace_files to see it). "
"In graphs: outputs a data URI to pass to other blocks."
),
categories={BlockCategory.BASIC, BlockCategory.MULTIMEDIA},
input_schema=FileStoreBlock.Input,
output_schema=FileStoreBlock.Output,
@@ -45,15 +51,18 @@ class FileStoreBlock(Block):
self,
input_data: Input,
*,
graph_exec_id: str,
user_id: str,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
# Determine return format based on user preference
# for_external_api: always returns data URI (base64) - honors "Produce Base64 Output"
# for_block_output: smart format - workspace:// in CoPilot, data URI in graphs
return_format = "for_external_api" if input_data.base_64 else "for_block_output"
yield "file_out", await store_media_file(
graph_exec_id=graph_exec_id,
file=input_data.file_in,
user_id=user_id,
return_content=input_data.base_64,
execution_context=execution_context,
return_format=return_format,
)
@@ -117,6 +126,7 @@ class PrintToConsoleBlock(Block):
output_schema=PrintToConsoleBlock.Output,
test_input={"text": "Hello, World!"},
is_sensitive_action=True,
disabled=True, # Disabled per Nick Tindle's request (OPEN-3000)
test_output=[
("output", "Hello, World!"),
("status", "printed"),

View File

@@ -2,7 +2,7 @@ import os
import re
from typing import Type
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,

View File

@@ -1,7 +1,7 @@
from enum import Enum
from typing import Any
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,

View File

@@ -1,12 +1,12 @@
import json
import shlex
import uuid
from typing import Literal, Optional
from typing import TYPE_CHECKING, Literal, Optional
from e2b import AsyncSandbox as BaseAsyncSandbox
from pydantic import BaseModel, SecretStr
from pydantic import SecretStr
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,
@@ -20,6 +20,13 @@ from backend.data.model import (
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.sandbox_files import (
SandboxFileOutput,
extract_and_store_sandbox_files,
)
if TYPE_CHECKING:
from backend.executor.utils import ExecutionContext
class ClaudeCodeExecutionError(Exception):
@@ -174,22 +181,15 @@ class ClaudeCodeBlock(Block):
advanced=True,
)
class FileOutput(BaseModel):
"""A file extracted from the sandbox."""
path: str
relative_path: str # Path relative to working directory (for GitHub, etc.)
name: str
content: str
class Output(BlockSchemaOutput):
response: str = SchemaField(
description="The output/response from Claude Code execution"
)
files: list["ClaudeCodeBlock.FileOutput"] = SchemaField(
files: list[SandboxFileOutput] = SchemaField(
description=(
"List of text files created/modified by Claude Code during this execution. "
"Each file has 'path', 'relative_path', 'name', and 'content' fields."
"Each file has 'path', 'relative_path', 'name', 'content', and 'workspace_ref' fields. "
"workspace_ref contains a workspace:// URI if the file was stored to workspace."
)
)
conversation_history: str = SchemaField(
@@ -252,6 +252,7 @@ class ClaudeCodeBlock(Block):
"relative_path": "index.html",
"name": "index.html",
"content": "<html>Hello World</html>",
"workspace_ref": None,
}
],
),
@@ -267,11 +268,12 @@ class ClaudeCodeBlock(Block):
"execute_claude_code": lambda *args, **kwargs: (
"Created index.html with hello world content", # response
[
ClaudeCodeBlock.FileOutput(
SandboxFileOutput(
path="/home/user/index.html",
relative_path="index.html",
name="index.html",
content="<html>Hello World</html>",
workspace_ref=None,
)
], # files
"User: Create a hello world HTML file\n"
@@ -294,7 +296,8 @@ class ClaudeCodeBlock(Block):
existing_sandbox_id: str,
conversation_history: str,
dispose_sandbox: bool,
) -> tuple[str, list["ClaudeCodeBlock.FileOutput"], str, str, str]:
execution_context: "ExecutionContext",
) -> tuple[str, list[SandboxFileOutput], str, str, str]:
"""
Execute Claude Code in an E2B sandbox.
@@ -449,14 +452,18 @@ class ClaudeCodeBlock(Block):
else:
new_conversation_history = turn_entry
# Extract files created/modified during this run
files = await self._extract_files(
sandbox, working_directory, start_timestamp
# Extract files created/modified during this run and store to workspace
sandbox_files = await extract_and_store_sandbox_files(
sandbox=sandbox,
working_directory=working_directory,
execution_context=execution_context,
since_timestamp=start_timestamp,
text_only=True,
)
return (
response,
files,
sandbox_files, # Already SandboxFileOutput objects
new_conversation_history,
current_session_id,
sandbox_id,
@@ -471,140 +478,6 @@ class ClaudeCodeBlock(Block):
if dispose_sandbox and sandbox:
await sandbox.kill()
async def _extract_files(
self,
sandbox: BaseAsyncSandbox,
working_directory: str,
since_timestamp: str | None = None,
) -> list["ClaudeCodeBlock.FileOutput"]:
"""
Extract text files created/modified during this Claude Code execution.
Args:
sandbox: The E2B sandbox instance
working_directory: Directory to search for files
since_timestamp: ISO timestamp - only return files modified after this time
Returns:
List of FileOutput objects with path, relative_path, name, and content
"""
files: list[ClaudeCodeBlock.FileOutput] = []
# Text file extensions we can safely read as text
text_extensions = {
".txt",
".md",
".html",
".htm",
".css",
".js",
".ts",
".jsx",
".tsx",
".json",
".xml",
".yaml",
".yml",
".toml",
".ini",
".cfg",
".conf",
".py",
".rb",
".php",
".java",
".c",
".cpp",
".h",
".hpp",
".cs",
".go",
".rs",
".swift",
".kt",
".scala",
".sh",
".bash",
".zsh",
".sql",
".graphql",
".env",
".gitignore",
".dockerfile",
"Dockerfile",
".vue",
".svelte",
".astro",
".mdx",
".rst",
".tex",
".csv",
".log",
}
try:
# List files recursively using find command
# Exclude node_modules and .git directories, but allow hidden files
# like .env and .gitignore (they're filtered by text_extensions later)
# Filter by timestamp to only get files created/modified during this run
safe_working_dir = shlex.quote(working_directory)
timestamp_filter = ""
if since_timestamp:
timestamp_filter = f"-newermt {shlex.quote(since_timestamp)} "
find_result = await sandbox.commands.run(
f"find {safe_working_dir} -type f "
f"{timestamp_filter}"
f"-not -path '*/node_modules/*' "
f"-not -path '*/.git/*' "
f"2>/dev/null"
)
if find_result.stdout:
for file_path in find_result.stdout.strip().split("\n"):
if not file_path:
continue
# Check if it's a text file we can read
is_text = any(
file_path.endswith(ext) for ext in text_extensions
) or file_path.endswith("Dockerfile")
if is_text:
try:
content = await sandbox.files.read(file_path)
# Handle bytes or string
if isinstance(content, bytes):
content = content.decode("utf-8", errors="replace")
# Extract filename from path
file_name = file_path.split("/")[-1]
# Calculate relative path by stripping working directory
relative_path = file_path
if file_path.startswith(working_directory):
relative_path = file_path[len(working_directory) :]
# Remove leading slash if present
if relative_path.startswith("/"):
relative_path = relative_path[1:]
files.append(
ClaudeCodeBlock.FileOutput(
path=file_path,
relative_path=relative_path,
name=file_name,
content=content,
)
)
except Exception:
# Skip files that can't be read
pass
except Exception:
# If file extraction fails, return empty results
pass
return files
def _escape_prompt(self, prompt: str) -> str:
"""Escape the prompt for safe shell execution."""
# Use single quotes and escape any single quotes in the prompt
@@ -617,6 +490,7 @@ class ClaudeCodeBlock(Block):
*,
e2b_credentials: APIKeyCredentials,
anthropic_credentials: APIKeyCredentials,
execution_context: "ExecutionContext",
**kwargs,
) -> BlockOutput:
try:
@@ -637,6 +511,7 @@ class ClaudeCodeBlock(Block):
existing_sandbox_id=input_data.sandbox_id,
conversation_history=input_data.conversation_history,
dispose_sandbox=input_data.dispose_sandbox,
execution_context=execution_context,
)
yield "response", response

View File

@@ -1,12 +1,12 @@
from enum import Enum
from typing import Any, Literal, Optional
from typing import TYPE_CHECKING, Any, Literal, Optional
from e2b_code_interpreter import AsyncSandbox
from e2b_code_interpreter import Result as E2BExecutionResult
from e2b_code_interpreter.charts import Chart as E2BExecutionResultChart
from pydantic import BaseModel, Field, JsonValue, SecretStr
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,
@@ -20,6 +20,13 @@ from backend.data.model import (
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.sandbox_files import (
SandboxFileOutput,
extract_and_store_sandbox_files,
)
if TYPE_CHECKING:
from backend.executor.utils import ExecutionContext
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
@@ -85,6 +92,9 @@ class CodeExecutionResult(MainCodeExecutionResult):
class BaseE2BExecutorMixin:
"""Shared implementation methods for E2B executor blocks."""
# Default working directory in E2B sandboxes
WORKING_DIR = "/home/user"
async def execute_code(
self,
api_key: str,
@@ -95,14 +105,21 @@ class BaseE2BExecutorMixin:
timeout: Optional[int] = None,
sandbox_id: Optional[str] = None,
dispose_sandbox: bool = False,
execution_context: Optional["ExecutionContext"] = None,
extract_files: bool = False,
):
"""
Unified code execution method that handles all three use cases:
1. Create new sandbox and execute (ExecuteCodeBlock)
2. Create new sandbox, execute, and return sandbox_id (InstantiateCodeSandboxBlock)
3. Connect to existing sandbox and execute (ExecuteCodeStepBlock)
Args:
extract_files: If True and execution_context provided, extract files
created/modified during execution and store to workspace.
""" # noqa
sandbox = None
files: list[SandboxFileOutput] = []
try:
if sandbox_id:
# Connect to existing sandbox (ExecuteCodeStepBlock case)
@@ -118,6 +135,12 @@ class BaseE2BExecutorMixin:
for cmd in setup_commands:
await sandbox.commands.run(cmd)
# Capture timestamp before execution to scope file extraction
start_timestamp = None
if extract_files:
ts_result = await sandbox.commands.run("date -u +%Y-%m-%dT%H:%M:%S")
start_timestamp = ts_result.stdout.strip() if ts_result.stdout else None
# Execute the code
execution = await sandbox.run_code(
code,
@@ -133,7 +156,24 @@ class BaseE2BExecutorMixin:
stdout_logs = "".join(execution.logs.stdout)
stderr_logs = "".join(execution.logs.stderr)
return results, text_output, stdout_logs, stderr_logs, sandbox.sandbox_id
# Extract files created/modified during this execution
if extract_files and execution_context:
files = await extract_and_store_sandbox_files(
sandbox=sandbox,
working_directory=self.WORKING_DIR,
execution_context=execution_context,
since_timestamp=start_timestamp,
text_only=False, # Include binary files too
)
return (
results,
text_output,
stdout_logs,
stderr_logs,
sandbox.sandbox_id,
files,
)
finally:
# Dispose of sandbox if requested to reduce usage costs
if dispose_sandbox and sandbox:
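The hunk above captures the timestamp inside the sandbox rather than on the host, so host/sandbox clock skew cannot mis-scope the `-newermt` filter. A sketch of that handshake in isolation (argument shapes copied from the calls in this diff; treat it as illustrative, not as the util's documented API):

async def run_and_extract(sandbox, code: str, execution_context):
    # 1. Capture a UTC timestamp *inside* the sandbox before running the code.
    ts_result = await sandbox.commands.run("date -u +%Y-%m-%dT%H:%M:%S")
    start_ts = ts_result.stdout.strip() if ts_result.stdout else None
    # 2. Execute the user code.
    execution = await sandbox.run_code(code)
    # 3. Extract only files whose mtime postdates the captured timestamp.
    files = await extract_and_store_sandbox_files(
        sandbox=sandbox,
        working_directory="/home/user",  # WORKING_DIR default from the mixin
        execution_context=execution_context,
        since_timestamp=start_ts,
        text_only=False,
    )
    return execution, files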
@@ -238,6 +278,12 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
description="Standard output logs from execution"
)
stderr_logs: str = SchemaField(description="Standard error logs from execution")
files: list[SandboxFileOutput] = SchemaField(
description=(
"Files created or modified during execution. "
"Each file has path, name, content, and workspace_ref (if stored)."
),
)
def __init__(self):
super().__init__(
@@ -259,23 +305,30 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
("results", []),
("response", "Hello World"),
("stdout_logs", "Hello World\n"),
("files", []),
],
test_mock={
"execute_code": lambda api_key, code, language, template_id, setup_commands, timeout, dispose_sandbox: ( # noqa
"execute_code": lambda api_key, code, language, template_id, setup_commands, timeout, dispose_sandbox, execution_context, extract_files: ( # noqa
[], # results
"Hello World", # text_output
"Hello World\n", # stdout_logs
"", # stderr_logs
"sandbox_id", # sandbox_id
[], # files
),
},
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
execution_context: "ExecutionContext",
**kwargs,
) -> BlockOutput:
try:
results, text_output, stdout, stderr, _ = await self.execute_code(
results, text_output, stdout, stderr, _, files = await self.execute_code(
api_key=credentials.api_key.get_secret_value(),
code=input_data.code,
language=input_data.language,
@@ -283,6 +336,8 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
setup_commands=input_data.setup_commands,
timeout=input_data.timeout,
dispose_sandbox=input_data.dispose_sandbox,
execution_context=execution_context,
extract_files=True,
)
# Determine result object shape & filter out empty formats
@@ -296,6 +351,8 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
yield "stdout_logs", stdout
if stderr:
yield "stderr_logs", stderr
# Always yield files (empty list if none)
yield "files", [f.model_dump() for f in files]
except Exception as e:
yield "error", str(e)
@@ -393,6 +450,7 @@ class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin):
"Hello World\n", # stdout_logs
"", # stderr_logs
"sandbox_id", # sandbox_id
[], # files
),
},
)
@@ -401,7 +459,7 @@ class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin):
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
_, text_output, stdout, stderr, sandbox_id = await self.execute_code(
_, text_output, stdout, stderr, sandbox_id, _ = await self.execute_code(
api_key=credentials.api_key.get_secret_value(),
code=input_data.setup_code,
language=input_data.language,
@@ -500,6 +558,7 @@ class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin):
"Hello World\n", # stdout_logs
"", # stderr_logs
sandbox_id, # sandbox_id
[], # files
),
},
)
@@ -508,7 +567,7 @@ class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin):
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
results, text_output, stdout, stderr, _ = await self.execute_code(
results, text_output, stdout, stderr, _, _ = await self.execute_code(
api_key=credentials.api_key.get_secret_value(),
code=input_data.step_code,
language=input_data.language,

View File

@@ -1,6 +1,6 @@
import re
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,

View File

@@ -6,7 +6,7 @@ from openai import AsyncOpenAI
from openai.types.responses import Response as OpenAIResponse
from pydantic import SecretStr
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,

View File

@@ -1,6 +1,6 @@
from pydantic import BaseModel
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockManualWebhookConfig,

View File

@@ -1,4 +1,4 @@
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,

View File

@@ -1,6 +1,6 @@
from typing import Any, List
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,
@@ -682,17 +682,219 @@ class ListIsEmptyBlock(Block):
yield "is_empty", len(input_data.list) == 0
# =============================================================================
# List Concatenation Helpers
# =============================================================================
def _validate_list_input(item: Any, index: int) -> str | None:
"""Validate that an item is a list. Returns error message or None."""
if item is None:
return None # None is acceptable, will be skipped
if not isinstance(item, list):
return (
f"Invalid input at index {index}: expected a list, "
f"got {type(item).__name__}. "
f"All items in 'lists' must be lists (e.g., [[1, 2], [3, 4]])."
)
return None
def _validate_all_lists(lists: List[Any]) -> str | None:
    """Validate that all items in a sequence are lists. Returns the first error or None."""
    for idx, item in enumerate(lists):
        # _validate_list_input already returns None for None items, so no extra check needed
        error = _validate_list_input(item, idx)
        if error is not None:
            return error
    return None
def _concatenate_lists_simple(lists: List[List[Any]]) -> List[Any]:
"""Concatenate a sequence of lists into a single list, skipping None values."""
result: List[Any] = []
for lst in lists:
if lst is None:
continue
result.extend(lst)
return result
def _flatten_nested_list(nested: List[Any], max_depth: int = -1) -> List[Any]:
"""
Recursively flatten a nested list structure.
Args:
nested: The list to flatten.
max_depth: Maximum recursion depth. -1 means unlimited.
Returns:
A flat list with all nested elements extracted.
"""
result: List[Any] = []
_flatten_recursive(nested, result, current_depth=0, max_depth=max_depth)
return result
_MAX_FLATTEN_DEPTH = 1000
def _flatten_recursive(
items: List[Any],
result: List[Any],
current_depth: int,
max_depth: int,
) -> None:
"""Internal recursive helper for flattening nested lists."""
if current_depth > _MAX_FLATTEN_DEPTH:
raise RecursionError(
f"Flattening exceeded maximum depth of {_MAX_FLATTEN_DEPTH} levels. "
"Input may be too deeply nested."
)
for item in items:
if isinstance(item, list) and (max_depth == -1 or current_depth < max_depth):
_flatten_recursive(item, result, current_depth + 1, max_depth)
else:
result.append(item)
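A quick worked example of the max_depth semantics, matching the block tests further down (illustrative assertions):

nested = [1, [2, [3, [4]]], 5]

# max_depth counts how many levels of nesting get unwrapped:
assert _flatten_nested_list(nested) == [1, 2, 3, 4, 5]                     # -1: flatten fully
assert _flatten_nested_list(nested, max_depth=1) == [1, 2, [3, [4]], 5]    # unwrap one level
assert _flatten_nested_list(nested, max_depth=0) == [1, [2, [3, [4]]], 5]  # shallow copy only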
def _deduplicate_list(items: List[Any]) -> List[Any]:
"""
Remove duplicate elements from a list, preserving order of first occurrences.
Args:
items: The list to deduplicate.
Returns:
A list with duplicates removed, maintaining original order.
"""
seen: set = set()
result: List[Any] = []
for item in items:
item_id = _make_hashable(item)
if item_id not in seen:
seen.add(item_id)
result.append(item)
return result
def _make_hashable(item: Any):
"""
Create a hashable representation of any item for deduplication.
Converts unhashable containers (dicts, lists, sets) into deterministic, hashable tuple structures.
"""
if isinstance(item, dict):
return tuple(
sorted(
((_make_hashable(k), _make_hashable(v)) for k, v in item.items()),
key=lambda x: (str(type(x[0])), str(x[0])),
)
)
if isinstance(item, (list, tuple)):
return tuple(_make_hashable(i) for i in item)
if isinstance(item, set):
return frozenset(_make_hashable(i) for i in item)
return item
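_deduplicate_list leans on _make_hashable so that dicts and lists, which are unhashable, still compare by value; an illustrative session:

# Dicts with equal key/value pairs reduce to the same sorted tuple structure,
# so key insertion order does not matter for deduplication.
items = [{"a": 1, "b": 2}, {"b": 2, "a": 1}, [1, 2], [1, 2], "x"]
assert _deduplicate_list(items) == [{"a": 1, "b": 2}, [1, 2], "x"]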
def _filter_none_values(items: List[Any]) -> List[Any]:
"""Remove None values from a list."""
return [item for item in items if item is not None]
def _compute_nesting_depth(
items: Any, current: int = 0, max_depth: int = _MAX_FLATTEN_DEPTH
) -> int:
"""
Compute the maximum nesting depth of a list structure iteratively, avoiding RecursionError.
Uses a stack-based traversal so deeply nested structures don't hit Python's
recursion limit (~1000 levels). Returns early with the current depth as soon as it exceeds max_depth.
"""
if not isinstance(items, list):
return current
# Stack contains tuples of (item, depth)
stack = [(items, current)]
max_observed_depth = current
while stack:
item, depth = stack.pop()
if depth > max_depth:
return depth
if not isinstance(item, list):
max_observed_depth = max(max_observed_depth, depth)
continue
if len(item) == 0:
max_observed_depth = max(max_observed_depth, depth + 1)
continue
# Add all children to stack with incremented depth
for child in item:
stack.append((child, depth + 1))
return max_observed_depth
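Because the traversal is stack-based, depth computation survives inputs that would overflow a recursive version; a small stress sketch under that assumption:

# Build a list nested 5000 levels deep: [[[ ... [] ... ]]]
deep: list = []
for _ in range(5000):
    deep = [deep]

# A recursive depth counter would raise RecursionError here; the iterative
# helper instead returns early once the depth passes _MAX_FLATTEN_DEPTH.
assert _compute_nesting_depth(deep) > _MAX_FLATTEN_DEPTH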
def _interleave_lists(lists: List[List[Any]]) -> List[Any]:
"""
Interleave elements from multiple lists in round-robin fashion.
Example: [[1,2,3], [a,b], [x,y,z]] -> [1, a, x, 2, b, y, 3, z]
"""
if not lists:
return []
filtered = [lst for lst in lists if lst is not None]
if not filtered:
return []
result: List[Any] = []
max_len = max(len(lst) for lst in filtered)
for i in range(max_len):
for lst in filtered:
if i < len(lst):
result.append(lst[i])
return result
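_interleave_lists is a round-robin merge; for reference, an equivalent itertools formulation (equivalent only for inputs without None sub-lists, which the helper above additionally drops):

from itertools import chain, zip_longest

_SENTINEL = object()

def roundrobin(lists):
    # zip_longest pads shorter lists with a unique sentinel, chain flattens the
    # column-major traversal, and the comprehension drops the padding again.
    return [
        x
        for x in chain(*zip_longest(*lists, fillvalue=_SENTINEL))
        if x is not _SENTINEL
    ]

assert roundrobin([[1, 2, 3], ["a", "b"], ["x", "y", "z"]]) == [1, "a", "x", 2, "b", "y", 3, "z"]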
# =============================================================================
# List Concatenation Blocks
# =============================================================================
class ConcatenateListsBlock(Block):
"""
Concatenates two or more lists into a single list.
This block accepts a list of lists and combines all their elements
in order into one flat output list. It supports options for
deduplication and None-filtering to provide flexible list merging
capabilities for workflow pipelines.
"""
class Input(BlockSchemaInput):
lists: List[List[Any]] = SchemaField(
description="A list of lists to concatenate together. All lists will be combined in order into a single list.",
placeholder="e.g., [[1, 2], [3, 4], [5, 6]]",
)
deduplicate: bool = SchemaField(
description="If True, remove duplicate elements from the concatenated result while preserving order.",
default=False,
advanced=True,
)
remove_none: bool = SchemaField(
description="If True, remove None values from the concatenated result.",
default=False,
advanced=True,
)
class Output(BlockSchemaOutput):
concatenated_list: List[Any] = SchemaField(
description="The concatenated list containing all elements from all input lists in order."
)
length: int = SchemaField(
description="The total number of elements in the concatenated list."
)
error: str = SchemaField(
description="Error message if concatenation failed due to invalid input types."
)
@@ -700,7 +902,7 @@ class ConcatenateListsBlock(Block):
def __init__(self):
super().__init__(
id="3cf9298b-5817-4141-9d80-7c2cc5199c8e",
description="Concatenates multiple lists into a single list. All elements from all input lists are combined in order.",
description="Concatenates multiple lists into a single list. All elements from all input lists are combined in order. Supports optional deduplication and None removal.",
categories={BlockCategory.BASIC},
input_schema=ConcatenateListsBlock.Input,
output_schema=ConcatenateListsBlock.Output,
@@ -709,29 +911,497 @@ class ConcatenateListsBlock(Block):
{"lists": [["a", "b"], ["c"], ["d", "e", "f"]]},
{"lists": [[1, 2], []]},
{"lists": []},
{"lists": [[1, 2, 2, 3], [3, 4]], "deduplicate": True},
{"lists": [[1, None, 2], [None, 3]], "remove_none": True},
],
test_output=[
("concatenated_list", [1, 2, 3, 4, 5, 6]),
("length", 6),
("concatenated_list", ["a", "b", "c", "d", "e", "f"]),
("length", 6),
("concatenated_list", [1, 2]),
("length", 2),
("concatenated_list", []),
("length", 0),
("concatenated_list", [1, 2, 3, 4]),
("length", 4),
("concatenated_list", [1, 2, 3]),
("length", 3),
],
)
def _validate_inputs(self, lists: List[Any]) -> str | None:
return _validate_all_lists(lists)
def _perform_concatenation(self, lists: List[List[Any]]) -> List[Any]:
return _concatenate_lists_simple(lists)
def _apply_deduplication(self, items: List[Any]) -> List[Any]:
return _deduplicate_list(items)
def _apply_none_removal(self, items: List[Any]) -> List[Any]:
return _filter_none_values(items)
def _post_process(
self, items: List[Any], deduplicate: bool, remove_none: bool
) -> List[Any]:
"""Apply all post-processing steps to the concatenated result."""
result = items
if remove_none:
result = self._apply_none_removal(result)
if deduplicate:
result = self._apply_deduplication(result)
return result
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
concatenated = []
for idx, lst in enumerate(input_data.lists):
if lst is None:
# Skip None values to avoid errors
continue
if not isinstance(lst, list):
# Type validation: each item must be a list
# Strings are iterable and would cause extend() to iterate character-by-character
# Non-iterable types would raise TypeError
yield "error", (
f"Invalid input at index {idx}: expected a list, got {type(lst).__name__}. "
f"All items in 'lists' must be lists (e.g., [[1, 2], [3, 4]])."
)
return
concatenated.extend(lst)
yield "concatenated_list", concatenated
# Validate all inputs are lists
validation_error = self._validate_inputs(input_data.lists)
if validation_error is not None:
yield "error", validation_error
return
# Perform concatenation
concatenated = self._perform_concatenation(input_data.lists)
# Apply post-processing
result = self._post_process(
concatenated, input_data.deduplicate, input_data.remove_none
)
yield "concatenated_list", result
yield "length", len(result)
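Block.run is an async generator yielding (name, value) pairs, so the new post-processing can be exercised directly; a minimal driver, assuming the block and its pydantic Input can be instantiated outside the executor:

import asyncio

async def demo():
    block = ConcatenateListsBlock()
    outputs = {}
    async for name, value in block.run(
        ConcatenateListsBlock.Input(
            lists=[[1, 2, 2, None], [3, 2]],
            deduplicate=True,
            remove_none=True,
        )
    ):
        outputs[name] = value
    # None removal runs first, then deduplication (first occurrences win).
    assert outputs == {"concatenated_list": [1, 2, 3], "length": 3}

asyncio.run(demo())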
class FlattenListBlock(Block):
"""
Flattens a nested list structure into a single flat list.
This block takes a list that may contain nested lists at any depth
and produces a single-level list with all leaf elements. Useful
for normalizing data structures from multiple sources that may
have varying levels of nesting.
"""
class Input(BlockSchemaInput):
nested_list: List[Any] = SchemaField(
description="A potentially nested list to flatten into a single-level list.",
placeholder="e.g., [[1, [2, 3]], [4, [5, [6]]]]",
)
max_depth: int = SchemaField(
description="Maximum depth to flatten. -1 means flatten completely. 1 means flatten only one level.",
default=-1,
advanced=True,
)
class Output(BlockSchemaOutput):
flattened_list: List[Any] = SchemaField(
description="The flattened list with all nested elements extracted."
)
length: int = SchemaField(
description="The number of elements in the flattened list."
)
original_depth: int = SchemaField(
description="The maximum nesting depth of the original input list."
)
error: str = SchemaField(description="Error message if flattening failed.")
def __init__(self):
super().__init__(
id="cc45bb0f-d035-4756-96a7-fe3e36254b4d",
description="Flattens a nested list structure into a single flat list. Supports configurable maximum flattening depth.",
categories={BlockCategory.BASIC},
input_schema=FlattenListBlock.Input,
output_schema=FlattenListBlock.Output,
test_input=[
{"nested_list": [[1, 2], [3, [4, 5]]]},
{"nested_list": [1, [2, [3, [4]]]]},
{"nested_list": [1, [2, [3, [4]]], 5], "max_depth": 1},
{"nested_list": []},
{"nested_list": [1, 2, 3]},
],
test_output=[
("flattened_list", [1, 2, 3, 4, 5]),
("length", 5),
("original_depth", 3),
("flattened_list", [1, 2, 3, 4]),
("length", 4),
("original_depth", 4),
("flattened_list", [1, 2, [3, [4]], 5]),
("length", 4),
("original_depth", 4),
("flattened_list", []),
("length", 0),
("original_depth", 1),
("flattened_list", [1, 2, 3]),
("length", 3),
("original_depth", 1),
],
)
def _compute_depth(self, items: List[Any]) -> int:
"""Compute the nesting depth of the input list."""
return _compute_nesting_depth(items)
def _flatten(self, items: List[Any], max_depth: int) -> List[Any]:
"""Flatten the list to the specified depth."""
return _flatten_nested_list(items, max_depth=max_depth)
def _validate_max_depth(self, max_depth: int) -> str | None:
"""Validate the max_depth parameter."""
if max_depth < -1:
return f"max_depth must be -1 (unlimited) or a non-negative integer, got {max_depth}"
return None
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# Validate max_depth
depth_error = self._validate_max_depth(input_data.max_depth)
if depth_error is not None:
yield "error", depth_error
return
original_depth = self._compute_depth(input_data.nested_list)
flattened = self._flatten(input_data.nested_list, input_data.max_depth)
yield "flattened_list", flattened
yield "length", len(flattened)
yield "original_depth", original_depth
class InterleaveListsBlock(Block):
"""
Interleaves elements from multiple lists in round-robin fashion.
Given multiple input lists, this block takes one element from each
list in turn, producing an output where elements alternate between
sources. Lists of different lengths are handled gracefully - shorter
lists simply stop contributing once exhausted.
"""
class Input(BlockSchemaInput):
lists: List[List[Any]] = SchemaField(
description="A list of lists to interleave. Elements will be taken in round-robin order.",
placeholder="e.g., [[1, 2, 3], ['a', 'b', 'c']]",
)
class Output(BlockSchemaOutput):
interleaved_list: List[Any] = SchemaField(
description="The interleaved list with elements alternating from each input list."
)
length: int = SchemaField(
description="The total number of elements in the interleaved list."
)
error: str = SchemaField(description="Error message if interleaving failed.")
def __init__(self):
super().__init__(
id="9f616084-1d9f-4f8e-bc00-5b9d2a75cd75",
description="Interleaves elements from multiple lists in round-robin fashion, alternating between sources.",
categories={BlockCategory.BASIC},
input_schema=InterleaveListsBlock.Input,
output_schema=InterleaveListsBlock.Output,
test_input=[
{"lists": [[1, 2, 3], ["a", "b", "c"]]},
{"lists": [[1, 2, 3], ["a", "b"], ["x", "y", "z"]]},
{"lists": [[1], [2], [3]]},
{"lists": []},
],
test_output=[
("interleaved_list", [1, "a", 2, "b", 3, "c"]),
("length", 6),
("interleaved_list", [1, "a", "x", 2, "b", "y", 3, "z"]),
("length", 8),
("interleaved_list", [1, 2, 3]),
("length", 3),
("interleaved_list", []),
("length", 0),
],
)
def _validate_inputs(self, lists: List[Any]) -> str | None:
return _validate_all_lists(lists)
def _interleave(self, lists: List[List[Any]]) -> List[Any]:
return _interleave_lists(lists)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
validation_error = self._validate_inputs(input_data.lists)
if validation_error is not None:
yield "error", validation_error
return
result = self._interleave(input_data.lists)
yield "interleaved_list", result
yield "length", len(result)
class ZipListsBlock(Block):
"""
Zips multiple lists together into a list of grouped tuples/lists.
Takes two or more input lists and combines corresponding elements
into sub-lists. For example, zipping [1,2,3] and ['a','b','c']
produces [[1,'a'], [2,'b'], [3,'c']]. Supports both truncating
to shortest list and padding to longest list with a fill value.
"""
class Input(BlockSchemaInput):
lists: List[List[Any]] = SchemaField(
description="A list of lists to zip together. Corresponding elements will be grouped.",
placeholder="e.g., [[1, 2, 3], ['a', 'b', 'c']]",
)
pad_to_longest: bool = SchemaField(
description="If True, pad shorter lists with fill_value to match the longest list. If False, truncate to shortest.",
default=False,
advanced=True,
)
fill_value: Any = SchemaField(
description="Value to use for padding when pad_to_longest is True.",
default=None,
advanced=True,
)
class Output(BlockSchemaOutput):
zipped_list: List[List[Any]] = SchemaField(
description="The zipped list of grouped elements."
)
length: int = SchemaField(
description="The number of groups in the zipped result."
)
error: str = SchemaField(description="Error message if zipping failed.")
def __init__(self):
super().__init__(
id="0d0e684f-5cb9-4c4b-b8d1-47a0860e0c07",
description="Zips multiple lists together into a list of grouped elements. Supports padding to longest or truncating to shortest.",
categories={BlockCategory.BASIC},
input_schema=ZipListsBlock.Input,
output_schema=ZipListsBlock.Output,
test_input=[
{"lists": [[1, 2, 3], ["a", "b", "c"]]},
{"lists": [[1, 2, 3], ["a", "b"]]},
{
"lists": [[1, 2], ["a", "b", "c"]],
"pad_to_longest": True,
"fill_value": 0,
},
{"lists": []},
],
test_output=[
("zipped_list", [[1, "a"], [2, "b"], [3, "c"]]),
("length", 3),
("zipped_list", [[1, "a"], [2, "b"]]),
("length", 2),
("zipped_list", [[1, "a"], [2, "b"], [0, "c"]]),
("length", 3),
("zipped_list", []),
("length", 0),
],
)
def _validate_inputs(self, lists: List[Any]) -> str | None:
return _validate_all_lists(lists)
def _zip_truncate(self, lists: List[List[Any]]) -> List[List[Any]]:
"""Zip lists, truncating to shortest."""
filtered = [lst for lst in lists if lst is not None]
if not filtered:
return []
return [list(group) for group in zip(*filtered)]
    def _zip_pad(self, lists: List[List[Any]], fill_value: Any) -> List[List[Any]]:
        """Zip lists, padding shorter ones with fill_value."""
        # Filtering first also covers the empty-input case, so one check suffices
        lists = [lst for lst in lists if lst is not None]
        if not lists:
            return []
max_len = max(len(lst) for lst in lists)
result: List[List[Any]] = []
for i in range(max_len):
group: List[Any] = []
for lst in lists:
if i < len(lst):
group.append(lst[i])
else:
group.append(fill_value)
result.append(group)
return result
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
validation_error = self._validate_inputs(input_data.lists)
if validation_error is not None:
yield "error", validation_error
return
if not input_data.lists:
yield "zipped_list", []
yield "length", 0
return
if input_data.pad_to_longest:
result = self._zip_pad(input_data.lists, input_data.fill_value)
else:
result = self._zip_truncate(input_data.lists)
yield "zipped_list", result
yield "length", len(result)
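The padding logic in _zip_pad mirrors itertools.zip_longest, modulo emitting lists instead of tuples and skipping None inputs; a quick cross-check under that reading:

from itertools import zip_longest

lists = [[1, 2], ["a", "b", "c"]]
padded = [list(group) for group in zip_longest(*lists, fillvalue=0)]
assert padded == [[1, "a"], [2, "b"], [0, "c"]]  # same as the pad_to_longest test above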
class ListDifferenceBlock(Block):
"""
Computes the difference between two lists (elements in the first
list that are not in the second list).
This is useful for finding items that exist in one dataset but
not in another, such as finding new items, missing items, or
items that need to be processed.
"""
class Input(BlockSchemaInput):
list_a: List[Any] = SchemaField(
description="The primary list to check elements from.",
placeholder="e.g., [1, 2, 3, 4, 5]",
)
list_b: List[Any] = SchemaField(
description="The list to subtract. Elements found here will be removed from list_a.",
placeholder="e.g., [3, 4, 5, 6]",
)
symmetric: bool = SchemaField(
description="If True, compute symmetric difference (elements in either list but not both).",
default=False,
advanced=True,
)
class Output(BlockSchemaOutput):
difference: List[Any] = SchemaField(
description="Elements from list_a not found in list_b (or symmetric difference if enabled)."
)
length: int = SchemaField(
description="The number of elements in the difference result."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="05309873-9d61-447e-96b5-b804e2511829",
description="Computes the difference between two lists. Returns elements in the first list not found in the second, or symmetric difference.",
categories={BlockCategory.BASIC},
input_schema=ListDifferenceBlock.Input,
output_schema=ListDifferenceBlock.Output,
test_input=[
{"list_a": [1, 2, 3, 4, 5], "list_b": [3, 4, 5, 6, 7]},
{
"list_a": [1, 2, 3, 4, 5],
"list_b": [3, 4, 5, 6, 7],
"symmetric": True,
},
{"list_a": ["a", "b", "c"], "list_b": ["b"]},
{"list_a": [], "list_b": [1, 2, 3]},
],
test_output=[
("difference", [1, 2]),
("length", 2),
("difference", [1, 2, 6, 7]),
("length", 4),
("difference", ["a", "c"]),
("length", 2),
("difference", []),
("length", 0),
],
)
def _compute_difference(self, list_a: List[Any], list_b: List[Any]) -> List[Any]:
"""Compute elements in list_a not in list_b."""
b_hashes = {_make_hashable(item) for item in list_b}
return [item for item in list_a if _make_hashable(item) not in b_hashes]
def _compute_symmetric_difference(
self, list_a: List[Any], list_b: List[Any]
) -> List[Any]:
"""Compute elements in either list but not both."""
a_hashes = {_make_hashable(item) for item in list_a}
b_hashes = {_make_hashable(item) for item in list_b}
only_in_a = [item for item in list_a if _make_hashable(item) not in b_hashes]
only_in_b = [item for item in list_b if _make_hashable(item) not in a_hashes]
return only_in_a + only_in_b
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
if input_data.symmetric:
result = self._compute_symmetric_difference(
input_data.list_a, input_data.list_b
)
else:
result = self._compute_difference(input_data.list_a, input_data.list_b)
yield "difference", result
yield "length", len(result)
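Unlike set-based symmetric difference, the helpers above preserve input order and accept unhashable elements; a comparison sketch:

a, b = [1, 2, 3, 4, 5], [3, 4, 5, 6, 7]

# Plain sets lose ordering guarantees and reject dicts/lists entirely:
assert set(a) ^ set(b) == {1, 2, 6, 7}

# The block keeps list_a's leftovers first, then list_b's, and copes with
# unhashable members by comparing their _make_hashable representations:
dicts_a, dicts_b = [{"id": 1}, {"id": 2}], [{"id": 2}, {"id": 3}]
b_hashes = {_make_hashable(x) for x in dicts_b}
a_hashes = {_make_hashable(x) for x in dicts_a}
only_a = [x for x in dicts_a if _make_hashable(x) not in b_hashes]
only_b = [x for x in dicts_b if _make_hashable(x) not in a_hashes]
assert only_a + only_b == [{"id": 1}, {"id": 3}]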
class ListIntersectionBlock(Block):
"""
Computes the intersection of two lists (elements present in both lists).
This is useful for finding common items between two datasets,
such as shared tags, mutual connections, or overlapping categories.
"""
class Input(BlockSchemaInput):
list_a: List[Any] = SchemaField(
description="The first list to intersect.",
placeholder="e.g., [1, 2, 3, 4, 5]",
)
list_b: List[Any] = SchemaField(
description="The second list to intersect.",
placeholder="e.g., [3, 4, 5, 6, 7]",
)
class Output(BlockSchemaOutput):
intersection: List[Any] = SchemaField(
description="Elements present in both list_a and list_b."
)
length: int = SchemaField(
description="The number of elements in the intersection."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="b6eb08b6-dbe3-411b-b9b4-2508cb311a1f",
description="Computes the intersection of two lists, returning only elements present in both.",
categories={BlockCategory.BASIC},
input_schema=ListIntersectionBlock.Input,
output_schema=ListIntersectionBlock.Output,
test_input=[
{"list_a": [1, 2, 3, 4, 5], "list_b": [3, 4, 5, 6, 7]},
{"list_a": ["a", "b", "c"], "list_b": ["c", "d", "e"]},
{"list_a": [1, 2], "list_b": [3, 4]},
{"list_a": [], "list_b": [1, 2, 3]},
],
test_output=[
("intersection", [3, 4, 5]),
("length", 3),
("intersection", ["c"]),
("length", 1),
("intersection", []),
("length", 0),
("intersection", []),
("length", 0),
],
)
def _compute_intersection(self, list_a: List[Any], list_b: List[Any]) -> List[Any]:
"""Compute elements present in both lists, preserving order from list_a."""
b_hashes = {_make_hashable(item) for item in list_b}
seen: set = set()
result: List[Any] = []
for item in list_a:
h = _make_hashable(item)
if h in b_hashes and h not in seen:
result.append(item)
seen.add(h)
return result
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
result = self._compute_intersection(input_data.list_a, input_data.list_b)
yield "intersection", result
yield "length", len(result)
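_compute_intersection also deduplicates while following list_a's order, which a raw set intersection would not; illustrative (calling the private helper directly is just for demonstration):

block = ListIntersectionBlock()
# Duplicates in list_a collapse to their first occurrence; order follows list_a.
assert block._compute_intersection([3, 1, 2, 3, 2], [2, 3]) == [3, 2]
# set([3, 1, 2, 3, 2]) & set([2, 3]) would give an unordered {2, 3} and fail on dicts.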

View File

@@ -1,6 +1,6 @@
import codecs
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,

View File

@@ -8,13 +8,14 @@ from typing import Any, Literal, cast
import discord
from pydantic import SecretStr
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import ExecutionContext
from backend.data.model import APIKeyCredentials, SchemaField
from backend.util.file import store_media_file
from backend.util.request import Requests
@@ -666,8 +667,7 @@ class SendDiscordFileBlock(Block):
file: MediaFileType,
filename: str,
message_content: str,
graph_exec_id: str,
user_id: str,
execution_context: ExecutionContext,
) -> dict:
intents = discord.Intents.default()
intents.guilds = True
@@ -731,10 +731,9 @@ class SendDiscordFileBlock(Block):
# Local file path - read from stored media file
# This would be a path from a previous block's output
stored_file = await store_media_file(
graph_exec_id=graph_exec_id,
file=file,
user_id=user_id,
return_content=True, # Get as data URI
execution_context=execution_context,
return_format="for_external_api", # Get content to send to Discord
)
# Now process as data URI
header, encoded = stored_file.split(",", 1)
@@ -781,8 +780,7 @@ class SendDiscordFileBlock(Block):
input_data: Input,
*,
credentials: APIKeyCredentials,
graph_exec_id: str,
user_id: str,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
try:
@@ -793,8 +791,7 @@ class SendDiscordFileBlock(Block):
file=input_data.file,
filename=input_data.filename,
message_content=input_data.message_content,
graph_exec_id=graph_exec_id,
user_id=user_id,
execution_context=execution_context,
)
yield "status", result.get("status", "Unknown error")

View File

@@ -2,7 +2,7 @@
Discord OAuth-based blocks.
"""
from backend.data.block import (
from backend.blocks._base import (
Block,
BlockCategory,
BlockOutput,

View File

@@ -0,0 +1,28 @@
"""ElevenLabs integration blocks - test credentials and shared utilities."""
from typing import Literal
from pydantic import SecretStr
from backend.data.model import APIKeyCredentials, CredentialsMetaInput
from backend.integrations.providers import ProviderName
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="elevenlabs",
api_key=SecretStr("mock-elevenlabs-api-key"),
title="Mock ElevenLabs API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
ElevenLabsCredentials = APIKeyCredentials
ElevenLabsCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.ELEVENLABS], Literal["api_key"]
]

Some files were not shown because too many files have changed in this diff.