Compare commits

59 Commits

Author SHA1 Message Date
Zamil Majdy
4364a771d4 fix(mcp): Validate required tool args and fix title fallback for existing blocks
Backend: Add required-field validation in MCPToolBlock.run() before
calling the MCP server. The executor-level validation is bypassed for
MCP blocks because get_input_defaults() flattens tool_arguments,
stripping tool_input_schema from the validation context.

Frontend: NodeHeader now derives the MCP server label from the server
URL hostname when server_name is missing (pruned by pruneEmptyValues).
This fixes the title for existing blocks that don't have customized_name
in metadata.
2026-02-10 15:42:51 +04:00
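
A minimal sketch of the kind of required-field check described above: validate the supplied tool arguments against the tool's JSON schema before calling the MCP server. The helper name and plain-dict shapes are assumptions; the real `MCPToolBlock.run()` works with its own Input model.

```python
def validate_required_args(tool_input_schema: dict, tool_arguments: dict) -> None:
    """Raise if a field required by the tool's JSON schema is missing.

    Hypothetical helper mirroring the validation the commit describes;
    the actual field names inside MCPToolBlock may differ.
    """
    required = tool_input_schema.get("required", [])
    missing = [name for name in required if name not in tool_arguments]
    if missing:
        raise ValueError(f"Missing required tool argument(s): {', '.join(missing)}")


schema = {
    "type": "object",
    "required": ["query"],
    "properties": {"query": {"type": "string"}},
}
validate_required_args(schema, {"query": "open issues"})  # passes
# validate_required_args(schema, {})  # would raise ValueError
```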
Zamil Majdy
4d4ed562f0 fix(frontend/mcp): Ensure MCP block title persists across save/refresh
When the MCP server returns a null server_name, fall back to the URL
hostname so customized_name is always set in metadata. This prevents
the title from degrading to "MCP:" after save and reload.
2026-02-10 15:31:29 +04:00
Zamil Majdy
8bea7cf875 chore(mcp): Remove dev artifacts and simplify credential lookup
- Remove MCP_BLOCK_IMPLEMENTATION.md development doc
- Remove console.log debug statements from OAuth callback
- Simplify credential lookup to single call (get_creds_by_provider
  already handles Python 3.13 StrEnum bug via _provider_matches)
- Remove unused Credentials import from routes.py
2026-02-10 15:18:55 +04:00
Zamil Majdy
c1c269c4a9 fix(frontend/credentials): Auto-select first credential and persist MCP block title
- CredentialsSelect: default to first available credential instead of
  "None" when credentials exist, reorder options to show credentials
  before the "None" option, and notify parent on auto-select
- Revert CredentialsGroupedView user auto-select effect (now handled
  at the CredentialsSelect level)
- Block.tsx: persist MCP block title as customized_name in metadata
  so it survives save/load
2026-02-10 14:56:19 +04:00
Zamil Majdy
65987ff15e fix(frontend/credentials): Auto-select user credentials in run dialog
Add auto-selection for user credentials (like MCP OAuth) in the
CredentialsGroupedView run dialog. When exactly one credential matches
the provider, type, and discriminator values (e.g. MCP server URL),
it is pre-selected instead of defaulting to "None (skip this credential)".
2026-02-10 14:21:23 +04:00
Zamil Majdy
ed50f7f87d fix(mcp): Wire credentials into MCP block form and add auto-lookup fallback
Frontend: Include credentials field in MCP block's dynamic input schema
so users can select OAuth credentials from the node form. Separate
credentials from tool_arguments in FormCreator to store them at the
correct level in hardcodedValues.

Backend: Add _auto_lookup_credential fallback in MCPToolBlock.run() for
legacy nodes that don't have credentials explicitly set. This resolves
the credential by matching mcp_server_url in stored OAuth metadata.
2026-02-10 14:05:09 +04:00
Zamil Majdy
c03fb170e0 fix(backend/credentials): Handle Python 3.13 str(StrEnum) bug in OAuth state verification
verify_state_token and get_creds_by_provider compared provider strings
with ==, which failed when OAuth states were stored with the buggy
"ProviderName.MCP" format from Python 3.13's str(Enum) behavior.

Also fix double-append in store_state_token where the state was written
once via edit_user_integrations and again via a redundant manual block.
2026-02-10 13:32:38 +04:00
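
The format-tolerant comparison that `_provider_matches` performs (per this commit and the later `8bea7cf875` cleanup) might look roughly like the sketch below; the normalization rule is an assumption based on the symptoms described.

```python
def _normalize_provider(provider: str) -> str:
    # Strip the "ProviderName." prefix that the buggy str(Enum) formatting
    # produces, so "ProviderName.MCP" and "mcp" compare equal.
    # Illustrative only; the real helper may normalize differently.
    if provider.startswith("ProviderName."):
        provider = provider.split(".", 1)[1]
    return provider.lower()


def provider_matches(a: str, b: str) -> bool:
    return _normalize_provider(a) == _normalize_provider(b)


assert provider_matches("ProviderName.MCP", "mcp")
assert not provider_matches("mcp", "github")
```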
Zamil Majdy
8a2f98b23c fix(mcp): Fix discover_tools test mock and credential auto-unselect
- Add missing `refresh_if_needed` mock to test_discover_tools_auto_uses_stored_credential
  so it returns the stored credential instead of a MagicMock
- Fix credential auto-unselect clearing MCP credentials on initial render:
  skip the "unselect if not available" check when the saved credentials
  list is empty (empty list means not loaded yet, not invalid)
2026-02-10 12:55:35 +04:00
Zamil Majdy
5e2ae3cec5 Merge branch 'dev' into feat/mcp-blocks 2026-02-10 12:45:34 +04:00
Zamil Majdy
f8771484fe fix(mcp): Fix CI failures and credential validation issues
- Fix pyright errors in graph_test.py by properly typing frozenset[CredentialsType]
- Fix executor validation crash when credentials is empty {} by nullifying
  the field before JSON schema validation
- Exclude MCP Tool block from e2e block discovery test (requires dialog)
- Normalize provider string in CredentialsMetaResponse to handle Python 3.13
  str(Enum) bug for stored credentials
- Fix get_host() to match MCP provider regardless of enum string format
2026-02-10 12:34:00 +04:00
Zamil Majdy
81e4f0a4b0 fix(mcp): Refresh expired OAuth tokens before tool discovery
The discover_tools endpoint was reading raw access tokens from stored
credentials without checking if they had expired. This caused users
to be prompted to re-authenticate every time the token expired (~1h).

Now uses creds_manager.refresh_if_needed() to transparently refresh
expired tokens before using them.
2026-02-10 12:18:43 +04:00
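
The refresh-before-use pattern, sketched under stated assumptions: `refresh_if_needed` exists per the commit, but the credential shape, the `expires_at` field, and the callable-based refresh below are illustrative stand-ins.

```python
import time
from dataclasses import dataclass
from typing import Callable


@dataclass
class OAuthCredentials:
    access_token: str
    expires_at: float  # unix timestamp; assumed shape


def refresh_if_needed(
    creds: OAuthCredentials,
    refresh: Callable[[OAuthCredentials], OAuthCredentials],
    leeway: float = 60.0,
) -> OAuthCredentials:
    """Return credentials fresh for at least `leeway` seconds, refreshing
    transparently when the token is expired or about to expire. In the
    real code this lives behind creds_manager.refresh_if_needed()."""
    if creds.expires_at - time.time() < leeway:
        return refresh(creds)
    return creds
```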
Zamil Majdy
66aada30f0 fix(tests): Revert conftest.py and oauth_test.py fixture scope changes
The pytest_asyncio fixture changes with loop_scope="session" caused
"Event loop is closed" errors in all 31 oauth_test.py tests on CI.
MCP tests have their own conftest override and don't need these changes.
2026-02-10 11:56:28 +04:00
Zamil Majdy
74e04f71f4 fix(tests): Properly mock get_required_fields in credential validation tests
The tests used MagicMock for block.input_schema but didn't mock
get_required_fields(), causing the "required missing creds" test to
silently treat all credentials as optional.
2026-02-10 11:44:12 +04:00
Otto
81f8290f01 debug(backend/db): Add diagnostic logging for vector type errors (#12024)
Adds diagnostic logging when the `type vector does not exist` error
occurs in raw SQL queries.

## Problem

We're seeing intermittent "type vector does not exist" errors on
dev-behave ([Sentry
issue](https://significant-gravitas.sentry.io/issues/7205929979/)). The
pgvector extension should be in the search_path, but occasionally
queries fail to resolve the vector type.

## Solution

When a query fails with this specific error, we now log:
- `SHOW search_path` - what schemas are being searched
- `SELECT current_schema()` - the active schema
- `SELECT current_user, session_user, current_database()` - connection
context

This diagnostic info will help identify why the vector extension isn't
visible in certain cases.

## Changes

- Added `_log_vector_error_diagnostics()` helper function in
`backend/data/db.py`
- Wrapped SQL execution in try/except to catch and diagnose vector type
errors
- Original exception is re-raised after logging (no behavior change)

## Testing

This is observational/diagnostic code. It will be validated by waiting
for the error to occur naturally on dev and checking the logs.

## Rollout

Once we've captured diagnostic logs and identified the root cause, this
logging can be removed or reduced in verbosity.
2026-02-10 07:35:13 +00:00
Zamil Majdy
4db27ca112 fix(mcp): Auto-select credential and gracefully handle stale IDs
- Auto-select credential when exactly one match exists (even for
  optional fields). Only skip auto-select for optional fields with
  multiple choices.
- In executor, catch ValueError from creds_manager.acquire() for
  optional credential fields — fall back to running without credentials
  instead of crashing when stale IDs reference deleted credentials.
2026-02-10 11:25:06 +04:00
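
The optional-credential fallback could look roughly like this. That `creds_manager.acquire()` raises `ValueError` for a stale ID is taken from the commit; the function name and signature below are assumptions.

```python
def acquire_optional_credentials(creds_manager, user_id: str, credentials_id: str):
    """Acquire credentials for an optional field without crashing on stale IDs.

    Sketch only; the real acquire() signature and return type may differ.
    """
    try:
        return creds_manager.acquire(user_id, credentials_id)
    except ValueError:
        # The stored ID references a deleted credential: for an optional
        # field, fall back to running without credentials.
        return None
```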
Zamil Majdy
27ba4e8e93 fix(frontend): Remove credential field reordering on selection
The sortByUnsetFirst comparator in splitCredentialFieldsBySystem
caused credential inputs to jump positions every time a credential
was selected (set fields moved to bottom, unset moved to top).
Remove the sort to keep stable ordering.
2026-02-10 11:18:42 +04:00
Zamil Majdy
1a1985186a fix(mcp): Normalize credential input_data for JSON schema validation
The model_validator on CredentialsMetaInput normalizes legacy
"ProviderName.MCP" format for Pydantic validation, but validate_data()
uses raw JSON schema which bypasses Pydantic. Write normalized values
back to input_data after Pydantic processes them so both validation
paths see correct data.
2026-02-10 11:15:10 +04:00
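
A minimal Pydantic sketch of the two validation paths described above, using a simplified stand-in model (the real `CredentialsMetaInput` has more fields): the `model_validator` normalizes the legacy format, and the normalized value is written back so raw JSON-schema validation sees the same data.

```python
from pydantic import BaseModel, model_validator


class CredentialsMeta(BaseModel):
    # Simplified stand-in for CredentialsMetaInput.
    provider: str

    @model_validator(mode="before")
    @classmethod
    def normalize_legacy_provider(cls, data: dict) -> dict:
        provider = data.get("provider", "")
        if isinstance(provider, str) and provider.startswith("ProviderName."):
            data["provider"] = provider.split(".", 1)[1].lower()
        return data


input_data = {"provider": "ProviderName.MCP"}
meta = CredentialsMeta.model_validate(input_data)
# Write the normalized value back so the raw JSON-schema path
# (which bypasses Pydantic) validates against the same data.
input_data["provider"] = meta.provider
assert input_data["provider"] == "mcp"
```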
Zamil Majdy
8fd13ade74 fix(mcp): Normalize legacy ProviderName format and fix credential optionality
- Add model_validator on CredentialsMetaInput to auto-normalize old
  "ProviderName.MCP" format to "mcp" at the model level, eliminating
  the need for string parsing hacks in every consumer.

- Fix aggregate_credentials_inputs to check block schema defaults when
  determining if credentials are required, not just node metadata.
  MCP blocks with default={} are always optional regardless of metadata.
2026-02-10 09:43:28 +04:00
Zamil Majdy
88ee4b3a11 fix(mcp): Clean up old credentials stored with wrong provider string
Also search for credentials stored with "ProviderName.MCP" (from the
Python 3.13 str(Enum) bug) during both discover-tools auto-lookup and
OAuth callback cleanup. Remove the temporary debug endpoint.
2026-02-10 09:22:50 +04:00
Zamil Majdy
8eed4ad653 fix(mcp): Use ProviderName enum directly instead of str() for credential provider
Python 3.13 changed str(StrEnum) to return "ClassName.MEMBER" instead of
the plain value. This caused MCP credentials to be stored with provider
"ProviderName.MCP" instead of "mcp", leading to type/provider mismatch
errors during graph validation and execution.

Fix: Pass the enum directly to Pydantic (which extracts .value automatically),
matching the pattern used by all other OAuth handlers. Use .value explicitly
only in non-Pydantic contexts (string comparisons, API calls).
2026-02-10 09:06:29 +04:00
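
An illustration of the formatting pitfall the two adjacent commits describe. The stand-in enum below mixes `str` into `Enum`; exact `str()` output depends on the Python version and enum base, hence the hedged comments.

```python
from enum import Enum


class ProviderName(str, Enum):  # stand-in for the real provider enum
    MCP = "mcp"


member = ProviderName.MCP
# On affected Python versions, str() renders the member name:
print(str(member))   # may print "ProviderName.MCP" rather than "mcp"
print(member.value)  # always the plain value "mcp"

# Safe patterns per the commit: pass the enum member itself to Pydantic
# (it extracts .value), and use .value explicitly in string contexts
# such as comparisons and API calls.
assert member.value == "mcp"
```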
Zamil Majdy
7744b89e96 fix(mcp): Use ProviderName.MCP.value instead of str() for credential provider
Python 3.13 changed str(StrEnum) to return "ClassName.MEMBER" instead of
the plain value. This caused MCP credentials to be stored with provider
"ProviderName.MCP" instead of "mcp", leading to type/provider mismatch
errors during graph validation and execution.
2026-02-10 09:04:38 +04:00
Zamil Majdy
4c02cd8f2f fix(mcp): Handle optional credentials in graph save and execution validation
- _on_graph_activate: Clear stale credential references for optional
  fields instead of blocking the save. Checks both node metadata
  (credentials_optional) and block schema (field not in required_fields).
- _validate_node_input_credentials: Use block schema's required_fields
  as fallback for credentials_optional check, so MCP blocks with
  default={} credentials are properly treated as optional.
- Set credentials_optional metadata on new MCP nodes in the frontend.
2026-02-10 08:53:15 +04:00
Zamil Majdy
909f313e1e fix(frontend/mcp): Filter credential auto-select by server URL discriminator
Prevent MCP credential cross-contamination where a credential for one
server (e.g. Sentry) fills credential fields for other servers (e.g.
Linear). Adds matchesDiscriminatorValues() to match credentials by host
against discriminator_values from the schema.
2026-02-10 07:39:44 +04:00
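
Host-based discriminator matching along the lines of `matchesDiscriminatorValues()`, sketched in Python for illustration (the real helper is frontend TypeScript; field shapes and example URLs are assumptions):

```python
from urllib.parse import urlparse


def get_host(url: str) -> str | None:
    return urlparse(url).hostname


def matches_discriminator_values(credential_host: str, discriminator_values: list[str]) -> bool:
    """True when the credential's host matches one of the schema's
    discriminator values (e.g. the block's MCP server URL)."""
    hosts = {get_host(value) or value for value in discriminator_values}
    return credential_host in hosts


# A Sentry credential must not fill a Linear block's credential field:
assert matches_discriminator_values("mcp.sentry.dev", ["https://mcp.sentry.dev/sse"])
assert not matches_discriminator_values("mcp.sentry.dev", ["https://mcp.linear.app/sse"])
```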
Zamil Majdy
edd9a90903 refactor(mcp): Share OAuth popup logic and fix credential persistence
- Extract shared OAuth popup utility (oauth-popup.ts) used by both
  MCPToolDialog and useCredentialsInput, eliminating ~200 lines of
  duplicated BroadcastChannel/postMessage/localStorage listener code
- Add mcpOAuthCallback to credentials provider so MCP credentials
  are added to the in-memory cache after OAuth (fixes credentials not
  appearing in the credential picker after OAuth via MCPToolDialog)
- Fix oauth_test.py async fixtures missing loop_scope="session"
- Add MCP token refresh handler in creds_manager for dynamic endpoints
- Fix enum string representation in CredentialsFieldInfo.combine()
2026-02-10 07:17:05 +04:00
Zamil Majdy
ba031329e9 fix(mcp): Integrate MCPToolBlock with standard credentials system
- Replace manual credential_id field with CredentialsMetaInput pattern
- Fix credential deduplication so different MCP server URLs get separate
  credential entries in the task credentials panel
- Add descriptive display names (e.g. "MCP: mcp.sentry.dev")
- Fix OAuth popup callback by adding mcp_callback route to middleware
  exclusion list and adding localStorage polling fallback
- Fix SSRF test fixture to patch Requests constructor directly
- Add MCP server URL matching for credential auto-assignment
- Return CredentialsMetaResponse from MCP OAuth callback
- Support MCP-specific OAuth flow in frontend credential input
- Filter MCP credentials by server URL in frontend
- Add test coverage for credential deduplication logic
2026-02-09 20:59:37 +04:00
Zamil Majdy
6ab1a6867e fix(backend/mcp): Fix pyright errors and formatting in MCP block and tests
- Use isinstance(creds, APIKeyCredentials) instead of hasattr check
- Rewrite integration tests to use user_id param and mock _resolve_auth_token
- Fix f-string and line-length formatting issues in routes.py
2026-02-09 19:16:17 +04:00
Zamil Majdy
d9269310cc fix(frontend/mcp): Loop HTML tag stripping to prevent XSS bypass
The single-pass regex `/<[^>]+>/g` can be bypassed with nested tags
like `<scr<script>ipt>`. Loop until no more tags are found.
Note: React auto-escapes JSX so this is defense-in-depth.
2026-02-09 19:10:17 +04:00
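
The fixed-point loop, sketched in Python for illustration (the actual fix is in frontend TypeScript, and the tag pattern here is illustrative): keep stripping until a pass removes nothing, so fragments cannot reassemble into a tag.

```python
import re

# Illustrative tag pattern (one that rejects "<" inside a tag); the
# frontend code uses its own regex.
TAG_RE = re.compile(r"<[^<>]+>")


def strip_tags(text: str) -> str:
    # A single pass is bypassable: removing the inner "<script>" from
    # "<scr<script>ipt>" stitches the outer fragments back into "<script>".
    # Loop until the string stops changing.
    while True:
        stripped = TAG_RE.sub("", text)
        if stripped == text:
            return stripped
        text = stripped


assert strip_tags("<scr<script>ipt>alert(1)</script>") == "alert(1)"
```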
Zamil Majdy
fe70b6929f fix(mcp): Remove trusted_origins to prevent SSRF on user-provided URLs
User-provided MCP server URLs should not bypass SSRF IP-blocking
validation. Remove trusted_origins from all MCP code so that
private/internal IPs are properly blocked. Keep ThreadedResolver
in HostResolver fallback for DNS reliability in subprocess
environments.
2026-02-09 18:55:17 +04:00
Zamil Majdy
340520ba85 fix(mcp): OAuth discovery fallback, session ID, credential lookup, and DNS reliability
- Support MCP servers that serve OAuth metadata directly without
  protected-resource metadata (e.g. Linear) by falling back to
  discover_auth_server_metadata on the server's own origin
- Omit resource_url when no protected-resource metadata exists to
  avoid token audience mismatch errors (RFC 8707 resource is optional)
- Add Mcp-Session-Id header tracking per MCP Streamable HTTP spec
- Fall back to server_url credential lookup when credential_id is
  empty (pruneEmptyValues strips it from saved graphs)
- Use ThreadedResolver instead of c-ares AsyncResolver to avoid DNS
  failures in forked subprocess environments
- Simplify OAuth UX: single "Sign in & Connect" button on 401,
  remove sticky localStorage URL prefill
- Clean up stale MCP credentials on re-authentication
2026-02-09 18:51:53 +04:00
Reinier van der Leer
6467f6734f debug(backend/chat): Add timing logging to chat stream generation mechanism (#12019)
[SECRT-1912: Investigate & eliminate chat session start
latency](https://linear.app/autogpt/issue/SECRT-1912)

### Changes 🏗️

- Add timing logs to `backend.api.features.chat` in `routes.py`,
`service.py`, and `stream_registry.py`
- Remove unneeded DB join in `create_chat_session`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI checks
2026-02-09 14:05:29 +00:00
Zamil Majdy
6c2791b00b fix(frontend/mcp): Robust OAuth callback with localStorage fallback and popup close detection
BroadcastChannel can silently fail in some browser scenarios. Added:
- localStorage as third communication method in callback page
- storage event listener in dialog
- Popup close detection that checks localStorage directly
- Cleaned up auth-required box styling (gray instead of amber)
2026-02-09 17:52:02 +04:00
Otto
5a30d11416 refactor(copilot): Code cleanup and deduplication (#11950)
## Summary

Code cleanup of the AI Copilot codebase - rebased onto latest dev.

## Changes

### New Files
- `backend/util/validation.py` - UUID validation helpers
- `backend/api/features/chat/tools/helpers.py` - Shared tool utilities

### Credential Matching Consolidation  
- Added shared utilities to `utils.py`
- Refactored `run_block._check_block_credentials()` with discriminator
support
- Extracted `_resolve_discriminated_credentials()` for multi-provider
handling

### Routes Cleanup
- Extracted `_create_stream_generator()` and `SSE_RESPONSE_HEADERS`

### Tool Files Cleanup
- Updated `run_agent.py` and `run_block.py` to use shared helpers

**WIP** - This PR will be updated incrementally.
2026-02-09 13:43:55 +00:00
Zamil Majdy
7decc20a32 fix(backend/mcp): Auto-refresh expired OAuth tokens before MCP tool calls
_resolve_auth_token now checks token expiry and refreshes using
MCPOAuthHandler with metadata (token_url, client_id, client_secret)
stored during the OAuth callback flow.
2026-02-09 17:37:24 +04:00
Zamil Majdy
54375065d5 fix(mcp): Reshape execution input for MCPToolBlock like AgentExecutorBlock
The dynamic get_input_defaults returns only tool_arguments, so the
execution engine loses block-level fields like server_url. Reconstruct
the full Input from node.input_default and set tool_arguments from the
resolved dynamic input, matching the AgentExecutorBlock pattern.
2026-02-09 14:51:49 +04:00
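
Roughly, the reshaping this describes (the names `tool_arguments`, `server_url`, and `input_default` come from the commit message; the plain-dict handling and example URL are a sketch):

```python
def reshape_mcp_input(node_input_default: dict, resolved_dynamic_input: dict) -> dict:
    """Rebuild the full MCPToolBlock input before execution.

    get_input_defaults() exposes only tool_arguments, so the resolved
    dynamic input lacks block-level fields like server_url. Start from the
    node's stored input_default and overlay the resolved values as
    tool_arguments, mirroring the AgentExecutorBlock pattern. Sketch only;
    the real code builds a typed Input model.
    """
    full_input = dict(node_input_default)  # server_url, tool_name, ...
    full_input["tool_arguments"] = resolved_dynamic_input
    return full_input


node_default = {"server_url": "https://mcp.example.com/sse", "tool_name": "search"}
print(reshape_mcp_input(node_default, {"query": "open issues"}))
# {'server_url': 'https://mcp.example.com/sse', 'tool_name': 'search',
#  'tool_arguments': {'query': 'open issues'}}
```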
Zamil Majdy
d62fde9445 fix(mcp): Use manual credential resolution instead of CredentialsField
The block framework's CredentialsField requires credentials to always be
present, which doesn't work for public MCP servers. Replace it with a
plain credential_id field and manual resolution from the credential store,
allowing both authenticated and public MCP servers to work seamlessly.
2026-02-09 14:41:14 +04:00
Zamil Majdy
03487f7b4d fix(frontend/mcp): Remove broken credentials widget and disable auto-connect
- Remove credentials field from MCP dynamic schema since auth is handled
  by the dialog's OAuth flow (the standard credentials widget doesn't
  support MCP as a provider and fails with 404)
- Simplify FormCreator MCP handling — all form fields are tool arguments
- Disable auto-connect on dialog open; pre-fill last URL instead so user
  can edit before connecting
2026-02-09 14:27:36 +04:00
Bently
1f4105e8f9 fix(frontend): Handle object values in FileInput component (#11948)
Fixes
[#11800](https://github.com/Significant-Gravitas/AutoGPT/issues/11800)

## Problem
The FileInput component crashed with `TypeError: e.startsWith is not a
function` when the value was an object (from external API) instead of a
string.

## Example Input Object
When using the external API
(`/external-api/v1/graphs/{id}/execute/{version}`), file inputs can be
passed as objects:

```json
{
  "node_input": {
    "input_image": {
      "name": "image.jpeg",
      "type": "image/jpeg",
      "size": 131147,
      "data": "/9j/4QAW..."
    }
  }
}
```

## Changes
- Updated `getFileLabelFromValue()` to handle object format: `{ name,
type, size, data }`
- Added type guards for string vs object values
- Graceful fallback for edge cases (null, undefined, empty object)

## Test cases verified
- Object with name: returns filename
- Object with type only: extracts and formats MIME type
- String data URI: parses correctly
- String file path: extracts extension
- Edge cases: returns "File" fallback
2026-02-09 10:25:08 +00:00
Bently
caf9ff34e6 fix(backend): Handle stale RabbitMQ channels on connection drop (#11929)
### Changes 🏗️

Fixes
[**AUTOGPT-SERVER-1TN**](https://autoagpt.sentry.io/issues/?query=AUTOGPT-SERVER-1TN)
(~39K events since Feb 2025) and related connection issues
**6JC/6JD/6JE/6JF** (~6K combined).

#### Problem

When the RabbitMQ TCP connection drops (network blip, server restart,
etc.):

1. `connect_robust` (aio_pika) automatically reconnects the underlying
AMQP connection
2. But `AsyncRabbitMQ._channel` still references the **old dead
channel**
3. `is_ready` checks `not self._channel.is_closed` — but the channel
object doesn't know the transport is gone
4. `publish_message` tries to use the stale channel →
`ChannelInvalidStateError: No active transport in channel`
5. `@func_retry` retries 5 times, but each retry hits the same stale
channel (it passes `is_ready`)

This means every connection drop generates errors until the process is
restarted.

#### Fix

**New `_ensure_channel()` helper** that resets stale channels before
reconnecting, so `connect()` creates a fresh one instead of
short-circuiting on `is_connected`.

**Explicit `ChannelInvalidStateError` handling in `publish_message`:**
1. First attempt uses `_ensure_channel()` (handles normal staleness)
2. If publish throws `ChannelInvalidStateError`, does a full reconnect
(resets both `_channel` and `_connection`) and retries once
3. `@func_retry` provides additional retry resilience on top

**Simplified `get_channel()`** to use the same resilient helper.

**1 file changed, 62 insertions, 24 deletions.**

#### Impact
- Eliminates ~39K `ChannelInvalidStateError` Sentry events
- RabbitMQ operations self-heal after connection drops without process
restart
- Related transport EOF errors (6JC/6JD/6JE/6JF) should also reduce
2026-02-09 10:24:08 +00:00
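
A condensed sketch of the `_ensure_channel()` idea with aio_pika; the class below is a stand-in for `AsyncRabbitMQ` (which tracks more state and layers `@func_retry` on top), so treat the names and structure as assumptions.

```python
import aio_pika
from aiormq.exceptions import ChannelInvalidStateError


class ResilientPublisher:
    def __init__(self, url: str):
        self._url = url
        self._connection: aio_pika.abc.AbstractRobustConnection | None = None
        self._channel: aio_pika.abc.AbstractChannel | None = None

    async def _ensure_channel(self) -> aio_pika.abc.AbstractChannel:
        # Reset a stale channel so we create a fresh one instead of
        # short-circuiting on a connection that merely looks alive.
        if self._channel is not None and self._channel.is_closed:
            self._channel = None
        if self._connection is None or self._connection.is_closed:
            self._connection = await aio_pika.connect_robust(self._url)
            self._channel = None
        if self._channel is None:
            self._channel = await self._connection.channel()
        return self._channel

    async def publish(self, routing_key: str, body: bytes) -> None:
        message = aio_pika.Message(body=body)
        try:
            channel = await self._ensure_channel()
            await channel.default_exchange.publish(message, routing_key=routing_key)
        except ChannelInvalidStateError:
            # The channel passed the is_closed check but its transport is
            # gone: drop both channel and connection, reconnect, retry once.
            self._channel = None
            self._connection = None
            channel = await self._ensure_channel()
            await channel.default_exchange.publish(message, routing_key=routing_key)
```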
Zamil Majdy
df41d02fce feat(frontend/mcp): Add MCP tool discovery UI, OAuth flow, and dynamic block schema
- Add MCPToolDialog with tool discovery, OAuth sign-in, and card-based tool selection
- Add OAuth callback route using BroadcastChannel API for popup communication
- Add API client methods for MCP discovery, OAuth login, and callback
- Register MCP API routes on the backend REST API
- Render dynamic input schema for MCP blocks (credentials + tool params)
  in both legacy and new builder CustomNode components
- Nest MCP tool argument values under tool_arguments in hardcodedValues
- Display tool name with server name prefix in block header
- Add backend route tests for discovery, OAuth login, and callback endpoints
2026-02-09 14:18:59 +04:00
Otto
7c9e47ba76 fix(mcp): Remove redundant exception handling and unnecessary str() cast
- client.py: except (ValueError, Exception) → except Exception
  (Exception already catches ValueError, so it's redundant)
- oauth.py: SecretStr(str(tokens[...])) → SecretStr(tokens[...])
  (refresh_token is already a string, no cast needed)
2026-02-09 08:40:58 +00:00
Nicholas Tindle
e8fc8ee623 fix(backend): filter graph-only blocks from CoPilot's find_block results (#11892)
Filters out blocks that are unsuitable for standalone execution from
CoPilot's block search and execution. These blocks serve graph-specific
purposes and will either fail, hang, or confuse users when run outside
of a graph context.

**Important:** This does NOT affect the Builder UI which uses
`load_all_blocks()` directly.

### Changes 🏗️

- **find_block.py**: Added `EXCLUDED_BLOCK_TYPES` and
`EXCLUDED_BLOCK_IDS` constants, skip excluded blocks in search results
- **run_block.py**: Added execution guard that returns clear error
message for excluded blocks
- **content_handlers.py**: Added filtering to
`BlockHandler.get_missing_items()` and `get_stats()` to prevent indexing
excluded blocks

**Excluded by BlockType:**
| BlockType | Reason |
|-----------|--------|
| `INPUT` | Graph interface definition - data enters via chat, not graph inputs |
| `OUTPUT` | Graph interface definition - data exits via chat, not graph outputs |
| `WEBHOOK` | Wait for external events - would hang forever in CoPilot |
| `WEBHOOK_MANUAL` | Same as WEBHOOK |
| `NOTE` | Visual annotation only - no runtime behavior |
| `HUMAN_IN_THE_LOOP` | Pauses for human approval - CoPilot IS human-in-the-loop |
| `AGENT` | AgentExecutorBlock requires graph context - use `run_agent` tool instead |

**Excluded by ID:**
| Block | Reason |
|-------|--------|
| `SmartDecisionMakerBlock` | Dynamically discovers downstream blocks via graph topology |

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [ ] Search for "input" in CoPilot - should NOT return AgentInputBlock
variants
- [ ] Search for "output" in CoPilot - should NOT return
AgentOutputBlock
- [ ] Search for "webhook" in CoPilot - should NOT return trigger blocks
- [ ] Search for "human" in CoPilot - should NOT return
HumanInTheLoopBlock
- [ ] Search for "decision" in CoPilot - should NOT return
SmartDecisionMakerBlock
- [ ] Verify functional blocks still appear (e.g., "email", "http",
"text")
  - [ ] Verify Builder UI still shows ALL blocks (no regression)

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required.

---

Resolves: [SECRT-1831](https://linear.app/autogpt/issue/SECRT-1831)

🤖 Generated with [Claude Code](https://claude.ai/code)

---

> [!NOTE]
> **Low Risk**
> Behavior change is limited to CoPilot’s block discovery/execution guards and is covered by new tests; main risk is inadvertently excluding a block that should be runnable.
>
> **Overview**
> CoPilot now **filters out graph-only blocks** from `find_block` results and prevents them from being executed via `run_block`, returning a clear error when a user attempts to run an excluded block.
>
> `find_block` introduces explicit exclusion lists (by `BlockType` and a specific block ID), over-fetches search results to maintain up to 10 usable matches after filtering, and adds debug logging when results are reduced. New unit tests cover both the search filtering and the `run_block` execution guard; a minor cleanup removes an unused `pytest` import in `execution_queue_test.py`.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Otto <otto@agpt.co>
2026-02-09 07:19:43 +00:00
dependabot[bot]
1a16e203b8 chore(deps): Bump actions/setup-node from 4 to 6 (#11213)
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 4
to 6.
Release notes (sourced from [actions/setup-node's releases](https://github.com/actions/setup-node/releases)):

**v6.0.0**

Breaking Changes:
- Limit automatic caching to npm, update workflows and documentation by @priyagupta108 in actions/setup-node#1374

Dependency Upgrades:
- Upgrade ts-jest from 29.1.2 to 29.4.1 and document breaking changes in v5 by @dependabot in actions/setup-node#1336
- Upgrade prettier from 2.8.8 to 3.6.2 by @dependabot in actions/setup-node#1334
- Upgrade actions/publish-action from 0.3.0 to 0.4.0 by @dependabot in actions/setup-node#1362

Full Changelog: https://github.com/actions/setup-node/compare/v5...v6.0.0

**v5.0.0**

Breaking Changes:
- Enhance caching in setup-node with automatic package manager detection by @priya-kinthali in actions/setup-node#1348

  This update introduces automatic caching when a valid `packageManager` field is present in your `package.json`. This aims to improve workflow performance and make dependency management more seamless. To disable this automatic caching, set `package-manager-cache: false`:

  ```yaml
  steps:
  - uses: actions/checkout@v5
  - uses: actions/setup-node@v5
    with:
      package-manager-cache: false
  ```

- Upgrade action to use node24 by @salmanmkc in actions/setup-node#1325

  Make sure your runner is on version v2.327.1 or later to ensure compatibility with this release ([see release notes](https://github.com/actions/runner/releases/tag/v2.327.1)).

Dependency Upgrades:
- Upgrade `@octokit/request-error` and `@actions/github` by @dependabot in actions/setup-node#1227
- Upgrade uuid from 9.0.1 to 11.1.0 by @dependabot in actions/setup-node#1273
- Upgrade undici from 5.28.5 to 5.29.0 by @dependabot in actions/setup-node#1295
- Upgrade form-data to bring in fix for critical vulnerability by @gowridurgad in actions/setup-node#1332
- Upgrade actions/checkout from 4 to 5 by @dependabot in actions/setup-node#1345

New Contributors: @priya-kinthali made their first contribution in actions/setup-node#1348; @salmanmkc made their first contribution in actions/setup-node#1325

Full Changelog: https://github.com/actions/setup-node/compare/v4...v5.0.0

(v4.4.0 notes truncated)

Commits: 2028fbc Limit automatic caching to npm, update workflows and documentation (#1374); 1342781 Bump actions/publish-action from 0.3.0 to 0.4.0 (#1362); 89d709d Bump prettier from 2.8.8 to 3.6.2 (#1334); cd2651c Bump ts-jest from 29.1.2 to 29.4.1 (#1336); a0853c2 Bump actions/checkout from 4 to 5 (#1345); b7234cc Upgrade action to use node24 (#1325); d7a1131 Enhance caching in setup-node with automatic package manager detection (#1348); 5e2628c Bumps form-data (#1332); 65becef Bump undici from 5.28.5 to 5.29.0 (#1295); 7e24a65 Bump uuid from 9.0.1 to 11.1.0 (#1273); additional commits viewable in the compare view: https://github.com/actions/setup-node/compare/v4...v6



---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-09 07:11:21 +00:00
dependabot[bot]
5dae303ce0 chore(frontend/deps): Bump react-window and @types/react-window in /autogpt_platform/frontend (#10943)
Bumps [react-window](https://github.com/bvaughn/react-window) and
[@types/react-window](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/react-window).
These dependencies needed to be updated together.
Updates `react-window` from 1.8.11 to 2.1.0
Release notes (sourced from [react-window's releases](https://github.com/bvaughn/react-window/releases); the changelog excerpt repeated the same content):

**2.1.0**

Improved ARIA support:
- Add better default ARIA attributes for outer `HTMLDivElement`
- Add optional `ariaAttributes` prop to row and cell renderers to simplify better ARIA attributes for user-rendered cells
- Remove intermediate `HTMLDivElement` from `List` and `Grid`
  - This may enable more/better custom CSS styling
  - This may also enable adding an optional `children` prop to `List` and `Grid` for e.g. overlays/tooltips
- Add optional `tagName` prop; defaults to `"div"` but can be changed to e.g. `"ul"`

```tsx
// Example of how to use new `ariaAttributes` prop
function RowComponent({
  ariaAttributes,
  index,
  style,
  ...rest
}: RowComponentProps<object>) {
  return (
    <div style={style} {...ariaAttributes}>
      ...
    </div>
  );
}
```

Added optional `children` prop to better support edge cases like sticky rows.

Minor changes to `onRowsRendered` and `onCellsRendered` callbacks to make it easier to differentiate between *visible* items and items rendered due to overscan settings. These methods will now receive two params: the first for *visible* items and the second for *all* items (including overscan), e.g.:

```ts
function onRowsRendered(
  visibleRows: {
    startIndex: number;
    stopIndex: number;
  },
  allRows: {
    startIndex: number;
    stopIndex: number;
  }
): void {
  // ...
}

function onCellsRendered(
  visibleCells: {
    columnStartIndex: number;
    columnStopIndex: number;
    rowStartIndex: number;
    rowStopIndex: number;
  },
  allCells: { /* same fields as visibleCells */ }
): void {
  // ...
}
```

(... truncated)

Commits: 1b6840b Merge pull request bvaughn/react-window#836 from bvaughn/ARIA-roles; 35f651b Revert accidental change to docs example; 8bce7f5 onRowsRendered/onCellsRendered separate visible and overscan items; 9f1e8f2 Support custom tagName for outer element and (optional) children; 7f07ac3 Improve ARIA attributes; 7234ec3 Reduced network waterfalls between routes; 5c431a2 Stronger typing for doc website routes; c9349a4 2.0.1 -> 2.0.2; 6adc6c0 Merge pull request bvaughn/react-window#832 from bvaughn/issues/831; bd562c5 Add tests; additional commits viewable in the compare view: https://github.com/bvaughn/react-window/compare/1.8.11...2.1.0

Updates `@types/react-window` from 1.8.8 to 2.0.0 (see full diff in the [compare view](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/react-window)).



---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-09 06:42:47 +00:00
Zamil Majdy
e59e8dd9a9 fix(mcp): Skip e2e tests in CI unless --run-e2e is passed
E2e tests hit a real external MCP server and are inherently flaky.
Skip them by default, require --run-e2e flag to opt in.
2026-02-09 10:14:35 +04:00
dependabot[bot]
6cbfbdd013 chore(libs/deps-dev): bump the development-dependencies group across 1 directory with 4 updates (#11349)
Bumps the development-dependencies group with 4 updates in the
/autogpt_platform/autogpt_libs directory:
[pyright](https://github.com/RobertCraigie/pyright-python),
[pytest-asyncio](https://github.com/pytest-dev/pytest-asyncio),
[pytest-mock](https://github.com/pytest-dev/pytest-mock) and
[ruff](https://github.com/astral-sh/ruff).

Updates `pyright` from 1.1.404 to 1.1.407

Commits: 53e8efb Pyright NPM Package update to 1.1.407 (#356); 1d515b7 Pyright NPM Package update to 1.1.406 (#355); e211ec8 Pyright NPM Package update to 1.1.405 (#353); full diff in the compare view: https://github.com/RobertCraigie/pyright-python/compare/v1.1.404...v1.1.407

Updates `pytest-asyncio` from 1.1.0 to 1.3.0

Release notes (sourced from [pytest-asyncio's releases](https://github.com/pytest-dev/pytest-asyncio/releases)):

**1.3.0 (2025-11-10)**

Removed:
- Support for Python 3.9 (#1278)

Added:
- Support for pytest 9 (#1279)

Notes for downstream packagers:
- Tested Python versions include free-threaded Python 3.14t (#1274)
- Tests are run in the same pytest process, instead of spawning a subprocess with `pytest.Pytester.runpytest_subprocess`. This prevents the test suite from accidentally using a system installation of pytest-asyncio, which could result in test errors. (#1275)

**1.2.0 (2025-09-12)**

Added:
- `--asyncio-debug` CLI option and `asyncio_debug` configuration option to enable asyncio debug mode for the default event loop (#980)
- A `pytest.UsageError` for invalid configuration values of `asyncio_default_fixture_loop_scope` and `asyncio_default_test_loop_scope` (#1189)
- Compatibility with the Pyright type checker (#731)

Fixed:
- `RuntimeError: There is no current event loop in thread 'MainThread'` when any test unsets the event loop, such as when using `asyncio.run` and `asyncio.Runner` (#1177)
- Deprecation warning when decorating an asynchronous fixture with `@pytest.fixture` in strict mode; the warning message now refers to the correct package (#1198)

Notes for downstream packagers:
- Bump the minimum required version of tox to v4.28. This change is only relevant if you use the `tox.ini` file provided by pytest-asyncio to run tests.
- Extend dependency on typing-extensions>=4.12 from Python<3.10 to Python<3.13.

**1.1.1 (2025-09-12)**

Notes for downstream packagers:
- Addresses a build problem with setuptools-scm >= 9 caused by invalid setuptools-scm configuration in pytest-asyncio (#1192)

Commits: 2e9695f docs: Compile changelog for v1.3.0; dd0e9ba docs: Reference correct issue in news fragment; 4c31abe Build(deps): Bump nh3 from 0.3.1 to 0.3.2; 13e9477 Link to migration guides from changelog; 4d2cf3c tests: handle Python 3.14 DefaultEventLoopPolicy deprecation warnings; ee3549b test: Remove obsolete test for the event_loop fixture; 7a67c82 tests: Fix failing test by preventing warning conversion to error; a17b689 test: add pytest config to isolated test directories; 18afc9d fix(tests): replace runpytest_subprocess with runpytest; cdc6bd1 Add support for pytest 9 and drop Python 3.9 support; additional commits viewable in the compare view: https://github.com/pytest-dev/pytest-asyncio/compare/v1.1.0...v1.3.0

Updates `pytest-mock` from 3.14.1 to 3.15.1

Release notes (sourced from [pytest-mock's releases](https://github.com/pytest-dev/pytest-mock/releases); the changelog excerpt repeated the same content):

**v3.15.1 (2025-09-16)**
- #529: Fixed `itertools._tee object has no attribute error`; now `duplicate_iterators=True` must be passed to `mocker.spy` to duplicate iterators.

**v3.15.0 (2025-09-04)**
- Python 3.8 (EOL) is no longer supported.
- #524: Added `spy_return_iter` to `mocker.spy`, which contains a duplicate of the return value of the spied method if it is an `Iterator`.

Commits: e1b5c62 Release 3.15.1; 184eb19 Set `spy_return_iter` only when explicitly requested (#537); 4fa0088 [pre-commit.ci] pre-commit autoupdate (#536); f5aff33 Fix test failure with pytest 8+ and verbose mode (#535); adc4187 Bump actions/setup-python from 5 to 6 in the github-actions group (#533); 95ad570 [pre-commit.ci] pre-commit autoupdate (#532); e696bf0 Fix standalone mock support (#531); 5b29b03 Fix gen-release-notes script; 7d22ef4 Merge pull request #528 from pytest-dev/release-3.15.0; 90b29f8 Update CHANGELOG for 3.15.0; additional commits viewable in the compare view: https://github.com/pytest-dev/pytest-mock/compare/v3.14.1...v3.15.1

Updates `ruff` from 0.12.11 to 0.14.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.14.4</h2>
<h2>Release Notes</h2>
<p>Released on 2025-11-06.</p>
<h3>Preview features</h3>
<ul>
<li>[formatter] Allow newlines after function headers without docstrings
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21110">#21110</a>)</li>
<li>[formatter] Avoid extra parentheses for long <code>match</code>
patterns with <code>as</code> captures (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21176">#21176</a>)</li>
<li>[<code>refurb</code>] Expand fix safety for keyword arguments and
<code>Decimal</code>s (<code>FURB164</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21259">#21259</a>)</li>
<li>[<code>refurb</code>] Preserve argument ordering in autofix
(<code>FURB103</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20790">#20790</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>[server] Fix missing diagnostics for notebooks (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21156">#21156</a>)</li>
<li>[<code>flake8-bugbear</code>] Ignore non-NFKC attribute names in
<code>B009</code> and <code>B010</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21131">#21131</a>)</li>
<li>[<code>refurb</code>] Fix false negative for underscores before sign
in <code>Decimal</code> constructor (<code>FURB157</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21190">#21190</a>)</li>
<li>[<code>ruff</code>] Fix false positives on starred arguments
(<code>RUF057</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21256">#21256</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>airflow</code>] extend deprecated argument
<code>concurrency</code> in <code>airflow..DAG</code>
(<code>AIR301</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21220">#21220</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Improve <code>extend</code> docs (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21135">#21135</a>)</li>
<li>[<code>flake8-comprehensions</code>] Fix typo in <code>C416</code>
documentation (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21184">#21184</a>)</li>
<li>Revise Ruff setup instructions for Zed editor (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20935">#20935</a>)</li>
</ul>
<h3>Other changes</h3>
<ul>
<li>Make <code>ruff analyze graph</code> work with jupyter notebooks (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21161">#21161</a>)</li>
</ul>
<h3>Contributors</h3>
<ul>
<li><a
href="https://github.com/chirizxc"><code>@​chirizxc</code></a></li>
<li><a href="https://github.com/Lee-W"><code>@​Lee-W</code></a></li>
<li><a
href="https://github.com/musicinmybrain"><code>@​musicinmybrain</code></a></li>
<li><a
href="https://github.com/MichaReiser"><code>@​MichaReiser</code></a></li>
<li><a href="https://github.com/tjkuson"><code>@​tjkuson</code></a></li>
<li><a
href="https://github.com/danparizher"><code>@​danparizher</code></a></li>
<li><a
href="https://github.com/renovate"><code>@​renovate</code></a></li>
<li><a href="https://github.com/ntBre"><code>@​ntBre</code></a></li>
<li><a
href="https://github.com/gauthsvenkat"><code>@​gauthsvenkat</code></a></li>
<li><a
href="https://github.com/LoicRiegel"><code>@​LoicRiegel</code></a></li>
</ul>
<h2>Install ruff 0.14.4</h2>
<h3>Install prebuilt binaries via shell script</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c7ff9826d6"><code>c7ff982</code></a>
Bump 0.14.4 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21306">#21306</a>)</li>
<li><a
href="35640dd853"><code>35640dd</code></a>
Fix main by using <code>infer_expression</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21299">#21299</a>)</li>
<li><a
href="cb2e277482"><code>cb2e277</code></a>
[ty] Understand legacy and PEP 695 <code>ParamSpec</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21139">#21139</a>)</li>
<li><a
href="132d10fb6f"><code>132d10f</code></a>
[ty] Discover site-packages from the environment that ty is installed in
(<a
href="https://redirect.github.com/astral-sh/ruff/issues/21">#21</a>...</li>
<li><a
href="f189aad6d2"><code>f189aad</code></a>
[ty] Make special cases for <code>UnionType</code> slightly narrower (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21276">#21276</a>)</li>
<li><a
href="5517c9943a"><code>5517c99</code></a>
Require ignore 0.4.24 in <code>Cargo.toml</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21292">#21292</a>)</li>
<li><a
href="b5ff96595d"><code>b5ff965</code></a>
[ty] Favour imported symbols over builtin symbols (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21285">#21285</a>)</li>
<li><a
href="c6573b16ac"><code>c6573b1</code></a>
docs: revise Ruff setup instructions for Zed editor (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20935">#20935</a>)</li>
<li><a
href="76127e5fb5"><code>76127e5</code></a>
[ty] Update salsa (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21281">#21281</a>)</li>
<li><a
href="cddc0fedc2"><code>cddc0fe</code></a>
[syntax-error]: no binding for nonlocal PLE0117 as a semantic syntax
error (...</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.12.11...0.14.4">compare
view</a></li>
</ul>
</details>
<br />


You can trigger a rebase of this PR by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-09 04:54:05 +00:00
dependabot[bot]
0c6fa60436 chore(deps): Bump actions/github-script from 7 to 8 (#10870)
Bumps [actions/github-script](https://github.com/actions/github-script)
from 7 to 8.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/github-script/releases">actions/github-script's
releases</a>.</em></p>
<blockquote>
<h2>v8.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update Node.js version support to 24.x by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/637">actions/github-script#637</a></li>
<li>README for updating actions/github-script from v7 to v8 by <a
href="https://github.com/sneha-krip"><code>@​sneha-krip</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/653">actions/github-script#653</a></li>
</ul>
<h2>⚠️ Minimum Compatible Runner Version</h2>
<p><strong>v2.327.1</strong><br />
<a
href="https://github.com/actions/runner/releases/tag/v2.327.1">Release
Notes</a></p>
<p>Make sure your runner is updated to this version or newer to use this
release.</p>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/637">actions/github-script#637</a></li>
<li><a
href="https://github.com/sneha-krip"><code>@​sneha-krip</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/653">actions/github-script#653</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/github-script/compare/v7.1.0...v8.0.0">https://github.com/actions/github-script/compare/v7.1.0...v8.0.0</a></p>
<h2>v7.1.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Upgrade husky to v9 by <a
href="https://github.com/benelan"><code>@​benelan</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/482">actions/github-script#482</a></li>
<li>Add workflow file for publishing releases to immutable action
package by <a
href="https://github.com/Jcambass"><code>@​Jcambass</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/485">actions/github-script#485</a></li>
<li>Upgrade IA Publish by <a
href="https://github.com/Jcambass"><code>@​Jcambass</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/486">actions/github-script#486</a></li>
<li>Fix workflow status badges by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/497">actions/github-script#497</a></li>
<li>Update usage of <code>actions/upload-artifact</code> by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/512">actions/github-script#512</a></li>
<li>Clear up package name confusion by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/514">actions/github-script#514</a></li>
<li>Update dependencies with <code>npm audit fix</code> by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/515">actions/github-script#515</a></li>
<li>Specify that the used script is JavaScript by <a
href="https://github.com/timotk"><code>@​timotk</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/478">actions/github-script#478</a></li>
<li>chore: Add Dependabot for NPM and Actions by <a
href="https://github.com/nschonni"><code>@​nschonni</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/472">actions/github-script#472</a></li>
<li>Define <code>permissions</code> in workflows and update actions by
<a href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in
<a
href="https://redirect.github.com/actions/github-script/pull/531">actions/github-script#531</a></li>
<li>chore: Add Dependabot for .github/actions/install-dependencies by <a
href="https://github.com/nschonni"><code>@​nschonni</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/532">actions/github-script#532</a></li>
<li>chore: Remove .vscode settings by <a
href="https://github.com/nschonni"><code>@​nschonni</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/533">actions/github-script#533</a></li>
<li>ci: Use github/setup-licensed by <a
href="https://github.com/nschonni"><code>@​nschonni</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/473">actions/github-script#473</a></li>
<li>make octokit instance available as octokit on top of github, to make
it easier to seamlessly copy examples from GitHub rest api or octokit
documentations by <a
href="https://github.com/iamstarkov"><code>@​iamstarkov</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/508">actions/github-script#508</a></li>
<li>Remove <code>octokit</code> README updates for v7 by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/557">actions/github-script#557</a></li>
<li>docs: add &quot;exec&quot; usage examples by <a
href="https://github.com/neilime"><code>@​neilime</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/546">actions/github-script#546</a></li>
<li>Bump ruby/setup-ruby from 1.213.0 to 1.222.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/github-script/pull/563">actions/github-script#563</a></li>
<li>Bump ruby/setup-ruby from 1.222.0 to 1.229.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/actions/github-script/pull/575">actions/github-script#575</a></li>
<li>Clearly document passing inputs to the <code>script</code> by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/603">actions/github-script#603</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/github-script/pull/610">actions/github-script#610</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/benelan"><code>@​benelan</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/482">actions/github-script#482</a></li>
<li><a href="https://github.com/Jcambass"><code>@​Jcambass</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/485">actions/github-script#485</a></li>
<li><a href="https://github.com/timotk"><code>@​timotk</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/478">actions/github-script#478</a></li>
<li><a
href="https://github.com/iamstarkov"><code>@​iamstarkov</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/508">actions/github-script#508</a></li>
<li><a href="https://github.com/neilime"><code>@​neilime</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/546">actions/github-script#546</a></li>
<li><a href="https://github.com/nebuk89"><code>@​nebuk89</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/github-script/pull/610">actions/github-script#610</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/github-script/compare/v7...v7.1.0">https://github.com/actions/github-script/compare/v7...v7.1.0</a></p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ed597411d8"><code>ed59741</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/github-script/issues/653">#653</a>
from actions/sneha-krip/readme-for-v8</li>
<li><a
href="2dc352e4ba"><code>2dc352e</code></a>
Bold minimum Actions Runner version in README</li>
<li><a
href="01e118c8d0"><code>01e118c</code></a>
Update README for Node 24 runtime requirements</li>
<li><a
href="8b222ac82e"><code>8b222ac</code></a>
Apply suggestion from <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a></li>
<li><a
href="adc0eeac99"><code>adc0eea</code></a>
README for updating actions/github-script from v7 to v8</li>
<li><a
href="20fe497b3f"><code>20fe497</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/github-script/issues/637">#637</a>
from actions/node24</li>
<li><a
href="e7b7f222b1"><code>e7b7f22</code></a>
update licenses</li>
<li><a
href="2c81ba05f3"><code>2c81ba0</code></a>
Update Node.js version support to 24.x</li>
<li>See full diff in <a
href="https://github.com/actions/github-script/compare/v7...v8">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/github-script&package-manager=github_actions&previous-version=7&new-version=8)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

---

> [!NOTE]
> Update GitHub Actions workflows to use actions/github-script v8.
>
> - **CI Workflows**:
>   - Update `actions/github-script` from `v7` to `v8` in:
>     - `.github/workflows/claude-ci-failure-auto-fix.yml`
>     - `.github/workflows/platform-dev-deploy-event-dispatcher.yml`

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-02-09 04:27:07 +00:00
dependabot[bot]
b04e916c23 chore(backend/deps-dev): bump the development-dependencies group across 1 directory with 3 updates (#12005)
Bumps the development-dependencies group with 3 updates in the
/autogpt_platform/backend directory:
[poethepoet](https://github.com/nat-n/poethepoet),
[pytest-watcher](https://github.com/olzhasar/pytest-watcher) and
[ruff](https://github.com/astral-sh/ruff).

Updates `poethepoet` from 0.37.0 to 0.40.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/nat-n/poethepoet/releases">poethepoet's
releases</a>.</em></p>
<blockquote>
<h2>0.40.0</h2>
<h2>Enhancements</h2>
<ul>
<li>Allow optional envfiles without warnings by <a
href="https://github.com/cnaples79"><code>@​cnaples79</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/337">nat-n/poethepoet#337</a></li>
<li>Add support for the <code>capture_output</code> option in ref tasks
by <a href="https://github.com/kzrnm"><code>@​kzrnm</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/343">nat-n/poethepoet#343</a></li>
<li>Set uv to quiet mode during shell completion to avoid console spam
by <a href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/338">nat-n/poethepoet#338</a></li>
<li>Support <code>ignore_fail</code> on execution task types and ref
tasks by <a href="https://github.com/nat-n"><code>@​nat-n</code></a> in
<a
href="https://redirect.github.com/nat-n/poethepoet/pull/347">nat-n/poethepoet#347</a></li>
<li>Add choices option to constrain named arguments by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/348">nat-n/poethepoet#348</a></li>
</ul>
<h2>Fixes</h2>
<ul>
<li>Handle SIGHUP and SIGBREAK signals to stop tasks by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/344">nat-n/poethepoet#344</a></li>
<li>Accept string for type name in global executor option by <a
href="https://github.com/kzrnm"><code>@​kzrnm</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/340">nat-n/poethepoet#340</a></li>
</ul>
<h2>Code improvements</h2>
<ul>
<li>Modernize type annotations by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/339">nat-n/poethepoet#339</a></li>
<li>Ensure test virtual environments are always cleaned up by <a
href="https://github.com/kzrnm"><code>@​kzrnm</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/346">nat-n/poethepoet#346</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/nat-n/poethepoet/compare/v0.39.0...v0.40.0">https://github.com/nat-n/poethepoet/compare/v0.39.0...v0.40.0</a></p>
<h2>0.39.0</h2>
<h2>Enhancements</h2>
<ul>
<li>Add support for uv executor options by <a
href="https://github.com/rochacbruno"><code>@​rochacbruno</code></a> and
<a href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/327">nat-n/poethepoet#327</a>
<ul>
<li>feat: add <a
href="https://poethepoet.natn.io/global_options.html#uv-executor">various
options to the uv executor</a> to be passed to the uv run command</li>
<li>feat: allow the task executor to be configured with just the type as a
string</li>
<li>feat: allow executor options to be set at runtime via the new
<code>--executor-opt</code> CLI global option</li>
<li>feat: allow inheritance of compatible executor options from global
to task to runtime</li>
<li>refactor: extend PoeOptions to support annotating config fields with
a config_name to parse, separate from the attribute name</li>
<li>refactor: some micro-optimizations to PoeOptions and
AnnotationType</li>
<li>doc: Add <a
href="https://poethepoet.natn.io/guides/tox_replacement_guide.html">guide
for replacing tox with poe + uv</a></li>
<li>doc: tidy up executor docs</li>
<li>doc: fix typo in doc for expr task</li>
<li>test: improve test coverage of PoeOptions</li>
<li>test: disable some test cases on windows that are too flaky</li>
</ul>
</li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/rochacbruno"><code>@​rochacbruno</code></a>
made their first contribution in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/327">nat-n/poethepoet#327</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/nat-n/poethepoet/compare/v0.38.0...v0.39.0">https://github.com/nat-n/poethepoet/compare/v0.38.0...v0.39.0</a></p>
<h2>0.38.0</h2>
<h2>Enhancements</h2>
<ul>
<li>feat: Add parallel task type by <a
href="https://github.com/nat-n"><code>@​nat-n</code></a> in <a
href="https://redirect.github.com/nat-n/poethepoet/pull/323">nat-n/poethepoet#323</a></li>
</ul>
<h2>Breaking changes</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0a7247d8f7"><code>0a7247d</code></a>
Bump version to 0.40.0</li>
<li><a
href="312e74a5be"><code>312e74a</code></a>
feat: Add choices option to constrain named arguments (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/348">#348</a>)</li>
<li><a
href="5e0b3e5590"><code>5e0b3e5</code></a>
feat: support ignore_fail on execution task types and ref tasks (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/347">#347</a>)</li>
<li><a
href="a3c97e1e94"><code>a3c97e1</code></a>
test: ensure the test virtual environment is always removed (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/346">#346</a>)</li>
<li><a
href="bc04e2fe18"><code>bc04e2f</code></a>
feat: support <code>capture_output</code> on ref tasks (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/343">#343</a>)</li>
<li><a
href="f7b82ef954"><code>f7b82ef</code></a>
fix: global executor option (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/340">#340</a>)</li>
<li><a
href="8e7b1166a0"><code>8e7b116</code></a>
fix: handle SIGHUP and SIGBREAK signals to stop tasks (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/344">#344</a>)</li>
<li><a
href="8e51f2b79f"><code>8e51f2b</code></a>
refactor: modernize type annotations (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/339">#339</a>)</li>
<li><a
href="72a9225dac"><code>72a9225</code></a>
fix: set uv to quiet during shell completion (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/338">#338</a>)</li>
<li><a
href="c6c7306276"><code>c6c7306</code></a>
feat: allow optional envfiles without warnings (<a
href="https://redirect.github.com/nat-n/poethepoet/issues/337">#337</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/nat-n/poethepoet/compare/v0.37.0...v0.40.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pytest-watcher` from 0.4.3 to 0.6.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/olzhasar/pytest-watcher/releases">pytest-watcher's
releases</a>.</em></p>
<blockquote>
<h2>v0.6.3</h2>
<h3>Features</h3>
<ul>
<li>Add debug mode activated with the <code>PTW_DEBUG</code> environment
variable and improve log messages (a usage sketch follows this list).</li>
</ul>
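<p>A minimal sketch of enabling that debug mode from Python. The
<code>ptw</code> entry point is pytest-watcher's documented CLI; the value
<code>"1"</code> for <code>PTW_DEBUG</code> is an assumption, since the note
above only says the variable activates the mode.</p>
<pre lang="python"><code>import os
import subprocess

# Run pytest-watcher on the current directory with debug logging enabled.
# PTW_DEBUG="1" is assumed; any non-empty value may work.
env = dict(os.environ, PTW_DEBUG="1")
subprocess.run(["ptw", "."], env=env, check=False)
</code></pre>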
<h3>Bugfixes</h3>
<ul>
<li>Fix terminal flushing after menu and header prints.</li>
<li>Use monotonic clock for trigger detection to avoid misbehavior on
clock changes.</li>
</ul>
<h2>v0.6.2</h2>
<h3>Bugfixes</h3>
<ul>
<li>Allow specifying blank patterns via CLI</li>
<li>Fix duplicate command entries in menu</li>
</ul>
<h2>v0.6.1</h2>
<h3>Bugfixes</h3>
<ul>
<li>Trigger tests in interactive mode for carriage return character</li>
</ul>
<h3>Improved Documentation</h3>
<ul>
<li>Add contributing guide</li>
</ul>
<h3>Misc</h3>
<ul>
<li>Integrate <a
href="https://towncrier.readthedocs.io/en/stable/index.html">towncrier</a>
into the development process</li>
</ul>
<h2>v0.6.0</h2>
<h2>Features</h2>
<ul>
<li>Add <code>notify-on-failure</code> flag (and config option) to emit
BEL symbol on test suite failure.</li>
</ul>
<h2>Infrastructure</h2>
<ul>
<li>Migrate from poetry to uv.</li>
<li>Remove tox.</li>
</ul>
<h2>v0.5.0</h2>
<h2>Fixes</h2>
<ul>
<li>Merge arguments passed to the runner from config and CLI instead of
overriding.</li>
</ul>
<h2>Changes</h2>
<ul>
<li>Drop support for Python 3.7 &amp; 3.8</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/olzhasar/pytest-watcher/blob/master/CHANGELOG.md">pytest-watcher's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/olzhasar/pytest-watcher/releases/tag/0.6.3">0.6.3</a>
- 2026-01-11</h2>
<h3>Features</h3>
<ul>
<li>Add debug mode activated with <code>PTW_DEBUG</code> environment
variable and improve log messages.</li>
</ul>
<h3>Bugfixes</h3>
<ul>
<li>Fix terminal flushing after menu and header prints.</li>
<li>Use monotonic clock for trigger detection to avoid misbehavior on
clock changes.</li>
</ul>
<h2><a
href="https://github.com/olzhasar/pytest-watcher/releases/tag/0.6.2">0.6.2</a>
- 2025-12-28</h2>
<h3>Bugfixes</h3>
<ul>
<li>Allow specifying blank patterns via CLI</li>
<li>Fix duplicate command entries in menu</li>
</ul>
<h2><a
href="https://github.com/olzhasar/pytest-watcher/releases/tag/0.6.1">0.6.1</a>
- 2025-12-26</h2>
<h3>Bugfixes</h3>
<ul>
<li>Trigger tests in interactive mode for carriage return character</li>
</ul>
<h3>Improved Documentation</h3>
<ul>
<li>Add contributing guide</li>
</ul>
<h3>Misc</h3>
<ul>
<li>Integrate <a
href="https://towncrier.readthedocs.io/en/stable/index.html">towncrier</a>
into the development process</li>
</ul>
<h2><a
href="https://github.com/olzhasar/pytest-watcher/releases/tag/0.6.0">0.6.0</a>
- 2025-12-22</h2>
<h3>Features</h3>
<ul>
<li>Add notify-on-failure flag (and config option) to emit BEL symbol on
test suite failure.</li>
</ul>
<h3>Infrastructure</h3>
<ul>
<li>Migrate from <code>poetry</code> to <code>uv</code>.</li>
<li>Remove <code>tox</code>.</li>
</ul>
<h2><a
href="https://github.com/olzhasar/pytest-watcher/releases/tag/0.5.0">0.5.0</a>
- 2025-12-21</h2>
<h3>Fixes</h3>
<ul>
<li>Merge arguments passed to the runner from config and CLI instead of
overriding.</li>
</ul>
<h3>Changes</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c52925b613"><code>c52925b</code></a>
release v0.6.3</li>
<li><a
href="23d49893f7"><code>23d4989</code></a>
Add debug mode. Improve log messages</li>
<li><a
href="e3dffa1cb3"><code>e3dffa1</code></a>
Fix terminal flushing after menu and header prints</li>
<li><a
href="0eeaf6080e"><code>0eeaf60</code></a>
Use monotonic clock for trigger detection</li>
<li><a
href="5ed9d0e262"><code>5ed9d0e</code></a>
Update CHANGELOG. Fix changelog_reader action</li>
<li><a
href="756f005f5d"><code>756f005</code></a>
release v0.6.2</li>
<li><a
href="902aa9e07b"><code>902aa9e</code></a>
Merge pull request <a
href="https://redirect.github.com/olzhasar/pytest-watcher/issues/51">#51</a>
from olzhasar/fix-duplicate-menu</li>
<li><a
href="e6b20d35b9"><code>e6b20d3</code></a>
Allow specifying empty patterns via CLI</li>
<li><a
href="2d522dabf9"><code>2d522da</code></a>
Fix duplicate menu entries</li>
<li><a
href="171e6f1282"><code>171e6f1</code></a>
Fix towncrier CHANGELOG versioning</li>
<li>Additional commits viewable in <a
href="https://github.com/olzhasar/pytest-watcher/compare/v0.4.3...v0.6.3">compare
view</a></li>
</ul>
</details>
<br />

Updates `ruff` from 0.14.14 to 0.15.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.15.0</h2>
<h2>Release Notes</h2>
<p>Released on 2026-02-03.</p>
<p>Check out the <a href="https://astral.sh/blog/ruff-v0.15.0">blog
post</a> for a migration guide and overview of the changes!</p>
<h3>Breaking changes</h3>
<ul>
<li>
<p>Ruff now formats your code according to the 2026 style guide. See the
formatter section below or in the blog post for a detailed list of
changes.</p>
</li>
<li>
<p>The linter now supports block suppression comments. For example, to
suppress <code>N803</code> for all parameters in this function:</p>
<pre lang="python"><code># ruff: disable[N803]
def foo(
    legacyArg1,
    legacyArg2,
    legacyArg3,
    legacyArg4,
): ...
# ruff: enable[N803]
</code></pre>
<p>See the <a
href="https://docs.astral.sh/ruff/linter/#block-level">documentation</a>
for more details; a sketch comparing this with per-line suppression
follows after this list.</p>
</li>
<li>
<p>The <code>ruff:alpine</code> Docker image is now based on Alpine 3.23
(up from 3.21).</p>
</li>
<li>
<p>The <code>ruff:debian</code> and <code>ruff:debian-slim</code> Docker
images are now based on Debian 13 &quot;Trixie&quot; instead of Debian
12 &quot;Bookworm.&quot;</p>
</li>
<li>
<p>Binaries for the <code>ppc64</code> (64-bit big-endian PowerPC)
architecture are no longer included in our releases. It should still be
possible to build Ruff manually for this platform, if needed.</p>
</li>
<li>
<p>Ruff now resolves all <code>extend</code>ed configuration files
before falling back on a default Python version.</p>
</li>
</ul>
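<p>For contrast, a minimal sketch of the same suppression written with
ruff's long-standing per-line <code>noqa</code> comments, which the block
form above replaces when many consecutive lines trigger one rule:</p>
<pre lang="python"><code># Per-line form: each offending parameter line needs its own comment.
def foo(
    legacyArg1,  # noqa: N803
    legacyArg2,  # noqa: N803
    legacyArg3,  # noqa: N803
    legacyArg4,  # noqa: N803
): ...
</code></pre>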
<h3>Stabilization</h3>
<p>The following rules have been stabilized and are no longer in
preview:</p>
<ul>
<li><a
href="https://docs.astral.sh/ruff/rules/blocking-http-call-httpx-in-async-function"><code>blocking-http-call-httpx-in-async-function</code></a>
(<code>ASYNC212</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/blocking-path-method-in-async-function"><code>blocking-path-method-in-async-function</code></a>
(<code>ASYNC240</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/blocking-input-in-async-function"><code>blocking-input-in-async-function</code></a>
(<code>ASYNC250</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/map-without-explicit-strict"><code>map-without-explicit-strict</code></a>
(<code>B912</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/if-exp-instead-of-or-operator"><code>if-exp-instead-of-or-operator</code></a>
(<code>FURB110</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/single-item-membership-test"><code>single-item-membership-test</code></a>
(<code>FURB171</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/missing-maxsplit-arg"><code>missing-maxsplit-arg</code></a>
(<code>PLC0207</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/unnecessary-lambda"><code>unnecessary-lambda</code></a>
(<code>PLW0108</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/unnecessary-empty-iterable-within-deque-call"><code>unnecessary-empty-iterable-within-deque-call</code></a>
(<code>RUF037</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/in-empty-collection"><code>in-empty-collection</code></a>
(<code>RUF060</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/legacy-form-pytest-raises"><code>legacy-form-pytest-raises</code></a>
(<code>RUF061</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/non-octal-permissions"><code>non-octal-permissions</code></a>
(<code>RUF064</code>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ce5f7b6127"><code>ce5f7b6</code></a>
Bump 0.15.0 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/23055">#23055</a>)</li>
<li><a
href="b4e40f539c"><code>b4e40f5</code></a>
[ty] Fix <code>__contains__</code> to respect descriptors (<a
href="https://redirect.github.com/astral-sh/ruff/issues/23056">#23056</a>)</li>
<li><a
href="848cb72dc1"><code>848cb72</code></a>
[ty] Fix narrowing of nonlocal variables with conditional assignments
(<a
href="https://redirect.github.com/astral-sh/ruff/issues/22966">#22966</a>)</li>
<li><a
href="da7f33af22"><code>da7f33a</code></a>
[ty] Add a diagnostic for <code>Final</code> without assignment (<a
href="https://redirect.github.com/astral-sh/ruff/issues/23001">#23001</a>)</li>
<li><a
href="e65f9a6b03"><code>e65f9a6</code></a>
Document markdown formatting feature (<a
href="https://redirect.github.com/astral-sh/ruff/issues/22990">#22990</a>)</li>
<li><a
href="c0c1b985c9"><code>c0c1b98</code></a>
Format markdown code blocks with line-by-line regex parse (<a
href="https://redirect.github.com/astral-sh/ruff/issues/22996">#22996</a>)</li>
<li><a
href="9f8f3e196b"><code>9f8f3e1</code></a>
Allow positional-only params with defaults in method overrides (<a
href="https://redirect.github.com/astral-sh/ruff/issues/23037">#23037</a>)</li>
<li><a
href="ef83810e11"><code>ef83810</code></a>
[ty] ecosystem-analyzer: Support bare git repositories (<a
href="https://redirect.github.com/astral-sh/ruff/issues/23054">#23054</a>)</li>
<li><a
href="54dfee4cb8"><code>54dfee4</code></a>
Customize where the <code>fix_title</code> sub-diagnostic appears (<a
href="https://redirect.github.com/astral-sh/ruff/issues/23044">#23044</a>)</li>
<li><a
href="b53460799b"><code>b534607</code></a>
2026 Ruff Formatter Style (<a
href="https://redirect.github.com/astral-sh/ruff/issues/22735">#22735</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.14.14...0.15.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-02-09 04:26:58 +00:00
dependabot[bot]
1a32ba7d9a chore(deps): bump urllib3 from 2.5.0 to 2.6.0 in /autogpt_platform/backend (#11607)
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.5.0 to 2.6.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/urllib3/urllib3/releases">urllib3's
releases</a>.</em></p>
<blockquote>
<h2>2.6.0</h2>
<h2>🚀 urllib3 is fundraising for HTTP/2 support</h2>
<p><a
href="https://sethmlarson.dev/urllib3-is-fundraising-for-http2-support">urllib3
is raising ~$40,000 USD</a> to release HTTP/2 support and ensure
long-term sustainable maintenance of the project after a sharp decline
in financial support. If your company or organization uses Python and
would benefit from HTTP/2 support in Requests, pip, cloud SDKs, and
thousands of other projects <a
href="https://opencollective.com/urllib3">please consider contributing
financially</a> to ensure HTTP/2 support is developed sustainably and
maintained for the long-haul.</p>
<p>Thank you for your support.</p>
<h2>Security</h2>
<ul>
<li>Fixed a security issue where the streaming API could improperly handle
highly compressed HTTP content (&quot;decompression bombs&quot;), leading
to excessive resource consumption even when a small amount of data was
requested. Reading small chunks of compressed data is safer and much
more efficient now; a streaming sketch follows below. (CVE-2025-66471
reported by <a
href="https://github.com/Cycloctane"><code>@​Cycloctane</code></a>, 8.9
High, GHSA-2xpw-w6gg-jr37)</li>
<li>Fixed a security issue where an attacker could compose an HTTP
response with virtually unlimited links in the
<code>Content-Encoding</code> header, potentially leading to a denial of
service (DoS) attack by exhausting system resources during decoding. The
number of allowed chained encodings is now limited to 5. (CVE-2025-66418
reported by <a
href="https://github.com/illia-v"><code>@​illia-v</code></a>, 8.9 High,
GHSA-gm62-xv2j-4w53)</li>
</ul>
<blockquote>
<p>[!IMPORTANT]</p>
<ul>
<li>If urllib3 is not installed with the optional
<code>urllib3[brotli]</code> extra, but your environment contains a
Brotli/brotlicffi/brotlipy package anyway, make sure to upgrade it to at
least Brotli 1.2.0 or brotlicffi 1.2.0.0 to benefit from the security
fixes and avoid warnings. Prefer using <code>urllib3[brotli]</code> to
install a compatible Brotli package automatically.</li>
<li>If you use custom decompressors, please make sure to update them to
respect the changed API of
<code>urllib3.response.ContentDecoder</code>.</li>
</ul>
</blockquote>
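<p>A minimal sketch of the chunked-read pattern the security note favors:
request with <code>preload_content=False</code> and consume the body in
bounded slices so decompression happens a little at a time. The URL and
chunk size are placeholders.</p>
<pre lang="python"><code>import urllib3

http = urllib3.PoolManager()
# preload_content=False keeps the body unread until we stream it.
resp = http.request("GET", "https://example.com/big.json.gz",
                    preload_content=False)
for chunk in resp.stream(16_384):  # decode at most ~16 KiB at a time
    pass  # handle each decoded chunk here
resp.release_conn()
</code></pre>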
<h2>Features</h2>
<ul>
<li>Enabled retrieval, deletion, and membership testing in
<code>HTTPHeaderDict</code> using bytes keys (see the sketch after this
list). (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3653">#3653</a>)</li>
<li>Added host and port information to string representations of
<code>HTTPConnection</code>. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3666">#3666</a>)</li>
<li>Added support for Python 3.14 free-threading builds explicitly. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3696">#3696</a>)</li>
</ul>
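<p>A short sketch of the bytes-key support, assuming urllib3 &gt;= 2.6.0 and
the top-level <code>HTTPHeaderDict</code> export:</p>
<pre lang="python"><code>from urllib3 import HTTPHeaderDict

headers = HTTPHeaderDict({"Content-Type": "application/json"})
print(headers[b"content-type"])    # retrieval via a bytes key (case-insensitive)
print(b"Content-Type" in headers)  # membership testing
del headers[b"content-type"]       # deletion
</code></pre>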
<h2>Removals</h2>
<ul>
<li>Removed the <code>HTTPResponse.getheaders()</code> method in favor
of <code>HTTPResponse.headers</code>. Removed the
<code>HTTPResponse.getheader(name, default)</code> method in favor of
<code>HTTPResponse.headers.get(name, default)</code>; a migration sketch
follows this list. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3622">#3622</a>)</li>
</ul>
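<p>A migration sketch for the removals above, using the replacements the
note itself names (the URL is a placeholder):</p>
<pre lang="python"><code>import urllib3

resp = urllib3.request("GET", "https://example.com")

# Before 2.6.0:  resp.getheaders()
all_headers = resp.headers

# Before 2.6.0:  resp.getheader("Content-Type", "text/plain")
content_type = resp.headers.get("Content-Type", "text/plain")
</code></pre>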
<h2>Bugfixes</h2>
<ul>
<li>Fixed redirect handling in <code>urllib3.PoolManager</code> when an
integer is passed for the retries parameter (the affected call shape is
sketched after this list). (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3649">#3649</a>)</li>
<li>Fixed <code>HTTPConnectionPool</code> when used in Emscripten with
no explicit port. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3664">#3664</a>)</li>
<li>Fixed handling of <code>SSLKEYLOGFILE</code> with expandable
variables. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3700">#3700</a>)</li>
</ul>
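<p>The affected call shape, as a sketch: a plain integer for
<code>retries</code> (rather than a <code>Retry</code> instance) combined
with a redirecting URL. The URL is a placeholder.</p>
<pre lang="python"><code>import urllib3

http = urllib3.PoolManager()
# An int retries value now follows redirects as documented in 2.6.0.
resp = http.request("GET", "https://example.com/redirect", retries=3)
print(resp.status)
</code></pre>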
<h2>Misc</h2>
<ul>
<li>Changed the <code>zstd</code> extra to install
<code>backports.zstd</code> instead of <code>zstandard</code> on Python
3.13 and before. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3693">#3693</a>)</li>
<li>Improved the performance of content decoding by optimizing
<code>BytesQueueBuffer</code> class. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3710">#3710</a>)</li>
<li>Allowed building the urllib3 package with newer setuptools-scm v9.x.
(<a
href="https://redirect.github.com/urllib3/urllib3/issues/3652">#3652</a>)</li>
<li>Ensured successful urllib3 builds by setting Hatchling requirement
to ≥ 1.27.0. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3638">#3638</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="720f484b60"><code>720f484</code></a>
Release 2.6.0</li>
<li><a
href="24d7b67eac"><code>24d7b67</code></a>
Merge commit from fork</li>
<li><a
href="c19571de34"><code>c19571d</code></a>
Merge commit from fork</li>
<li><a
href="816fcf0452"><code>816fcf0</code></a>
Bump actions/setup-python from 6.0.0 to 6.1.0 (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3725">#3725</a>)</li>
<li><a
href="18af0a10ef"><code>18af0a1</code></a>
Improve speed of <code>BytesQueueBuffer.get()</code> by using memoryview
(<a
href="https://redirect.github.com/urllib3/urllib3/issues/3711">#3711</a>)</li>
<li><a
href="1f6abac3e6"><code>1f6abac</code></a>
Bump versions of pre-commit hooks (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3716">#3716</a>)</li>
<li><a
href="1c8fbf787b"><code>1c8fbf7</code></a>
Bump actions/checkout from 5.0.0 to 6.0.0 (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3722">#3722</a>)</li>
<li><a
href="7784b9eee9"><code>7784b9e</code></a>
Add Python 3.15 to CI (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3717">#3717</a>)</li>
<li><a
href="0241c9e728"><code>0241c9e</code></a>
Updated docs to reflect change in optional zstd dependency from
<code>zstandard</code> t...</li>
<li><a
href="7afcabb648"><code>7afcabb</code></a>
Expand environment variable of SSLKEYLOGFILE (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3705">#3705</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/urllib3/urllib3/compare/2.5.0...2.6.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=urllib3&package-manager=pip&previous-version=2.5.0&new-version=2.6.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/Significant-Gravitas/AutoGPT/network/alerts).

</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-09 03:39:05 +00:00
dependabot[bot]
deccc26f1f chore(deps): bump actions/cache from 4 to 5 (#11665)
Bumps [actions/cache](https://github.com/actions/cache) from 4 to 5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/releases">actions/cache's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.0</h2>
<blockquote>
<p>[!IMPORTANT]
<strong><code>actions/cache@v5</code> runs on the Node.js 24 runtime and
requires a minimum Actions Runner version of
<code>2.327.1</code>.</strong></p>
<p>If you are using self-hosted runners, ensure they are updated before
upgrading.</p>
</blockquote>
<hr />
<h2>What's Changed</h2>
<ul>
<li>Upgrade to use node24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1630">actions/cache#1630</a></li>
<li>Prepare v5.0.0 release by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1684">actions/cache#1684</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v4.3.0...v5.0.0">https://github.com/actions/cache/compare/v4.3.0...v5.0.0</a></p>
<h2>v4.3.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Add note on runner versions by <a
href="https://github.com/GhadimiR"><code>@​GhadimiR</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1642">actions/cache#1642</a></li>
<li>Prepare <code>v4.3.0</code> release by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1655">actions/cache#1655</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/GhadimiR"><code>@​GhadimiR</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1642">actions/cache#1642</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v4...v4.3.0">https://github.com/actions/cache/compare/v4...v4.3.0</a></p>
<h2>v4.2.4</h2>
<h2>What's Changed</h2>
<ul>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1620">actions/cache#1620</a></li>
<li>Upgrade <code>@actions/cache</code> to <code>4.0.5</code> and move
<code>@protobuf-ts/plugin</code> to dev dependencies by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1634">actions/cache#1634</a></li>
<li>Prepare release <code>4.2.4</code> by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1636">actions/cache#1636</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/nebuk89"><code>@​nebuk89</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1620">actions/cache#1620</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v4...v4.2.4">https://github.com/actions/cache/compare/v4...v4.2.4</a></p>
<h2>v4.2.3</h2>
<h2>What's Changed</h2>
<ul>
<li>Update to use <code>@​actions/cache</code> 4.0.3 package &amp;
prepare for new release by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1577">actions/cache#1577</a>
(SAS tokens for cache entries are now masked in debug logs)</li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1577">actions/cache#1577</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v4.2.2...v4.2.3">https://github.com/actions/cache/compare/v4.2.2...v4.2.3</a></p>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/blob/main/RELEASES.md">actions/cache's
changelog</a>.</em></p>
<blockquote>
<h1>Releases</h1>
<h2>Changelog</h2>
<h3>5.0.1</h3>
<ul>
<li>Update <code>@azure/storage-blob</code> to <code>^12.29.1</code> via
<code>@actions/cache@5.0.1</code> <a
href="https://redirect.github.com/actions/cache/pull/1685">#1685</a></li>
</ul>
<h3>5.0.0</h3>
<blockquote>
<p>[!IMPORTANT]
<code>actions/cache@v5</code> runs on the Node.js 24 runtime and
requires a minimum Actions Runner version of <code>2.327.1</code>.
If you are using self-hosted runners, ensure they are updated before
upgrading.</p>
</blockquote>
<h3>4.3.0</h3>
<ul>
<li>Bump <code>@actions/cache</code> to <a
href="https://redirect.github.com/actions/toolkit/pull/2132">v4.1.0</a></li>
</ul>
<h3>4.2.4</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.5</li>
</ul>
<h3>4.2.3</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.3 (obfuscates SAS token in
debug logs for cache entries)</li>
</ul>
<h3>4.2.2</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.2</li>
</ul>
<h3>4.2.1</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.1</li>
</ul>
<h3>4.2.0</h3>
<p>TLDR; The cache backend service has been rewritten from the ground up
for improved performance and reliability. <a
href="https://github.com/actions/cache">actions/cache</a> now integrates
with the new cache service (v2) APIs.</p>
<p>The new service will gradually roll out as of <strong>February 1st,
2025</strong>. The legacy service will also be sunset on the same date.
Changes in this release are <strong>fully backward
compatible</strong>.</p>
<p><strong>We are deprecating some versions of this action</strong>. We
recommend upgrading to version <code>v4</code> or <code>v3</code> as
soon as possible before <strong>February 1st, 2025.</strong> (Upgrade
instructions below).</p>
<p>If you are using pinned SHAs, please use the SHAs of versions
<code>v4.2.0</code> or <code>v3.4.0</code></p>
<p>If you do not upgrade, all workflow runs using any of the deprecated
<a href="https://github.com/actions/cache">actions/cache</a> will
fail.</p>
<p>Upgrading to the recommended versions will not break your
workflows.</p>
<h3>4.1.2</h3>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9255dc7a25"><code>9255dc7</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1686">#1686</a>
from actions/cache-v5.0.1-release</li>
<li><a
href="8ff5423e8b"><code>8ff5423</code></a>
chore: release v5.0.1</li>
<li><a
href="9233019a15"><code>9233019</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1685">#1685</a>
from salmanmkc/node24-storage-blob-fix</li>
<li><a
href="b975f2bb84"><code>b975f2b</code></a>
fix: add peer property to package-lock.json for dependencies</li>
<li><a
href="d0a0e18134"><code>d0a0e18</code></a>
fix: update license files for <code>@​actions/cache</code>,
fast-xml-parser, and strnum</li>
<li><a
href="74de208dcf"><code>74de208</code></a>
fix: update <code>@​actions/cache</code> to ^5.0.1 for Node.js 24
punycode fix</li>
<li><a
href="ac7f1152ea"><code>ac7f115</code></a>
peer</li>
<li><a
href="b0f846b50b"><code>b0f846b</code></a>
fix: update <code>@​actions/cache</code> with storage-blob fix for
Node.js 24 punycode depr...</li>
<li><a
href="a783357455"><code>a783357</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1684">#1684</a>
from actions/prepare-cache-v5-release</li>
<li><a
href="3bb0d78750"><code>3bb0d78</code></a>
docs: highlight v5 runner requirement in releases</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/cache/compare/v4...v5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/cache&package-manager=github_actions&previous-version=4&new-version=5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
> been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-09 03:28:23 +00:00
dependabot[bot]
9e38bd5b78 chore(backend/deps): bump the production-dependencies group across 1 directory with 8 updates (#12014)
Bumps the production-dependencies group with 8 updates in the
/autogpt_platform/backend directory:

| Package | From | To |
| --- | --- | --- |
| [anthropic](https://github.com/anthropics/anthropic-sdk-python) | `0.59.0` | `0.79.0` |
| [fastapi](https://github.com/fastapi/fastapi) | `0.128.3` | `0.128.5` |
| [ollama](https://github.com/ollama/ollama-python) | `0.5.4` | `0.6.1` |
| [prometheus-client](https://github.com/prometheus/client_python) | `0.22.1` | `0.24.1` |
| [python-multipart](https://github.com/Kludex/python-multipart) | `0.0.20` | `0.0.22` |
| [supabase](https://github.com/supabase/supabase-py) | `2.27.2` | `2.27.3` |
| [tenacity](https://github.com/jd/tenacity) | `9.1.3` | `9.1.4` |
| [tiktoken](https://github.com/openai/tiktoken) | `0.9.0` | `0.12.0` |


Updates `anthropic` from 0.59.0 to 0.79.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/anthropics/anthropic-sdk-python/releases">anthropic's
releases</a>.</em></p>
<blockquote>
<h2>v0.79.0</h2>
<h2>0.79.0 (2026-02-07)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.78.0...v0.79.0">v0.78.0...v0.79.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> enabling fast-mode in claude-opus-4-6 (<a
href="5953ba7b42">5953ba7</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>pass speed parameter through in sync beta count_tokens (<a
href="1dd6119dac">1dd6119</a>)</li>
</ul>
<h2>v0.78.0</h2>
<h2>0.78.0 (2026-02-05)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.77.1...v0.78.0">v0.77.1...v0.78.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> Release Claude Opus 4.6, adaptive thinking,
and other features (<a
href="3ef1529b45">3ef1529</a>)</li>
</ul>
<h2>v0.77.1</h2>
<h2>0.77.1 (2026-02-03)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.77.0...v0.77.1">v0.77.0...v0.77.1</a></p>
<h3>Bug Fixes</h3>
<ul>
<li><strong>structured outputs:</strong> send structured output beta
header when format is omitted (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1158">#1158</a>)
(<a
href="258494e2b8">258494e</a>)</li>
</ul>
<h3>Chores</h3>
<ul>
<li>remove claude-code-review workflow (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1338">#1338</a>)
(<a
href="aec4512305">aec4512</a>)</li>
</ul>
<h2>v0.77.0</h2>
<h2>0.77.0 (2026-01-29)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.76.0...v0.77.0">v0.76.0...v0.77.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> add support for Structured Outputs in the
Messages API (<a
href="ad5667774a">ad56677</a>)</li>
<li><strong>api:</strong> migrate sending message format in
output_config rather than output_format (<a
href="af405e473f">af405e4</a>)</li>
<li><strong>client:</strong> add custom JSON encoder for extended type
support (<a
href="7780e90bd2">7780e90</a>)</li>
<li>use output_config for structured outputs (<a
href="82d669db65">82d669d</a>)</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/anthropics/anthropic-sdk-python/blob/main/CHANGELOG.md">anthropic's
changelog</a>.</em></p>
<blockquote>
<h2>0.79.0 (2026-02-07)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.78.0...v0.79.0">v0.78.0...v0.79.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> enabling fast-mode in claude-opus-4-6 (<a
href="5953ba7b42">5953ba7</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>pass speed parameter through in sync beta count_tokens (<a
href="1dd6119dac">1dd6119</a>)</li>
</ul>
<h2>0.78.0 (2026-02-05)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.77.1...v0.78.0">v0.77.1...v0.78.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> Release Claude Opus 4.6, adaptive thinking,
and other features (<a
href="3ef1529b45">3ef1529</a>)</li>
</ul>
<h2>0.77.1 (2026-02-03)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.77.0...v0.77.1">v0.77.0...v0.77.1</a></p>
<h3>Bug Fixes</h3>
<ul>
<li><strong>structured outputs:</strong> send structured output beta
header when format is omitted (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1158">#1158</a>)
(<a
href="258494e2b8">258494e</a>)</li>
</ul>
<h3>Chores</h3>
<ul>
<li>remove claude-code-review workflow (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1338">#1338</a>)
(<a
href="aec4512305">aec4512</a>)</li>
</ul>
<h2>0.77.0 (2026-01-29)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.76.0...v0.77.0">v0.76.0...v0.77.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> add support for Structured Outputs in the
Messages API (<a
href="ad5667774a">ad56677</a>)</li>
<li><strong>api:</strong> migrate sending message format in
output_config rather than output_format (<a
href="af405e473f">af405e4</a>)</li>
<li><strong>client:</strong> add custom JSON encoder for extended type
support (<a
href="7780e90bd2">7780e90</a>)</li>
<li>use output_config for structured outputs (<a
href="82d669db65">82d669d</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>client:</strong> run formatter (<a
href="2e4ff86d7b">2e4ff86</a>)</li>
<li>remove class causing breaking change (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1333">#1333</a>)
(<a
href="81ee9533d1">81ee953</a>)</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="cd1b39bf07"><code>cd1b39b</code></a>
release: 0.79.0</li>
<li><a
href="fb52a6a09d"><code>fb52a6a</code></a>
fix: pass speed parameter through in sync beta count_tokens</li>
<li><a
href="b7c2df239d"><code>b7c2df2</code></a>
feat(api): enabling fast-mode in claude-opus-4-6</li>
<li><a
href="7c42e4b04b"><code>7c42e4b</code></a>
Update CHANGELOG.md (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1163">#1163</a>)</li>
<li><a
href="f2b61ed11c"><code>f2b61ed</code></a>
release: 0.78.0</li>
<li><a
href="a4a29cab92"><code>a4a29ca</code></a>
feat(api): manual updates</li>
<li><a
href="3955600d74"><code>3955600</code></a>
release: 0.77.1</li>
<li><a
href="eca8ddfb19"><code>eca8ddf</code></a>
fix(structured outputs): send structured output beta header when format
is om...</li>
<li><a
href="ee44c52131"><code>ee44c52</code></a>
chore: remove claude-code-review workflow (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1338">#1338</a>)</li>
<li><a
href="9c485f6966"><code>9c485f6</code></a>
release: 0.77.0 (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/1117">#1117</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.59.0...v0.79.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `fastapi` from 0.128.3 to 0.128.5
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/fastapi/fastapi/releases">fastapi's
releases</a>.</em></p>
<blockquote>
<h2>0.128.5</h2>
<h3>Refactors</h3>
<ul>
<li>♻️ Refactor and simplify Pydantic v2 (and v1) compatibility internal
utils. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14862">#14862</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Internal</h3>
<ul>
<li>✅ Add inline snapshot tests for OpenAPI before changes from Pydantic
v2. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14864">#14864</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h2>0.128.4</h2>
<h3>Refactors</h3>
<ul>
<li>♻️ Refactor internals, simplify Pydantic v2/v1 utils,
<code>create_model_field</code>, better types for
<code>lenient_issubclass</code>. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14860">#14860</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
<li>♻️ Simplify internals, remove Pydantic v1 only logic, no longer
needed. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14857">#14857</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
<li>♻️ Refactor internals, cleanup unneeded Pydantic v1 specific logic.
PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14856">#14856</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
<h3>Translations</h3>
<ul>
<li>🌐 Update translations for fr (outdated pages). PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14839">#14839</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
<li>🌐 Update translations for tr (outdated and missing). PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14838">#14838</a>
by <a
href="https://github.com/YuriiMotov"><code>@​YuriiMotov</code></a>.</li>
</ul>
<h3>Internal</h3>
<ul>
<li>⬆️ Upgrade development dependencies. PR <a
href="https://redirect.github.com/fastapi/fastapi/pull/14854">#14854</a>
by <a
href="https://github.com/tiangolo"><code>@​tiangolo</code></a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="dedf1409fe"><code>dedf140</code></a>
🔖 Release version 0.128.5</li>
<li><a
href="79d4dfb37f"><code>79d4dfb</code></a>
📝 Update release notes</li>
<li><a
href="9f4ecf562c"><code>9f4ecf5</code></a>
✅ Add inline snapshot tests for OpenAPI before changes from Pydantic v2
(<a
href="https://redirect.github.com/fastapi/fastapi/issues/14864">#14864</a>)</li>
<li><a
href="c48539f4c6"><code>c48539f</code></a>
📝 Update release notes</li>
<li><a
href="2e7d3754cd"><code>2e7d375</code></a>
♻️ Refactor and simplify Pydantic v2 (and v1) compatibility internal
utils (#...</li>
<li><a
href="8eac94bd91"><code>8eac94b</code></a>
🔖 Release version 0.128.4</li>
<li><a
href="58cdfc7f4b"><code>58cdfc7</code></a>
📝 Update release notes</li>
<li><a
href="d59fbc3494"><code>d59fbc3</code></a>
♻️ Refactor internals, simplify Pydantic v2/v1 utils,
<code>create_model_field</code>, b...</li>
<li><a
href="cc6ced6345"><code>cc6ced6</code></a>
📝 Update release notes</li>
<li><a
href="cf55bade7e"><code>cf55bad</code></a>
♻️ Simplify internals, remove Pydantic v1 only logic, no longer needed
(<a
href="https://redirect.github.com/fastapi/fastapi/issues/14857">#14857</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/fastapi/fastapi/compare/0.128.3...0.128.5">compare
view</a></li>
</ul>
</details>
<br />

Updates `ollama` from 0.5.4 to 0.6.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/ollama/ollama-python/releases">ollama's
releases</a>.</em></p>
<blockquote>
<h2>v0.6.1</h2>
<h2>What's Changed</h2>
<ul>
<li>client/types: add logprobs support by <a
href="https://github.com/ParthSareen"><code>@​ParthSareen</code></a> in
<a
href="https://redirect.github.com/ollama/ollama-python/pull/601">ollama/ollama-python#601</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/ollama/ollama-python/compare/v0.6.0...v0.6.1">https://github.com/ollama/ollama-python/compare/v0.6.0...v0.6.1</a></p>
<h2>v0.6.0</h2>
<h2>What's Changed</h2>
<ul>
<li>
<p>client: add web search and web crawl capabilities by <a
href="https://github.com/ParthSareen"><code>@​ParthSareen</code></a> in
<a
href="https://redirect.github.com/ollama/ollama-python/pull/578">ollama/ollama-python#578</a></p>
</li>
<li>
<p>client: load OLLAMA_API_KEY on init by <a
href="https://github.com/ParthSareen"><code>@​ParthSareen</code></a> in
<a
href="https://redirect.github.com/ollama/ollama-python/pull/583">ollama/ollama-python#583</a></p>
</li>
<li>
<p>client/types: update web search and fetch API by <a
href="https://github.com/npardal"><code>@​npardal</code></a> in <a
href="https://redirect.github.com/ollama/ollama-python/pull/584">ollama/ollama-python#584</a></p>
</li>
<li>
<p>examples: add mcp server for web_search web_crawl by <a
href="https://github.com/ParthSareen"><code>@​ParthSareen</code></a> in
<a
href="https://redirect.github.com/ollama/ollama-python/pull/585">ollama/ollama-python#585</a></p>
</li>
<li>
<p>examples: gpt oss browser tool by <a
href="https://github.com/ParthSareen"><code>@​ParthSareen</code></a> in
<a
href="https://redirect.github.com/ollama/ollama-python/pull/588">ollama/ollama-python#588</a></p>
</li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/npardal"><code>@​npardal</code></a> made
their first contribution in <a
href="https://redirect.github.com/ollama/ollama-python/pull/584">ollama/ollama-python#584</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/ollama/ollama-python/compare/v0.5.4...v0.6.0">https://github.com/ollama/ollama-python/compare/v0.5.4...v0.6.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0008226fda"><code>0008226</code></a>
client/types: add logprobs support (<a
href="https://redirect.github.com/ollama/ollama-python/issues/601">#601</a>)</li>
<li><a
href="9ddd5f0182"><code>9ddd5f0</code></a>
examples: fix model web search (<a
href="https://redirect.github.com/ollama/ollama-python/issues/589">#589</a>)</li>
<li><a
href="d967f048d9"><code>d967f04</code></a>
examples: gpt oss browser tool (<a
href="https://redirect.github.com/ollama/ollama-python/issues/588">#588</a>)</li>
<li><a
href="ab49a669cd"><code>ab49a66</code></a>
examples: add mcp server for web_search web_crawl (<a
href="https://redirect.github.com/ollama/ollama-python/issues/585">#585</a>)</li>
<li><a
href="16f344f635"><code>16f344f</code></a>
client/types: update web search and fetch API (<a
href="https://redirect.github.com/ollama/ollama-python/issues/584">#584</a>)</li>
<li><a
href="d0f71bc8b8"><code>d0f71bc</code></a>
client: load OLLAMA_API_KEY on init (<a
href="https://redirect.github.com/ollama/ollama-python/issues/583">#583</a>)</li>
<li><a
href="b22c5fdabb"><code>b22c5fd</code></a>
init: fix export for web_search (<a
href="https://redirect.github.com/ollama/ollama-python/issues/581">#581</a>)</li>
<li><a
href="4d0b81b37a"><code>4d0b81b</code></a>
client: add web search and web crawl capabilities (<a
href="https://redirect.github.com/ollama/ollama-python/issues/578">#578</a>)</li>
<li>See full diff in <a
href="https://github.com/ollama/ollama-python/compare/v0.5.4...v0.6.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `prometheus-client` from 0.22.1 to 0.24.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/prometheus/client_python/releases">prometheus-client's
releases</a>.</em></p>
<blockquote>
<h2>v0.24.1</h2>
<ul>
<li>[Django] Pass correct registry to MultiProcessCollector by <a
href="https://github.com/jelly"><code>@​jelly</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1152">prometheus/client_python#1152</a></li>
</ul>
<h2>v0.24.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Add an AIOHTTP exporter by <a
href="https://github.com/Lexicality"><code>@​Lexicality</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1139">prometheus/client_python#1139</a></li>
<li>Add remove_matching() method for metric label deletion by <a
href="https://github.com/hazel-shen"><code>@​hazel-shen</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1121">prometheus/client_python#1121</a></li>
<li>fix(multiprocess): avoid double-building child metric names (<a
href="https://redirect.github.com/prometheus/client_python/issues/1035">#1035</a>)
by <a href="https://github.com/hazel-shen"><code>@​hazel-shen</code></a>
in <a
href="https://redirect.github.com/prometheus/client_python/pull/1146">prometheus/client_python#1146</a></li>
<li>Don't interleave histogram metrics in multi-process collector by <a
href="https://github.com/cjwatson"><code>@​cjwatson</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1148">prometheus/client_python#1148</a></li>
<li>Relax registry type annotations for exposition by <a
href="https://github.com/cjwatson"><code>@​cjwatson</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1149">prometheus/client_python#1149</a></li>
<li>Added compression support in pushgateway by <a
href="https://github.com/ritesh-avesha"><code>@​ritesh-avesha</code></a>
in <a
href="https://redirect.github.com/prometheus/client_python/pull/1144">prometheus/client_python#1144</a></li>
<li>Add Django exporter (<a
href="https://redirect.github.com/prometheus/client_python/issues/1088">#1088</a>)
by <a href="https://github.com/Chadys"><code>@​Chadys</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1143">prometheus/client_python#1143</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/prometheus/client_python/compare/v0.23.1...v0.24.0">https://github.com/prometheus/client_python/compare/v0.23.1...v0.24.0</a></p>
<h2>v0.23.1</h2>
<h2>What's Changed</h2>
<ul>
<li>fix: use tuples instead of packaging Version by <a
href="https://github.com/efiop"><code>@​efiop</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1136">prometheus/client_python#1136</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/efiop"><code>@​efiop</code></a> made
their first contribution in <a
href="https://redirect.github.com/prometheus/client_python/pull/1136">prometheus/client_python#1136</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/prometheus/client_python/compare/v0.23.0...v0.23.1">https://github.com/prometheus/client_python/compare/v0.23.0...v0.23.1</a></p>
<h2>v0.23.0</h2>
<h2>What's Changed</h2>
<ul>
<li>UTF-8 Content Negotiation by <a
href="https://github.com/ywwg"><code>@​ywwg</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1102">prometheus/client_python#1102</a></li>
<li>Re include test data by <a
href="https://github.com/mgorny"><code>@​mgorny</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1113">prometheus/client_python#1113</a></li>
<li>Improve parser performance by <a
href="https://github.com/csmarchbanks"><code>@​csmarchbanks</code></a>
in <a
href="https://redirect.github.com/prometheus/client_python/pull/1117">prometheus/client_python#1117</a></li>
<li>Add support to <code>write_to_textfile</code> for custom tmpdir by
<a
href="https://github.com/aadityadhruv"><code>@​aadityadhruv</code></a>
in <a
href="https://redirect.github.com/prometheus/client_python/pull/1115">prometheus/client_python#1115</a></li>
<li>OM text exposition for NH by <a
href="https://github.com/vesari"><code>@​vesari</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1087">prometheus/client_python#1087</a></li>
<li>Fix bug which caused metric publishing to not accept query string
parameters in ASGI app by <a
href="https://github.com/hacksparr0w"><code>@​hacksparr0w</code></a> in
<a
href="https://redirect.github.com/prometheus/client_python/pull/1125">prometheus/client_python#1125</a></li>
<li>Emit native histograms only when OM 2.0.0 is requested by <a
href="https://github.com/vesari"><code>@​vesari</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1128">prometheus/client_python#1128</a></li>
<li>fix: remove space after comma in openmetrics exposition by <a
href="https://github.com/theSuess"><code>@​theSuess</code></a> in <a
href="https://redirect.github.com/prometheus/client_python/pull/1132">prometheus/client_python#1132</a></li>
<li>Fix issue parsing double spaces after # HELP/# TYPE by <a
href="https://github.com/csmarchbanks"><code>@​csmarchbanks</code></a>
in <a
href="https://redirect.github.com/prometheus/client_python/pull/1134">prometheus/client_python#1134</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/mgorny"><code>@​mgorny</code></a> made
their first contribution in <a
href="https://redirect.github.com/prometheus/client_python/pull/1113">prometheus/client_python#1113</a></li>
<li><a
href="https://github.com/aadityadhruv"><code>@​aadityadhruv</code></a>
made their first contribution in <a
href="https://redirect.github.com/prometheus/client_python/pull/1115">prometheus/client_python#1115</a></li>
<li><a
href="https://github.com/hacksparr0w"><code>@​hacksparr0w</code></a>
made their first contribution in <a
href="https://redirect.github.com/prometheus/client_python/pull/1125">prometheus/client_python#1125</a></li>
<li><a href="https://github.com/theSuess"><code>@​theSuess</code></a>
made their first contribution in <a
href="https://redirect.github.com/prometheus/client_python/pull/1132">prometheus/client_python#1132</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/prometheus/client_python/compare/v0.22.1...v0.23.0">https://github.com/prometheus/client_python/compare/v0.22.1...v0.23.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f417f6ea8f"><code>f417f6e</code></a>
Release 0.24.1</li>
<li><a
href="6f0e967c1f"><code>6f0e967</code></a>
Pass correct registry to MultiProcessCollector (<a
href="https://redirect.github.com/prometheus/client_python/issues/1152">#1152</a>)</li>
<li><a
href="c5024d310f"><code>c5024d3</code></a>
Release 0.24.0</li>
<li><a
href="e1cdc203b1"><code>e1cdc20</code></a>
Add Django exporter (<a
href="https://redirect.github.com/prometheus/client_python/issues/1088">#1088</a>)
(<a
href="https://redirect.github.com/prometheus/client_python/issues/1143">#1143</a>)</li>
<li><a
href="7b99592094"><code>7b99592</code></a>
Added compression support in pushgateway (<a
href="https://redirect.github.com/prometheus/client_python/issues/1144">#1144</a>)</li>
<li><a
href="13df12421e"><code>13df124</code></a>
Relax registry type annotations for exposition (<a
href="https://redirect.github.com/prometheus/client_python/issues/1149">#1149</a>)</li>
<li><a
href="a264ec0d85"><code>a264ec0</code></a>
Don't interleave histogram metrics in multi-process collector (<a
href="https://redirect.github.com/prometheus/client_python/issues/1148">#1148</a>)</li>
<li><a
href="e8f8bae655"><code>e8f8bae</code></a>
fix(multiprocess): avoid double-building child metric names (<a
href="https://redirect.github.com/prometheus/client_python/issues/1035">#1035</a>)
(<a
href="https://redirect.github.com/prometheus/client_python/issues/1146">#1146</a>)</li>
<li><a
href="1783ca87ac"><code>1783ca8</code></a>
Add support for Python 3.14 (<a
href="https://redirect.github.com/prometheus/client_python/issues/1142">#1142</a>)</li>
<li><a
href="378510b8ae"><code>378510b</code></a>
Add remove_matching() method for metric label deletion (<a
href="https://redirect.github.com/prometheus/client_python/issues/1121">#1121</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/prometheus/client_python/compare/v0.22.1...v0.24.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `python-multipart` from 0.0.20 to 0.0.22
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/Kludex/python-multipart/releases">python-multipart's
releases</a>.</em></p>
<blockquote>
<h2>Version 0.0.22</h2>
<h2>What's Changed</h2>
<ul>
<li>Drop directory path from filename in <code>File</code> <a
href="9433f4bbc9">9433f4b</a>.</li>
</ul>
<hr />
<p><strong>Full Changelog</strong>: <a
href="https://github.com/Kludex/python-multipart/compare/0.0.21...0.0.22">https://github.com/Kludex/python-multipart/compare/0.0.21...0.0.22</a></p>
<h2>Version 0.0.21</h2>
<h2>What's Changed</h2>
<ul>
<li>Add support for Python 3.14 and drop EOL 3.8 and 3.9 by <a
href="https://github.com/hugovk"><code>@​hugovk</code></a> in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/216">Kludex/python-multipart#216</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/waketzheng"><code>@​waketzheng</code></a> made
their first contribution in <a
href="https://redirect.github.com/Kludex/python-multipart/pull/203">Kludex/python-multipart#203</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/Kludex/python-multipart/compare/0.0.20...0.0.21">https://github.com/Kludex/python-multipart/compare/0.0.20...0.0.21</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md">python-multipart's
changelog</a>.</em></p>
<blockquote>
<h2>0.0.22 (2026-01-25)</h2>
<ul>
<li>Drop directory path from filename in <code>File</code> <a
href="9433f4bbc9">9433f4b</a>.</li>
</ul>
<h2>0.0.21 (2025-12-17)</h2>
<ul>
<li>Add support for Python 3.14 and drop EOL 3.8 and 3.9 <a
href="https://redirect.github.com/Kludex/python-multipart/pull/216">#216</a>.</li>
</ul>
</blockquote>
</details>
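
The 0.0.22 fix above ("Drop directory path from filename") is filename-sanitization hardening: a client-supplied filename such as `../../etc/passwd` must not smuggle directory components into wherever the upload lands. A minimal sketch of the idea, assuming basename-style stripping rather than python-multipart's actual implementation:

```python
import os

def safe_filename(raw: str) -> str:
    # Strip directory components, including Windows-style separators,
    # so only the base name of the uploaded file survives.
    return os.path.basename(raw.replace("\\", "/"))

assert safe_filename("../../etc/passwd") == "passwd"
assert safe_filename("..\\..\\boot.ini") == "boot.ini"
```
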
<details>
<summary>Commits</summary>
<ul>
<li><a
href="bea7bbb290"><code>bea7bbb</code></a>
Version 0.0.22 (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/222">#222</a>)</li>
<li><a
href="0fb59a9df0"><code>0fb59a9</code></a>
chore: add return type on test (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/221">#221</a>)</li>
<li><a
href="9433f4bbc9"><code>9433f4b</code></a>
Merge commit from fork</li>
<li><a
href="d5c91ecb0a"><code>d5c91ec</code></a>
Bump the github-actions group with 2 updates (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/219">#219</a>)</li>
<li><a
href="5a90631b48"><code>5a90631</code></a>
bump uv (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/218">#218</a>)</li>
<li><a
href="1f72955602"><code>1f72955</code></a>
Version 0.0.21 (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/217">#217</a>)</li>
<li><a
href="47ecfed353"><code>47ecfed</code></a>
Add support for Python 3.14 and drop EOL 3.8 and 3.9 (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/216">#216</a>)</li>
<li><a
href="f18b70941b"><code>f18b709</code></a>
Bump the github-actions group across 1 directory with 4 updates (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/214">#214</a>)</li>
<li><a
href="b388e9a7a8"><code>b388e9a</code></a>
chore: use dependency-groups in <code>pyproject.toml</code> (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/212">#212</a>)</li>
<li><a
href="6113e75097"><code>6113e75</code></a>
Bump the github-actions group across 1 directory with 3 updates (<a
href="https://redirect.github.com/Kludex/python-multipart/issues/210">#210</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/Kludex/python-multipart/compare/0.0.20...0.0.22">compare
view</a></li>
</ul>
</details>
<br />

Updates `supabase` from 2.27.2 to 2.27.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-py/releases">supabase's
releases</a>.</em></p>
<blockquote>
<h2>v2.27.3</h2>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.27.2...v2.27.3">2.27.3</a>
(2026-02-03)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>deprecate python 3.9 in all packages (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1365">#1365</a>)
(<a
href="cc72ed75d4">cc72ed7</a>)</li>
<li>ensure storage_url has trailing slash to prevent warning (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1367">#1367</a>)
(<a
href="4267ff1345">4267ff1</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-py/blob/main/CHANGELOG.md">supabase's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.27.2...v2.27.3">2.27.3</a>
(2026-02-03)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>deprecate python 3.9 in all packages (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1365">#1365</a>)
(<a
href="cc72ed75d4">cc72ed7</a>)</li>
<li>ensure storage_url has trailing slash to prevent warning (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1367">#1367</a>)
(<a
href="4267ff1345">4267ff1</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c357def670"><code>c357def</code></a>
chore(main): release 2.27.3 (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1368">#1368</a>)</li>
<li><a
href="4267ff1345"><code>4267ff1</code></a>
fix: ensure storage_url has trailing slash to prevent warning (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1367">#1367</a>)</li>
<li><a
href="cc72ed75d4"><code>cc72ed7</code></a>
fix: deprecate python 3.9 in all packages (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1365">#1365</a>)</li>
<li><a
href="9d3620da64"><code>9d3620d</code></a>
chore(realtime): move most 'info' level logs into 'debug' (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1358">#1358</a>)</li>
<li><a
href="30f5e84022"><code>30f5e84</code></a>
Upgrade GitHub Actions for Node 24 compatibility (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1357">#1357</a>)</li>
<li><a
href="1df3afcd7c"><code>1df3afc</code></a>
chore(ci): add python package to ci matrix (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1351">#1351</a>)</li>
<li>See full diff in <a
href="https://github.com/supabase/supabase-py/compare/v2.27.2...v2.27.3">compare
view</a></li>
</ul>
</details>
<br />

Updates `tenacity` from 9.1.3 to 9.1.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/jd/tenacity/releases">tenacity's
releases</a>.</em></p>
<blockquote>
<h2>9.1.4</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix <code>retry()</code> annotations with async <code>sleep=</code>
function by <a
href="https://github.com/Zac-HD"><code>@​Zac-HD</code></a> in <a
href="https://redirect.github.com/jd/tenacity/pull/555">jd/tenacity#555</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/jd/tenacity/compare/9.1.3...9.1.4">https://github.com/jd/tenacity/compare/9.1.3...9.1.4</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d4e868d6b8"><code>d4e868d</code></a>
Fix <code>retry()</code> annotations with async <code>sleep=</code>
function (<a
href="https://redirect.github.com/jd/tenacity/issues/555">#555</a>)</li>
<li>See full diff in <a
href="https://github.com/jd/tenacity/compare/9.1.3...9.1.4">compare
view</a></li>
</ul>
</details>
<br />

Updates `tiktoken` from 0.9.0 to 0.12.0
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/openai/tiktoken/blob/main/CHANGELOG.md">tiktoken's
changelog</a>.</em></p>
<blockquote>
<h2>[v0.12.0]</h2>
<ul>
<li>Build wheels for Python 3.14</li>
<li>Build musllinux aarch64 wheels</li>
<li>Support for free-threaded Python</li>
<li>Update version of <code>pyo3</code> and <code>rustc-hash</code></li>
<li>Avoid use of <code>blobfile</code> for reading local files</li>
<li>Recognise <code>gpt-5</code> model identifier</li>
<li>Minor performance improvement for file reading</li>
</ul>
<h2>[v0.11.0]</h2>
<ul>
<li>Support for <code>GPT-5</code></li>
<li>Update version of <code>pyo3</code></li>
<li>Use new Rust edition</li>
<li>Fix special token handling in <code>encode_to_numpy</code></li>
<li>Better error handling</li>
<li>Improvements to private APIs</li>
</ul>
<h2>[v0.10.0]</h2>
<ul>
<li>Support for newer models</li>
<li>Improvements to private APIs</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="97e49cbadd"><code>97e49cb</code></a>
Release 0.12.0</li>
<li><a
href="948549882b"><code>9485498</code></a>
Partial sync of codebase (<a
href="https://redirect.github.com/openai/tiktoken/issues/451">#451</a>)</li>
<li><a
href="00ff187f59"><code>00ff187</code></a>
Add GPT-5 model support with o200k_base encoding (<a
href="https://redirect.github.com/openai/tiktoken/issues/440">#440</a>)</li>
<li><a
href="5ee89ca1fa"><code>5ee89ca</code></a>
chore: update dependencies (<a
href="https://redirect.github.com/openai/tiktoken/issues/449">#449</a>)</li>
<li><a
href="2ab6d3706d"><code>2ab6d37</code></a>
Support the free-threaded build (<a
href="https://redirect.github.com/openai/tiktoken/issues/443">#443</a>)</li>
<li><a
href="82dc3bbacc"><code>82dc3bb</code></a>
bump PyO3 version (<a
href="https://redirect.github.com/openai/tiktoken/issues/444">#444</a>)</li>
<li><a
href="eedc856364"><code>eedc856</code></a>
Partial sync of codebase</li>
<li><a
href="5818d56626"><code>5818d56</code></a>
Partial sync of codebase</li>
<li><a
href="3591ff175d"><code>3591ff1</code></a>
Sync codebase</li>
<li><a
href="4560a8896f"><code>4560a88</code></a>
Sync codebase (<a
href="https://redirect.github.com/openai/tiktoken/issues/389">#389</a>)</li>
<li>See full diff in <a
href="https://github.com/openai/tiktoken/compare/0.9.0...0.12.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the specified ignore condition for that dependency


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Otto <otto@agpt.co>
2026-02-09 03:28:22 +00:00
Otto
a329831b0b feat(backend): Add ClamAV scanning for local file paths (#11988)
## Context

From PR #11796 review discussion. Files processed by the video blocks
(downloads, uploads, generated videos) should be scanned through ClamAV
for malware detection.

## Problem

`store_media_file()` in `backend/util/file.py` already scans:
- `workspace://` references
- Cloud storage paths  
- Data URIs (`data:...`)
- HTTP/HTTPS URLs

**But local file paths were NOT scanned.** The `else` branch only
verified the file exists.

This gap affected video processing blocks (e.g., `LoopVideoBlock`,
`AddAudioToVideoBlock`) that:
1. Download/receive input media
2. Process it locally (loop, add audio, etc.)
3. Write output to temp directory
4. Call `store_media_file(output_filename, ...)` with a local path →
**skipped virus scanning**

## Solution

Added virus scanning to the local file path branch:
```python
# Virus scan the local file before any further processing
local_content = target_path.read_bytes()
if len(local_content) > MAX_FILE_SIZE_BYTES:
    raise ValueError(...)
await scan_content_safe(local_content, filename=sanitized_file)
```
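
For reference, `scan_content_safe` presumably delegates to a ClamAV daemon; the classic way to scan an in-memory payload against `clamd` is the INSTREAM command. A minimal sketch under that assumption (host, port, and chunk size here are illustrative, not the repo's configuration):

```python
import socket
import struct

def clamd_instream_scan(content: bytes, host: str = "127.0.0.1", port: int = 3310) -> str:
    """Stream bytes to clamd via INSTREAM and return its verdict line."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"zINSTREAM\0")
        # The payload is sent as length-prefixed chunks (network byte order).
        for i in range(0, len(content), 8192):
            chunk = content[i : i + 8192]
            sock.sendall(struct.pack("!I", len(chunk)) + chunk)
        sock.sendall(struct.pack("!I", 0))  # zero-length chunk ends the stream
        reply = sock.recv(4096)
    # e.g. "stream: OK" or "stream: Eicar-Signature FOUND"
    return reply.decode(errors="replace").strip("\0\n")
```
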

## Changes

- `backend/util/file.py` - Added ~7 lines to scan local files
(consistent with other input types)
- `backend/util/file_test.py` - Added 2 test cases for local file
scanning

## Risk Assessment

- **Low risk:** Single point of change, follows existing pattern
- **Backwards compatible:** No API changes
- **Fail-safe:** If scanning fails, file is rejected (existing behavior)

Closes SECRT-1904

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-02-09 00:24:18 +00:00
dependabot[bot]
98dd1a9480 chore(libs/deps): Bump cryptography from 45.0.6 to 46.0.1 in /autogpt_platform/autogpt_libs (#10968)
Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.6
to 46.0.1.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's
changelog</a>.</em></p>
<blockquote>
<p>46.0.1 - 2025-09-16</p>
<ul>
<li>Fixed an issue where users installing via <code>pip</code> on Python 3.14
development versions would not properly install a dependency.</li>
<li>Fixed an issue building the free-threaded macOS 3.14 wheels.</li>
</ul>
<p>46.0.0 - 2025-09-16</p>
<ul>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Support for Python 3.7 has
been removed.</li>
<li>Support for OpenSSL &lt; 3.0 is deprecated and will be removed in
the next
release.</li>
<li>Support for <code>x86_64</code> macOS (including publishing wheels)
is deprecated
and will be removed in two releases. We will switch to publishing an
<code>arm64</code> only wheel for macOS.</li>
<li>Support for 32-bit Windows (including publishing wheels) is
deprecated
and will be removed in two releases. Users should move to a 64-bit
Python installation.</li>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL
3.5.3.</li>
<li>We now build <code>ppc64le</code> <code>manylinux</code> wheels and
publish them to PyPI.</li>
<li>We now build <code>win_arm64</code> (Windows on Arm) wheels and
publish them to PyPI.</li>
<li>Added support for free-threaded Python 3.14.</li>
<li>Removed the deprecated <code>get_attribute_for_oid</code> method on
:class:<code>~cryptography.x509.CertificateSigningRequest</code>. Users
should use
:meth:<code>~cryptography.x509.Attributes.get_attribute_for_oid</code>
instead.</li>
<li>Removed the deprecated <code>CAST5</code>, <code>SEED</code>,
<code>IDEA</code>, and <code>Blowfish</code>
classes from the cipher module. These are still available in
:doc:<code>/hazmat/decrepit/index</code>.</li>
<li>In X.509, when performing a PSS signature with a SHA-3 hash, it is
now
encoded with the official NIST SHA3 OID.</li>
</ul>
<p>45.0.7 - 2025-09-01</p>
<ul>
<li>Added a function to support an upcoming <code>pyOpenSSL</code> release.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e735cfc275"><code>e735cfc</code></a>
release 46.0.1 (<a
href="https://redirect.github.com/pyca/cryptography/issues/13450">#13450</a>)</li>
<li><a
href="4e457ffba4"><code>4e457ff</code></a>
Explicitly specify python in mac uv build invocation (<a
href="https://redirect.github.com/pyca/cryptography/issues/13447">#13447</a>)</li>
<li><a
href="2726efdb6d"><code>2726efd</code></a>
Depend on CFFI 2.0.0 or newer on Python &gt; 3.8 (<a
href="https://redirect.github.com/pyca/cryptography/issues/13448">#13448</a>)</li>
<li><a
href="62230623d1"><code>6223062</code></a>
release 46.0.0 (<a
href="https://redirect.github.com/pyca/cryptography/issues/13446">#13446</a>)</li>
<li><a
href="563c4915b0"><code>563c491</code></a>
Update comment for pyopenssl-release tag (<a
href="https://redirect.github.com/pyca/cryptography/issues/13445">#13445</a>)</li>
<li><a
href="d2f6f7face"><code>d2f6f7f</code></a>
Bump downstream dependencies in CI (<a
href="https://redirect.github.com/pyca/cryptography/issues/13439">#13439</a>)</li>
<li><a
href="e7ab02bd67"><code>e7ab02b</code></a>
we'll ship this with 3.5.3 why not (<a
href="https://redirect.github.com/pyca/cryptography/issues/13442">#13442</a>)</li>
<li><a
href="0b68a4bffb"><code>0b68a4b</code></a>
Another pair of bump dependencies fix (<a
href="https://redirect.github.com/pyca/cryptography/issues/13444">#13444</a>)</li>
<li><a
href="e076d08ee4"><code>e076d08</code></a>
Attempt to fix commit message for bump downstreams (<a
href="https://redirect.github.com/pyca/cryptography/issues/13440">#13440</a>)</li>
<li><a
href="6835ce899e"><code>6835ce8</code></a>
Put correct version bounds for pyenchant in pins (<a
href="https://redirect.github.com/pyca/cryptography/issues/13441">#13441</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pyca/cryptography/compare/45.0.6...46.0.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=45.0.6&new-version=46.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
> been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nick Tindle <nick@ntindle.com>
2026-02-08 23:40:15 +00:00
Zamil Majdy
7aab2eb1d5 style(mcp): Apply linter formatting 2026-02-08 20:00:44 +04:00
Zamil Majdy
5ab28ccda2 fix(mcp): Fix pyright errors in test files
- Add type: ignore for aiohttp private _server.sockets access
- Add assert not None before accessing Optional refresh_token
2026-02-08 19:52:52 +04:00
Zamil Majdy
4fe0f05980 docs: Sync block documentation for MCP Tool block 2026-02-08 19:37:49 +04:00
Zamil Majdy
19b3373052 fix(mcp): Address PR review comments
- Fix get_missing_input/get_mismatch_error to validate tool_arguments
  dict instead of the entire BlockInput data (critical bug)
- Add type check for non-dict JSON-RPC error field in client.py
- Add try/catch for non-JSON responses in client.py
- Add raise_for_status and error payload checks to OAuth token requests
- Remove hardcoded placeholder skip-list from _extract_auth_token
- Fix server start timeout check in integration tests
- Remove unused MCPTool import, move execute_block_test to top-level
- Update tests to match fixed validation behavior
- Fix MCP_BLOCK_IMPLEMENTATION.md (remove duplicate section, local path)
- Soften PKCE comment in oauth.py
2026-02-08 19:34:28 +04:00
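
For reference, the PKCE exchange mentioned here (and implemented by the MCPOAuthHandler in the commit below) boils down to: generate a random code verifier, send its SHA-256 challenge with the authorization request, and present the verifier at token exchange. A minimal sketch per RFC 7636; names are illustrative, not the repo's:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Return (code_verifier, code_challenge) for the S256 method."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    # The challenge goes in the /authorize request (code_challenge_method=S256);
    # the verifier is sent back in the token exchange to prove possession.
    return verifier, challenge
```
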
Zamil Majdy
7db3f12876 feat(backend/blocks/mcp): Add SSE support, OAuth auth, and e2e tests
- Handle text/event-stream (SSE) responses from real MCP servers
  (MCPClient._parse_sse_response) alongside plain JSON responses
- Add e2e tests against OpenAI docs MCP server (developers.openai.com/mcp)
  verifying SSE parsing, tool discovery, and tool execution work with a
  real production MCP server
- Support both api_key and oauth2 credential types on MCPToolBlock
  (MCPCredentials union type, _extract_auth_token helper)
- Add MCPOAuthHandler implementing BaseOAuthHandler with dynamic
  endpoints (authorize_url, token_url) for MCP OAuth 2.1 with PKCE
- Add OAuth metadata discovery to MCPClient (discover_auth,
  discover_auth_server_metadata) per RFC 9728 / RFC 8414
- 76 total tests: 46 unit, 11 OAuth, 14 integration, 5 e2e
2026-02-08 16:32:50 +04:00
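
A minimal sketch of the SSE parsing described above — extracting the JSON-RPC payload from a `text/event-stream` body using the standard `data:` line framing. The function name is illustrative; the actual `MCPClient._parse_sse_response` may differ in detail:

```python
import json

def parse_sse_response(body: str) -> dict:
    """Extract the first JSON-RPC message carried in an SSE body."""
    for event in body.split("\n\n"):
        # Per the SSE spec, an event's payload is the concatenation of
        # its "data:" lines, joined with newlines.
        data_lines = [
            line[len("data:"):].lstrip()
            for line in event.splitlines()
            if line.startswith("data:")
        ]
        if data_lines:
            return json.loads("\n".join(data_lines))
    raise ValueError("no data event found in SSE response")
```
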
Zamil Majdy
e9b996abb0 feat(backend/blocks): Add integration tests and trusted_origins support
- Add a test MCP server (test_server.py) for integration testing
- Add 14 integration tests that hit a real local MCP server over HTTP
- Add trusted_origins support to MCPClient for localhost/internal servers
- MCPToolBlock now trusts the user-configured server URL by default
- Add local conftest.py to avoid SpinTestServer overhead for MCP tests

Test results: 34 unit tests + 14 integration tests = 48 total, all passing
2026-02-08 13:49:44 +04:00
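
A minimal sketch of the origin check that `trusted_origins` implies — comparing a URL's scheme-plus-host against an allowlist. Names are illustrative, not the repo's:

```python
from urllib.parse import urlparse

def is_trusted_origin(url: str, trusted_origins: set[str]) -> bool:
    """True if the URL's scheme://host[:port] origin is allowlisted."""
    parsed = urlparse(url)
    return f"{parsed.scheme}://{parsed.netloc}" in trusted_origins

# is_trusted_origin("http://localhost:8000/mcp", {"http://localhost:8000"}) -> True
```
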
Zamil Majdy
9b972389a0 feat(backend/blocks): Add MCP (Model Context Protocol) tool block
Add a dynamic MCPToolBlock that can connect to any MCP server, discover
available tools, and execute them with dynamically generated input/output
schemas. This follows the same pattern as AgentExecutorBlock for dynamic
schema handling.

New files:
- backend/blocks/mcp/client.py — MCP Streamable HTTP client (JSON-RPC 2.0)
- backend/blocks/mcp/block.py — MCPToolBlock with dynamic schema
- backend/blocks/mcp/test_mcp.py — 34 tests covering client + block
- MCP_BLOCK_IMPLEMENTATION.md — Design document

Modified files:
- backend/integrations/providers.py — Add MCP provider name
2026-02-08 12:49:28 +04:00
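
For orientation, the MCP Streamable HTTP transport is plain JSON-RPC 2.0 over POST; a minimal sketch of the `tools/list` and `tools/call` exchanges (URL, tool name, and arguments are illustrative, and a real client also performs the `initialize` handshake and handles SSE response bodies):

```python
import json
import urllib.request

def mcp_request(server_url: str, method: str, params: dict | None = None, req_id: int = 1) -> dict:
    """POST a single JSON-RPC 2.0 request to an MCP server and decode the reply."""
    payload = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params or {}}
    req = urllib.request.Request(
        server_url,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            # MCP servers may answer with JSON or an SSE stream.
            "Accept": "application/json, text/event-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# tools = mcp_request(url, "tools/list")
# result = mcp_request(url, "tools/call", {"name": "search_docs", "arguments": {"query": "caching"}})
```
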
92 changed files with 7320 additions and 1058 deletions

View File

@@ -42,7 +42,7 @@ jobs:
       - name: Get CI failure details
         id: failure_details
-        uses: actions/github-script@v7
+        uses: actions/github-script@v8
         with:
           script: |
             const run = await github.rest.actions.getWorkflowRun({

View File

@@ -41,7 +41,7 @@ jobs:
           python-version: "3.11" # Use standard version matching CI
       - name: Set up Python dependency cache
-        uses: actions/cache@v4
+        uses: actions/cache@v5
         with:
           path: ~/.cache/pypoetry
           key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
@@ -78,7 +78,7 @@ jobs:
       # Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
       - name: Set up Node.js
-        uses: actions/setup-node@v4
+        uses: actions/setup-node@v6
         with:
           node-version: "22"
@@ -91,7 +91,7 @@ jobs:
           echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
       - name: Cache frontend dependencies
-        uses: actions/cache@v4
+        uses: actions/cache@v5
         with:
           path: ~/.pnpm-store
           key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
@@ -124,7 +124,7 @@ jobs:
       # Phase 1: Cache and load Docker images for faster setup
       - name: Set up Docker image cache
         id: docker-cache
-        uses: actions/cache@v4
+        uses: actions/cache@v5
         with:
           path: ~/docker-cache
           # Use a versioned key for cache invalidation when image list changes

View File

@@ -57,7 +57,7 @@ jobs:
           python-version: "3.11" # Use standard version matching CI
       - name: Set up Python dependency cache
-        uses: actions/cache@v4
+        uses: actions/cache@v5
         with:
           path: ~/.cache/pypoetry
           key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
@@ -94,7 +94,7 @@ jobs:
       # Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
       - name: Set up Node.js
-        uses: actions/setup-node@v4
+        uses: actions/setup-node@v6
         with:
           node-version: "22"
@@ -107,7 +107,7 @@ jobs:
           echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
       - name: Cache frontend dependencies
-        uses: actions/cache@v4
+        uses: actions/cache@v5
         with:
           path: ~/.pnpm-store
           key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
@@ -140,7 +140,7 @@ jobs:
       # Phase 1: Cache and load Docker images for faster setup
       - name: Set up Docker image cache
         id: docker-cache
-        uses: actions/cache@v4
+        uses: actions/cache@v5
         with:
           path: ~/docker-cache
           # Use a versioned key for cache invalidation when image list changes

View File

@@ -39,7 +39,7 @@ jobs:
 python-version: "3.11" # Use standard version matching CI
 - name: Set up Python dependency cache
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.cache/pypoetry
 key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
@@ -76,7 +76,7 @@ jobs:
 # Frontend Node.js/pnpm setup (mirrors platform-frontend-ci.yml)
 - name: Set up Node.js
-uses: actions/setup-node@v4
+uses: actions/setup-node@v6
 with:
 node-version: "22"
@@ -89,7 +89,7 @@ jobs:
 echo "PNPM_HOME=$HOME/.pnpm-store" >> $GITHUB_ENV
 - name: Cache frontend dependencies
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.pnpm-store
 key: ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}
@@ -132,7 +132,7 @@ jobs:
 # Phase 1: Cache and load Docker images for faster setup
 - name: Set up Docker image cache
 id: docker-cache
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/docker-cache
 # Use a versioned key for cache invalidation when image list changes

View File

@@ -33,7 +33,7 @@ jobs:
 python-version: "3.11"
 - name: Set up Python dependency cache
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.cache/pypoetry
 key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

View File

@@ -33,7 +33,7 @@ jobs:
 python-version: "3.11"
 - name: Set up Python dependency cache
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.cache/pypoetry
 key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

View File

@@ -38,7 +38,7 @@ jobs:
 python-version: "3.11"
 - name: Set up Python dependency cache
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.cache/pypoetry
 key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

View File

@@ -88,7 +88,7 @@ jobs:
 run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
 - name: Set up Python dependency cache
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.cache/pypoetry
 key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}

View File

@@ -17,7 +17,7 @@ jobs:
 - name: Check comment permissions and deployment status
 id: check_status
 if: github.event_name == 'issue_comment' && github.event.issue.pull_request
-uses: actions/github-script@v7
+uses: actions/github-script@v8
 with:
 script: |
 const commentBody = context.payload.comment.body.trim();
@@ -55,7 +55,7 @@ jobs:
 - name: Post permission denied comment
 if: steps.check_status.outputs.permission_denied == 'true'
-uses: actions/github-script@v7
+uses: actions/github-script@v8
 with:
 script: |
 await github.rest.issues.createComment({
@@ -68,7 +68,7 @@ jobs:
 - name: Get PR details for deployment
 id: pr_details
 if: steps.check_status.outputs.should_deploy == 'true' || steps.check_status.outputs.should_undeploy == 'true'
-uses: actions/github-script@v7
+uses: actions/github-script@v8
 with:
 script: |
 const pr = await github.rest.pulls.get({
@@ -98,7 +98,7 @@ jobs:
 - name: Post deploy success comment
 if: steps.check_status.outputs.should_deploy == 'true'
-uses: actions/github-script@v7
+uses: actions/github-script@v8
 with:
 script: |
 await github.rest.issues.createComment({
@@ -126,7 +126,7 @@ jobs:
 - name: Post undeploy success comment
 if: steps.check_status.outputs.should_undeploy == 'true'
-uses: actions/github-script@v7
+uses: actions/github-script@v8
 with:
 script: |
 await github.rest.issues.createComment({
@@ -139,7 +139,7 @@ jobs:
 - name: Check deployment status on PR close
 id: check_pr_close
 if: github.event_name == 'pull_request' && github.event.action == 'closed'
-uses: actions/github-script@v7
+uses: actions/github-script@v8
 with:
 script: |
 const comments = await github.rest.issues.listComments({
@@ -187,7 +187,7 @@ jobs:
 github.event_name == 'pull_request' &&
 github.event.action == 'closed' &&
 steps.check_pr_close.outputs.should_undeploy == 'true'
-uses: actions/github-script@v7
+uses: actions/github-script@v8
 with:
 script: |
 await github.rest.issues.createComment({

View File

@@ -42,7 +42,7 @@ jobs:
 - 'autogpt_platform/frontend/src/components/**'
 - name: Set up Node.js
-uses: actions/setup-node@v4
+uses: actions/setup-node@v6
 with:
 node-version: "22.18.0"
@@ -54,7 +54,7 @@ jobs:
 run: echo "key=${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}" >> $GITHUB_OUTPUT
 - name: Cache dependencies
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.pnpm-store
 key: ${{ steps.cache-key.outputs.key }}
@@ -74,7 +74,7 @@ jobs:
 uses: actions/checkout@v4
 - name: Set up Node.js
-uses: actions/setup-node@v4
+uses: actions/setup-node@v6
 with:
 node-version: "22.18.0"
@@ -82,7 +82,7 @@ jobs:
 run: corepack enable
 - name: Restore dependencies cache
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.pnpm-store
 key: ${{ needs.setup.outputs.cache-key }}
@@ -112,7 +112,7 @@ jobs:
 fetch-depth: 0
 - name: Set up Node.js
-uses: actions/setup-node@v4
+uses: actions/setup-node@v6
 with:
 node-version: "22.18.0"
@@ -120,7 +120,7 @@ jobs:
 run: corepack enable
 - name: Restore dependencies cache
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.pnpm-store
 key: ${{ needs.setup.outputs.cache-key }}
@@ -153,7 +153,7 @@ jobs:
 submodules: recursive
 - name: Set up Node.js
-uses: actions/setup-node@v4
+uses: actions/setup-node@v6
 with:
 node-version: "22.18.0"
@@ -176,7 +176,7 @@ jobs:
 uses: docker/setup-buildx-action@v3
 - name: Cache Docker layers
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: /tmp/.buildx-cache
 key: ${{ runner.os }}-buildx-frontend-test-${{ hashFiles('autogpt_platform/docker-compose.yml', 'autogpt_platform/backend/Dockerfile', 'autogpt_platform/backend/pyproject.toml', 'autogpt_platform/backend/poetry.lock') }}
@@ -231,7 +231,7 @@ jobs:
 fi
 - name: Restore dependencies cache
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.pnpm-store
 key: ${{ needs.setup.outputs.cache-key }}
@@ -282,7 +282,7 @@ jobs:
 submodules: recursive
 - name: Set up Node.js
-uses: actions/setup-node@v4
+uses: actions/setup-node@v6
 with:
 node-version: "22.18.0"
@@ -290,7 +290,7 @@ jobs:
 run: corepack enable
 - name: Restore dependencies cache
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.pnpm-store
 key: ${{ needs.setup.outputs.cache-key }}

View File

@@ -32,7 +32,7 @@ jobs:
 uses: actions/checkout@v4
 - name: Set up Node.js
-uses: actions/setup-node@v4
+uses: actions/setup-node@v6
 with:
 node-version: "22.18.0"
@@ -44,7 +44,7 @@ jobs:
 run: echo "key=${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}" >> $GITHUB_OUTPUT
 - name: Cache dependencies
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.pnpm-store
 key: ${{ steps.cache-key.outputs.key }}
@@ -68,7 +68,7 @@ jobs:
 submodules: recursive
 - name: Set up Node.js
-uses: actions/setup-node@v4
+uses: actions/setup-node@v6
 with:
 node-version: "22.18.0"
@@ -88,7 +88,7 @@ jobs:
 docker compose -f ../docker-compose.yml --profile local --profile deps_backend up -d
 - name: Restore dependencies cache
-uses: actions/cache@v4
+uses: actions/cache@v5
 with:
 path: ~/.pnpm-store
 key: ${{ needs.setup.outputs.cache-key }}

View File

@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 2.1.1 and should not be changed by hand.
+# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand.
 [[package]]
 name = "annotated-doc"
@@ -67,7 +67,7 @@ description = "Backport of asyncio.Runner, a context manager that controls event
 optional = false
 python-versions = "<3.11,>=3.8"
 groups = ["dev"]
-markers = "python_version < \"3.11\""
+markers = "python_version == \"3.10\""
 files = [
 {file = "backports_asyncio_runner-1.2.0-py3-none-any.whl", hash = "sha256:0da0a936a8aeb554eccb426dc55af3ba63bcdc69fa1a600b5bb305413a4477b5"},
 {file = "backports_asyncio_runner-1.2.0.tar.gz", hash = "sha256:a5aa7b2b7d8f8bfcaa2b57313f70792df84e32a2a746f585213373f900b42162"},
@@ -99,84 +99,101 @@ files = [
 [[package]]
 name = "cffi"
-version = "1.17.1"
+version = "2.0.0"
 description = "Foreign Function Interface for Python calling C code."
 optional = false
-python-versions = ">=3.8"
+python-versions = ">=3.9"
 groups = ["main"]
 markers = "platform_python_implementation != \"PyPy\""
 files = [
-{file = "cffi-1.17.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14"},
-{file = "cffi-1.17.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67"},
-{file = "cffi-1.17.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382"},
-{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702"},
-{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3"},
-{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6"},
-{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17"},
-{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8"},
-{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e"},
-{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be"},
-{file = "cffi-1.17.1-cp310-cp310-win32.whl", hash = "sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c"},
-{file = "cffi-1.17.1-cp310-cp310-win_amd64.whl", hash = "sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15"},
-{file = "cffi-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401"},
-{file = "cffi-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf"},
-{file = "cffi-1.17.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4"},
-{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41"},
-{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1"},
-{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6"},
-{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d"},
-{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6"},
-{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f"},
-{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b"},
-{file = "cffi-1.17.1-cp311-cp311-win32.whl", hash = "sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655"},
-{file = "cffi-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0"},
-{file = "cffi-1.17.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4"},
-{file = "cffi-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c"},
-{file = "cffi-1.17.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36"},
-{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5"},
-{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff"},
-{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99"},
-{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93"},
-{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3"},
-{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8"},
-{file = "cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65"},
-{file = "cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903"},
-{file = "cffi-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e"},
-{file = "cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2"},
-{file = "cffi-1.17.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3"},
-{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683"},
-{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5"},
-{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4"},
-{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd"},
-{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed"},
-{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9"},
-{file = "cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d"},
-{file = "cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a"},
-{file = "cffi-1.17.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b"},
-{file = "cffi-1.17.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964"},
-{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9"},
-{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc"},
-{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c"},
-{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1"},
-{file = "cffi-1.17.1-cp38-cp38-win32.whl", hash = "sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8"},
-{file = "cffi-1.17.1-cp38-cp38-win_amd64.whl", hash = "sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1"},
-{file = "cffi-1.17.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16"},
-{file = "cffi-1.17.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36"},
-{file = "cffi-1.17.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8"},
-{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576"},
-{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87"},
-{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0"},
-{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3"},
-{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595"},
-{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a"},
-{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e"},
-{file = "cffi-1.17.1-cp39-cp39-win32.whl", hash = "sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7"},
-{file = "cffi-1.17.1-cp39-cp39-win_amd64.whl", hash = "sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662"},
-{file = "cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824"},
+{file = "cffi-2.0.0-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:0cf2d91ecc3fcc0625c2c530fe004f82c110405f101548512cce44322fa8ac44"},
+{file = "cffi-2.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f73b96c41e3b2adedc34a7356e64c8eb96e03a3782b535e043a986276ce12a49"},
+{file = "cffi-2.0.0-cp310-cp310-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:53f77cbe57044e88bbd5ed26ac1d0514d2acf0591dd6bb02a3ae37f76811b80c"},
+{file = "cffi-2.0.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3e837e369566884707ddaf85fc1744b47575005c0a229de3327f8f9a20f4efeb"},
+{file = "cffi-2.0.0-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:5eda85d6d1879e692d546a078b44251cdd08dd1cfb98dfb77b670c97cee49ea0"},
+{file = "cffi-2.0.0-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9332088d75dc3241c702d852d4671613136d90fa6881da7d770a483fd05248b4"},
+{file = "cffi-2.0.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fc7de24befaeae77ba923797c7c87834c73648a05a4bde34b3b7e5588973a453"},
+{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:cf364028c016c03078a23b503f02058f1814320a56ad535686f90565636a9495"},
+{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:e11e82b744887154b182fd3e7e8512418446501191994dbf9c9fc1f32cc8efd5"},
+{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:8ea985900c5c95ce9db1745f7933eeef5d314f0565b27625d9a10ec9881e1bfb"},
+{file = "cffi-2.0.0-cp310-cp310-win32.whl", hash = "sha256:1f72fb8906754ac8a2cc3f9f5aaa298070652a0ffae577e0ea9bd480dc3c931a"},
+{file = "cffi-2.0.0-cp310-cp310-win_amd64.whl", hash = "sha256:b18a3ed7d5b3bd8d9ef7a8cb226502c6bf8308df1525e1cc676c3680e7176739"},
+{file = "cffi-2.0.0-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:b4c854ef3adc177950a8dfc81a86f5115d2abd545751a304c5bcf2c2c7283cfe"},
+{file = "cffi-2.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2de9a304e27f7596cd03d16f1b7c72219bd944e99cc52b84d0145aefb07cbd3c"},
+{file = "cffi-2.0.0-cp311-cp311-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:baf5215e0ab74c16e2dd324e8ec067ef59e41125d3eade2b863d294fd5035c92"},
+{file = "cffi-2.0.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:730cacb21e1bdff3ce90babf007d0a0917cc3e6492f336c2f0134101e0944f93"},
+{file = "cffi-2.0.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:6824f87845e3396029f3820c206e459ccc91760e8fa24422f8b0c3d1731cbec5"},
+{file = "cffi-2.0.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9de40a7b0323d889cf8d23d1ef214f565ab154443c42737dfe52ff82cf857664"},
+{file = "cffi-2.0.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8941aaadaf67246224cee8c3803777eed332a19d909b47e29c9842ef1e79ac26"},
+{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a05d0c237b3349096d3981b727493e22147f934b20f6f125a3eba8f994bec4a9"},
+{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:94698a9c5f91f9d138526b48fe26a199609544591f859c870d477351dc7b2414"},
+{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:5fed36fccc0612a53f1d4d9a816b50a36702c28a2aa880cb8a122b3466638743"},
+{file = "cffi-2.0.0-cp311-cp311-win32.whl", hash = "sha256:c649e3a33450ec82378822b3dad03cc228b8f5963c0c12fc3b1e0ab940f768a5"},
+{file = "cffi-2.0.0-cp311-cp311-win_amd64.whl", hash = "sha256:66f011380d0e49ed280c789fbd08ff0d40968ee7b665575489afa95c98196ab5"},
+{file = "cffi-2.0.0-cp311-cp311-win_arm64.whl", hash = "sha256:c6638687455baf640e37344fe26d37c404db8b80d037c3d29f58fe8d1c3b194d"},
+{file = "cffi-2.0.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:6d02d6655b0e54f54c4ef0b94eb6be0607b70853c45ce98bd278dc7de718be5d"},
+{file = "cffi-2.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8eca2a813c1cb7ad4fb74d368c2ffbbb4789d377ee5bb8df98373c2cc0dee76c"},
+{file = "cffi-2.0.0-cp312-cp312-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:21d1152871b019407d8ac3985f6775c079416c282e431a4da6afe7aefd2bccbe"},
+{file = "cffi-2.0.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b21e08af67b8a103c71a250401c78d5e0893beff75e28c53c98f4de42f774062"},
+{file = "cffi-2.0.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:1e3a615586f05fc4065a8b22b8152f0c1b00cdbc60596d187c2a74f9e3036e4e"},
+{file = "cffi-2.0.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:81afed14892743bbe14dacb9e36d9e0e504cd204e0b165062c488942b9718037"},
+{file = "cffi-2.0.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3e17ed538242334bf70832644a32a7aae3d83b57567f9fd60a26257e992b79ba"},
+{file = "cffi-2.0.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3925dd22fa2b7699ed2617149842d2e6adde22b262fcbfada50e3d195e4b3a94"},
+{file = "cffi-2.0.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2c8f814d84194c9ea681642fd164267891702542f028a15fc97d4674b6206187"},
+{file = "cffi-2.0.0-cp312-cp312-win32.whl", hash = "sha256:da902562c3e9c550df360bfa53c035b2f241fed6d9aef119048073680ace4a18"},
+{file = "cffi-2.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:da68248800ad6320861f129cd9c1bf96ca849a2771a59e0344e88681905916f5"},
+{file = "cffi-2.0.0-cp312-cp312-win_arm64.whl", hash = "sha256:4671d9dd5ec934cb9a73e7ee9676f9362aba54f7f34910956b84d727b0d73fb6"},
+{file = "cffi-2.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb"},
+{file = "cffi-2.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca"},
+{file = "cffi-2.0.0-cp313-cp313-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b"},
+{file = "cffi-2.0.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b"},
+{file = "cffi-2.0.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2"},
+{file = "cffi-2.0.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3"},
+{file = "cffi-2.0.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26"},
+{file = "cffi-2.0.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c"},
+{file = "cffi-2.0.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b"},
+{file = "cffi-2.0.0-cp313-cp313-win32.whl", hash = "sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27"},
+{file = "cffi-2.0.0-cp313-cp313-win_amd64.whl", hash = "sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75"},
+{file = "cffi-2.0.0-cp313-cp313-win_arm64.whl", hash = "sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91"},
+{file = "cffi-2.0.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5"},
+{file = "cffi-2.0.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13"},
+{file = "cffi-2.0.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b"},
+{file = "cffi-2.0.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c"},
+{file = "cffi-2.0.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef"},
+{file = "cffi-2.0.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775"},
+{file = "cffi-2.0.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205"},
+{file = "cffi-2.0.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1"},
+{file = "cffi-2.0.0-cp314-cp314-win32.whl", hash = "sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f"},
+{file = "cffi-2.0.0-cp314-cp314-win_amd64.whl", hash = "sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25"},
+{file = "cffi-2.0.0-cp314-cp314-win_arm64.whl", hash = "sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad"},
+{file = "cffi-2.0.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9"},
+{file = "cffi-2.0.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d"},
+{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c"},
+{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8"},
+{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc"},
+{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592"},
+{file = "cffi-2.0.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512"},
+{file = "cffi-2.0.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4"},
+{file = "cffi-2.0.0-cp314-cp314t-win32.whl", hash = "sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e"},
+{file = "cffi-2.0.0-cp314-cp314t-win_amd64.whl", hash = "sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6"},
+{file = "cffi-2.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9"},
+{file = "cffi-2.0.0-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:fe562eb1a64e67dd297ccc4f5addea2501664954f2692b69a76449ec7913ecbf"},
+{file = "cffi-2.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:de8dad4425a6ca6e4e5e297b27b5c824ecc7581910bf9aee86cb6835e6812aa7"},
+{file = "cffi-2.0.0-cp39-cp39-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:4647afc2f90d1ddd33441e5b0e85b16b12ddec4fca55f0d9671fef036ecca27c"},
+{file = "cffi-2.0.0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3f4d46d8b35698056ec29bca21546e1551a205058ae1a181d871e278b0b28165"},
+{file = "cffi-2.0.0-cp39-cp39-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:e6e73b9e02893c764e7e8d5bb5ce277f1a009cd5243f8228f75f842bf937c534"},
+{file = "cffi-2.0.0-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:cb527a79772e5ef98fb1d700678fe031e353e765d1ca2d409c92263c6d43e09f"},
+{file = "cffi-2.0.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:61d028e90346df14fedc3d1e5441df818d095f3b87d286825dfcbd6459b7ef63"},
+{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:0f6084a0ea23d05d20c3edcda20c3d006f9b6f3fefeac38f59262e10cef47ee2"},
+{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:1cd13c99ce269b3ed80b417dcd591415d3372bcac067009b6e0f59c7d4015e65"},
+{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:89472c9762729b5ae1ad974b777416bfda4ac5642423fa93bd57a09204712322"},
+{file = "cffi-2.0.0-cp39-cp39-win32.whl", hash = "sha256:2081580ebb843f759b9f617314a24ed5738c51d2aee65d31e02f6f7a2b97707a"},
+{file = "cffi-2.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:b882b3df248017dba09d6b16defe9b5c407fe32fc7c65a9c69798e6175601be9"},
+{file = "cffi-2.0.0.tar.gz", hash = "sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529"},
 ]
 [package.dependencies]
-pycparser = "*"
+pycparser = {version = "*", markers = "implementation_name != \"PyPy\""}
 [[package]]
 name = "charset-normalizer"
@@ -413,62 +430,75 @@ toml = ["tomli ; python_full_version <= \"3.11.0a6\""]
 [[package]]
 name = "cryptography"
-version = "45.0.6"
+version = "46.0.4"
 description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
 optional = false
-python-versions = "!=3.9.0,!=3.9.1,>=3.7"
+python-versions = "!=3.9.0,!=3.9.1,>=3.8"
 groups = ["main"]
 files = [
-{file = "cryptography-45.0.6-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:048e7ad9e08cf4c0ab07ff7f36cc3115924e22e2266e034450a890d9e312dd74"},
-{file = "cryptography-45.0.6-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:44647c5d796f5fc042bbc6d61307d04bf29bccb74d188f18051b635f20a9c75f"},
-{file = "cryptography-45.0.6-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e40b80ecf35ec265c452eea0ba94c9587ca763e739b8e559c128d23bff7ebbbf"},
-{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:00e8724bdad672d75e6f069b27970883179bd472cd24a63f6e620ca7e41cc0c5"},
-{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7a3085d1b319d35296176af31c90338eeb2ddac8104661df79f80e1d9787b8b2"},
-{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1b7fa6a1c1188c7ee32e47590d16a5a0646270921f8020efc9a511648e1b2e08"},
-{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:275ba5cc0d9e320cd70f8e7b96d9e59903c815ca579ab96c1e37278d231fc402"},
-{file = "cryptography-45.0.6-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:f4028f29a9f38a2025abedb2e409973709c660d44319c61762202206ed577c42"},
-{file = "cryptography-45.0.6-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:ee411a1b977f40bd075392c80c10b58025ee5c6b47a822a33c1198598a7a5f05"},
-{file = "cryptography-45.0.6-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:e2a21a8eda2d86bb604934b6b37691585bd095c1f788530c1fcefc53a82b3453"},
-{file = "cryptography-45.0.6-cp311-abi3-win32.whl", hash = "sha256:d063341378d7ee9c91f9d23b431a3502fc8bfacd54ef0a27baa72a0843b29159"},
-{file = "cryptography-45.0.6-cp311-abi3-win_amd64.whl", hash = "sha256:833dc32dfc1e39b7376a87b9a6a4288a10aae234631268486558920029b086ec"},
-{file = "cryptography-45.0.6-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:3436128a60a5e5490603ab2adbabc8763613f638513ffa7d311c900a8349a2a0"},
-{file = "cryptography-45.0.6-cp37-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0d9ef57b6768d9fa58e92f4947cea96ade1233c0e236db22ba44748ffedca394"},
-{file = "cryptography-45.0.6-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ea3c42f2016a5bbf71825537c2ad753f2870191134933196bee408aac397b3d9"},
-{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:20ae4906a13716139d6d762ceb3e0e7e110f7955f3bc3876e3a07f5daadec5f3"},
-{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:2dac5ec199038b8e131365e2324c03d20e97fe214af051d20c49db129844e8b3"},
-{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:18f878a34b90d688982e43f4b700408b478102dd58b3e39de21b5ebf6509c301"},
-{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:5bd6020c80c5b2b2242d6c48487d7b85700f5e0038e67b29d706f98440d66eb5"},
-{file = "cryptography-45.0.6-cp37-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:eccddbd986e43014263eda489abbddfbc287af5cddfd690477993dbb31e31016"},
-{file = "cryptography-45.0.6-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:550ae02148206beb722cfe4ef0933f9352bab26b087af00e48fdfb9ade35c5b3"},
-{file = "cryptography-45.0.6-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:5b64e668fc3528e77efa51ca70fadcd6610e8ab231e3e06ae2bab3b31c2b8ed9"},
-{file = "cryptography-45.0.6-cp37-abi3-win32.whl", hash = "sha256:780c40fb751c7d2b0c6786ceee6b6f871e86e8718a8ff4bc35073ac353c7cd02"},
-{file = "cryptography-45.0.6-cp37-abi3-win_amd64.whl", hash = "sha256:20d15aed3ee522faac1a39fbfdfee25d17b1284bafd808e1640a74846d7c4d1b"},
-{file = "cryptography-45.0.6-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:705bb7c7ecc3d79a50f236adda12ca331c8e7ecfbea51edd931ce5a7a7c4f012"},
-{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:826b46dae41a1155a0c0e66fafba43d0ede1dc16570b95e40c4d83bfcf0a451d"},
-{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:cc4d66f5dc4dc37b89cfef1bd5044387f7a1f6f0abb490815628501909332d5d"},
-{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:f68f833a9d445cc49f01097d95c83a850795921b3f7cc6488731e69bde3288da"},
-{file = "cryptography-45.0.6-pp310-pypy310_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:3b5bf5267e98661b9b888a9250d05b063220dfa917a8203744454573c7eb79db"},
-{file = "cryptography-45.0.6-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:2384f2ab18d9be88a6e4f8972923405e2dbb8d3e16c6b43f15ca491d7831bd18"},
-{file = "cryptography-45.0.6-pp311-pypy311_pp73-macosx_10_9_x86_64.whl", hash = "sha256:fc022c1fa5acff6def2fc6d7819bbbd31ccddfe67d075331a65d9cfb28a20983"},
-{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:3de77e4df42ac8d4e4d6cdb342d989803ad37707cf8f3fbf7b088c9cbdd46427"},
-{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:599c8d7df950aa68baa7e98f7b73f4f414c9f02d0e8104a30c0182a07732638b"},
-{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:31a2b9a10530a1cb04ffd6aa1cd4d3be9ed49f7d77a4dafe198f3b382f41545c"},
-{file = "cryptography-45.0.6-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:e5b3dda1b00fb41da3af4c5ef3f922a200e33ee5ba0f0bc9ecf0b0c173958385"},
-{file = "cryptography-45.0.6-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:629127cfdcdc6806dfe234734d7cb8ac54edaf572148274fa377a7d3405b0043"},
-{file = "cryptography-45.0.6.tar.gz", hash = "sha256:5c966c732cf6e4a276ce83b6e4c729edda2df6929083a952cc7da973c539c719"},
+{file = "cryptography-46.0.4-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:281526e865ed4166009e235afadf3a4c4cba6056f99336a99efba65336fd5485"},
+{file = "cryptography-46.0.4-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:5f14fba5bf6f4390d7ff8f086c566454bff0411f6d8aa7af79c88b6f9267aecc"},
+{file = "cryptography-46.0.4-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:47bcd19517e6389132f76e2d5303ded6cf3f78903da2158a671be8de024f4cd0"},
+{file = "cryptography-46.0.4-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:01df4f50f314fbe7009f54046e908d1754f19d0c6d3070df1e6268c5a4af09fa"},
+{file = "cryptography-46.0.4-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:5aa3e463596b0087b3da0dbe2b2487e9fc261d25da85754e30e3b40637d61f81"},
+{file = "cryptography-46.0.4-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:0a9ad24359fee86f131836a9ac3bffc9329e956624a2d379b613f8f8abaf5255"},
+{file = "cryptography-46.0.4-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:dc1272e25ef673efe72f2096e92ae39dea1a1a450dd44918b15351f72c5a168e"},
+{file = "cryptography-46.0.4-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:de0f5f4ec8711ebc555f54735d4c673fc34b65c44283895f1a08c2b49d2fd99c"},
+{file = "cryptography-46.0.4-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:eeeb2e33d8dbcccc34d64651f00a98cb41b2dc69cef866771a5717e6734dfa32"},
+{file = "cryptography-46.0.4-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:3d425eacbc9aceafd2cb429e42f4e5d5633c6f873f5e567077043ef1b9bbf616"},
+{file = "cryptography-46.0.4-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:91627ebf691d1ea3976a031b61fb7bac1ccd745afa03602275dda443e11c8de0"},
+{file = "cryptography-46.0.4-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:2d08bc22efd73e8854b0b7caff402d735b354862f1145d7be3b9c0f740fef6a0"},
+{file = "cryptography-46.0.4-cp311-abi3-win32.whl", hash = "sha256:82a62483daf20b8134f6e92898da70d04d0ef9a75829d732ea1018678185f4f5"},
+{file = "cryptography-46.0.4-cp311-abi3-win_amd64.whl", hash = "sha256:6225d3ebe26a55dbc8ead5ad1265c0403552a63336499564675b29eb3184c09b"},
+{file = "cryptography-46.0.4-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:485e2b65d25ec0d901bca7bcae0f53b00133bf3173916d8e421f6fddde103908"},
+{file = "cryptography-46.0.4-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:078e5f06bd2fa5aea5a324f2a09f914b1484f1d0c2a4d6a8a28c74e72f65f2da"},
+{file = "cryptography-46.0.4-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:dce1e4f068f03008da7fa51cc7abc6ddc5e5de3e3d1550334eaf8393982a5829"},
+{file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:2067461c80271f422ee7bdbe79b9b4be54a5162e90345f86a23445a0cf3fd8a2"},
+{file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:c92010b58a51196a5f41c3795190203ac52edfd5dc3ff99149b4659eba9d2085"},
+{file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:829c2b12bbc5428ab02d6b7f7e9bbfd53e33efd6672d21341f2177470171ad8b"},
+{file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:62217ba44bf81b30abaeda1488686a04a702a261e26f87db51ff61d9d3510abd"},
+{file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:9c2da296c8d3415b93e6053f5a728649a87a48ce084a9aaf51d6e46c87c7f2d2"},
+{file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:9b34d8ba84454641a6bf4d6762d15847ecbd85c1316c0a7984e6e4e9f748ec2e"},
+{file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:df4a817fa7138dd0c96c8c8c20f04b8aaa1fac3bbf610913dcad8ea82e1bfd3f"},
+{file = "cryptography-46.0.4-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:b1de0ebf7587f28f9190b9cb526e901bf448c9e6a99655d2b07fff60e8212a82"},
+{file = "cryptography-46.0.4-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:9b4d17bc7bd7cdd98e3af40b441feaea4c68225e2eb2341026c84511ad246c0c"},
+{file = "cryptography-46.0.4-cp314-cp314t-win32.whl", hash = "sha256:c411f16275b0dea722d76544a61d6421e2cc829ad76eec79280dbdc9ddf50061"},
+{file = "cryptography-46.0.4-cp314-cp314t-win_amd64.whl", hash = "sha256:728fedc529efc1439eb6107b677f7f7558adab4553ef8669f0d02d42d7b959a7"},
+{file = "cryptography-46.0.4-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:a9556ba711f7c23f77b151d5798f3ac44a13455cc68db7697a1096e6d0563cab"},
+{file = "cryptography-46.0.4-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:8bf75b0259e87fa70bddc0b8b4078b76e7fd512fd9afae6c1193bcf440a4dbef"},
+{file = "cryptography-46.0.4-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3c268a3490df22270955966ba236d6bc4a8f9b6e4ffddb78aac535f1a5ea471d"},
+{file = "cryptography-46.0.4-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:812815182f6a0c1d49a37893a303b44eaac827d7f0d582cecfc81b6427f22973"},
+{file = "cryptography-46.0.4-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:a90e43e3ef65e6dcf969dfe3bb40cbf5aef0d523dff95bfa24256be172a845f4"},
+{file = "cryptography-46.0.4-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:a05177ff6296644ef2876fce50518dffb5bcdf903c85250974fc8bc85d54c0af"},
+{file = "cryptography-46.0.4-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:daa392191f626d50f1b136c9b4cf08af69ca8279d110ea24f5c2700054d2e263"},
+{file = "cryptography-46.0.4-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:e07ea39c5b048e085f15923511d8121e4a9dc45cee4e3b970ca4f0d338f23095"},
+{file = "cryptography-46.0.4-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:d5a45ddc256f492ce42a4e35879c5e5528c09cd9ad12420828c972951d8e016b"},
+{file = "cryptography-46.0.4-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:6bb5157bf6a350e5b28aee23beb2d84ae6f5be390b2f8ee7ea179cda077e1019"},
+{file = "cryptography-46.0.4-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:dd5aba870a2c40f87a3af043e0dee7d9eb02d4aff88a797b48f2b43eff8c3ab4"},
+{file = "cryptography-46.0.4-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:93d8291da8d71024379ab2cb0b5c57915300155ad42e07f76bea6ad838d7e59b"},
+{file = "cryptography-46.0.4-cp38-abi3-win32.whl", hash = "sha256:0563655cb3c6d05fb2afe693340bc050c30f9f34e15763361cf08e94749401fc"},
+{file = "cryptography-46.0.4-cp38-abi3-win_amd64.whl", hash = "sha256:fa0900b9ef9c49728887d1576fd8d9e7e3ea872fa9b25ef9b64888adc434e976"},
+{file = "cryptography-46.0.4-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:766330cce7416c92b5e90c3bb71b1b79521760cdcfc3a6a1a182d4c9fab23d2b"},
+{file = "cryptography-46.0.4-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c236a44acfb610e70f6b3e1c3ca20ff24459659231ef2f8c48e879e2d32b73da"},
+{file = "cryptography-46.0.4-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:8a15fb869670efa8f83cbffbc8753c1abf236883225aed74cd179b720ac9ec80"},
+{file = "cryptography-46.0.4-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:fdc3daab53b212472f1524d070735b2f0c214239df131903bae1d598016fa822"},
+{file = "cryptography-46.0.4-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:44cc0675b27cadb71bdbb96099cca1fa051cd11d2ade09e5cd3a2edb929ed947"},
+{file = "cryptography-46.0.4-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:be8c01a7d5a55f9a47d1888162b76c8f49d62b234d88f0ff91a9fbebe32ffbc3"},
+{file = "cryptography-46.0.4.tar.gz", hash = "sha256:bfd019f60f8abc2ed1b9be4ddc21cfef059c841d86d710bb69909a688cbb8f59"},
 ]
 [package.dependencies]
-cffi = {version = ">=1.14", markers = "platform_python_implementation != \"PyPy\""}
+cffi = {version = ">=2.0.0", markers = "python_full_version >= \"3.9.0\" and platform_python_implementation != \"PyPy\""}
+typing-extensions = {version = ">=4.13.2", markers = "python_full_version < \"3.11.0\""}
 [package.extras]
-docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs ; python_full_version >= \"3.8.0\"", "sphinx-rtd-theme (>=3.0.0) ; python_full_version >= \"3.8.0\""]
+docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs", "sphinx-rtd-theme (>=3.0.0)"]
 docstest = ["pyenchant (>=3)", "readme-renderer (>=30.0)", "sphinxcontrib-spelling (>=7.3.1)"]
-nox = ["nox (>=2024.4.15)", "nox[uv] (>=2024.3.2) ; python_full_version >= \"3.8.0\""]
-pep8test = ["check-sdist ; python_full_version >= \"3.8.0\"", "click (>=8.0.1)", "mypy (>=1.4)", "ruff (>=0.3.6)"]
+nox = ["nox[uv] (>=2024.4.15)"]
+pep8test = ["check-sdist", "click (>=8.0.1)", "mypy (>=1.14)", "ruff (>=0.11.11)"]
 sdist = ["build (>=1.0.0)"]
 ssh = ["bcrypt (>=3.1.5)"]
-test = ["certifi (>=2024)", "cryptography-vectors (==45.0.6)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
+test = ["certifi (>=2024)", "cryptography-vectors (==46.0.4)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
 test-randomorder = ["pytest-randomly"]
 [[package]]
@@ -493,7 +523,7 @@ description = "Backport of PEP 654 (exception groups)"
 optional = false
 python-versions = ">=3.7"
 groups = ["main", "dev"]
-markers = "python_version < \"3.11\""
+markers = "python_version == \"3.10\""
 files = [
 {file = "exceptiongroup-1.3.0-py3-none-any.whl", hash = "sha256:4d111e6e0c13d0644cad6ddaa7ed0261a0b36971f6d23e7ec9b4b9097da78a10"},
 {file = "exceptiongroup-1.3.0.tar.gz", hash = "sha256:b241f5885f560bc56a59ee63ca4c6a8bfa46ae4ad651af316d4e81817bb9fd88"},
@@ -1650,7 +1680,7 @@ description = "C parser in Python"
 optional = false
 python-versions = ">=3.8"
 groups = ["main"]
-markers = "platform_python_implementation != \"PyPy\""
+markers = "platform_python_implementation != \"PyPy\" and implementation_name != \"PyPy\""
 files = [
 {file = "pycparser-2.22-py3-none-any.whl", hash = "sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc"},
 {file = "pycparser-2.22.tar.gz", hash = "sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6"},
@@ -1972,14 +2002,14 @@ files = [
 [[package]]
 name = "pyright"
-version = "1.1.404"
+version = "1.1.408"
 description = "Command line wrapper for pyright"
 optional = false
 python-versions = ">=3.7"
 groups = ["dev"]
 files = [
-{file = "pyright-1.1.404-py3-none-any.whl", hash = "sha256:c7b7ff1fdb7219c643079e4c3e7d4125f0dafcc19d253b47e898d130ea426419"},
-{file = "pyright-1.1.404.tar.gz", hash = "sha256:455e881a558ca6be9ecca0b30ce08aa78343ecc031d37a198ffa9a7a1abeb63e"},
+{file = "pyright-1.1.408-py3-none-any.whl", hash = "sha256:090b32865f4fdb1e0e6cd82bf5618480d48eecd2eb2e70f960982a3d9a4c17c1"},
+{file = "pyright-1.1.408.tar.gz", hash = "sha256:f28f2321f96852fa50b5829ea492f6adb0e6954568d1caa3f3af3a5f555eb684"},
 ]
 [package.dependencies]
@@ -2111,19 +2141,20 @@ dev = ["argcomplete", "attrs (>=19.2)", "hypothesis (>=3.56)", "mock", "requests
 [[package]]
 name = "pytest-asyncio"
-version = "1.1.0"
+version = "1.3.0"
 description = "Pytest support for asyncio"
 optional = false
-python-versions = ">=3.9"
+python-versions = ">=3.10"
 groups = ["dev"]
 files = [
-{file = "pytest_asyncio-1.1.0-py3-none-any.whl", hash = "sha256:5fe2d69607b0bd75c656d1211f969cadba035030156745ee09e7d71740e58ecf"},
-{file = "pytest_asyncio-1.1.0.tar.gz", hash = "sha256:796aa822981e01b68c12e4827b8697108f7205020f24b5793b3c41555dab68ea"},
+{file = "pytest_asyncio-1.3.0-py3-none-any.whl", hash = "sha256:611e26147c7f77640e6d0a92a38ed17c3e9848063698d5c93d5aa7aa11cebff5"},
+{file = "pytest_asyncio-1.3.0.tar.gz", hash = "sha256:d7f52f36d231b80ee124cd216ffb19369aa168fc10095013c6b014a34d3ee9e5"},
 ]
 [package.dependencies]
 backports-asyncio-runner = {version = ">=1.1,<2", markers = "python_version < \"3.11\""}
-pytest = ">=8.2,<9"
+pytest = ">=8.2,<10"
+typing-extensions = {version = ">=4.12", markers = "python_version < \"3.13\""}
 [package.extras]
 docs = ["sphinx (>=5.3)", "sphinx-rtd-theme (>=1)"]
@@ -2151,14 +2182,14 @@ testing = ["fields", "hunter", "process-tests", "pytest-xdist", "virtualenv"]
 [[package]]
 name = "pytest-mock"
-version = "3.14.1"
+version = "3.15.1"
 description = "Thin-wrapper around the mock package for easier use with pytest"
 optional = false
-python-versions = ">=3.8"
+python-versions = ">=3.9"
 groups = ["dev"]
 files = [
-{file = "pytest_mock-3.14.1-py3-none-any.whl", hash = "sha256:178aefcd11307d874b4cd3100344e7e2d888d9791a6a1d9bfe90fbc1b74fd1d0"},
-{file = "pytest_mock-3.14.1.tar.gz", hash = "sha256:159e9edac4c451ce77a5cdb9fc5d1100708d2dd4ba3c3df572f14097351af80e"},
+{file = "pytest_mock-3.15.1-py3-none-any.whl", hash = "sha256:0a25e2eb88fe5168d535041d09a4529a188176ae608a6d249ee65abc0949630d"},
+{file = "pytest_mock-3.15.1.tar.gz", hash = "sha256:1849a238f6f396da19762269de72cb1814ab44416fa73a8686deac10b0d87a0f"},
 ]
 [package.dependencies]
@@ -2292,31 +2323,30 @@ pyasn1 = ">=0.1.3"
[[package]] [[package]]
name = "ruff" name = "ruff"
version = "0.12.11" version = "0.15.0"
description = "An extremely fast Python linter and code formatter, written in Rust." description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false optional = false
python-versions = ">=3.7" python-versions = ">=3.7"
groups = ["dev"] groups = ["dev"]
files = [ files = [
{file = "ruff-0.12.11-py3-none-linux_armv6l.whl", hash = "sha256:93fce71e1cac3a8bf9200e63a38ac5c078f3b6baebffb74ba5274fb2ab276065"}, {file = "ruff-0.15.0-py3-none-linux_armv6l.whl", hash = "sha256:aac4ebaa612a82b23d45964586f24ae9bc23ca101919f5590bdb368d74ad5455"},
{file = "ruff-0.12.11-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b8e33ac7b28c772440afa80cebb972ffd823621ded90404f29e5ab6d1e2d4b93"}, {file = "ruff-0.15.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:dcd4be7cc75cfbbca24a98d04d0b9b36a270d0833241f776b788d59f4142b14d"},
{file = "ruff-0.12.11-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d69fb9d4937aa19adb2e9f058bc4fbfe986c2040acb1a4a9747734834eaa0bfd"}, {file = "ruff-0.15.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d747e3319b2bce179c7c1eaad3d884dc0a199b5f4d5187620530adf9105268ce"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:411954eca8464595077a93e580e2918d0a01a19317af0a72132283e28ae21bee"}, {file = "ruff-0.15.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:650bd9c56ae03102c51a5e4b554d74d825ff3abe4db22b90fd32d816c2e90621"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6a2c0a2e1a450f387bf2c6237c727dd22191ae8c00e448e0672d624b2bbd7fb0"}, {file = "ruff-0.15.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a6664b7eac559e3048223a2da77769c2f92b43a6dfd4720cef42654299a599c9"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ca4c3a7f937725fd2413c0e884b5248a19369ab9bdd850b5781348ba283f644"}, {file = "ruff-0.15.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f811f97b0f092b35320d1556f3353bf238763420ade5d9e62ebd2b73f2ff179"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:4d1df0098124006f6a66ecf3581a7f7e754c4df7644b2e6704cd7ca80ff95211"}, {file = "ruff-0.15.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:761ec0a66680fab6454236635a39abaf14198818c8cdf691e036f4bc0f406b2d"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5a8dd5f230efc99a24ace3b77e3555d3fbc0343aeed3fc84c8d89e75ab2ff793"}, {file = "ruff-0.15.0-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:940f11c2604d317e797b289f4f9f3fa5555ffe4fb574b55ed006c3d9b6f0eb78"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4dc75533039d0ed04cd33fb8ca9ac9620b99672fe7ff1533b6402206901c34ee"}, {file = "ruff-0.15.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bcbca3d40558789126da91d7ef9a7c87772ee107033db7191edefa34e2c7f1b4"},
{file = "ruff-0.12.11-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4fc58f9266d62c6eccc75261a665f26b4ef64840887fc6cbc552ce5b29f96cc8"}, {file = "ruff-0.15.0-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:9a121a96db1d75fa3eb39c4539e607f628920dd72ff1f7c5ee4f1b768ac62d6e"},
{file = "ruff-0.12.11-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:5a0113bd6eafd545146440225fe60b4e9489f59eb5f5f107acd715ba5f0b3d2f"}, {file = "ruff-0.15.0-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:5298d518e493061f2eabd4abd067c7e4fb89e2f63291c94332e35631c07c3662"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:0d737b4059d66295c3ea5720e6efc152623bb83fde5444209b69cd33a53e2000"}, {file = "ruff-0.15.0-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:afb6e603d6375ff0d6b0cee563fa21ab570fd15e65c852cb24922cef25050cf1"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:916fc5defee32dbc1fc1650b576a8fed68f5e8256e2180d4d9855aea43d6aab2"}, {file = "ruff-0.15.0-py3-none-musllinux_1_2_i686.whl", hash = "sha256:77e515f6b15f828b94dc17d2b4ace334c9ddb7d9468c54b2f9ed2b9c1593ef16"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_i686.whl", hash = "sha256:c984f07d7adb42d3ded5be894fb4007f30f82c87559438b4879fe7aa08c62b39"}, {file = "ruff-0.15.0-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:6f6e80850a01eb13b3e42ee0ebdf6e4497151b48c35051aab51c101266d187a3"},
{file = "ruff-0.12.11-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e07fbb89f2e9249f219d88331c833860489b49cdf4b032b8e4432e9b13e8a4b9"}, {file = "ruff-0.15.0-py3-none-win32.whl", hash = "sha256:238a717ef803e501b6d51e0bdd0d2c6e8513fe9eec14002445134d3907cd46c3"},
{file = "ruff-0.12.11-py3-none-win32.whl", hash = "sha256:c792e8f597c9c756e9bcd4d87cf407a00b60af77078c96f7b6366ea2ce9ba9d3"}, {file = "ruff-0.15.0-py3-none-win_amd64.whl", hash = "sha256:dd5e4d3301dc01de614da3cdffc33d4b1b96fb89e45721f1598e5532ccf78b18"},
{file = "ruff-0.12.11-py3-none-win_amd64.whl", hash = "sha256:a3283325960307915b6deb3576b96919ee89432ebd9c48771ca12ee8afe4a0fd"}, {file = "ruff-0.15.0-py3-none-win_arm64.whl", hash = "sha256:c480d632cc0ca3f0727acac8b7d053542d9e114a462a145d0b00e7cd658c515a"},
{file = "ruff-0.12.11-py3-none-win_arm64.whl", hash = "sha256:bae4d6e6a2676f8fb0f98b74594a048bae1b944aab17e9f5d504062303c6dbea"}, {file = "ruff-0.15.0.tar.gz", hash = "sha256:6bdea47cdbea30d40f8f8d7d69c0854ba7c15420ec75a26f463290949d7f7e9a"},
{file = "ruff-0.12.11.tar.gz", hash = "sha256:c6b09ae8426a65bbee5425b9d0b82796dbb07cb1af045743c79bfb163001165d"},
] ]
[[package]] [[package]]
@@ -2515,7 +2545,7 @@ description = "A lil' TOML parser"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
groups = ["dev"] groups = ["dev"]
markers = "python_version < \"3.11\"" markers = "python_version == \"3.10\""
files = [ files = [
{file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"}, {file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"},
{file = "tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6"}, {file = "tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6"},
@@ -2863,4 +2893,4 @@ type = ["pytest-mypy"]
[metadata] [metadata]
lock-version = "2.1" lock-version = "2.1"
python-versions = ">=3.10,<4.0" python-versions = ">=3.10,<4.0"
content-hash = "5f15a9c9381c9a374f3d18e087c23b1f1ba8cce192d6f67463a3e3a7a18fee44" content-hash = "b7ac335a86aa44c3d7d2802298818b389a6f1286e3e9b7b0edb2ff06377cecaf"


@@ -9,7 +9,7 @@ packages = [{ include = "autogpt_libs" }]
[tool.poetry.dependencies]
python = ">=3.10,<4.0"
colorama = "^0.4.6"
-cryptography = "^45.0"
+cryptography = "^46.0"
expiringdict = "^1.2.2"
fastapi = "^0.128.0"
google-cloud-logging = "^3.13.0"
@@ -22,12 +22,12 @@ supabase = "^2.27.2"
uvicorn = "^0.40.0"

[tool.poetry.group.dev.dependencies]
-pyright = "^1.1.404"
+pyright = "^1.1.408"
pytest = "^8.4.1"
-pytest-asyncio = "^1.1.0"
+pytest-asyncio = "^1.3.0"
-pytest-mock = "^3.14.1"
+pytest-mock = "^3.15.1"
pytest-cov = "^6.2.1"
-ruff = "^0.12.11"
+ruff = "^0.15.0"

[build-system]
requires = ["poetry-core"]
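
Note: the pytest-asyncio bump from 1.1 to 1.3 raises the package's floor to Python 3.10 and widens its pytest constraint to <10. The `loop_scope` marker argument used by the new tests further down works the same way across both versions; a minimal sketch of standard pytest-asyncio usage (the test itself is illustrative, not from this repo):

import asyncio

import pytest

@pytest.mark.asyncio(loop_scope="session")  # share one event loop across the session
async def test_sleep_yields_control():
    assert await asyncio.sleep(0) is None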


@@ -45,10 +45,7 @@ async def create_chat_session(
        successfulAgentRuns=SafeJson({}),
        successfulAgentSchedules=SafeJson({}),
    )
-    return await PrismaChatSession.prisma().create(
-        data=data,
-        include={"Messages": True},
-    )
+    return await PrismaChatSession.prisma().create(data=data)

async def update_chat_session(
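
The eager `include={"Messages": True}` is dropped on create: a freshly created session presumably has no messages yet, so the include only added query cost. Callers that do need the relation can fetch it on demand; a minimal sketch (assumed Prisma Python client API; the where-field name is illustrative):

session = await PrismaChatSession.prisma().find_unique(
    where={"id": session_id},  # field name assumed; match the actual schema
    include={"Messages": True},
)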


@@ -266,12 +266,38 @@ async def stream_chat_post(
    """
    import asyncio

+    import time
+
+    stream_start_time = time.perf_counter()
+    # Base log metadata (task_id added after creation)
+    log_meta = {"component": "ChatStream", "session_id": session_id}
+    if user_id:
+        log_meta["user_id"] = user_id
+
+    logger.info(
+        f"[TIMING] stream_chat_post STARTED, session={session_id}, "
+        f"user={user_id}, message_len={len(request.message)}",
+        extra={"json_fields": log_meta},
+    )
+
    session = await _validate_and_get_session(session_id, user_id)
+    logger.info(
+        f"[TIMING] session validated in {(time.perf_counter() - stream_start_time)*1000:.1f}ms",
+        extra={
+            "json_fields": {
+                **log_meta,
+                "duration_ms": (time.perf_counter() - stream_start_time) * 1000,
+            }
+        },
+    )

    # Create a task in the stream registry for reconnection support
    task_id = str(uuid_module.uuid4())
    operation_id = str(uuid_module.uuid4())
+    log_meta["task_id"] = task_id
+
+    task_create_start = time.perf_counter()
    await stream_registry.create_task(
        task_id=task_id,
        session_id=session_id,
@@ -280,14 +306,46 @@ async def stream_chat_post(
        tool_name="chat",
        operation_id=operation_id,
    )
+    logger.info(
+        f"[TIMING] create_task completed in {(time.perf_counter() - task_create_start)*1000:.1f}ms",
+        extra={
+            "json_fields": {
+                **log_meta,
+                "duration_ms": (time.perf_counter() - task_create_start) * 1000,
+            }
+        },
+    )

    # Background task that runs the AI generation independently of SSE connection
    async def run_ai_generation():
+        import time as time_module
+
+        gen_start_time = time_module.perf_counter()
+        logger.info(
+            f"[TIMING] run_ai_generation STARTED, task={task_id}, session={session_id}, user={user_id}",
+            extra={"json_fields": log_meta},
+        )
+        first_chunk_time, ttfc = None, None
+        chunk_count = 0
        try:
            # Emit a start event with task_id for reconnection
            start_chunk = StreamStart(messageId=task_id, taskId=task_id)
            await stream_registry.publish_chunk(task_id, start_chunk)
+            logger.info(
+                f"[TIMING] StreamStart published at {(time_module.perf_counter() - gen_start_time)*1000:.1f}ms",
+                extra={
+                    "json_fields": {
+                        **log_meta,
+                        "elapsed_ms": (time_module.perf_counter() - gen_start_time)
+                        * 1000,
+                    }
+                },
+            )
+
+            logger.info(
+                "[TIMING] Calling stream_chat_completion",
+                extra={"json_fields": log_meta},
+            )
            async for chunk in chat_service.stream_chat_completion(
                session_id,
                request.message,
@@ -296,54 +354,202 @@ async def stream_chat_post(
                session=session,  # Pass pre-fetched session to avoid double-fetch
                context=request.context,
            ):
+                chunk_count += 1
+                if first_chunk_time is None:
+                    first_chunk_time = time_module.perf_counter()
+                    ttfc = first_chunk_time - gen_start_time
+                    logger.info(
+                        f"[TIMING] FIRST AI CHUNK at {ttfc:.2f}s, type={type(chunk).__name__}",
+                        extra={
+                            "json_fields": {
+                                **log_meta,
+                                "chunk_type": type(chunk).__name__,
+                                "time_to_first_chunk_ms": ttfc * 1000,
+                            }
+                        },
+                    )
                # Write to Redis (subscribers will receive via XREAD)
                await stream_registry.publish_chunk(task_id, chunk)

-            # Mark task as completed
+            gen_end_time = time_module.perf_counter()
+            total_time = (gen_end_time - gen_start_time) * 1000
+            logger.info(
+                f"[TIMING] run_ai_generation FINISHED in {total_time/1000:.1f}s; "
+                f"task={task_id}, session={session_id}, "
+                f"ttfc={ttfc or -1:.2f}s, n_chunks={chunk_count}",
+                extra={
+                    "json_fields": {
+                        **log_meta,
+                        "total_time_ms": total_time,
+                        "time_to_first_chunk_ms": (
+                            ttfc * 1000 if ttfc is not None else None
+                        ),
+                        "n_chunks": chunk_count,
+                    }
+                },
+            )
            await stream_registry.mark_task_completed(task_id, "completed")
        except Exception as e:
+            elapsed = time_module.perf_counter() - gen_start_time
            logger.error(
-                f"Error in background AI generation for session {session_id}: {e}"
+                f"[TIMING] run_ai_generation ERROR after {elapsed:.2f}s: {e}",
+                extra={
+                    "json_fields": {
+                        **log_meta,
+                        "elapsed_ms": elapsed * 1000,
+                        "error": str(e),
+                    }
+                },
            )
            await stream_registry.mark_task_completed(task_id, "failed")

    # Start the AI generation in a background task
    bg_task = asyncio.create_task(run_ai_generation())
    await stream_registry.set_task_asyncio_task(task_id, bg_task)
+    setup_time = (time.perf_counter() - stream_start_time) * 1000
+    logger.info(
+        f"[TIMING] Background task started, setup={setup_time:.1f}ms",
+        extra={"json_fields": {**log_meta, "setup_time_ms": setup_time}},
+    )

    # SSE endpoint that subscribes to the task's stream
    async def event_generator() -> AsyncGenerator[str, None]:
+        import time as time_module
+
+        event_gen_start = time_module.perf_counter()
+        logger.info(
+            f"[TIMING] event_generator STARTED, task={task_id}, session={session_id}, "
+            f"user={user_id}",
+            extra={"json_fields": log_meta},
+        )
        subscriber_queue = None
+        first_chunk_yielded = False
+        chunks_yielded = 0
        try:
            # Subscribe to the task stream (this replays existing messages + live updates)
+            subscribe_start = time_module.perf_counter()
+            logger.info(
+                "[TIMING] Calling subscribe_to_task",
+                extra={"json_fields": log_meta},
+            )
            subscriber_queue = await stream_registry.subscribe_to_task(
                task_id=task_id,
                user_id=user_id,
                last_message_id="0-0",  # Get all messages from the beginning
            )
+            subscribe_time = (time_module.perf_counter() - subscribe_start) * 1000
+            logger.info(
+                f"[TIMING] subscribe_to_task completed in {subscribe_time:.1f}ms, "
+                f"queue_ok={subscriber_queue is not None}",
+                extra={
+                    "json_fields": {
+                        **log_meta,
+                        "duration_ms": subscribe_time,
+                        "queue_obtained": subscriber_queue is not None,
+                    }
+                },
+            )

            if subscriber_queue is None:
+                logger.info(
+                    "[TIMING] subscriber_queue is None, yielding finish",
+                    extra={"json_fields": log_meta},
+                )
                yield StreamFinish().to_sse()
                yield "data: [DONE]\n\n"
                return

            # Read from the subscriber queue and yield to SSE
+            logger.info(
+                "[TIMING] Starting to read from subscriber_queue",
+                extra={"json_fields": log_meta},
+            )
            while True:
                try:
+                    queue_wait_start = time_module.perf_counter()
                    chunk = await asyncio.wait_for(subscriber_queue.get(), timeout=30.0)
+                    queue_wait_time = (
+                        time_module.perf_counter() - queue_wait_start
+                    ) * 1000
+                    chunks_yielded += 1
+                    if not first_chunk_yielded:
+                        first_chunk_yielded = True
+                        elapsed = time_module.perf_counter() - event_gen_start
+                        logger.info(
+                            f"[TIMING] FIRST CHUNK from queue at {elapsed:.2f}s, "
+                            f"type={type(chunk).__name__}, "
+                            f"wait={queue_wait_time:.1f}ms",
+                            extra={
+                                "json_fields": {
+                                    **log_meta,
+                                    "chunk_type": type(chunk).__name__,
+                                    "elapsed_ms": elapsed * 1000,
+                                    "queue_wait_ms": queue_wait_time,
+                                }
+                            },
+                        )
+                    elif chunks_yielded % 50 == 0:
+                        logger.info(
+                            f"[TIMING] Chunk #{chunks_yielded}, "
+                            f"type={type(chunk).__name__}",
+                            extra={
+                                "json_fields": {
+                                    **log_meta,
+                                    "chunk_number": chunks_yielded,
+                                    "chunk_type": type(chunk).__name__,
+                                }
+                            },
+                        )
                    yield chunk.to_sse()

                    # Check for finish signal
                    if isinstance(chunk, StreamFinish):
+                        total_time = time_module.perf_counter() - event_gen_start
+                        logger.info(
+                            f"[TIMING] StreamFinish received in {total_time:.2f}s; "
+                            f"n_chunks={chunks_yielded}",
+                            extra={
+                                "json_fields": {
+                                    **log_meta,
+                                    "chunks_yielded": chunks_yielded,
+                                    "total_time_ms": total_time * 1000,
+                                }
+                            },
+                        )
                        break
                except asyncio.TimeoutError:
                    # Send heartbeat to keep connection alive
+                    logger.info(
+                        f"[TIMING] Heartbeat timeout, chunks_so_far={chunks_yielded}",
+                        extra={
+                            "json_fields": {**log_meta, "chunks_so_far": chunks_yielded}
+                        },
+                    )
                    yield StreamHeartbeat().to_sse()
        except GeneratorExit:
+            logger.info(
+                f"[TIMING] GeneratorExit (client disconnected), chunks={chunks_yielded}",
+                extra={
+                    "json_fields": {
+                        **log_meta,
+                        "chunks_yielded": chunks_yielded,
+                        "reason": "client_disconnect",
+                    }
+                },
+            )
            pass  # Client disconnected - background task continues
        except Exception as e:
-            logger.error(f"Error in SSE stream for task {task_id}: {e}")
+            elapsed = (time_module.perf_counter() - event_gen_start) * 1000
+            logger.error(
+                f"[TIMING] event_generator ERROR after {elapsed:.1f}ms: {e}",
+                extra={
+                    "json_fields": {**log_meta, "elapsed_ms": elapsed, "error": str(e)}
+                },
+            )
        finally:
            # Unsubscribe when client disconnects or stream ends to prevent resource leak
            if subscriber_queue is not None:
@@ -357,6 +563,18 @@ async def stream_chat_post(
                        exc_info=True,
                    )
            # AI SDK protocol termination - always yield even if unsubscribe fails
+            total_time = time_module.perf_counter() - event_gen_start
+            logger.info(
+                f"[TIMING] event_generator FINISHED in {total_time:.2f}s; "
+                f"task={task_id}, session={session_id}, n_chunks={chunks_yielded}",
+                extra={
+                    "json_fields": {
+                        **log_meta,
+                        "total_time_ms": total_time * 1000,
+                        "chunks_yielded": chunks_yielded,
+                    }
+                },
+            )
            yield "data: [DONE]\n\n"

    return StreamingResponse(
@@ -425,7 +643,7 @@ async def stream_chat_get(
        "Chat stream completed",
        extra={
            "session_id": session_id,
-            "chunk_count": chunk_count,
+            "n_chunks": chunk_count,
            "first_chunk_type": first_chunk_type,
        },
    )
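
The instrumentation above repeats one pattern throughout: a `perf_counter()` span, a human-readable `[TIMING]` message, and a machine-readable `json_fields` dict for structured log sinks. A minimal sketch of that pattern in isolation (the function and metadata names here are illustrative, not from the codebase):

import logging
import time

logger = logging.getLogger(__name__)

def timed_step(session_id: str) -> None:
    log_meta = {"component": "ChatStream", "session_id": session_id}
    start = time.perf_counter()
    # ... the operation being measured ...
    duration_ms = (time.perf_counter() - start) * 1000
    logger.info(
        f"[TIMING] timed_step completed in {duration_ms:.1f}ms",
        extra={"json_fields": {**log_meta, "duration_ms": duration_ms}},
    )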


@@ -371,21 +371,45 @@ async def stream_chat_completion(
        ValueError: If max_context_messages is exceeded
    """
+    completion_start = time.monotonic()
+
+    # Build log metadata for structured logging
+    log_meta = {"component": "ChatService", "session_id": session_id}
+    if user_id:
+        log_meta["user_id"] = user_id
+
    logger.info(
-        f"Streaming chat completion for session {session_id} for message {message} and user id {user_id}. Message is user message: {is_user_message}"
+        f"[TIMING] stream_chat_completion STARTED, session={session_id}, user={user_id}, "
+        f"message_len={len(message) if message else 0}, is_user={is_user_message}",
+        extra={
+            "json_fields": {
+                **log_meta,
+                "message_len": len(message) if message else 0,
+                "is_user_message": is_user_message,
+            }
+        },
    )
    # Only fetch from Redis if session not provided (initial call)
    if session is None:
+        fetch_start = time.monotonic()
        session = await get_chat_session(session_id, user_id)
+        fetch_time = (time.monotonic() - fetch_start) * 1000
        logger.info(
-            f"Fetched session from Redis: {session.session_id if session else 'None'}, "
-            f"message_count={len(session.messages) if session else 0}"
+            f"[TIMING] get_chat_session took {fetch_time:.1f}ms, "
+            f"n_messages={len(session.messages) if session else 0}",
+            extra={
+                "json_fields": {
+                    **log_meta,
+                    "duration_ms": fetch_time,
+                    "n_messages": len(session.messages) if session else 0,
+                }
+            },
        )
    else:
        logger.info(
-            f"Using provided session object: {session.session_id}, "
-            f"message_count={len(session.messages)}"
+            f"[TIMING] Using provided session, messages={len(session.messages)}",
+            extra={"json_fields": {**log_meta, "n_messages": len(session.messages)}},
        )

    if not session:
@@ -406,17 +430,25 @@ async def stream_chat_completion(
    # Track user message in PostHog
    if is_user_message:
+        posthog_start = time.monotonic()
        track_user_message(
            user_id=user_id,
            session_id=session_id,
            message_length=len(message),
        )
+        posthog_time = (time.monotonic() - posthog_start) * 1000
+        logger.info(
+            f"[TIMING] track_user_message took {posthog_time:.1f}ms",
+            extra={"json_fields": {**log_meta, "duration_ms": posthog_time}},
+        )

-    logger.info(
-        f"Upserting session: {session.session_id} with user id {session.user_id}, "
-        f"message_count={len(session.messages)}"
-    )
+    upsert_start = time.monotonic()
    session = await upsert_chat_session(session)
+    upsert_time = (time.monotonic() - upsert_start) * 1000
+    logger.info(
+        f"[TIMING] upsert_chat_session took {upsert_time:.1f}ms",
+        extra={"json_fields": {**log_meta, "duration_ms": upsert_time}},
+    )
    assert session, "Session not found"

    # Generate title for new sessions on first user message (non-blocking)
@@ -454,7 +486,13 @@ async def stream_chat_completion(
        asyncio.create_task(_update_title())

    # Build system prompt with business understanding
+    prompt_start = time.monotonic()
    system_prompt, understanding = await _build_system_prompt(user_id)
+    prompt_time = (time.monotonic() - prompt_start) * 1000
+    logger.info(
+        f"[TIMING] _build_system_prompt took {prompt_time:.1f}ms",
+        extra={"json_fields": {**log_meta, "duration_ms": prompt_time}},
+    )

    # Initialize variables for streaming
    assistant_response = ChatMessage(
@@ -483,9 +521,18 @@ async def stream_chat_completion(
    text_block_id = str(uuid_module.uuid4())

    # Yield message start
+    setup_time = (time.monotonic() - completion_start) * 1000
+    logger.info(
+        f"[TIMING] Setup complete, yielding StreamStart at {setup_time:.1f}ms",
+        extra={"json_fields": {**log_meta, "setup_time_ms": setup_time}},
+    )
    yield StreamStart(messageId=message_id)

    try:
+        logger.info(
+            "[TIMING] Calling _stream_chat_chunks",
+            extra={"json_fields": log_meta},
+        )
        async for chunk in _stream_chat_chunks(
            session=session,
            tools=tools,
@@ -893,9 +940,21 @@ async def _stream_chat_chunks(
        SSE formatted JSON response objects
    """
+    import time as time_module
+
+    stream_chunks_start = time_module.perf_counter()
    model = config.model
-    logger.info("Starting pure chat stream")
+
+    # Build log metadata for structured logging
+    log_meta = {"component": "ChatService", "session_id": session.session_id}
+    if session.user_id:
+        log_meta["user_id"] = session.user_id
+
+    logger.info(
+        f"[TIMING] _stream_chat_chunks STARTED, session={session.session_id}, "
+        f"user={session.user_id}, n_messages={len(session.messages)}",
+        extra={"json_fields": {**log_meta, "n_messages": len(session.messages)}},
+    )

    messages = session.to_openai_messages()
    if system_prompt:
@@ -906,12 +965,18 @@ async def _stream_chat_chunks(
        messages = [system_message] + messages

    # Apply context window management
+    context_start = time_module.perf_counter()
    context_result = await _manage_context_window(
        messages=messages,
        model=model,
        api_key=config.api_key,
        base_url=config.base_url,
    )
+    context_time = (time_module.perf_counter() - context_start) * 1000
+    logger.info(
+        f"[TIMING] _manage_context_window took {context_time:.1f}ms",
+        extra={"json_fields": {**log_meta, "duration_ms": context_time}},
+    )

    if context_result.error:
        if "System prompt dropped" in context_result.error:
@@ -946,9 +1011,19 @@ async def _stream_chat_chunks(
    while retry_count <= MAX_RETRIES:
        try:
+            elapsed = (time_module.perf_counter() - stream_chunks_start) * 1000
+            retry_info = (
+                f" (retry {retry_count}/{MAX_RETRIES})" if retry_count > 0 else ""
+            )
            logger.info(
-                f"Creating OpenAI chat completion stream..."
-                f"{f' (retry {retry_count}/{MAX_RETRIES})' if retry_count > 0 else ''}"
+                f"[TIMING] Creating OpenAI stream at {elapsed:.1f}ms{retry_info}",
+                extra={
+                    "json_fields": {
+                        **log_meta,
+                        "elapsed_ms": elapsed,
+                        "retry_count": retry_count,
+                    }
+                },
            )

            # Build extra_body for OpenRouter tracing and PostHog analytics
@@ -965,6 +1040,7 @@ async def _stream_chat_chunks(
                    :128
                ]  # OpenRouter limit

+            api_call_start = time_module.perf_counter()
            stream = await client.chat.completions.create(
                model=model,
                messages=cast(list[ChatCompletionMessageParam], messages),
@@ -974,6 +1050,11 @@ async def _stream_chat_chunks(
                stream_options=ChatCompletionStreamOptionsParam(include_usage=True),
                extra_body=extra_body,
            )
+            api_init_time = (time_module.perf_counter() - api_call_start) * 1000
+            logger.info(
+                f"[TIMING] OpenAI stream object returned in {api_init_time:.1f}ms",
+                extra={"json_fields": {**log_meta, "duration_ms": api_init_time}},
+            )

            # Variables to accumulate tool calls
            tool_calls: list[dict[str, Any]] = []
@@ -984,10 +1065,13 @@ async def _stream_chat_chunks(
            # Track if we've started the text block
            text_started = False
+            first_content_chunk = True
+            chunk_count = 0

            # Process the stream
            chunk: ChatCompletionChunk
            async for chunk in stream:
+                chunk_count += 1
                if chunk.usage:
                    yield StreamUsage(
                        promptTokens=chunk.usage.prompt_tokens,
@@ -1010,6 +1094,23 @@ async def _stream_chat_chunks(
                        if not text_started and text_block_id:
                            yield StreamTextStart(id=text_block_id)
                            text_started = True
+                        # Log timing for first content chunk
+                        if first_content_chunk:
+                            first_content_chunk = False
+                            ttfc = (
+                                time_module.perf_counter() - api_call_start
+                            ) * 1000
+                            logger.info(
+                                f"[TIMING] FIRST CONTENT CHUNK at {ttfc:.1f}ms "
+                                f"(since API call), n_chunks={chunk_count}",
+                                extra={
+                                    "json_fields": {
+                                        **log_meta,
+                                        "time_to_first_chunk_ms": ttfc,
+                                        "n_chunks": chunk_count,
+                                    }
+                                },
+                            )
                        # Stream the text delta
                        text_response = StreamTextDelta(
                            id=text_block_id or "",
@@ -1066,7 +1167,21 @@ async def _stream_chat_chunks(
                            toolName=tool_calls[idx]["function"]["name"],
                        )
                        emitted_start_for_idx.add(idx)
-            logger.info(f"Stream complete. Finish reason: {finish_reason}")
+            stream_duration = time_module.perf_counter() - api_call_start
+            logger.info(
+                f"[TIMING] OpenAI stream COMPLETE, finish_reason={finish_reason}, "
+                f"duration={stream_duration:.2f}s, "
+                f"n_chunks={chunk_count}, n_tool_calls={len(tool_calls)}",
+                extra={
+                    "json_fields": {
+                        **log_meta,
+                        "stream_duration_ms": stream_duration * 1000,
+                        "finish_reason": finish_reason,
+                        "n_chunks": chunk_count,
+                        "n_tool_calls": len(tool_calls),
+                    }
+                },
+            )

            # Yield all accumulated tool calls after the stream is complete
            # This ensures all tool call arguments have been fully received
@@ -1086,6 +1201,12 @@ async def _stream_chat_chunks(
                    # Re-raise to trigger retry logic in the parent function
                    raise

+            total_time = (time_module.perf_counter() - stream_chunks_start) * 1000
+            logger.info(
+                f"[TIMING] _stream_chat_chunks COMPLETED in {total_time/1000:.1f}s; "
+                f"session={session.session_id}, user={session.user_id}",
+                extra={"json_fields": {**log_meta, "total_time_ms": total_time}},
+            )
            yield StreamFinish()
            return
        except Exception as e:
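
Time-to-first-chunk (TTFC) is the key metric these logs add: the gap between issuing the completion request and the first streamed token. A minimal sketch of measuring it over any async iterator (illustrative helper, not the service's API):

import time
from typing import AsyncIterator

async def measure_ttfc(stream: AsyncIterator[object]) -> float | None:
    """Return seconds until the first item, or None if the stream was empty."""
    start = time.perf_counter()
    async for _chunk in stream:
        return time.perf_counter() - start  # first chunk observed
    return None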


@@ -104,6 +104,24 @@ async def create_task(
    Returns:
        The created ActiveTask instance (metadata only)
    """
+    import time
+
+    start_time = time.perf_counter()
+
+    # Build log metadata for structured logging
+    log_meta = {
+        "component": "StreamRegistry",
+        "task_id": task_id,
+        "session_id": session_id,
+    }
+    if user_id:
+        log_meta["user_id"] = user_id
+
+    logger.info(
+        f"[TIMING] create_task STARTED, task={task_id}, session={session_id}, user={user_id}",
+        extra={"json_fields": log_meta},
+    )
+
    task = ActiveTask(
        task_id=task_id,
        session_id=session_id,
@@ -114,10 +132,18 @@ async def create_task(
    )

    # Store metadata in Redis
+    redis_start = time.perf_counter()
    redis = await get_redis_async()
+    redis_time = (time.perf_counter() - redis_start) * 1000
+    logger.info(
+        f"[TIMING] get_redis_async took {redis_time:.1f}ms",
+        extra={"json_fields": {**log_meta, "duration_ms": redis_time}},
+    )
    meta_key = _get_task_meta_key(task_id)
    op_key = _get_operation_mapping_key(operation_id)

+    hset_start = time.perf_counter()
    await redis.hset(  # type: ignore[misc]
        meta_key,
        mapping={
@@ -131,12 +157,22 @@ async def create_task(
            "created_at": task.created_at.isoformat(),
        },
    )
+    hset_time = (time.perf_counter() - hset_start) * 1000
+    logger.info(
+        f"[TIMING] redis.hset took {hset_time:.1f}ms",
+        extra={"json_fields": {**log_meta, "duration_ms": hset_time}},
+    )
    await redis.expire(meta_key, config.stream_ttl)

    # Create operation_id -> task_id mapping for webhook lookups
    await redis.set(op_key, task_id, ex=config.stream_ttl)

-    logger.debug(f"Created task {task_id} for session {session_id}")
+    total_time = (time.perf_counter() - start_time) * 1000
+    logger.info(
+        f"[TIMING] create_task COMPLETED in {total_time:.1f}ms; task={task_id}, session={session_id}",
+        extra={"json_fields": {**log_meta, "total_time_ms": total_time}},
+    )
    return task

@@ -156,26 +192,60 @@ async def publish_chunk(
    Returns:
        The Redis Stream message ID
    """
+    import time
+
+    start_time = time.perf_counter()
+    chunk_type = type(chunk).__name__
    chunk_json = chunk.model_dump_json()
    message_id = "0-0"

+    # Build log metadata
+    log_meta = {
+        "component": "StreamRegistry",
+        "task_id": task_id,
+        "chunk_type": chunk_type,
+    }
+
    try:
        redis = await get_redis_async()
        stream_key = _get_task_stream_key(task_id)

        # Write to Redis Stream for persistence and real-time delivery
+        xadd_start = time.perf_counter()
        raw_id = await redis.xadd(
            stream_key,
            {"data": chunk_json},
            maxlen=config.stream_max_length,
        )
+        xadd_time = (time.perf_counter() - xadd_start) * 1000
        message_id = raw_id if isinstance(raw_id, str) else raw_id.decode()

        # Set TTL on stream to match task metadata TTL
        await redis.expire(stream_key, config.stream_ttl)
+
+        total_time = (time.perf_counter() - start_time) * 1000
+        # Only log timing for significant chunks or slow operations
+        if (
+            chunk_type
+            in ("StreamStart", "StreamFinish", "StreamTextStart", "StreamTextEnd")
+            or total_time > 50
+        ):
+            logger.info(
+                f"[TIMING] publish_chunk {chunk_type} in {total_time:.1f}ms (xadd={xadd_time:.1f}ms)",
+                extra={
+                    "json_fields": {
+                        **log_meta,
+                        "total_time_ms": total_time,
+                        "xadd_time_ms": xadd_time,
+                        "message_id": message_id,
+                    }
+                },
+            )
    except Exception as e:
+        elapsed = (time.perf_counter() - start_time) * 1000
        logger.error(
-            f"Failed to publish chunk for task {task_id}: {e}",
+            f"[TIMING] Failed to publish chunk {chunk_type} after {elapsed:.1f}ms: {e}",
+            extra={"json_fields": {**log_meta, "elapsed_ms": elapsed, "error": str(e)}},
            exc_info=True,
        )
@@ -200,24 +270,61 @@ async def subscribe_to_task(
        An asyncio Queue that will receive stream chunks, or None if task not found
        or user doesn't have access
    """
+    import time
+
+    start_time = time.perf_counter()
+
+    # Build log metadata
+    log_meta = {"component": "StreamRegistry", "task_id": task_id}
+    if user_id:
+        log_meta["user_id"] = user_id
+
+    logger.info(
+        f"[TIMING] subscribe_to_task STARTED, task={task_id}, user={user_id}, last_msg={last_message_id}",
+        extra={"json_fields": {**log_meta, "last_message_id": last_message_id}},
+    )
+
+    redis_start = time.perf_counter()
    redis = await get_redis_async()
    meta_key = _get_task_meta_key(task_id)
    meta: dict[Any, Any] = await redis.hgetall(meta_key)  # type: ignore[misc]
+    hgetall_time = (time.perf_counter() - redis_start) * 1000
+    logger.info(
+        f"[TIMING] Redis hgetall took {hgetall_time:.1f}ms",
+        extra={"json_fields": {**log_meta, "duration_ms": hgetall_time}},
+    )

    if not meta:
-        logger.debug(f"Task {task_id} not found in Redis")
+        elapsed = (time.perf_counter() - start_time) * 1000
+        logger.info(
+            f"[TIMING] Task not found in Redis after {elapsed:.1f}ms",
+            extra={
+                "json_fields": {
+                    **log_meta,
+                    "elapsed_ms": elapsed,
+                    "reason": "task_not_found",
+                }
+            },
+        )
        return None

    # Note: Redis client uses decode_responses=True, so keys are strings
    task_status = meta.get("status", "")
    task_user_id = meta.get("user_id", "") or None
+    log_meta["session_id"] = meta.get("session_id", "")

    # Validate ownership - if task has an owner, requester must match
    if task_user_id:
        if user_id != task_user_id:
            logger.warning(
-                f"User {user_id} denied access to task {task_id} "
-                f"owned by {task_user_id}"
+                f"[TIMING] Access denied: user {user_id} tried to access task owned by {task_user_id}",
+                extra={
+                    "json_fields": {
+                        **log_meta,
+                        "task_owner": task_user_id,
+                        "reason": "access_denied",
+                    }
+                },
            )
            return None
@@ -225,7 +332,19 @@ async def subscribe_to_task(
    stream_key = _get_task_stream_key(task_id)

    # Step 1: Replay messages from Redis Stream
+    xread_start = time.perf_counter()
    messages = await redis.xread({stream_key: last_message_id}, block=0, count=1000)
+    xread_time = (time.perf_counter() - xread_start) * 1000
+    logger.info(
+        f"[TIMING] Redis xread (replay) took {xread_time:.1f}ms, status={task_status}",
+        extra={
+            "json_fields": {
+                **log_meta,
+                "duration_ms": xread_time,
+                "task_status": task_status,
+            }
+        },
+    )

    replayed_count = 0
    replay_last_id = last_message_id
@@ -244,19 +363,48 @@ async def subscribe_to_task(
            except Exception as e:
                logger.warning(f"Failed to replay message: {e}")

-    logger.debug(f"Task {task_id}: replayed {replayed_count} messages")
+    logger.info(
+        f"[TIMING] Replayed {replayed_count} messages, last_id={replay_last_id}",
+        extra={
+            "json_fields": {
+                **log_meta,
+                "n_messages_replayed": replayed_count,
+                "replay_last_id": replay_last_id,
+            }
+        },
+    )

    # Step 2: If task is still running, start stream listener for live updates
    if task_status == "running":
+        logger.info(
+            "[TIMING] Task still running, starting _stream_listener",
+            extra={"json_fields": {**log_meta, "task_status": task_status}},
+        )
        listener_task = asyncio.create_task(
-            _stream_listener(task_id, subscriber_queue, replay_last_id)
+            _stream_listener(task_id, subscriber_queue, replay_last_id, log_meta)
        )
        # Track listener task for cleanup on unsubscribe
        _listener_tasks[id(subscriber_queue)] = (task_id, listener_task)
    else:
        # Task is completed/failed - add finish marker
+        logger.info(
+            f"[TIMING] Task already {task_status}, adding StreamFinish",
+            extra={"json_fields": {**log_meta, "task_status": task_status}},
+        )
        await subscriber_queue.put(StreamFinish())

+    total_time = (time.perf_counter() - start_time) * 1000
+    logger.info(
+        f"[TIMING] subscribe_to_task COMPLETED in {total_time:.1f}ms; task={task_id}, "
+        f"n_messages_replayed={replayed_count}",
+        extra={
+            "json_fields": {
+                **log_meta,
+                "total_time_ms": total_time,
+                "n_messages_replayed": replayed_count,
+            }
+        },
+    )
    return subscriber_queue

@@ -264,6 +412,7 @@ async def _stream_listener(
    task_id: str,
    subscriber_queue: asyncio.Queue[StreamBaseResponse],
    last_replayed_id: str,
+    log_meta: dict | None = None,
) -> None:
    """Listen to Redis Stream for new messages using blocking XREAD.
@@ -274,10 +423,27 @@ async def _stream_listener(
        task_id: Task ID to listen for
        subscriber_queue: Queue to deliver messages to
        last_replayed_id: Last message ID from replay (continue from here)
+        log_meta: Structured logging metadata
    """
+    import time
+
+    start_time = time.perf_counter()
+
+    # Use provided log_meta or build minimal one
+    if log_meta is None:
+        log_meta = {"component": "StreamRegistry", "task_id": task_id}
+
+    logger.info(
+        f"[TIMING] _stream_listener STARTED, task={task_id}, last_id={last_replayed_id}",
+        extra={"json_fields": {**log_meta, "last_replayed_id": last_replayed_id}},
+    )
+
    queue_id = id(subscriber_queue)
    # Track the last successfully delivered message ID for recovery hints
    last_delivered_id = last_replayed_id
+    messages_delivered = 0
+    first_message_time = None
+    xread_count = 0
    try:
        redis = await get_redis_async()
@@ -287,9 +453,39 @@ async def _stream_listener(
        while True:
            # Block for up to 30 seconds waiting for new messages
            # This allows periodic checking if task is still running
+            xread_start = time.perf_counter()
+            xread_count += 1
            messages = await redis.xread(
                {stream_key: current_id}, block=30000, count=100
            )
+            xread_time = (time.perf_counter() - xread_start) * 1000
+
+            if messages:
+                msg_count = sum(len(msgs) for _, msgs in messages)
+                logger.info(
+                    f"[TIMING] xread #{xread_count} returned {msg_count} messages in {xread_time:.1f}ms",
+                    extra={
+                        "json_fields": {
+                            **log_meta,
+                            "xread_count": xread_count,
+                            "n_messages": msg_count,
+                            "duration_ms": xread_time,
+                        }
+                    },
+                )
+            elif xread_time > 1000:
+                # Only log timeouts (30s blocking)
+                logger.info(
+                    f"[TIMING] xread #{xread_count} timeout after {xread_time:.1f}ms",
+                    extra={
+                        "json_fields": {
+                            **log_meta,
+                            "xread_count": xread_count,
+                            "duration_ms": xread_time,
+                            "reason": "timeout",
+                        }
+                    },
+                )

            if not messages:
                # Timeout - check if task is still running
@@ -326,10 +522,30 @@ async def _stream_listener(
                        )
                        # Update last delivered ID on successful delivery
                        last_delivered_id = current_id
+                        messages_delivered += 1
+                        if first_message_time is None:
+                            first_message_time = time.perf_counter()
+                            elapsed = (first_message_time - start_time) * 1000
+                            logger.info(
+                                f"[TIMING] FIRST live message at {elapsed:.1f}ms, type={type(chunk).__name__}",
+                                extra={
+                                    "json_fields": {
+                                        **log_meta,
+                                        "elapsed_ms": elapsed,
+                                        "chunk_type": type(chunk).__name__,
+                                    }
+                                },
+                            )
                    except asyncio.TimeoutError:
                        logger.warning(
-                            f"Subscriber queue full for task {task_id}, "
-                            f"message delivery timed out after {QUEUE_PUT_TIMEOUT}s"
+                            f"[TIMING] Subscriber queue full, delivery timed out after {QUEUE_PUT_TIMEOUT}s",
+                            extra={
+                                "json_fields": {
+                                    **log_meta,
+                                    "timeout_s": QUEUE_PUT_TIMEOUT,
+                                    "reason": "queue_full",
+                                }
+                            },
                        )
                        # Send overflow error with recovery info
                        try:
@@ -351,15 +567,44 @@ async def _stream_listener(
                    # Stop listening on finish
                    if isinstance(chunk, StreamFinish):
+                        total_time = (time.perf_counter() - start_time) * 1000
+                        logger.info(
+                            f"[TIMING] StreamFinish received in {total_time/1000:.1f}s; delivered={messages_delivered}",
+                            extra={
+                                "json_fields": {
+                                    **log_meta,
+                                    "total_time_ms": total_time,
+                                    "messages_delivered": messages_delivered,
+                                }
+                            },
+                        )
                        return
                except Exception as e:
-                    logger.warning(f"Error processing stream message: {e}")
+                    logger.warning(
+                        f"Error processing stream message: {e}",
+                        extra={"json_fields": {**log_meta, "error": str(e)}},
+                    )
    except asyncio.CancelledError:
-        logger.debug(f"Stream listener cancelled for task {task_id}")
+        elapsed = (time.perf_counter() - start_time) * 1000
+        logger.info(
+            f"[TIMING] _stream_listener CANCELLED after {elapsed:.1f}ms, delivered={messages_delivered}",
+            extra={
+                "json_fields": {
+                    **log_meta,
+                    "elapsed_ms": elapsed,
+                    "messages_delivered": messages_delivered,
+                    "reason": "cancelled",
+                }
+            },
+        )
        raise  # Re-raise to propagate cancellation
    except Exception as e:
-        logger.error(f"Stream listener error for task {task_id}: {e}")
+        elapsed = (time.perf_counter() - start_time) * 1000
+        logger.error(
+            f"[TIMING] _stream_listener ERROR after {elapsed:.1f}ms: {e}",
+            extra={"json_fields": {**log_meta, "elapsed_ms": elapsed, "error": str(e)}},
+        )
        # On error, send finish to unblock subscriber
        try:
            await asyncio.wait_for(
@@ -368,10 +613,24 @@ async def _stream_listener(
            )
        except (asyncio.TimeoutError, asyncio.QueueFull):
            logger.warning(
-                f"Could not deliver finish event for task {task_id} after error"
+                "Could not deliver finish event after error",
+                extra={"json_fields": log_meta},
            )
    finally:
        # Clean up listener task mapping on exit
+        total_time = (time.perf_counter() - start_time) * 1000
+        logger.info(
+            f"[TIMING] _stream_listener FINISHED in {total_time/1000:.1f}s; task={task_id}, "
+            f"delivered={messages_delivered}, xread_count={xread_count}",
+            extra={
+                "json_fields": {
+                    **log_meta,
+                    "total_time_ms": total_time,
+                    "messages_delivered": messages_delivered,
+                    "xread_count": xread_count,
+                }
+            },
+        )
        _listener_tasks.pop(queue_id, None)
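
The registry's core mechanic is replay-then-listen on a Redis Stream: one non-blocking XREAD from the subscriber's last ID replays history, then blocking XREAD calls deliver live entries. A minimal standalone sketch, assuming redis-py's asyncio client with decode_responses=True (key and field names are illustrative):

import redis.asyncio as aioredis

async def replay_then_listen(r: aioredis.Redis, stream_key: str, last_id: str = "0-0"):
    # Phase 1: replay everything already in the stream after last_id.
    for _key, entries in await r.xread({stream_key: last_id}, count=1000) or []:
        for entry_id, fields in entries:
            last_id = entry_id
            yield fields["data"]
    # Phase 2: block up to 30s per call for live entries; loop forever.
    while True:
        batch = await r.xread({stream_key: last_id}, block=30_000, count=100)
        if not batch:
            continue  # timeout - a natural place for a heartbeat / liveness check
        for _key, entries in batch:
            for entry_id, fields in entries:
                last_id = entry_id
                yield fields["data"]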


@@ -13,10 +13,32 @@ from backend.api.features.chat.tools.models import (
    NoResultsResponse,
)
from backend.api.features.store.hybrid_search import unified_hybrid_search
-from backend.data.block import get_block
+from backend.data.block import BlockType, get_block

logger = logging.getLogger(__name__)

+_TARGET_RESULTS = 10
+# Over-fetch to compensate for post-hoc filtering of graph-only blocks.
+# 40 gives ~2x headroom over what filtering currently removes; the query-latency
+# difference between page_size 10 and 40 is minimal.
+_OVERFETCH_PAGE_SIZE = 40
+
+# Block types that only work within graphs and cannot run standalone in CoPilot.
+COPILOT_EXCLUDED_BLOCK_TYPES = {
+    BlockType.INPUT,  # Graph interface definition - data enters via chat, not graph inputs
+    BlockType.OUTPUT,  # Graph interface definition - data exits via chat, not graph outputs
+    BlockType.WEBHOOK,  # Wait for external events - would hang forever in CoPilot
+    BlockType.WEBHOOK_MANUAL,  # Same as WEBHOOK
+    BlockType.NOTE,  # Visual annotation only - no runtime behavior
+    BlockType.HUMAN_IN_THE_LOOP,  # Pauses for human approval - CoPilot IS human-in-the-loop
+    BlockType.AGENT,  # AgentExecutorBlock requires execution_context - use run_agent tool
+}
+
+# Specific block IDs excluded from CoPilot (STANDARD type but still require graph context)
+COPILOT_EXCLUDED_BLOCK_IDS = {
+    # SmartDecisionMakerBlock - dynamically discovers downstream blocks via graph topology
+    "3b191d9f-356f-482d-8238-ba04b6d18381",
+}
+

class FindBlockTool(BaseTool):
    """Tool for searching available blocks."""
@@ -88,7 +110,7 @@ class FindBlockTool(BaseTool):
                query=query,
                content_types=[ContentType.BLOCK],
                page=1,
-                page_size=10,
+                page_size=_OVERFETCH_PAGE_SIZE,
            )

            if not results:
@@ -108,60 +130,90 @@ class FindBlockTool(BaseTool):
            block = get_block(block_id)

            # Skip disabled blocks
-            if block and not block.disabled:
-                # Get input/output schemas
-                input_schema = {}
-                output_schema = {}
-                try:
-                    input_schema = block.input_schema.jsonschema()
-                except Exception:
-                    pass
-                try:
-                    output_schema = block.output_schema.jsonschema()
-                except Exception:
-                    pass
-
-                # Get categories from block instance
-                categories = []
-                if hasattr(block, "categories") and block.categories:
-                    categories = [cat.value for cat in block.categories]
-
-                # Extract required inputs for easier use
-                required_inputs: list[BlockInputFieldInfo] = []
-                if input_schema:
-                    properties = input_schema.get("properties", {})
-                    required_fields = set(input_schema.get("required", []))
-                    # Get credential field names to exclude from required inputs
-                    credentials_fields = set(
-                        block.input_schema.get_credentials_fields().keys()
-                    )
-
-                    for field_name, field_schema in properties.items():
-                        # Skip credential fields - they're handled separately
-                        if field_name in credentials_fields:
-                            continue
-                        required_inputs.append(
-                            BlockInputFieldInfo(
-                                name=field_name,
-                                type=field_schema.get("type", "string"),
-                                description=field_schema.get("description", ""),
-                                required=field_name in required_fields,
-                                default=field_schema.get("default"),
-                            )
-                        )
-
-                blocks.append(
-                    BlockInfoSummary(
-                        id=block_id,
-                        name=block.name,
-                        description=block.description or "",
-                        categories=categories,
-                        input_schema=input_schema,
-                        output_schema=output_schema,
-                        required_inputs=required_inputs,
-                    )
-                )
+            if not block or block.disabled:
+                continue
+
+            # Skip blocks excluded from CoPilot (graph-only blocks)
+            if (
+                block.block_type in COPILOT_EXCLUDED_BLOCK_TYPES
+                or block.id in COPILOT_EXCLUDED_BLOCK_IDS
+            ):
+                continue
+
+            # Get input/output schemas
+            input_schema = {}
+            output_schema = {}
+            try:
+                input_schema = block.input_schema.jsonschema()
+            except Exception as e:
+                logger.debug(
+                    "Failed to generate input schema for block %s: %s",
+                    block_id,
+                    e,
+                )
+            try:
+                output_schema = block.output_schema.jsonschema()
+            except Exception as e:
+                logger.debug(
+                    "Failed to generate output schema for block %s: %s",
+                    block_id,
+                    e,
+                )
+
+            # Get categories from block instance
+            categories = []
+            if hasattr(block, "categories") and block.categories:
+                categories = [cat.value for cat in block.categories]
+
+            # Extract required inputs for easier use
+            required_inputs: list[BlockInputFieldInfo] = []
+            if input_schema:
+                properties = input_schema.get("properties", {})
+                required_fields = set(input_schema.get("required", []))
+                # Get credential field names to exclude from required inputs
+                credentials_fields = set(
+                    block.input_schema.get_credentials_fields().keys()
+                )
+
+                for field_name, field_schema in properties.items():
+                    # Skip credential fields - they're handled separately
+                    if field_name in credentials_fields:
+                        continue
+                    required_inputs.append(
+                        BlockInputFieldInfo(
+                            name=field_name,
+                            type=field_schema.get("type", "string"),
+                            description=field_schema.get("description", ""),
+                            required=field_name in required_fields,
+                            default=field_schema.get("default"),
+                        )
+                    )
+
+            blocks.append(
+                BlockInfoSummary(
+                    id=block_id,
+                    name=block.name,
+                    description=block.description or "",
+                    categories=categories,
+                    input_schema=input_schema,
+                    output_schema=output_schema,
+                    required_inputs=required_inputs,
+                )
+            )
+
+            if len(blocks) >= _TARGET_RESULTS:
+                break
+
+        if blocks and len(blocks) < _TARGET_RESULTS:
+            logger.debug(
+                "find_block returned %d/%d results for query '%s' "
+                "(filtered %d excluded/disabled blocks)",
+                len(blocks),
+                _TARGET_RESULTS,
+                query,
+                len(results) - len(blocks),
+            )

        if not blocks:
            return NoResultsResponse(

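Distilled, the new search flow above is: over-fetch candidates, drop blocks that cannot run standalone, and stop once the target count is reached. A condensed sketch of just that control flow, using the names defined in the diff (the `content_id` access mirrors the result shape exercised by the tests below):

blocks = []
for result in results:  # from unified_hybrid_search with page_size=_OVERFETCH_PAGE_SIZE
    block = get_block(result.get("content_id"))
    if not block or block.disabled:
        continue  # unavailable
    if (
        block.block_type in COPILOT_EXCLUDED_BLOCK_TYPES
        or block.id in COPILOT_EXCLUDED_BLOCK_IDS
    ):
        continue  # graph-only block
    blocks.append(block)
    if len(blocks) >= _TARGET_RESULTS:
        break  # enough survivors; ignore remaining candidates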

@@ -0,0 +1,139 @@
"""Tests for block filtering in FindBlockTool."""
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from backend.api.features.chat.tools.find_block import (
COPILOT_EXCLUDED_BLOCK_IDS,
COPILOT_EXCLUDED_BLOCK_TYPES,
FindBlockTool,
)
from backend.api.features.chat.tools.models import BlockListResponse
from backend.data.block import BlockType
from ._test_data import make_session
_TEST_USER_ID = "test-user-find-block"
def make_mock_block(
block_id: str, name: str, block_type: BlockType, disabled: bool = False
):
"""Create a mock block for testing."""
mock = MagicMock()
mock.id = block_id
mock.name = name
mock.description = f"{name} description"
mock.block_type = block_type
mock.disabled = disabled
mock.input_schema = MagicMock()
mock.input_schema.jsonschema.return_value = {"properties": {}, "required": []}
mock.input_schema.get_credentials_fields.return_value = {}
mock.output_schema = MagicMock()
mock.output_schema.jsonschema.return_value = {}
mock.categories = []
return mock
class TestFindBlockFiltering:
"""Tests for block filtering in FindBlockTool."""
def test_excluded_block_types_contains_expected_types(self):
"""Verify COPILOT_EXCLUDED_BLOCK_TYPES contains all graph-only types."""
assert BlockType.INPUT in COPILOT_EXCLUDED_BLOCK_TYPES
assert BlockType.OUTPUT in COPILOT_EXCLUDED_BLOCK_TYPES
assert BlockType.WEBHOOK in COPILOT_EXCLUDED_BLOCK_TYPES
assert BlockType.WEBHOOK_MANUAL in COPILOT_EXCLUDED_BLOCK_TYPES
assert BlockType.NOTE in COPILOT_EXCLUDED_BLOCK_TYPES
assert BlockType.HUMAN_IN_THE_LOOP in COPILOT_EXCLUDED_BLOCK_TYPES
assert BlockType.AGENT in COPILOT_EXCLUDED_BLOCK_TYPES
def test_excluded_block_ids_contains_smart_decision_maker(self):
"""Verify SmartDecisionMakerBlock is in COPILOT_EXCLUDED_BLOCK_IDS."""
assert "3b191d9f-356f-482d-8238-ba04b6d18381" in COPILOT_EXCLUDED_BLOCK_IDS
@pytest.mark.asyncio(loop_scope="session")
async def test_excluded_block_type_filtered_from_results(self):
"""Verify blocks with excluded BlockTypes are filtered from search results."""
session = make_session(user_id=_TEST_USER_ID)
# Mock search returns an INPUT block (excluded) and a STANDARD block (included)
search_results = [
{"content_id": "input-block-id", "score": 0.9},
{"content_id": "standard-block-id", "score": 0.8},
]
input_block = make_mock_block("input-block-id", "Input Block", BlockType.INPUT)
standard_block = make_mock_block(
"standard-block-id", "HTTP Request", BlockType.STANDARD
)
def mock_get_block(block_id):
return {
"input-block-id": input_block,
"standard-block-id": standard_block,
}.get(block_id)
with patch(
"backend.api.features.chat.tools.find_block.unified_hybrid_search",
new_callable=AsyncMock,
return_value=(search_results, 2),
):
with patch(
"backend.api.features.chat.tools.find_block.get_block",
side_effect=mock_get_block,
):
tool = FindBlockTool()
response = await tool._execute(
user_id=_TEST_USER_ID, session=session, query="test"
)
# Should only return the standard block, not the INPUT block
assert isinstance(response, BlockListResponse)
assert len(response.blocks) == 1
assert response.blocks[0].id == "standard-block-id"
@pytest.mark.asyncio(loop_scope="session")
async def test_excluded_block_id_filtered_from_results(self):
"""Verify SmartDecisionMakerBlock is filtered from search results."""
session = make_session(user_id=_TEST_USER_ID)
smart_decision_id = "3b191d9f-356f-482d-8238-ba04b6d18381"
search_results = [
{"content_id": smart_decision_id, "score": 0.9},
{"content_id": "normal-block-id", "score": 0.8},
]
# SmartDecisionMakerBlock has STANDARD type but is excluded by ID
smart_block = make_mock_block(
smart_decision_id, "Smart Decision Maker", BlockType.STANDARD
)
normal_block = make_mock_block(
"normal-block-id", "Normal Block", BlockType.STANDARD
)
def mock_get_block(block_id):
return {
smart_decision_id: smart_block,
"normal-block-id": normal_block,
}.get(block_id)
with patch(
"backend.api.features.chat.tools.find_block.unified_hybrid_search",
new_callable=AsyncMock,
return_value=(search_results, 2),
):
with patch(
"backend.api.features.chat.tools.find_block.get_block",
side_effect=mock_get_block,
):
tool = FindBlockTool()
response = await tool._execute(
user_id=_TEST_USER_ID, session=session, query="decision"
)
# Should only return normal block, not SmartDecisionMakerBlock
assert isinstance(response, BlockListResponse)
assert len(response.blocks) == 1
assert response.blocks[0].id == "normal-block-id"


@@ -0,0 +1,29 @@
"""Shared helpers for chat tools."""
from typing import Any
def get_inputs_from_schema(
input_schema: dict[str, Any],
exclude_fields: set[str] | None = None,
) -> list[dict[str, Any]]:
"""Extract input field info from JSON schema."""
if not isinstance(input_schema, dict):
return []
exclude = exclude_fields or set()
properties = input_schema.get("properties", {})
required = set(input_schema.get("required", []))
return [
{
"name": name,
"title": schema.get("title", name),
"type": schema.get("type", "string"),
"description": schema.get("description", ""),
"required": name in required,
"default": schema.get("default"),
}
for name, schema in properties.items()
if name not in exclude
]
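
For reference, a quick usage sketch of the helper above (the schema is illustrative):

schema = {
    "properties": {
        "url": {"type": "string", "description": "Target URL"},
        "credentials": {"type": "object"},
    },
    "required": ["url"],
}

inputs = get_inputs_from_schema(schema, exclude_fields={"credentials"})
# [{'name': 'url', 'title': 'url', 'type': 'string',
#   'description': 'Target URL', 'required': True, 'default': None}]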


@@ -24,6 +24,7 @@ from backend.util.timezone_utils import (
)

from .base import BaseTool
+from .helpers import get_inputs_from_schema
from .models import (
    AgentDetails,
    AgentDetailsResponse,
@@ -261,7 +262,7 @@ class RunAgentTool(BaseTool):
            ),
            requirements={
                "credentials": requirements_creds_list,
-                "inputs": self._get_inputs_list(graph.input_schema),
+                "inputs": get_inputs_from_schema(graph.input_schema),
                "execution_modes": self._get_execution_modes(graph),
            },
        ),
@@ -369,22 +370,6 @@ class RunAgentTool(BaseTool):
            session_id=session_id,
        )

-    def _get_inputs_list(self, input_schema: dict[str, Any]) -> list[dict[str, Any]]:
-        """Extract inputs list from schema."""
-        inputs_list = []
-        if isinstance(input_schema, dict) and "properties" in input_schema:
-            for field_name, field_schema in input_schema["properties"].items():
-                inputs_list.append(
-                    {
-                        "name": field_name,
-                        "title": field_schema.get("title", field_name),
-                        "type": field_schema.get("type", "string"),
-                        "description": field_schema.get("description", ""),
-                        "required": field_name in input_schema.get("required", []),
-                    }
-                )
-        return inputs_list
-
    def _get_execution_modes(self, graph: GraphModel) -> list[str]:
        """Get available execution modes for the graph."""
        trigger_info = graph.trigger_setup_info
@@ -398,7 +383,7 @@ class RunAgentTool(BaseTool):
        suffix: str,
    ) -> str:
        """Build a message describing available inputs for an agent."""
-        inputs_list = self._get_inputs_list(graph.input_schema)
+        inputs_list = get_inputs_from_schema(graph.input_schema)
        required_names = [i["name"] for i in inputs_list if i["required"]]
        optional_names = [i["name"] for i in inputs_list if not i["required"]]

View File

@@ -8,14 +8,19 @@ from typing import Any
from pydantic_core import PydanticUndefined from pydantic_core import PydanticUndefined
from backend.api.features.chat.model import ChatSession from backend.api.features.chat.model import ChatSession
from backend.data.block import get_block from backend.api.features.chat.tools.find_block import (
COPILOT_EXCLUDED_BLOCK_IDS,
COPILOT_EXCLUDED_BLOCK_TYPES,
)
from backend.data.block import AnyBlockSchema, get_block
from backend.data.execution import ExecutionContext from backend.data.execution import ExecutionContext
from backend.data.model import CredentialsMetaInput from backend.data.model import CredentialsFieldInfo, CredentialsMetaInput
from backend.data.workspace import get_or_create_workspace from backend.data.workspace import get_or_create_workspace
from backend.integrations.creds_manager import IntegrationCredentialsManager from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.util.exceptions import BlockError from backend.util.exceptions import BlockError
from .base import BaseTool from .base import BaseTool
from .helpers import get_inputs_from_schema
from .models import ( from .models import (
BlockOutputResponse, BlockOutputResponse,
ErrorResponse, ErrorResponse,
@@ -24,7 +29,10 @@ from .models import (
ToolResponseBase, ToolResponseBase,
UserReadiness, UserReadiness,
) )
from .utils import build_missing_credentials_from_field_info from .utils import (
build_missing_credentials_from_field_info,
match_credentials_to_requirements,
)
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@@ -73,91 +81,6 @@ class RunBlockTool(BaseTool):
def requires_auth(self) -> bool: def requires_auth(self) -> bool:
return True return True
async def _check_block_credentials(
self,
user_id: str,
block: Any,
input_data: dict[str, Any] | None = None,
) -> tuple[dict[str, CredentialsMetaInput], list[CredentialsMetaInput]]:
"""
Check if user has required credentials for a block.
Args:
user_id: User ID
block: Block to check credentials for
input_data: Input data for the block (used to determine provider via discriminator)
Returns:
tuple[matched_credentials, missing_credentials]
"""
matched_credentials: dict[str, CredentialsMetaInput] = {}
missing_credentials: list[CredentialsMetaInput] = []
input_data = input_data or {}
# Get credential field info from block's input schema
credentials_fields_info = block.input_schema.get_credentials_fields_info()
if not credentials_fields_info:
return matched_credentials, missing_credentials
# Get user's available credentials
creds_manager = IntegrationCredentialsManager()
available_creds = await creds_manager.store.get_all_creds(user_id)
for field_name, field_info in credentials_fields_info.items():
effective_field_info = field_info
if field_info.discriminator and field_info.discriminator_mapping:
# Get discriminator from input, falling back to schema default
discriminator_value = input_data.get(field_info.discriminator)
if discriminator_value is None:
field = block.input_schema.model_fields.get(
field_info.discriminator
)
if field and field.default is not PydanticUndefined:
discriminator_value = field.default
if (
discriminator_value
and discriminator_value in field_info.discriminator_mapping
):
effective_field_info = field_info.discriminate(discriminator_value)
logger.debug(
f"Discriminated provider for {field_name}: "
f"{discriminator_value} -> {effective_field_info.provider}"
)
matching_cred = next(
(
cred
for cred in available_creds
if cred.provider in effective_field_info.provider
and cred.type in effective_field_info.supported_types
),
None,
)
if matching_cred:
matched_credentials[field_name] = CredentialsMetaInput(
id=matching_cred.id,
provider=matching_cred.provider, # type: ignore
type=matching_cred.type,
title=matching_cred.title,
)
else:
# Create a placeholder for the missing credential
provider = next(iter(effective_field_info.provider), "unknown")
cred_type = next(iter(effective_field_info.supported_types), "api_key")
missing_credentials.append(
CredentialsMetaInput(
id=field_name,
provider=provider, # type: ignore
type=cred_type, # type: ignore
title=field_name.replace("_", " ").title(),
)
)
return matched_credentials, missing_credentials
async def _execute( async def _execute(
self, self,
user_id: str | None, user_id: str | None,
@@ -212,11 +135,24 @@ class RunBlockTool(BaseTool):
session_id=session_id, session_id=session_id,
) )
# Check if block is excluded from CoPilot (graph-only blocks)
if (
block.block_type in COPILOT_EXCLUDED_BLOCK_TYPES
or block.id in COPILOT_EXCLUDED_BLOCK_IDS
):
return ErrorResponse(
message=(
f"Block '{block.name}' cannot be run directly in CoPilot. "
"This block is designed for use within graphs only."
),
session_id=session_id,
)
logger.info(f"Executing block {block.name} ({block_id}) for user {user_id}") logger.info(f"Executing block {block.name} ({block_id}) for user {user_id}")
creds_manager = IntegrationCredentialsManager() creds_manager = IntegrationCredentialsManager()
matched_credentials, missing_credentials = await self._check_block_credentials( matched_credentials, missing_credentials = (
user_id, block, input_data await self._resolve_block_credentials(user_id, block, input_data)
) )
if missing_credentials: if missing_credentials:
@@ -345,29 +281,75 @@ class RunBlockTool(BaseTool):
session_id=session_id, session_id=session_id,
) )
def _get_inputs_list(self, block: Any) -> list[dict[str, Any]]: async def _resolve_block_credentials(
self,
user_id: str,
block: AnyBlockSchema,
input_data: dict[str, Any] | None = None,
) -> tuple[dict[str, CredentialsMetaInput], list[CredentialsMetaInput]]:
"""
Resolve credentials for a block by matching user's available credentials.
Args:
user_id: User ID
block: Block to resolve credentials for
input_data: Input data for the block (used to determine provider via discriminator)
Returns:
tuple of (matched_credentials, missing_credentials) - matched credentials
are used for block execution, missing ones indicate setup requirements.
"""
input_data = input_data or {}
requirements = self._resolve_discriminated_credentials(block, input_data)
if not requirements:
return {}, []
return await match_credentials_to_requirements(user_id, requirements)
def _get_inputs_list(self, block: AnyBlockSchema) -> list[dict[str, Any]]:
"""Extract non-credential inputs from block schema.""" """Extract non-credential inputs from block schema."""
inputs_list = []
schema = block.input_schema.jsonschema() schema = block.input_schema.jsonschema()
properties = schema.get("properties", {})
required_fields = set(schema.get("required", []))
# Get credential field names to exclude
credentials_fields = set(block.input_schema.get_credentials_fields().keys()) credentials_fields = set(block.input_schema.get_credentials_fields().keys())
return get_inputs_from_schema(schema, exclude_fields=credentials_fields)
for field_name, field_schema in properties.items(): def _resolve_discriminated_credentials(
# Skip credential fields self,
if field_name in credentials_fields: block: AnyBlockSchema,
continue input_data: dict[str, Any],
) -> dict[str, CredentialsFieldInfo]:
"""Resolve credential requirements, applying discriminator logic where needed."""
credentials_fields_info = block.input_schema.get_credentials_fields_info()
if not credentials_fields_info:
return {}
inputs_list.append( resolved: dict[str, CredentialsFieldInfo] = {}
{
"name": field_name,
"title": field_schema.get("title", field_name),
"type": field_schema.get("type", "string"),
"description": field_schema.get("description", ""),
"required": field_name in required_fields,
}
)
return inputs_list for field_name, field_info in credentials_fields_info.items():
effective_field_info = field_info
if field_info.discriminator and field_info.discriminator_mapping:
discriminator_value = input_data.get(field_info.discriminator)
if discriminator_value is None:
field = block.input_schema.model_fields.get(
field_info.discriminator
)
if field and field.default is not PydanticUndefined:
discriminator_value = field.default
if (
discriminator_value
and discriminator_value in field_info.discriminator_mapping
):
effective_field_info = field_info.discriminate(discriminator_value)
# For host-scoped credentials, add the discriminator value
# (e.g., URL) so _credential_is_for_host can match it
effective_field_info.discriminator_values.add(discriminator_value)
logger.debug(
f"Discriminated provider for {field_name}: "
f"{discriminator_value} -> {effective_field_info.provider}"
)
resolved[field_name] = effective_field_info
return resolved
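
A hypothetical call sketch (IDs and values invented) showing what the new credential resolution returns to _execute:

    # Resolve credentials before running a block; `tool` is a RunBlockTool instance.
    matched, missing = await tool._resolve_block_credentials(
        user_id="user-123",
        block=block,
        input_data={"server_url": "https://mcp.example.com/mcp"},
    )
    # matched: {"credentials": CredentialsMetaInput(id="cred-1", provider="mcp", ...)}
    # missing: []  (if non-empty, _execute returns a setup-required response instead)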

View File

@@ -0,0 +1,106 @@
"""Tests for block execution guards in RunBlockTool."""
from unittest.mock import MagicMock, patch
import pytest
from backend.api.features.chat.tools.models import ErrorResponse
from backend.api.features.chat.tools.run_block import RunBlockTool
from backend.data.block import BlockType
from ._test_data import make_session
_TEST_USER_ID = "test-user-run-block"
def make_mock_block(
block_id: str, name: str, block_type: BlockType, disabled: bool = False
):
"""Create a mock block for testing."""
mock = MagicMock()
mock.id = block_id
mock.name = name
mock.block_type = block_type
mock.disabled = disabled
mock.input_schema = MagicMock()
mock.input_schema.jsonschema.return_value = {"properties": {}, "required": []}
mock.input_schema.get_credentials_fields_info.return_value = []
return mock
class TestRunBlockFiltering:
"""Tests for block execution guards in RunBlockTool."""
@pytest.mark.asyncio(loop_scope="session")
async def test_excluded_block_type_returns_error(self):
"""Attempting to execute a block with excluded BlockType returns error."""
session = make_session(user_id=_TEST_USER_ID)
input_block = make_mock_block("input-block-id", "Input Block", BlockType.INPUT)
with patch(
"backend.api.features.chat.tools.run_block.get_block",
return_value=input_block,
):
tool = RunBlockTool()
response = await tool._execute(
user_id=_TEST_USER_ID,
session=session,
block_id="input-block-id",
input_data={},
)
assert isinstance(response, ErrorResponse)
assert "cannot be run directly in CoPilot" in response.message
assert "designed for use within graphs only" in response.message
@pytest.mark.asyncio(loop_scope="session")
async def test_excluded_block_id_returns_error(self):
"""Attempting to execute SmartDecisionMakerBlock returns error."""
session = make_session(user_id=_TEST_USER_ID)
smart_decision_id = "3b191d9f-356f-482d-8238-ba04b6d18381"
smart_block = make_mock_block(
smart_decision_id, "Smart Decision Maker", BlockType.STANDARD
)
with patch(
"backend.api.features.chat.tools.run_block.get_block",
return_value=smart_block,
):
tool = RunBlockTool()
response = await tool._execute(
user_id=_TEST_USER_ID,
session=session,
block_id=smart_decision_id,
input_data={},
)
assert isinstance(response, ErrorResponse)
assert "cannot be run directly in CoPilot" in response.message
@pytest.mark.asyncio(loop_scope="session")
async def test_non_excluded_block_passes_guard(self):
"""Non-excluded blocks pass the filtering guard (may fail later for other reasons)."""
session = make_session(user_id=_TEST_USER_ID)
standard_block = make_mock_block(
"standard-id", "HTTP Request", BlockType.STANDARD
)
with patch(
"backend.api.features.chat.tools.run_block.get_block",
return_value=standard_block,
):
tool = RunBlockTool()
response = await tool._execute(
user_id=_TEST_USER_ID,
session=session,
block_id="standard-id",
input_data={},
)
# Should NOT be an ErrorResponse about CoPilot exclusion
# (may be other errors like missing credentials, but not the exclusion guard)
if isinstance(response, ErrorResponse):
assert "cannot be run directly in CoPilot" not in response.message

View File

@@ -8,12 +8,14 @@ from backend.api.features.library import model as library_model
 from backend.api.features.store import db as store_db
 from backend.data.graph import GraphModel
 from backend.data.model import (
+    Credentials,
     CredentialsFieldInfo,
     CredentialsMetaInput,
     HostScopedCredentials,
     OAuth2Credentials,
 )
 from backend.integrations.creds_manager import IntegrationCredentialsManager
+from backend.integrations.providers import ProviderName
 from backend.util.exceptions import NotFoundError

 logger = logging.getLogger(__name__)
@@ -223,6 +225,99 @@ async def get_or_create_library_agent(
     return library_agents[0]

+async def match_credentials_to_requirements(
+    user_id: str,
+    requirements: dict[str, CredentialsFieldInfo],
+) -> tuple[dict[str, CredentialsMetaInput], list[CredentialsMetaInput]]:
+    """
+    Match user's credentials against a dictionary of credential requirements.
+
+    This is the core matching logic shared by both graph and block credential matching.
+    """
+    matched: dict[str, CredentialsMetaInput] = {}
+    missing: list[CredentialsMetaInput] = []
+    if not requirements:
+        return matched, missing
+
+    available_creds = await get_user_credentials(user_id)
+    for field_name, field_info in requirements.items():
+        matching_cred = find_matching_credential(available_creds, field_info)
+        if matching_cred:
+            try:
+                matched[field_name] = create_credential_meta_from_match(matching_cred)
+            except Exception as e:
+                logger.error(
+                    f"Failed to create CredentialsMetaInput for field '{field_name}': "
+                    f"provider={matching_cred.provider}, type={matching_cred.type}, "
+                    f"credential_id={matching_cred.id}",
+                    exc_info=True,
+                )
+                provider = next(iter(field_info.provider), "unknown")
+                cred_type = next(iter(field_info.supported_types), "api_key")
+                missing.append(
+                    CredentialsMetaInput(
+                        id=field_name,
+                        provider=provider,  # type: ignore
+                        type=cred_type,  # type: ignore
+                        title=f"{field_name} (validation failed: {e})",
+                    )
+                )
+        else:
+            provider = next(iter(field_info.provider), "unknown")
+            cred_type = next(iter(field_info.supported_types), "api_key")
+            missing.append(
+                CredentialsMetaInput(
+                    id=field_name,
+                    provider=provider,  # type: ignore
+                    type=cred_type,  # type: ignore
+                    title=field_name.replace("_", " ").title(),
+                )
+            )
+    return matched, missing
+
+async def get_user_credentials(user_id: str) -> list[Credentials]:
+    """Get all available credentials for a user."""
+    creds_manager = IntegrationCredentialsManager()
+    return await creds_manager.store.get_all_creds(user_id)
+
+def find_matching_credential(
+    available_creds: list[Credentials],
+    field_info: CredentialsFieldInfo,
+) -> Credentials | None:
+    """Find a credential that matches the required provider, type, scopes, and host."""
+    for cred in available_creds:
+        if cred.provider not in field_info.provider:
+            continue
+        if cred.type not in field_info.supported_types:
+            continue
+        if cred.type == "oauth2" and not _credential_has_required_scopes(
+            cred, field_info
+        ):
+            continue
+        if cred.type == "host_scoped" and not _credential_is_for_host(cred, field_info):
+            continue
+        return cred
+    return None
+
+def create_credential_meta_from_match(
+    matching_cred: Credentials,
+) -> CredentialsMetaInput:
+    """Create a CredentialsMetaInput from a matched credential."""
+    return CredentialsMetaInput(
+        id=matching_cred.id,
+        provider=matching_cred.provider,  # type: ignore
+        type=matching_cred.type,
+        title=matching_cred.title,
+    )
+
 async def match_user_credentials_to_graph(
     user_id: str,
     graph: GraphModel,
@@ -265,7 +360,7 @@ async def match_user_credentials_to_graph(
         _,
         _,
     ) in aggregated_creds.items():
-        # Find first matching credential by provider, type, and scopes
+        # Find first matching credential by provider, type, scopes, and host/URL
         matching_cred = next(
             (
                 cred
@@ -280,6 +375,10 @@ async def match_user_credentials_to_graph(
                     cred.type != "host_scoped"
                     or _credential_is_for_host(cred, credential_requirements)
                 )
+                and (
+                    cred.provider != ProviderName.MCP
+                    or _credential_is_for_mcp_server(cred, credential_requirements)
+                )
             ),
             None,
         )
@@ -331,8 +430,6 @@ def _credential_has_required_scopes(
     # If no scopes are required, any credential matches
     if not requirements.required_scopes:
         return True
-
-    # Check that credential scopes are a superset of required scopes
     return set(credential.scopes).issuperset(requirements.required_scopes)
@@ -352,6 +449,22 @@ def _credential_is_for_host(
     return credential.matches_url(list(requirements.discriminator_values)[0])

+def _credential_is_for_mcp_server(
+    credential: Credentials,
+    requirements: CredentialsFieldInfo,
+) -> bool:
+    """Check if an MCP OAuth credential matches the required server URL."""
+    if not requirements.discriminator_values:
+        return True
+    server_url = (
+        credential.metadata.get("mcp_server_url")
+        if isinstance(credential, OAuth2Credentials)
+        else None
+    )
+    return server_url in requirements.discriminator_values if server_url else False
+
 async def check_user_has_required_credentials(
     user_id: str,
     required_credentials: list[CredentialsMetaInput],
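
To illustrate the matching order, a hedged sketch with invented data: a credential must match provider and type; oauth2 credentials additionally need a scope superset, host_scoped ones a host match, and MCP oauth2 ones a server URL recorded in their metadata.

    from pydantic import SecretStr
    from backend.data.model import OAuth2Credentials

    cred = OAuth2Credentials(
        provider="mcp",
        title="MCP: mcp.example.com",
        access_token=SecretStr("tok"),
        refresh_token=None,
        access_token_expires_at=None,
        refresh_token_expires_at=None,
        scopes=[],
        metadata={"mcp_server_url": "https://mcp.example.com/mcp"},
    )
    # _credential_is_for_mcp_server(cred, requirements) is True only when
    # requirements.discriminator_values contains "https://mcp.example.com/mcp"
    # (or when discriminator_values is empty, in which case any MCP credential matches).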

View File

@@ -1,7 +1,7 @@
 import asyncio
 import logging
 from datetime import datetime, timedelta, timezone
-from typing import TYPE_CHECKING, Annotated, List, Literal
+from typing import TYPE_CHECKING, Annotated, Any, List, Literal

 from autogpt_libs.auth import get_user_id
 from fastapi import (
@@ -14,7 +14,7 @@ from fastapi import (
     Security,
     status,
 )
-from pydantic import BaseModel, Field, SecretStr
+from pydantic import BaseModel, Field, SecretStr, model_validator
 from starlette.status import HTTP_500_INTERNAL_SERVER_ERROR, HTTP_502_BAD_GATEWAY

 from backend.api.features.library.db import set_preset_webhook, update_preset
@@ -102,9 +102,37 @@ class CredentialsMetaResponse(BaseModel):
     scopes: list[str] | None
     username: str | None
     host: str | None = Field(
-        default=None, description="Host pattern for host-scoped credentials"
+        default=None,
+        description="Host pattern for host-scoped or MCP server URL for MCP credentials",
     )

+    @model_validator(mode="before")
+    @classmethod
+    def _normalize_provider(cls, data: Any) -> Any:
+        """Fix ``ProviderName.X`` format from Python 3.13 ``str(Enum)`` bug."""
+        if isinstance(data, dict):
+            prov = data.get("provider", "")
+            if isinstance(prov, str) and prov.startswith("ProviderName."):
+                member = prov.removeprefix("ProviderName.")
+                try:
+                    data = {**data, "provider": ProviderName[member].value}
+                except KeyError:
+                    pass
+        return data
+
+    @staticmethod
+    def get_host(cred: Credentials) -> str | None:
+        """Extract host from credential: HostScoped host or MCP server URL."""
+        if isinstance(cred, HostScopedCredentials):
+            return cred.host
+        if isinstance(cred, OAuth2Credentials) and cred.provider in (
+            ProviderName.MCP,
+            ProviderName.MCP.value,
+            "ProviderName.MCP",
+        ):
+            return (cred.metadata or {}).get("mcp_server_url")
+        return None

 @router.post("/{provider}/callback", summary="Exchange OAuth code for tokens")
 async def callback(
@@ -179,9 +207,7 @@ async def callback(
         title=credentials.title,
         scopes=credentials.scopes,
         username=credentials.username,
-        host=(
-            credentials.host if isinstance(credentials, HostScopedCredentials) else None
-        ),
+        host=(CredentialsMetaResponse.get_host(credentials)),
     )
@@ -199,7 +225,7 @@ async def list_credentials(
             title=cred.title,
             scopes=cred.scopes if isinstance(cred, OAuth2Credentials) else None,
             username=cred.username if isinstance(cred, OAuth2Credentials) else None,
-            host=cred.host if isinstance(cred, HostScopedCredentials) else None,
+            host=CredentialsMetaResponse.get_host(cred),
         )
         for cred in credentials
     ]
@@ -222,7 +248,7 @@ async def list_credentials_by_provider(
             title=cred.title,
             scopes=cred.scopes if isinstance(cred, OAuth2Credentials) else None,
             username=cred.username if isinstance(cred, OAuth2Credentials) else None,
-            host=cred.host if isinstance(cred, HostScopedCredentials) else None,
+            host=CredentialsMetaResponse.get_host(cred),
         )
         for cred in credentials
     ]
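
The normalization handles a persisted artifact of calling str() on a str-mixin Enum, which on some Python versions yields "ProviderName.MCP" instead of "mcp". A hedged illustration with a simplified stand-in enum:

    from enum import Enum

    class ProviderName(str, Enum):  # simplified stand-in for the real enum
        MCP = "mcp"

    stored = "ProviderName.MCP"  # the buggy persisted form
    member = stored.removeprefix("ProviderName.")
    assert ProviderName[member].value == "mcp"  # what _normalize_provider restores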

View File

@@ -0,0 +1,414 @@
"""
MCP (Model Context Protocol) API routes.
Provides endpoints for MCP tool discovery and OAuth authentication so the
frontend can list available tools on an MCP server before placing a block.
"""
import logging
from typing import Annotated, Any
from urllib.parse import urlparse
import fastapi
from autogpt_libs.auth import get_user_id
from fastapi import Security
from pydantic import BaseModel, Field
from backend.api.features.integrations.router import CredentialsMetaResponse
from backend.blocks.mcp.client import MCPClient, MCPClientError
from backend.blocks.mcp.oauth import MCPOAuthHandler
from backend.data.model import OAuth2Credentials
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.providers import ProviderName
from backend.util.request import HTTPClientError, Requests
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
settings = Settings()
router = fastapi.APIRouter(tags=["mcp"])
creds_manager = IntegrationCredentialsManager()
# ====================== Tool Discovery ====================== #
class DiscoverToolsRequest(BaseModel):
"""Request to discover tools on an MCP server."""
server_url: str = Field(description="URL of the MCP server")
auth_token: str | None = Field(
default=None,
description="Optional Bearer token for authenticated MCP servers",
)
class MCPToolResponse(BaseModel):
"""A single MCP tool returned by discovery."""
name: str
description: str
input_schema: dict[str, Any]
class DiscoverToolsResponse(BaseModel):
"""Response containing the list of tools available on an MCP server."""
tools: list[MCPToolResponse]
server_name: str | None = None
protocol_version: str | None = None
@router.post(
"/discover-tools",
summary="Discover available tools on an MCP server",
response_model=DiscoverToolsResponse,
)
async def discover_tools(
request: DiscoverToolsRequest,
user_id: Annotated[str, Security(get_user_id)],
) -> DiscoverToolsResponse:
"""
Connect to an MCP server and return its available tools.
If the user has a stored MCP credential for this server URL, it will be
used automatically — no need to pass an explicit auth token.
"""
auth_token = request.auth_token
# Auto-use stored MCP credential when no explicit token is provided.
if not auth_token:
try:
mcp_creds = await creds_manager.store.get_creds_by_provider(
user_id, ProviderName.MCP.value
)
# Find the freshest credential for this server URL
best_cred: OAuth2Credentials | None = None
for cred in mcp_creds:
if (
isinstance(cred, OAuth2Credentials)
and cred.metadata.get("mcp_server_url") == request.server_url
):
if best_cred is None or (
(cred.access_token_expires_at or 0)
> (best_cred.access_token_expires_at or 0)
):
best_cred = cred
if best_cred:
# Refresh the token if expired before using it
best_cred = await creds_manager.refresh_if_needed(user_id, best_cred)
logger.info(
f"Using MCP credential {best_cred.id} for {request.server_url}, "
f"expires_at={best_cred.access_token_expires_at}"
)
auth_token = best_cred.access_token.get_secret_value()
except Exception:
logger.debug("Could not look up stored MCP credentials", exc_info=True)
try:
client = MCPClient(request.server_url, auth_token=auth_token)
init_result = await client.initialize()
tools = await client.list_tools()
return DiscoverToolsResponse(
tools=[
MCPToolResponse(
name=t.name,
description=t.description,
input_schema=t.input_schema,
)
for t in tools
],
server_name=init_result.get("serverInfo", {}).get("name"),
protocol_version=init_result.get("protocolVersion"),
)
except HTTPClientError as e:
if e.status_code in (401, 403):
logger.warning(
f"MCP server returned {e.status_code} for {request.server_url}: {e}"
)
raise fastapi.HTTPException(
status_code=401,
detail="This MCP server requires authentication. "
"Please provide a valid auth token.",
)
raise fastapi.HTTPException(status_code=502, detail=str(e))
except MCPClientError as e:
raise fastapi.HTTPException(status_code=502, detail=str(e))
except Exception as e:
logger.exception("MCP tool discovery failed")
raise fastapi.HTTPException(
status_code=502,
detail=f"Failed to connect to MCP server: {str(e)}",
)
# ======================== OAuth Flow ======================== #
class MCPOAuthLoginRequest(BaseModel):
"""Request to start an OAuth flow for an MCP server."""
server_url: str = Field(description="URL of the MCP server that requires OAuth")
class MCPOAuthLoginResponse(BaseModel):
"""Response with the OAuth login URL for the user to authenticate."""
login_url: str
state_token: str
@router.post(
"/oauth/login",
summary="Initiate OAuth login for an MCP server",
)
async def mcp_oauth_login(
request: MCPOAuthLoginRequest,
user_id: Annotated[str, Security(get_user_id)],
) -> MCPOAuthLoginResponse:
"""
Discover OAuth metadata from the MCP server and return a login URL.
1. Discovers the protected-resource metadata (RFC 9728)
2. Fetches the authorization server metadata (RFC 8414)
3. Performs Dynamic Client Registration (RFC 7591) if available
4. Returns the authorization URL for the frontend to open in a popup
"""
client = MCPClient(request.server_url)
# Step 1: Discover protected-resource metadata (RFC 9728)
try:
protected_resource = await client.discover_auth()
except Exception as e:
raise fastapi.HTTPException(
status_code=502,
detail=f"Failed to discover OAuth metadata: {e}",
)
metadata: dict[str, Any] | None = None
if protected_resource and "authorization_servers" in protected_resource:
auth_server_url = protected_resource["authorization_servers"][0]
resource_url = protected_resource.get("resource", request.server_url)
# Step 2a: Discover auth-server metadata (RFC 8414)
try:
metadata = await client.discover_auth_server_metadata(auth_server_url)
except Exception as e:
raise fastapi.HTTPException(
status_code=502,
detail=f"Failed to discover authorization server metadata: {e}",
)
else:
# Fallback: Some MCP servers (e.g. Linear) are their own auth server
# and serve OAuth metadata directly without protected-resource metadata.
# Don't assume a resource_url — omitting it lets the auth server choose
# the correct audience for the token (RFC 8707 resource is optional).
resource_url = None
try:
metadata = await client.discover_auth_server_metadata(request.server_url)
except Exception:
pass
if not metadata or "authorization_endpoint" not in metadata:
raise fastapi.HTTPException(
status_code=400,
detail="This MCP server does not advertise OAuth support. "
"You may need to provide an auth token manually.",
)
authorize_url = metadata["authorization_endpoint"]
token_url = metadata["token_endpoint"]
registration_endpoint = metadata.get("registration_endpoint")
revoke_url = metadata.get("revocation_endpoint")
# Step 3: Dynamic Client Registration (RFC 7591) if available
frontend_base_url = settings.config.frontend_base_url
if not frontend_base_url:
raise fastapi.HTTPException(
status_code=500,
detail="Frontend base URL is not configured.",
)
redirect_uri = f"{frontend_base_url}/auth/integrations/mcp_callback"
client_id = ""
client_secret = ""
if registration_endpoint:
reg_result = await _register_mcp_client(
registration_endpoint, redirect_uri, request.server_url
)
if reg_result:
client_id = reg_result.get("client_id", "")
client_secret = reg_result.get("client_secret", "")
if not client_id:
client_id = "autogpt-platform"
# Step 4: Store state token with OAuth metadata for the callback
scopes = (protected_resource or {}).get("scopes_supported") or metadata.get(
"scopes_supported", []
)
state_token, code_challenge = await creds_manager.store.store_state_token(
user_id,
ProviderName.MCP.value,
scopes,
state_metadata={
"authorize_url": authorize_url,
"token_url": token_url,
"revoke_url": revoke_url,
"resource_url": resource_url,
"server_url": request.server_url,
"client_id": client_id,
"client_secret": client_secret,
},
)
# Step 5: Build and return the login URL
handler = MCPOAuthHandler(
client_id=client_id,
client_secret=client_secret,
redirect_uri=redirect_uri,
authorize_url=authorize_url,
token_url=token_url,
resource_url=resource_url,
)
login_url = handler.get_login_url(
scopes, state_token, code_challenge=code_challenge
)
return MCPOAuthLoginResponse(login_url=login_url, state_token=state_token)
class MCPOAuthCallbackRequest(BaseModel):
"""Request to exchange an OAuth code for tokens."""
code: str = Field(description="Authorization code from OAuth callback")
state_token: str = Field(description="State token for CSRF verification")
class MCPOAuthCallbackResponse(BaseModel):
"""Response after successfully storing OAuth credentials."""
credential_id: str
@router.post(
"/oauth/callback",
summary="Exchange OAuth code for MCP tokens",
)
async def mcp_oauth_callback(
request: MCPOAuthCallbackRequest,
user_id: Annotated[str, Security(get_user_id)],
) -> CredentialsMetaResponse:
"""
Exchange the authorization code for tokens and store the credential.
The frontend calls this after receiving the OAuth code from the popup.
On success, subsequent ``/discover-tools`` calls for the same server URL
will automatically use the stored credential.
"""
valid_state = await creds_manager.store.verify_state_token(
user_id, request.state_token, ProviderName.MCP.value
)
if not valid_state:
raise fastapi.HTTPException(
status_code=400,
detail="Invalid or expired state token.",
)
meta = valid_state.state_metadata
frontend_base_url = settings.config.frontend_base_url
redirect_uri = f"{frontend_base_url}/auth/integrations/mcp_callback"
handler = MCPOAuthHandler(
client_id=meta["client_id"],
client_secret=meta.get("client_secret", ""),
redirect_uri=redirect_uri,
authorize_url=meta["authorize_url"],
token_url=meta["token_url"],
revoke_url=meta.get("revoke_url"),
resource_url=meta.get("resource_url"),
)
try:
credentials = await handler.exchange_code_for_tokens(
request.code, valid_state.scopes, valid_state.code_verifier
)
except Exception as e:
logger.exception("MCP OAuth token exchange failed")
raise fastapi.HTTPException(
status_code=400,
detail=f"OAuth token exchange failed: {e}",
)
# Enrich credential metadata for future lookup and token refresh
if credentials.metadata is None:
credentials.metadata = {}
credentials.metadata["mcp_server_url"] = meta["server_url"]
credentials.metadata["mcp_client_id"] = meta["client_id"]
credentials.metadata["mcp_client_secret"] = meta.get("client_secret", "")
credentials.metadata["mcp_token_url"] = meta["token_url"]
credentials.metadata["mcp_resource_url"] = meta.get("resource_url", "")
hostname = urlparse(meta["server_url"]).hostname or meta["server_url"]
credentials.title = f"MCP: {hostname}"
# Remove old MCP credentials for the same server to prevent stale token buildup.
try:
old_creds = await creds_manager.store.get_creds_by_provider(
user_id, ProviderName.MCP.value
)
for old in old_creds:
if (
isinstance(old, OAuth2Credentials)
and old.metadata.get("mcp_server_url") == meta["server_url"]
):
await creds_manager.store.delete_creds_by_id(user_id, old.id)
logger.info(
f"Removed old MCP credential {old.id} for {meta['server_url']}"
)
except Exception:
logger.debug("Could not clean up old MCP credentials", exc_info=True)
await creds_manager.create(user_id, credentials)
return CredentialsMetaResponse(
id=credentials.id,
provider=credentials.provider,
type=credentials.type,
title=credentials.title,
scopes=credentials.scopes,
username=credentials.username,
host=credentials.metadata.get("mcp_server_url"),
)
# ======================== Helpers ======================== #
async def _register_mcp_client(
registration_endpoint: str,
redirect_uri: str,
server_url: str,
) -> dict[str, Any] | None:
"""Attempt Dynamic Client Registration (RFC 7591) with an MCP auth server."""
try:
response = await Requests(raise_for_status=True).post(
registration_endpoint,
json={
"client_name": "AutoGPT Platform",
"redirect_uris": [redirect_uri],
"grant_types": ["authorization_code"],
"response_types": ["code"],
"token_endpoint_auth_method": "client_secret_post",
},
)
data = response.json()
if isinstance(data, dict) and "client_id" in data:
return data
return None
except Exception as e:
logger.warning(f"Dynamic client registration failed for {server_url}: {e}")
return None
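
For orientation, a hedged sketch of the discovery chain /oauth/login walks (well-known paths per RFC 9728 and RFC 8414; URLs invented):

    import httpx

    server = "https://mcp.example.com"

    # RFC 9728: protected-resource metadata names the authorization server(s)
    pr = httpx.get(f"{server}/.well-known/oauth-protected-resource").json()
    auth_server = pr["authorization_servers"][0]

    # RFC 8414: authorization-server metadata names the OAuth endpoints
    meta = httpx.get(f"{auth_server}/.well-known/oauth-authorization-server").json()

    # RFC 7591: dynamic client registration, when the server advertises it
    if "registration_endpoint" in meta:
        reg = httpx.post(
            meta["registration_endpoint"],
            json={
                "client_name": "AutoGPT Platform",
                "redirect_uris": ["https://app.example.com/auth/integrations/mcp_callback"],
            },
        ).json()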

View File

@@ -0,0 +1,389 @@
"""Tests for MCP API routes."""
from unittest.mock import AsyncMock, patch
import fastapi
import fastapi.testclient
from autogpt_libs.auth import get_user_id
from backend.api.features.mcp.routes import router
from backend.blocks.mcp.client import MCPClientError, MCPTool
from backend.util.request import HTTPClientError
app = fastapi.FastAPI()
app.include_router(router)
app.dependency_overrides[get_user_id] = lambda: "test-user-id"
client = fastapi.testclient.TestClient(app)
class TestDiscoverTools:
def test_discover_tools_success(self):
mock_tools = [
MCPTool(
name="get_weather",
description="Get weather for a city",
input_schema={
"type": "object",
"properties": {"city": {"type": "string"}},
"required": ["city"],
},
),
MCPTool(
name="add_numbers",
description="Add two numbers",
input_schema={
"type": "object",
"properties": {
"a": {"type": "number"},
"b": {"type": "number"},
},
},
),
]
with (patch("backend.api.features.mcp.routes.MCPClient") as MockClient,):
instance = MockClient.return_value
instance.initialize = AsyncMock(
return_value={
"protocolVersion": "2025-03-26",
"serverInfo": {"name": "test-server"},
}
)
instance.list_tools = AsyncMock(return_value=mock_tools)
response = client.post(
"/discover-tools",
json={"server_url": "https://mcp.example.com/mcp"},
)
assert response.status_code == 200
data = response.json()
assert len(data["tools"]) == 2
assert data["tools"][0]["name"] == "get_weather"
assert data["tools"][1]["name"] == "add_numbers"
assert data["server_name"] == "test-server"
assert data["protocol_version"] == "2025-03-26"
def test_discover_tools_with_auth_token(self):
with patch("backend.api.features.mcp.routes.MCPClient") as MockClient:
instance = MockClient.return_value
instance.initialize = AsyncMock(
return_value={"serverInfo": {}, "protocolVersion": "2025-03-26"}
)
instance.list_tools = AsyncMock(return_value=[])
response = client.post(
"/discover-tools",
json={
"server_url": "https://mcp.example.com/mcp",
"auth_token": "my-secret-token",
},
)
assert response.status_code == 200
MockClient.assert_called_once_with(
"https://mcp.example.com/mcp",
auth_token="my-secret-token",
)
def test_discover_tools_auto_uses_stored_credential(self):
"""When no explicit token is given, stored MCP credentials are used."""
from pydantic import SecretStr
from backend.data.model import OAuth2Credentials
stored_cred = OAuth2Credentials(
provider="mcp",
title="MCP: example.com",
access_token=SecretStr("stored-token-123"),
refresh_token=None,
access_token_expires_at=None,
refresh_token_expires_at=None,
scopes=[],
metadata={"mcp_server_url": "https://mcp.example.com/mcp"},
)
with (
patch("backend.api.features.mcp.routes.MCPClient") as MockClient,
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
):
mock_cm.store.get_creds_by_provider = AsyncMock(return_value=[stored_cred])
mock_cm.refresh_if_needed = AsyncMock(return_value=stored_cred)
instance = MockClient.return_value
instance.initialize = AsyncMock(
return_value={"serverInfo": {}, "protocolVersion": "2025-03-26"}
)
instance.list_tools = AsyncMock(return_value=[])
response = client.post(
"/discover-tools",
json={"server_url": "https://mcp.example.com/mcp"},
)
assert response.status_code == 200
MockClient.assert_called_once_with(
"https://mcp.example.com/mcp",
auth_token="stored-token-123",
)
def test_discover_tools_mcp_error(self):
with patch("backend.api.features.mcp.routes.MCPClient") as MockClient:
instance = MockClient.return_value
instance.initialize = AsyncMock(
side_effect=MCPClientError("Connection refused")
)
response = client.post(
"/discover-tools",
json={"server_url": "https://bad-server.example.com/mcp"},
)
assert response.status_code == 502
assert "Connection refused" in response.json()["detail"]
def test_discover_tools_generic_error(self):
with patch("backend.api.features.mcp.routes.MCPClient") as MockClient:
instance = MockClient.return_value
instance.initialize = AsyncMock(side_effect=Exception("Network timeout"))
response = client.post(
"/discover-tools",
json={"server_url": "https://timeout.example.com/mcp"},
)
assert response.status_code == 502
assert "Failed to connect" in response.json()["detail"]
def test_discover_tools_auth_required(self):
with patch("backend.api.features.mcp.routes.MCPClient") as MockClient:
instance = MockClient.return_value
instance.initialize = AsyncMock(
side_effect=HTTPClientError("HTTP 401 Error: Unauthorized", 401)
)
response = client.post(
"/discover-tools",
json={"server_url": "https://auth-server.example.com/mcp"},
)
assert response.status_code == 401
assert "requires authentication" in response.json()["detail"]
def test_discover_tools_forbidden(self):
with patch("backend.api.features.mcp.routes.MCPClient") as MockClient:
instance = MockClient.return_value
instance.initialize = AsyncMock(
side_effect=HTTPClientError("HTTP 403 Error: Forbidden", 403)
)
response = client.post(
"/discover-tools",
json={"server_url": "https://auth-server.example.com/mcp"},
)
assert response.status_code == 401
assert "requires authentication" in response.json()["detail"]
def test_discover_tools_missing_url(self):
response = client.post("/discover-tools", json={})
assert response.status_code == 422
class TestOAuthLogin:
def test_oauth_login_success(self):
with (
patch("backend.api.features.mcp.routes.MCPClient") as MockClient,
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
patch("backend.api.features.mcp.routes.settings") as mock_settings,
patch(
"backend.api.features.mcp.routes._register_mcp_client"
) as mock_register,
):
instance = MockClient.return_value
instance.discover_auth = AsyncMock(
return_value={
"authorization_servers": ["https://auth.sentry.io"],
"resource": "https://mcp.sentry.dev/mcp",
"scopes_supported": ["openid"],
}
)
instance.discover_auth_server_metadata = AsyncMock(
return_value={
"authorization_endpoint": "https://auth.sentry.io/authorize",
"token_endpoint": "https://auth.sentry.io/token",
"registration_endpoint": "https://auth.sentry.io/register",
}
)
mock_register.return_value = {
"client_id": "registered-client-id",
"client_secret": "registered-secret",
}
mock_cm.store.store_state_token = AsyncMock(
return_value=("state-token-123", "code-challenge-abc")
)
mock_settings.config.frontend_base_url = "http://localhost:3000"
response = client.post(
"/oauth/login",
json={"server_url": "https://mcp.sentry.dev/mcp"},
)
assert response.status_code == 200
data = response.json()
assert "login_url" in data
assert data["state_token"] == "state-token-123"
assert "auth.sentry.io/authorize" in data["login_url"]
assert "registered-client-id" in data["login_url"]
def test_oauth_login_no_oauth_support(self):
with patch("backend.api.features.mcp.routes.MCPClient") as MockClient:
instance = MockClient.return_value
instance.discover_auth = AsyncMock(return_value=None)
response = client.post(
"/oauth/login",
json={"server_url": "https://simple-server.example.com/mcp"},
)
assert response.status_code == 400
assert "does not advertise OAuth" in response.json()["detail"]
def test_oauth_login_fallback_to_public_client(self):
"""When DCR is unavailable, falls back to default public client ID."""
with (
patch("backend.api.features.mcp.routes.MCPClient") as MockClient,
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
patch("backend.api.features.mcp.routes.settings") as mock_settings,
):
instance = MockClient.return_value
instance.discover_auth = AsyncMock(
return_value={
"authorization_servers": ["https://auth.example.com"],
"resource": "https://mcp.example.com/mcp",
}
)
instance.discover_auth_server_metadata = AsyncMock(
return_value={
"authorization_endpoint": "https://auth.example.com/authorize",
"token_endpoint": "https://auth.example.com/token",
# No registration_endpoint
}
)
mock_cm.store.store_state_token = AsyncMock(
return_value=("state-abc", "challenge-xyz")
)
mock_settings.config.frontend_base_url = "http://localhost:3000"
response = client.post(
"/oauth/login",
json={"server_url": "https://mcp.example.com/mcp"},
)
assert response.status_code == 200
data = response.json()
assert "autogpt-platform" in data["login_url"]
class TestOAuthCallback:
def test_oauth_callback_success(self):
from pydantic import SecretStr
from backend.data.model import OAuth2Credentials
mock_creds = OAuth2Credentials(
provider="mcp",
title=None,
access_token=SecretStr("access-token-xyz"),
refresh_token=None,
access_token_expires_at=None,
refresh_token_expires_at=None,
scopes=[],
metadata={
"mcp_token_url": "https://auth.sentry.io/token",
"mcp_resource_url": "https://mcp.sentry.dev/mcp",
},
)
with (
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
patch("backend.api.features.mcp.routes.settings") as mock_settings,
patch("backend.api.features.mcp.routes.MCPOAuthHandler") as MockHandler,
):
mock_settings.config.frontend_base_url = "http://localhost:3000"
# Mock state verification
mock_state = AsyncMock()
mock_state.state_metadata = {
"authorize_url": "https://auth.sentry.io/authorize",
"token_url": "https://auth.sentry.io/token",
"client_id": "test-client-id",
"client_secret": "test-secret",
"server_url": "https://mcp.sentry.dev/mcp",
}
mock_state.scopes = ["openid"]
mock_state.code_verifier = "verifier-123"
mock_cm.store.verify_state_token = AsyncMock(return_value=mock_state)
mock_cm.create = AsyncMock()
handler_instance = MockHandler.return_value
handler_instance.exchange_code_for_tokens = AsyncMock(
return_value=mock_creds
)
# Mock old credential cleanup
mock_cm.store.get_creds_by_provider = AsyncMock(return_value=[])
response = client.post(
"/oauth/callback",
json={"code": "auth-code-abc", "state_token": "state-token-123"},
)
assert response.status_code == 200
data = response.json()
assert "id" in data
assert data["provider"] == "mcp"
assert data["type"] == "oauth2"
mock_cm.create.assert_called_once()
def test_oauth_callback_invalid_state(self):
with patch("backend.api.features.mcp.routes.creds_manager") as mock_cm:
mock_cm.store.verify_state_token = AsyncMock(return_value=None)
response = client.post(
"/oauth/callback",
json={"code": "auth-code", "state_token": "bad-state"},
)
assert response.status_code == 400
assert "Invalid or expired" in response.json()["detail"]
def test_oauth_callback_token_exchange_fails(self):
with (
patch("backend.api.features.mcp.routes.creds_manager") as mock_cm,
patch("backend.api.features.mcp.routes.settings") as mock_settings,
patch("backend.api.features.mcp.routes.MCPOAuthHandler") as MockHandler,
):
mock_settings.config.frontend_base_url = "http://localhost:3000"
mock_state = AsyncMock()
mock_state.state_metadata = {
"authorize_url": "https://auth.example.com/authorize",
"token_url": "https://auth.example.com/token",
"client_id": "cid",
"server_url": "https://mcp.example.com/mcp",
}
mock_state.scopes = []
mock_state.code_verifier = "v"
mock_cm.store.verify_state_token = AsyncMock(return_value=mock_state)
handler_instance = MockHandler.return_value
handler_instance.exchange_code_for_tokens = AsyncMock(
side_effect=RuntimeError("Token exchange failed")
)
response = client.post(
"/oauth/callback",
json={"code": "bad-code", "state_token": "state"},
)
assert response.status_code == 400
assert "token exchange failed" in response.json()["detail"].lower()

View File

@@ -20,7 +20,6 @@ from typing import AsyncGenerator
 import httpx
 import pytest
-import pytest_asyncio
 from autogpt_libs.api_key.keysmith import APIKeySmith
 from prisma.enums import APIKeyPermission
 from prisma.models import OAuthAccessToken as PrismaOAuthAccessToken
@@ -39,13 +38,13 @@ keysmith = APIKeySmith()
 # ============================================================================

-@pytest.fixture(scope="session")
+@pytest.fixture
 def test_user_id() -> str:
     """Test user ID for OAuth tests."""
     return str(uuid.uuid4())

-@pytest_asyncio.fixture(scope="session", loop_scope="session")
+@pytest.fixture
 async def test_user(server, test_user_id: str):
     """Create a test user in the database."""
     await PrismaUser.prisma().create(
@@ -68,7 +67,7 @@ async def test_user(server, test_user_id: str):
     await PrismaUser.prisma().delete(where={"id": test_user_id})

-@pytest_asyncio.fixture
+@pytest.fixture
 async def test_oauth_app(test_user: str):
     """Create a test OAuth application in the database."""
     app_id = str(uuid.uuid4())
@@ -123,7 +122,7 @@ def pkce_credentials() -> tuple[str, str]:
     return generate_pkce()

-@pytest_asyncio.fixture
+@pytest.fixture
 async def client(server, test_user: str) -> AsyncGenerator[httpx.AsyncClient, None]:
     """
     Create an async HTTP client that talks directly to the FastAPI app.
@@ -288,7 +287,7 @@ async def test_authorize_invalid_client_returns_error(
     assert query_params["error"][0] == "invalid_client"

-@pytest_asyncio.fixture
+@pytest.fixture
 async def inactive_oauth_app(test_user: str):
     """Create an inactive test OAuth application in the database."""
     app_id = str(uuid.uuid4())
@@ -1005,7 +1004,7 @@ async def test_token_refresh_revoked(
     assert "revoked" in response.json()["detail"].lower()

-@pytest_asyncio.fixture
+@pytest.fixture
 async def other_oauth_app(test_user: str):
     """Create a second OAuth application for cross-app tests."""
     app_id = str(uuid.uuid4())
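
A minimal sketch of the resulting fixture pattern, assuming pytest-asyncio's auto mode (which collects async fixtures declared with plain @pytest.fixture); the default function scope gives each test fresh state instead of sharing one session-scoped user:

    import uuid
    import pytest

    @pytest.fixture
    def test_user_id() -> str:
        return str(uuid.uuid4())  # a fresh ID per test, no cross-test coupling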

View File

@@ -8,6 +8,7 @@ Includes BM25 reranking for improved lexical relevance.
 import logging
 import re
+import time
 from dataclasses import dataclass
 from typing import Any, Literal
@@ -362,7 +363,11 @@ async def unified_hybrid_search(
         LIMIT {limit_param} OFFSET {offset_param}
     """

-    results = await query_raw_with_schema(sql_query, *params)
+    try:
+        results = await query_raw_with_schema(sql_query, *params)
+    except Exception as e:
+        await _log_vector_error_diagnostics(e)
+        raise

     total = results[0]["total_count"] if results else 0

     # Apply BM25 reranking
@@ -686,7 +691,11 @@ async def hybrid_search(
         LIMIT {limit_param} OFFSET {offset_param}
     """

-    results = await query_raw_with_schema(sql_query, *params)
+    try:
+        results = await query_raw_with_schema(sql_query, *params)
+    except Exception as e:
+        await _log_vector_error_diagnostics(e)
+        raise

     total = results[0]["total_count"] if results else 0
@@ -718,6 +727,87 @@ async def hybrid_search_simple(
     return await hybrid_search(query=query, page=page, page_size=page_size)

+# ============================================================================
+# Diagnostics
+# ============================================================================
+
+# Rate limit: only log vector error diagnostics once per this interval
+_VECTOR_DIAG_INTERVAL_SECONDS = 60
+_last_vector_diag_time: float = 0
+
+async def _log_vector_error_diagnostics(error: Exception) -> None:
+    """Log diagnostic info when 'type vector does not exist' error occurs.
+
+    Note: Diagnostic queries use query_raw_with_schema which may run on a different
+    pooled connection than the one that failed. Session-level search_path can differ,
+    so these diagnostics show cluster-wide state, not necessarily the failed session.
+
+    Includes rate limiting to avoid log spam - only logs once per minute.
+    Caller should re-raise the error after calling this function.
+    """
+    global _last_vector_diag_time
+
+    # Check if this is the vector type error
+    error_str = str(error).lower()
+    if not (
+        "type" in error_str and "vector" in error_str and "does not exist" in error_str
+    ):
+        return
+
+    # Rate limit: only log once per interval
+    now = time.time()
+    if now - _last_vector_diag_time < _VECTOR_DIAG_INTERVAL_SECONDS:
+        return
+    _last_vector_diag_time = now
+
+    try:
+        diagnostics: dict[str, object] = {}
+        try:
+            search_path_result = await query_raw_with_schema("SHOW search_path")
+            diagnostics["search_path"] = search_path_result
+        except Exception as e:
+            diagnostics["search_path"] = f"Error: {e}"
+        try:
+            schema_result = await query_raw_with_schema("SELECT current_schema()")
+            diagnostics["current_schema"] = schema_result
+        except Exception as e:
+            diagnostics["current_schema"] = f"Error: {e}"
+        try:
+            user_result = await query_raw_with_schema(
+                "SELECT current_user, session_user, current_database()"
+            )
+            diagnostics["user_info"] = user_result
+        except Exception as e:
+            diagnostics["user_info"] = f"Error: {e}"
+        try:
+            # Check pgvector extension installation (cluster-wide, stable info)
+            ext_result = await query_raw_with_schema(
+                "SELECT extname, extversion, nspname as schema "
+                "FROM pg_extension e "
+                "JOIN pg_namespace n ON e.extnamespace = n.oid "
+                "WHERE extname = 'vector'"
+            )
+            diagnostics["pgvector_extension"] = ext_result
+        except Exception as e:
+            diagnostics["pgvector_extension"] = f"Error: {e}"
+
+        logger.error(
+            f"Vector type error diagnostics:\n"
+            f"  Error: {error}\n"
+            f"  search_path: {diagnostics.get('search_path')}\n"
+            f"  current_schema: {diagnostics.get('current_schema')}\n"
+            f"  user_info: {diagnostics.get('user_info')}\n"
+            f"  pgvector_extension: {diagnostics.get('pgvector_extension')}"
+        )
+    except Exception as diag_error:
+        logger.error(f"Failed to collect vector error diagnostics: {diag_error}")
+
 # Backward compatibility alias - HybridSearchWeights maps to StoreAgentSearchWeights
 # for existing code that expects the popularity parameter
 HybridSearchWeights = StoreAgentSearchWeights
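
The rate-limiting scheme above, reduced to a reusable sketch (names invented):

    import time

    _INTERVAL_SECONDS = 60.0
    _last_emit = 0.0

    def should_emit() -> bool:
        """Return True at most once per _INTERVAL_SECONDS (module-global state)."""
        global _last_emit
        now = time.time()
        if now - _last_emit < _INTERVAL_SECONDS:
            return False
        _last_emit = now
        return True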

View File

@@ -26,6 +26,7 @@ import backend.api.features.executions.review.routes
 import backend.api.features.library.db
 import backend.api.features.library.model
 import backend.api.features.library.routes
+import backend.api.features.mcp.routes as mcp_routes
 import backend.api.features.oauth
 import backend.api.features.otto.routes
 import backend.api.features.postmark.postmark
@@ -343,6 +344,11 @@ app.include_router(
     tags=["workspace"],
     prefix="/api/workspace",
 )
+app.include_router(
+    mcp_routes.router,
+    tags=["v2", "mcp"],
+    prefix="/api/mcp",
+)
 app.include_router(
     backend.api.features.oauth.router,
     tags=["oauth"],

View File

@@ -531,12 +531,12 @@ class LLMResponse(BaseModel):
 def convert_openai_tool_fmt_to_anthropic(
     openai_tools: list[dict] | None = None,
-) -> Iterable[ToolParam] | anthropic.NotGiven:
+) -> Iterable[ToolParam] | anthropic.Omit:
     """
     Convert OpenAI tool format to Anthropic tool format.
     """
     if not openai_tools or len(openai_tools) == 0:
-        return anthropic.NOT_GIVEN
+        return anthropic.omit

     anthropic_tools = []
     for tool in openai_tools:
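
For context, a hedged sketch of the two tool formats being converted (shapes per the public OpenAI and Anthropic tool-calling docs; the weather tool is invented):

    openai_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a city",
            "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
        },
    }

    anthropic_tool = {
        "name": "get_weather",
        "description": "Get weather for a city",
        "input_schema": {"type": "object", "properties": {"city": {"type": "string"}}},
    }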

View File

@@ -0,0 +1,301 @@
"""
MCP (Model Context Protocol) Tool Block.
A single dynamic block that can connect to any MCP server, discover available tools,
and execute them. Works like AgentExecutorBlock — the user selects a tool from a
dropdown and the input/output schema adapts dynamically.
"""
import json
import logging
from typing import Any, Literal
from pydantic import SecretStr
from backend.blocks.mcp.client import MCPClient, MCPClientError
from backend.data.block import (
Block,
BlockCategory,
BlockInput,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
BlockType,
)
from backend.data.model import (
CredentialsField,
CredentialsMetaInput,
OAuth2Credentials,
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.json import validate_with_jsonschema
logger = logging.getLogger(__name__)
TEST_CREDENTIALS = OAuth2Credentials(
id="test-mcp-cred",
provider="mcp",
access_token=SecretStr("mock-mcp-token"),
refresh_token=SecretStr("mock-refresh"),
scopes=[],
title="Mock MCP credential",
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
MCPCredentials = CredentialsMetaInput[Literal[ProviderName.MCP], Literal["oauth2"]]
class MCPToolBlock(Block):
"""
A block that connects to an MCP server, lets the user pick a tool,
and executes it with dynamic input/output schema.
The flow:
1. User provides an MCP server URL (and optional credentials)
2. Frontend calls the backend to get tool list from that URL
3. User selects a tool from a dropdown (available_tools)
4. The block's input schema updates to reflect the selected tool's parameters
5. On execution, the block calls the MCP server to run the tool
"""
class Input(BlockSchemaInput):
server_url: str = SchemaField(
description="URL of the MCP server (Streamable HTTP endpoint)",
placeholder="https://mcp.example.com/mcp",
)
credentials: MCPCredentials = CredentialsField(
discriminator="server_url",
description="MCP server OAuth credentials",
default={},
)
selected_tool: str = SchemaField(
description="The MCP tool to execute",
placeholder="Select a tool",
default="",
)
tool_input_schema: dict[str, Any] = SchemaField(
description="JSON Schema for the selected tool's input parameters. "
"Populated automatically when a tool is selected.",
default={},
hidden=True,
)
tool_arguments: dict[str, Any] = SchemaField(
description="Arguments to pass to the selected MCP tool. "
"The fields here are defined by the tool's input schema.",
default={},
)
@classmethod
def get_input_schema(cls, data: BlockInput) -> dict[str, Any]:
"""Return the tool's input schema so the builder UI renders dynamic fields."""
return data.get("tool_input_schema", {})
@classmethod
def get_input_defaults(cls, data: BlockInput) -> BlockInput:
"""Return the current tool_arguments as defaults for the dynamic fields."""
return data.get("tool_arguments", {})
@classmethod
def get_missing_input(cls, data: BlockInput) -> set[str]:
"""Check which required tool arguments are missing."""
required_fields = cls.get_input_schema(data).get("required", [])
tool_arguments = data.get("tool_arguments", {})
return set(required_fields) - set(tool_arguments)
@classmethod
def get_mismatch_error(cls, data: BlockInput) -> str | None:
"""Validate tool_arguments against the tool's input schema."""
tool_schema = cls.get_input_schema(data)
if not tool_schema:
return None
tool_arguments = data.get("tool_arguments", {})
return validate_with_jsonschema(tool_schema, tool_arguments)
class Output(BlockSchemaOutput):
result: Any = SchemaField(description="The result returned by the MCP tool")
error: str = SchemaField(description="Error message if the tool call failed")
def __init__(self):
super().__init__(
id="a0a4b1c2-d3e4-4f56-a7b8-c9d0e1f2a3b4",
description="Connect to any MCP server and execute its tools. "
"Provide a server URL, select a tool, and pass arguments dynamically.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=MCPToolBlock.Input,
output_schema=MCPToolBlock.Output,
block_type=BlockType.STANDARD,
test_credentials=TEST_CREDENTIALS,
test_input={
"server_url": "https://mcp.example.com/mcp",
"credentials": TEST_CREDENTIALS_INPUT,
"selected_tool": "get_weather",
"tool_input_schema": {
"type": "object",
"properties": {"city": {"type": "string"}},
"required": ["city"],
},
"tool_arguments": {"city": "London"},
},
test_output=[
(
"result",
{"weather": "sunny", "temperature": 20},
),
],
test_mock={
"_call_mcp_tool": lambda *a, **kw: {
"weather": "sunny",
"temperature": 20,
},
},
)
async def _call_mcp_tool(
self,
server_url: str,
tool_name: str,
arguments: dict[str, Any],
auth_token: str | None = None,
) -> Any:
"""Call a tool on the MCP server. Extracted for easy mocking in tests."""
client = MCPClient(server_url, auth_token=auth_token)
await client.initialize()
result = await client.call_tool(tool_name, arguments)
if result.is_error:
error_text = ""
for item in result.content:
if item.get("type") == "text":
error_text += item.get("text", "")
raise MCPClientError(
f"MCP tool '{tool_name}' returned an error: "
f"{error_text or 'Unknown error'}"
)
# Extract text content from the result
output_parts = []
for item in result.content:
if item.get("type") == "text":
text = item.get("text", "")
# Try to parse as JSON for structured output
try:
output_parts.append(json.loads(text))
except (json.JSONDecodeError, ValueError):
output_parts.append(text)
elif item.get("type") == "image":
output_parts.append(
{
"type": "image",
"data": item.get("data"),
"mimeType": item.get("mimeType"),
}
)
elif item.get("type") == "resource":
output_parts.append(item.get("resource", {}))
# If single result, unwrap
if len(output_parts) == 1:
return output_parts[0]
return output_parts if output_parts else None
@staticmethod
async def _auto_lookup_credential(
user_id: str, server_url: str
) -> "OAuth2Credentials | None":
"""Auto-lookup stored MCP credential for a server URL.
This is a fallback for nodes that don't have ``credentials`` explicitly
set (e.g. nodes created before the credential field was wired up).
"""
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.providers import ProviderName
try:
mgr = IntegrationCredentialsManager()
mcp_creds = await mgr.store.get_creds_by_provider(
user_id, ProviderName.MCP.value
)
best: OAuth2Credentials | None = None
for cred in mcp_creds:
if (
isinstance(cred, OAuth2Credentials)
and cred.metadata.get("mcp_server_url") == server_url
):
if best is None or (
(cred.access_token_expires_at or 0)
> (best.access_token_expires_at or 0)
):
best = cred
if best:
best = await mgr.refresh_if_needed(user_id, best)
logger.info(
"Auto-resolved MCP credential %s for %s", best.id, server_url
)
return best
except Exception:
logger.debug("Auto-lookup MCP credential failed", exc_info=True)
return None
async def run(
self,
input_data: Input,
*,
user_id: str,
credentials: OAuth2Credentials | None = None,
**kwargs,
) -> BlockOutput:
if not input_data.server_url:
yield "error", "MCP server URL is required"
return
if not input_data.selected_tool:
yield "error", "No tool selected. Please select a tool from the dropdown."
return
# Validate required tool arguments before calling the server.
# The executor-level validation is bypassed for MCP blocks because
# get_input_defaults() flattens tool_arguments, stripping tool_input_schema
# from the validation context.
required = set(input_data.tool_input_schema.get("required", []))
if required:
missing = required - set(input_data.tool_arguments.keys())
if missing:
yield "error", (
f"Missing required argument(s): {', '.join(sorted(missing))}. "
f"Please fill in all required fields marked with * in the block form."
)
return
# If no credentials were injected by the executor (e.g. legacy nodes
# that don't have the credentials field set), try to auto-lookup
# the stored MCP credential for this server URL.
if credentials is None:
credentials = await self._auto_lookup_credential(
user_id, input_data.server_url
)
auth_token = (
credentials.access_token.get_secret_value() if credentials else None
)
try:
result = await self._call_mcp_tool(
server_url=input_data.server_url,
tool_name=input_data.selected_tool,
arguments=input_data.tool_arguments,
auth_token=auth_token,
)
yield "result", result
except MCPClientError as e:
yield "error", str(e)
except Exception as e:
logger.exception(f"MCP tool call failed: {e}")
yield "error", f"MCP tool call failed: {str(e)}"

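Putting the pieces above together, a minimal sketch of the five-step flow from the class docstring, with an illustrative server URL and tool name (in the product, steps 1-3 are driven by the frontend through the /api/mcp routes registered earlier):

import asyncio

from backend.blocks.mcp.block import MCPToolBlock
from backend.blocks.mcp.client import MCPClient

async def run_weather_tool() -> None:
    # Steps 1-2: connect to the server and discover its tools.
    client = MCPClient("https://mcp.example.com/mcp")  # illustrative URL
    await client.initialize()
    tools = await client.list_tools()
    tool = next(t for t in tools if t.name == "get_weather")

    # Steps 3-4: select the tool and mirror its schema into the block input.
    input_data = MCPToolBlock.Input(
        server_url="https://mcp.example.com/mcp",
        selected_tool=tool.name,
        tool_input_schema=tool.input_schema,
        tool_arguments={"city": "London"},
    )

    # Step 5: execute; credentials are omitted here (public server).
    async for output_name, payload in MCPToolBlock().run(
        input_data, user_id="user-1"
    ):
        print(output_name, payload)

asyncio.run(run_weather_tool())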
View File

@@ -0,0 +1,318 @@
"""
MCP (Model Context Protocol) HTTP client.
Implements the MCP Streamable HTTP transport for listing tools and calling tools
on remote MCP servers. Uses JSON-RPC 2.0 over HTTP POST.
Handles both JSON and SSE (text/event-stream) response formats per the MCP spec.
Reference: https://modelcontextprotocol.io/specification/2025-03-26/basic/transports
"""
import json
import logging
from dataclasses import dataclass, field
from typing import Any
from backend.util.request import Requests
logger = logging.getLogger(__name__)
@dataclass
class MCPTool:
"""Represents an MCP tool discovered from a server."""
name: str
description: str
input_schema: dict[str, Any]
@dataclass
class MCPCallResult:
"""Result from calling an MCP tool."""
content: list[dict[str, Any]] = field(default_factory=list)
is_error: bool = False
class MCPClientError(Exception):
"""Raised when an MCP protocol error occurs."""
pass
class MCPClient:
"""
Async HTTP client for the MCP Streamable HTTP transport.
Communicates with MCP servers using JSON-RPC 2.0 over HTTP POST.
Supports optional Bearer token authentication.
"""
def __init__(
self,
server_url: str,
auth_token: str | None = None,
):
self.server_url = server_url.rstrip("/")
self.auth_token = auth_token
self._request_id = 0
self._session_id: str | None = None
def _next_id(self) -> int:
self._request_id += 1
return self._request_id
def _build_headers(self) -> dict[str, str]:
headers = {
"Content-Type": "application/json",
"Accept": "application/json, text/event-stream",
}
if self.auth_token:
headers["Authorization"] = f"Bearer {self.auth_token}"
if self._session_id:
headers["Mcp-Session-Id"] = self._session_id
return headers
def _build_jsonrpc_request(
self, method: str, params: dict[str, Any] | None = None
) -> dict[str, Any]:
req: dict[str, Any] = {
"jsonrpc": "2.0",
"method": method,
"id": self._next_id(),
}
if params is not None:
req["params"] = params
return req
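# For example, a tools/list discovery call serializes to:
#   {"jsonrpc": "2.0", "method": "tools/list", "id": 1}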
@staticmethod
def _parse_sse_response(text: str) -> dict[str, Any]:
"""Parse an SSE (text/event-stream) response body into JSON-RPC data.
MCP servers may return responses as SSE with format:
event: message
data: {"jsonrpc":"2.0","result":{...},"id":1}
We extract the last `data:` line that contains a JSON-RPC response
(i.e. has an "id" field), which is the reply to our request.
"""
last_data: dict[str, Any] | None = None
for line in text.splitlines():
stripped = line.strip()
if stripped.startswith("data:"):
payload = stripped[len("data:") :].strip()
if not payload:
continue
try:
parsed = json.loads(payload)
# Only keep JSON-RPC responses (have "id"), skip notifications
if isinstance(parsed, dict) and "id" in parsed:
last_data = parsed
except (json.JSONDecodeError, ValueError):
continue
if last_data is None:
raise MCPClientError("No JSON-RPC response found in SSE stream")
return last_data
async def _send_request(
self, method: str, params: dict[str, Any] | None = None
) -> Any:
"""Send a JSON-RPC request to the MCP server and return the result.
Handles both ``application/json`` and ``text/event-stream`` responses
as required by the MCP Streamable HTTP transport specification.
"""
payload = self._build_jsonrpc_request(method, params)
headers = self._build_headers()
requests = Requests(
raise_for_status=True,
extra_headers=headers,
)
response = await requests.post(self.server_url, json=payload)
# Capture session ID from response (MCP Streamable HTTP transport)
session_id = response.headers.get("Mcp-Session-Id")
if session_id:
self._session_id = session_id
content_type = response.headers.get("content-type", "")
if "text/event-stream" in content_type:
body = self._parse_sse_response(response.text())
else:
try:
body = response.json()
except Exception as e:
raise MCPClientError(
f"MCP server returned non-JSON response: {e}"
) from e
# Handle JSON-RPC error
if "error" in body:
error = body["error"]
if isinstance(error, dict):
raise MCPClientError(
f"MCP server error [{error.get('code', '?')}]: "
f"{error.get('message', 'Unknown error')}"
)
raise MCPClientError(f"MCP server error: {error}")
return body.get("result")
async def _send_notification(self, method: str) -> None:
"""Send a JSON-RPC notification (no id, no response expected)."""
headers = self._build_headers()
notification = {"jsonrpc": "2.0", "method": method}
requests = Requests(
raise_for_status=False,
extra_headers=headers,
)
await requests.post(self.server_url, json=notification)
async def discover_auth(self) -> dict[str, Any] | None:
"""Probe the MCP server's OAuth metadata (RFC 9728 / MCP spec).
Returns ``None`` if the server doesn't require auth, otherwise returns
a dict with:
- ``authorization_servers``: list of authorization server URLs
- ``resource``: the resource indicator URL (usually the MCP endpoint)
- ``scopes_supported``: optional list of supported scopes
The caller can then fetch the authorization server metadata to get
``authorization_endpoint``, ``token_endpoint``, etc.
"""
from urllib.parse import urlparse
parsed = urlparse(self.server_url)
base = f"{parsed.scheme}://{parsed.netloc}"
# Build candidates for protected-resource metadata (per RFC 9728)
path = parsed.path.rstrip("/")
candidates = []
if path and path != "/":
candidates.append(f"{base}/.well-known/oauth-protected-resource{path}")
candidates.append(f"{base}/.well-known/oauth-protected-resource")
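# e.g. for server_url "https://mcp.example.com/mcp" this probes
#   https://mcp.example.com/.well-known/oauth-protected-resource/mcp
# before falling back to
#   https://mcp.example.com/.well-known/oauth-protected-resource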
requests = Requests(
raise_for_status=False,
)
for url in candidates:
try:
resp = await requests.get(url)
if resp.status == 200:
data = resp.json()
if isinstance(data, dict) and "authorization_servers" in data:
return data
except Exception:
continue
return None
async def discover_auth_server_metadata(
self, auth_server_url: str
) -> dict[str, Any] | None:
"""Fetch the OAuth Authorization Server Metadata (RFC 8414).
Given an authorization server URL, returns a dict with:
- ``authorization_endpoint``
- ``token_endpoint``
- ``registration_endpoint`` (for dynamic client registration)
- ``scopes_supported``
- ``code_challenge_methods_supported``
- etc.
"""
from urllib.parse import urlparse
parsed = urlparse(auth_server_url)
base = f"{parsed.scheme}://{parsed.netloc}"
path = parsed.path.rstrip("/")
# Try standard metadata endpoints (RFC 8414 and OpenID Connect)
candidates = []
if path and path != "/":
candidates.append(f"{base}/.well-known/oauth-authorization-server{path}")
candidates.append(f"{base}/.well-known/oauth-authorization-server")
candidates.append(f"{base}/.well-known/openid-configuration")
requests = Requests(
raise_for_status=False,
)
for url in candidates:
try:
resp = await requests.get(url)
if resp.status == 200:
data = resp.json()
if isinstance(data, dict) and "authorization_endpoint" in data:
return data
except Exception:
continue
return None
async def initialize(self) -> dict[str, Any]:
"""
Send the MCP initialize request.
This is required by the MCP protocol before any other requests.
Returns the server's capabilities.
"""
result = await self._send_request(
"initialize",
{
"protocolVersion": "2025-03-26",
"capabilities": {},
"clientInfo": {"name": "AutoGPT-Platform", "version": "1.0.0"},
},
)
# Send initialized notification (no response expected)
await self._send_notification("notifications/initialized")
return result or {}
async def list_tools(self) -> list[MCPTool]:
"""
Discover available tools from the MCP server.
Returns a list of MCPTool objects with name, description, and input schema.
"""
result = await self._send_request("tools/list")
if not result or "tools" not in result:
return []
tools = []
for tool_data in result["tools"]:
tools.append(
MCPTool(
name=tool_data.get("name", ""),
description=tool_data.get("description", ""),
input_schema=tool_data.get("inputSchema", {}),
)
)
return tools
async def call_tool(
self, tool_name: str, arguments: dict[str, Any]
) -> MCPCallResult:
"""
Call a tool on the MCP server.
Args:
tool_name: The name of the tool to call.
arguments: The arguments to pass to the tool.
Returns:
MCPCallResult with the tool's response content.
"""
result = await self._send_request(
"tools/call",
{"name": tool_name, "arguments": arguments},
)
if not result:
return MCPCallResult(is_error=True)
return MCPCallResult(
content=result.get("content", []),
is_error=result.get("isError", False),
)

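To make the SSE branch concrete, a small self-contained sketch of how a Streamable HTTP response body is parsed (this mirrors the unit tests further down; the notification method name is illustrative):

from backend.blocks.mcp.client import MCPClient

# A POST may be answered with text/event-stream. _parse_sse_response skips
# notifications (no "id") and keeps the last JSON-RPC response in the stream.
sse_body = (
    "event: message\n"
    'data: {"jsonrpc":"2.0","method":"notifications/progress"}\n'
    "\n"
    "event: message\n"
    'data: {"jsonrpc":"2.0","result":{"tools":[]},"id":1}\n'
)
assert MCPClient._parse_sse_response(sse_body)["result"] == {"tools": []}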
View File

@@ -0,0 +1,42 @@
"""
Conftest for MCP block tests.
Override the session-scoped server and graph_cleanup fixtures from
backend/conftest.py so that MCP integration tests don't spin up the
full SpinTestServer infrastructure.
"""
import pytest
def pytest_configure(config: pytest.Config) -> None:
config.addinivalue_line("markers", "e2e: end-to-end tests requiring network")
def pytest_collection_modifyitems(
config: pytest.Config, items: list[pytest.Item]
) -> None:
"""Skip e2e tests unless --run-e2e is passed."""
if not config.getoption("--run-e2e", default=False):
skip_e2e = pytest.mark.skip(reason="need --run-e2e option to run")
for item in items:
if "e2e" in item.keywords:
item.add_marker(skip_e2e)
def pytest_addoption(parser: pytest.Parser) -> None:
parser.addoption(
"--run-e2e", action="store_true", default=False, help="run e2e tests"
)
@pytest.fixture(scope="session")
def server():
"""No-op override — MCP tests don't need the full platform server."""
yield None
@pytest.fixture(scope="session", autouse=True)
def graph_cleanup(server):
"""No-op override — MCP tests don't create graphs."""
yield

View File

@@ -0,0 +1,198 @@
"""
MCP OAuth handler for MCP servers that use OAuth 2.1 authorization.
Unlike other OAuth handlers (GitHub, Google, etc.) where endpoints are fixed,
MCP servers have dynamic endpoints discovered via RFC 9728 / RFC 8414 metadata.
This handler accepts those endpoints at construction time.
"""
import logging
import time
import urllib.parse
from typing import ClassVar, Optional
from pydantic import SecretStr
from backend.data.model import OAuth2Credentials
from backend.integrations.oauth.base import BaseOAuthHandler
from backend.integrations.providers import ProviderName
from backend.util.request import Requests
logger = logging.getLogger(__name__)
class MCPOAuthHandler(BaseOAuthHandler):
"""
OAuth handler for MCP servers with dynamically-discovered endpoints.
Construction requires the authorization and token endpoint URLs,
which are obtained via MCP OAuth metadata discovery
(``MCPClient.discover_auth`` + ``discover_auth_server_metadata``).
"""
PROVIDER_NAME: ClassVar[ProviderName | str] = ProviderName.MCP
DEFAULT_SCOPES: ClassVar[list[str]] = []
def __init__(
self,
client_id: str,
client_secret: str,
redirect_uri: str,
*,
authorize_url: str,
token_url: str,
revoke_url: str | None = None,
resource_url: str | None = None,
):
self.client_id = client_id
self.client_secret = client_secret
self.redirect_uri = redirect_uri
self.authorize_url = authorize_url
self.token_url = token_url
self.revoke_url = revoke_url
self.resource_url = resource_url
def get_login_url(
self,
scopes: list[str],
state: str,
code_challenge: Optional[str],
) -> str:
scopes = self.handle_default_scopes(scopes)
params: dict[str, str] = {
"response_type": "code",
"client_id": self.client_id,
"redirect_uri": self.redirect_uri,
"state": state,
}
if scopes:
params["scope"] = " ".join(scopes)
# PKCE (S256) — included when the caller provides a code_challenge
if code_challenge:
params["code_challenge"] = code_challenge
params["code_challenge_method"] = "S256"
# MCP spec requires resource indicator (RFC 8707)
if self.resource_url:
params["resource"] = self.resource_url
return f"{self.authorize_url}?{urllib.parse.urlencode(params)}"
async def exchange_code_for_tokens(
self,
code: str,
scopes: list[str],
code_verifier: Optional[str],
) -> OAuth2Credentials:
data: dict[str, str] = {
"grant_type": "authorization_code",
"code": code,
"redirect_uri": self.redirect_uri,
"client_id": self.client_id,
}
if self.client_secret:
data["client_secret"] = self.client_secret
if code_verifier:
data["code_verifier"] = code_verifier
if self.resource_url:
data["resource"] = self.resource_url
response = await Requests(raise_for_status=True).post(
self.token_url,
data=data,
headers={"Content-Type": "application/x-www-form-urlencoded"},
)
tokens = response.json()
if "error" in tokens:
raise RuntimeError(
f"Token exchange failed: {tokens.get('error_description', tokens['error'])}"
)
now = int(time.time())
expires_in = tokens.get("expires_in")
return OAuth2Credentials(
provider=self.PROVIDER_NAME,
title=None,
access_token=SecretStr(tokens["access_token"]),
refresh_token=(
SecretStr(tokens["refresh_token"])
if tokens.get("refresh_token")
else None
),
access_token_expires_at=now + expires_in if expires_in else None,
refresh_token_expires_at=None,
scopes=scopes,
metadata={
"mcp_token_url": self.token_url,
"mcp_resource_url": self.resource_url,
},
)
async def _refresh_tokens(
self, credentials: OAuth2Credentials
) -> OAuth2Credentials:
if not credentials.refresh_token:
raise ValueError("No refresh token available for MCP OAuth credentials")
data: dict[str, str] = {
"grant_type": "refresh_token",
"refresh_token": credentials.refresh_token.get_secret_value(),
"client_id": self.client_id,
}
if self.client_secret:
data["client_secret"] = self.client_secret
if self.resource_url:
data["resource"] = self.resource_url
response = await Requests(raise_for_status=True).post(
self.token_url,
data=data,
headers={"Content-Type": "application/x-www-form-urlencoded"},
)
tokens = response.json()
if "error" in tokens:
raise RuntimeError(
f"Token refresh failed: {tokens.get('error_description', tokens['error'])}"
)
now = int(time.time())
expires_in = tokens.get("expires_in")
return OAuth2Credentials(
id=credentials.id,
provider=self.PROVIDER_NAME,
title=credentials.title,
access_token=SecretStr(tokens["access_token"]),
refresh_token=(
SecretStr(tokens["refresh_token"])
if tokens.get("refresh_token")
else credentials.refresh_token
),
access_token_expires_at=now + expires_in if expires_in else None,
refresh_token_expires_at=credentials.refresh_token_expires_at,
scopes=credentials.scopes,
metadata=credentials.metadata,
)
async def revoke_tokens(self, credentials: OAuth2Credentials) -> bool:
if not self.revoke_url:
return False
try:
data = {
"token": credentials.access_token.get_secret_value(),
"token_type_hint": "access_token",
"client_id": self.client_id,
}
await Requests().post(
self.revoke_url,
data=data,
headers={"Content-Type": "application/x-www-form-urlencoded"},
)
return True
except Exception:
logger.warning("Failed to revoke MCP OAuth tokens", exc_info=True)
return False

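A minimal sketch of how this handler would be constructed from the discovery helpers on MCPClient, per the class docstring (the client_id and redirect URI are illustrative placeholders; dynamic client registration via registration_endpoint is not shown):

from backend.blocks.mcp.client import MCPClient
from backend.blocks.mcp.oauth import MCPOAuthHandler

async def build_handler(server_url: str) -> MCPOAuthHandler | None:
    client = MCPClient(server_url)
    resource_meta = await client.discover_auth()  # RFC 9728 probe
    if not resource_meta:
        return None  # server does not require OAuth
    auth_meta = await client.discover_auth_server_metadata(
        resource_meta["authorization_servers"][0]
    )
    if not auth_meta:
        return None
    return MCPOAuthHandler(
        client_id="registered-client-id",  # illustrative; obtained via DCR
        client_secret="",
        redirect_uri="https://app.example.com/callback",  # illustrative
        authorize_url=auth_meta["authorization_endpoint"],
        token_url=auth_meta["token_endpoint"],
        resource_url=resource_meta.get("resource", server_url),
    )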
View File

@@ -0,0 +1,104 @@
"""
End-to-end tests against a real public MCP server.
These tests hit the OpenAI docs MCP server (https://developers.openai.com/mcp)
which is publicly accessible without authentication and returns SSE responses.
Mark: These are tagged with ``@pytest.mark.e2e`` so they can be run/skipped
independently of the rest of the test suite (they require network access).
"""
import json
import pytest
from backend.blocks.mcp.client import MCPClient
# Public MCP server that requires no authentication
OPENAI_DOCS_MCP_URL = "https://developers.openai.com/mcp"
@pytest.mark.e2e
class TestRealMCPServer:
"""Tests against the live OpenAI docs MCP server."""
@pytest.mark.asyncio
async def test_initialize(self):
"""Verify we can complete the MCP handshake with a real server."""
client = MCPClient(OPENAI_DOCS_MCP_URL)
result = await client.initialize()
assert result["protocolVersion"] == "2025-03-26"
assert "serverInfo" in result
assert result["serverInfo"]["name"] == "openai-docs-mcp"
assert "tools" in result.get("capabilities", {})
@pytest.mark.asyncio
async def test_list_tools(self):
"""Verify we can discover tools from a real MCP server."""
client = MCPClient(OPENAI_DOCS_MCP_URL)
await client.initialize()
tools = await client.list_tools()
assert len(tools) >= 3  # loose lower bound; the server exposed 5+ tools as of writing
tool_names = {t.name for t in tools}
# These tools are documented and should be stable
assert "search_openai_docs" in tool_names
assert "list_openai_docs" in tool_names
assert "fetch_openai_doc" in tool_names
# Verify schema structure
search_tool = next(t for t in tools if t.name == "search_openai_docs")
assert "query" in search_tool.input_schema.get("properties", {})
assert "query" in search_tool.input_schema.get("required", [])
@pytest.mark.asyncio
async def test_call_tool_list_api_endpoints(self):
"""Call the list_api_endpoints tool and verify we get real data."""
client = MCPClient(OPENAI_DOCS_MCP_URL)
await client.initialize()
result = await client.call_tool("list_api_endpoints", {})
assert not result.is_error
assert len(result.content) >= 1
assert result.content[0]["type"] == "text"
data = json.loads(result.content[0]["text"])
assert "paths" in data or "urls" in data
# The OpenAI API should have many endpoints
total = data.get("total", len(data.get("paths", [])))
assert total > 50
@pytest.mark.asyncio
async def test_call_tool_search(self):
"""Search for docs and verify we get results."""
client = MCPClient(OPENAI_DOCS_MCP_URL)
await client.initialize()
result = await client.call_tool(
"search_openai_docs", {"query": "chat completions", "limit": 3}
)
assert not result.is_error
assert len(result.content) >= 1
@pytest.mark.asyncio
async def test_sse_response_handling(self):
"""Verify the client correctly handles SSE responses from a real server.
This is the key test — our local test server returns JSON,
but real MCP servers typically return SSE. This proves the
SSE parsing works end-to-end.
"""
client = MCPClient(OPENAI_DOCS_MCP_URL)
# initialize() internally calls _send_request which must parse SSE
result = await client.initialize()
# If we got here without error, SSE parsing works
assert isinstance(result, dict)
assert "protocolVersion" in result
# Also verify list_tools works (another SSE response)
tools = await client.list_tools()
assert len(tools) > 0
assert all(hasattr(t, "name") for t in tools)

View File

@@ -0,0 +1,389 @@
"""
Integration tests for MCP client and MCPToolBlock against a real HTTP server.
These tests spin up a local MCP test server and run the full client/block flow
against it — no mocking, real HTTP requests.
"""
import asyncio
import json
import threading
from unittest.mock import patch
import pytest
from aiohttp import web
from pydantic import SecretStr
from backend.blocks.mcp.block import MCPToolBlock
from backend.blocks.mcp.client import MCPClient
from backend.blocks.mcp.test_server import create_test_mcp_app
from backend.data.model import OAuth2Credentials
MOCK_USER_ID = "test-user-integration"
class _MCPTestServer:
"""
Run an MCP test server in a background thread with its own event loop.
This avoids event loop conflicts with pytest-asyncio.
"""
def __init__(self, auth_token: str | None = None):
self.auth_token = auth_token
self.url: str = ""
self._runner: web.AppRunner | None = None
self._loop: asyncio.AbstractEventLoop | None = None
self._thread: threading.Thread | None = None
self._started = threading.Event()
def _run(self):
self._loop = asyncio.new_event_loop()
asyncio.set_event_loop(self._loop)
self._loop.run_until_complete(self._start())
self._started.set()
self._loop.run_forever()
async def _start(self):
app = create_test_mcp_app(auth_token=self.auth_token)
self._runner = web.AppRunner(app)
await self._runner.setup()
site = web.TCPSite(self._runner, "127.0.0.1", 0)
await site.start()
port = site._server.sockets[0].getsockname()[1] # type: ignore[union-attr]
self.url = f"http://127.0.0.1:{port}/mcp"
def start(self):
self._thread = threading.Thread(target=self._run, daemon=True)
self._thread.start()
if not self._started.wait(timeout=5):
raise RuntimeError("MCP test server failed to start within 5 seconds")
return self
def stop(self):
if self._loop and self._runner:
asyncio.run_coroutine_threadsafe(self._runner.cleanup(), self._loop).result(
timeout=5
)
self._loop.call_soon_threadsafe(self._loop.stop)
if self._thread:
self._thread.join(timeout=5)
@pytest.fixture(scope="module")
def mcp_server():
"""Start a local MCP test server in a background thread."""
server = _MCPTestServer()
server.start()
yield server.url
server.stop()
@pytest.fixture(scope="module")
def mcp_server_with_auth():
"""Start a local MCP test server with auth in a background thread."""
server = _MCPTestServer(auth_token="test-secret-token")
server.start()
yield server.url, "test-secret-token"
server.stop()
@pytest.fixture(autouse=True)
def _allow_localhost():
"""
Allow 127.0.0.1 through SSRF protection for integration tests.
The Requests class blocks private IPs by default. We patch the Requests
constructor to always include 127.0.0.1 as a trusted origin so the local
test server is reachable.
"""
from backend.util.request import Requests
original_init = Requests.__init__
def patched_init(self, *args, **kwargs):
trusted = list(kwargs.get("trusted_origins") or [])
trusted.append("http://127.0.0.1")
kwargs["trusted_origins"] = trusted
original_init(self, *args, **kwargs)
with patch.object(Requests, "__init__", patched_init):
yield
def _make_client(url: str, auth_token: str | None = None) -> MCPClient:
"""Create an MCPClient for integration tests."""
return MCPClient(url, auth_token=auth_token)
# ── MCPClient integration tests ──────────────────────────────────────
class TestMCPClientIntegration:
"""Test MCPClient against a real local MCP server."""
@pytest.mark.asyncio
async def test_initialize(self, mcp_server):
client = _make_client(mcp_server)
result = await client.initialize()
assert result["protocolVersion"] == "2025-03-26"
assert result["serverInfo"]["name"] == "test-mcp-server"
assert "tools" in result["capabilities"]
@pytest.mark.asyncio
async def test_list_tools(self, mcp_server):
client = _make_client(mcp_server)
await client.initialize()
tools = await client.list_tools()
assert len(tools) == 3
tool_names = {t.name for t in tools}
assert tool_names == {"get_weather", "add_numbers", "echo"}
# Check get_weather schema
weather = next(t for t in tools if t.name == "get_weather")
assert weather.description == "Get current weather for a city"
assert "city" in weather.input_schema["properties"]
assert weather.input_schema["required"] == ["city"]
# Check add_numbers schema
add = next(t for t in tools if t.name == "add_numbers")
assert "a" in add.input_schema["properties"]
assert "b" in add.input_schema["properties"]
@pytest.mark.asyncio
async def test_call_tool_get_weather(self, mcp_server):
client = _make_client(mcp_server)
await client.initialize()
result = await client.call_tool("get_weather", {"city": "London"})
assert not result.is_error
assert len(result.content) == 1
assert result.content[0]["type"] == "text"
data = json.loads(result.content[0]["text"])
assert data["city"] == "London"
assert data["temperature"] == 22
assert data["condition"] == "sunny"
@pytest.mark.asyncio
async def test_call_tool_add_numbers(self, mcp_server):
client = _make_client(mcp_server)
await client.initialize()
result = await client.call_tool("add_numbers", {"a": 3, "b": 7})
assert not result.is_error
data = json.loads(result.content[0]["text"])
assert data["result"] == 10
@pytest.mark.asyncio
async def test_call_tool_echo(self, mcp_server):
client = _make_client(mcp_server)
await client.initialize()
result = await client.call_tool("echo", {"message": "Hello MCP!"})
assert not result.is_error
assert result.content[0]["text"] == "Hello MCP!"
@pytest.mark.asyncio
async def test_call_unknown_tool(self, mcp_server):
client = _make_client(mcp_server)
await client.initialize()
result = await client.call_tool("nonexistent_tool", {})
assert result.is_error
assert "Unknown tool" in result.content[0]["text"]
@pytest.mark.asyncio
async def test_auth_success(self, mcp_server_with_auth):
url, token = mcp_server_with_auth
client = _make_client(url, auth_token=token)
result = await client.initialize()
assert result["protocolVersion"] == "2025-03-26"
tools = await client.list_tools()
assert len(tools) == 3
@pytest.mark.asyncio
async def test_auth_failure(self, mcp_server_with_auth):
url, _ = mcp_server_with_auth
client = _make_client(url, auth_token="wrong-token")
with pytest.raises(Exception):
await client.initialize()
@pytest.mark.asyncio
async def test_auth_missing(self, mcp_server_with_auth):
url, _ = mcp_server_with_auth
client = _make_client(url)
with pytest.raises(Exception):
await client.initialize()
# ── MCPToolBlock integration tests ───────────────────────────────────
class TestMCPToolBlockIntegration:
"""Test MCPToolBlock end-to-end against a real local MCP server."""
@pytest.mark.asyncio
async def test_full_flow_get_weather(self, mcp_server):
"""Full flow: discover tools, select one, execute it."""
# Step 1: Discover tools (simulating what the frontend/API would do)
client = _make_client(mcp_server)
await client.initialize()
tools = await client.list_tools()
assert len(tools) == 3
# Step 2: User selects "get_weather" and we get its schema
weather_tool = next(t for t in tools if t.name == "get_weather")
# Step 3: Execute the block — no credentials (public server)
block = MCPToolBlock()
input_data = MCPToolBlock.Input(
server_url=mcp_server,
selected_tool="get_weather",
tool_input_schema=weather_tool.input_schema,
tool_arguments={"city": "Paris"},
)
outputs = []
async for name, data in block.run(input_data, user_id=MOCK_USER_ID):
outputs.append((name, data))
assert len(outputs) == 1
assert outputs[0][0] == "result"
result = outputs[0][1]
assert result["city"] == "Paris"
assert result["temperature"] == 22
assert result["condition"] == "sunny"
@pytest.mark.asyncio
async def test_full_flow_add_numbers(self, mcp_server):
"""Full flow for add_numbers tool."""
client = _make_client(mcp_server)
await client.initialize()
tools = await client.list_tools()
add_tool = next(t for t in tools if t.name == "add_numbers")
block = MCPToolBlock()
input_data = MCPToolBlock.Input(
server_url=mcp_server,
selected_tool="add_numbers",
tool_input_schema=add_tool.input_schema,
tool_arguments={"a": 42, "b": 58},
)
outputs = []
async for name, data in block.run(input_data, user_id=MOCK_USER_ID):
outputs.append((name, data))
assert len(outputs) == 1
assert outputs[0][0] == "result"
assert outputs[0][1]["result"] == 100
@pytest.mark.asyncio
async def test_full_flow_echo_plain_text(self, mcp_server):
"""Verify plain text (non-JSON) responses work."""
block = MCPToolBlock()
input_data = MCPToolBlock.Input(
server_url=mcp_server,
selected_tool="echo",
tool_input_schema={
"type": "object",
"properties": {"message": {"type": "string"}},
"required": ["message"],
},
tool_arguments={"message": "Hello from AutoGPT!"},
)
outputs = []
async for name, data in block.run(input_data, user_id=MOCK_USER_ID):
outputs.append((name, data))
assert len(outputs) == 1
assert outputs[0][0] == "result"
assert outputs[0][1] == "Hello from AutoGPT!"
@pytest.mark.asyncio
async def test_full_flow_unknown_tool_yields_error(self, mcp_server):
"""Calling an unknown tool should yield an error output."""
block = MCPToolBlock()
input_data = MCPToolBlock.Input(
server_url=mcp_server,
selected_tool="nonexistent_tool",
tool_arguments={},
)
outputs = []
async for name, data in block.run(input_data, user_id=MOCK_USER_ID):
outputs.append((name, data))
assert len(outputs) == 1
assert outputs[0][0] == "error"
assert "returned an error" in outputs[0][1]
@pytest.mark.asyncio
async def test_full_flow_with_auth(self, mcp_server_with_auth):
"""Full flow with authentication via credentials kwarg."""
url, token = mcp_server_with_auth
block = MCPToolBlock()
input_data = MCPToolBlock.Input(
server_url=url,
selected_tool="echo",
tool_input_schema={
"type": "object",
"properties": {"message": {"type": "string"}},
"required": ["message"],
},
tool_arguments={"message": "Authenticated!"},
)
# Pass credentials via the standard kwarg (as the executor would)
test_creds = OAuth2Credentials(
id="test-cred",
provider="mcp",
access_token=SecretStr(token),
refresh_token=SecretStr(""),
scopes=[],
title="Test MCP credential",
)
outputs = []
async for name, data in block.run(
input_data, user_id=MOCK_USER_ID, credentials=test_creds
):
outputs.append((name, data))
assert len(outputs) == 1
assert outputs[0][0] == "result"
assert outputs[0][1] == "Authenticated!"
@pytest.mark.asyncio
async def test_no_credentials_runs_without_auth(self, mcp_server):
"""Block runs without auth when no credentials are provided."""
block = MCPToolBlock()
input_data = MCPToolBlock.Input(
server_url=mcp_server,
selected_tool="echo",
tool_input_schema={
"type": "object",
"properties": {"message": {"type": "string"}},
"required": ["message"],
},
tool_arguments={"message": "No auth needed"},
)
outputs = []
async for name, data in block.run(
input_data, user_id=MOCK_USER_ID, credentials=None
):
outputs.append((name, data))
assert len(outputs) == 1
assert outputs[0][0] == "result"
assert outputs[0][1] == "No auth needed"

View File

@@ -0,0 +1,619 @@
"""
Tests for MCP client and MCPToolBlock.
"""
import json
from unittest.mock import AsyncMock, patch
import pytest
from backend.blocks.mcp.block import MCPToolBlock
from backend.blocks.mcp.client import MCPCallResult, MCPClient, MCPClientError
from backend.util.test import execute_block_test
# ── SSE parsing unit tests ───────────────────────────────────────────
class TestSSEParsing:
"""Tests for SSE (text/event-stream) response parsing."""
def test_parse_sse_simple(self):
sse = (
"event: message\n"
'data: {"jsonrpc":"2.0","result":{"tools":[]},"id":1}\n'
"\n"
)
body = MCPClient._parse_sse_response(sse)
assert body["result"] == {"tools": []}
assert body["id"] == 1
def test_parse_sse_with_notifications(self):
"""SSE streams can contain notifications (no id) before the response."""
sse = (
"event: message\n"
'data: {"jsonrpc":"2.0","method":"some/notification"}\n'
"\n"
"event: message\n"
'data: {"jsonrpc":"2.0","result":{"ok":true},"id":2}\n'
"\n"
)
body = MCPClient._parse_sse_response(sse)
assert body["result"] == {"ok": True}
assert body["id"] == 2
def test_parse_sse_error_response(self):
sse = (
"event: message\n"
'data: {"jsonrpc":"2.0","error":{"code":-32600,"message":"Bad Request"},"id":1}\n'
)
body = MCPClient._parse_sse_response(sse)
assert "error" in body
assert body["error"]["code"] == -32600
def test_parse_sse_no_data_raises(self):
with pytest.raises(MCPClientError, match="No JSON-RPC response found"):
MCPClient._parse_sse_response("event: message\n\n")
def test_parse_sse_empty_raises(self):
with pytest.raises(MCPClientError, match="No JSON-RPC response found"):
MCPClient._parse_sse_response("")
def test_parse_sse_ignores_non_data_lines(self):
sse = (
": comment line\n"
"event: message\n"
"id: 123\n"
'data: {"jsonrpc":"2.0","result":"ok","id":1}\n'
"\n"
)
body = MCPClient._parse_sse_response(sse)
assert body["result"] == "ok"
def test_parse_sse_uses_last_response(self):
"""If multiple responses exist, use the last one."""
sse = (
'data: {"jsonrpc":"2.0","result":"first","id":1}\n'
"\n"
'data: {"jsonrpc":"2.0","result":"second","id":2}\n'
"\n"
)
body = MCPClient._parse_sse_response(sse)
assert body["result"] == "second"
# ── MCPClient unit tests ─────────────────────────────────────────────
class TestMCPClient:
"""Tests for the MCP HTTP client."""
def test_build_headers_without_auth(self):
client = MCPClient("https://mcp.example.com")
headers = client._build_headers()
assert "Authorization" not in headers
assert headers["Content-Type"] == "application/json"
def test_build_headers_with_auth(self):
client = MCPClient("https://mcp.example.com", auth_token="my-token")
headers = client._build_headers()
assert headers["Authorization"] == "Bearer my-token"
def test_build_jsonrpc_request(self):
client = MCPClient("https://mcp.example.com")
req = client._build_jsonrpc_request("tools/list")
assert req["jsonrpc"] == "2.0"
assert req["method"] == "tools/list"
assert "id" in req
assert "params" not in req
def test_build_jsonrpc_request_with_params(self):
client = MCPClient("https://mcp.example.com")
req = client._build_jsonrpc_request(
"tools/call", {"name": "test", "arguments": {"x": 1}}
)
assert req["params"] == {"name": "test", "arguments": {"x": 1}}
def test_request_id_increments(self):
client = MCPClient("https://mcp.example.com")
req1 = client._build_jsonrpc_request("tools/list")
req2 = client._build_jsonrpc_request("tools/list")
assert req2["id"] > req1["id"]
def test_server_url_trailing_slash_stripped(self):
client = MCPClient("https://mcp.example.com/mcp/")
assert client.server_url == "https://mcp.example.com/mcp"
@pytest.mark.asyncio
async def test_send_request_success(self):
client = MCPClient("https://mcp.example.com")
mock_response = AsyncMock()
mock_response.json.return_value = {
"jsonrpc": "2.0",
"result": {"tools": []},
"id": 1,
}
with patch.object(client, "_send_request", return_value={"tools": []}):
result = await client._send_request("tools/list")
assert result == {"tools": []}
@pytest.mark.asyncio
async def test_send_request_error(self):
client = MCPClient("https://mcp.example.com")
async def mock_send(*args, **kwargs):
raise MCPClientError("MCP server error [-32600]: Invalid Request")
with patch.object(client, "_send_request", side_effect=mock_send):
with pytest.raises(MCPClientError, match="Invalid Request"):
await client._send_request("tools/list")
@pytest.mark.asyncio
async def test_list_tools(self):
client = MCPClient("https://mcp.example.com")
mock_result = {
"tools": [
{
"name": "get_weather",
"description": "Get current weather for a city",
"inputSchema": {
"type": "object",
"properties": {"city": {"type": "string"}},
"required": ["city"],
},
},
{
"name": "search",
"description": "Search the web",
"inputSchema": {
"type": "object",
"properties": {"query": {"type": "string"}},
"required": ["query"],
},
},
]
}
with patch.object(client, "_send_request", return_value=mock_result):
tools = await client.list_tools()
assert len(tools) == 2
assert tools[0].name == "get_weather"
assert tools[0].description == "Get current weather for a city"
assert tools[0].input_schema["properties"]["city"]["type"] == "string"
assert tools[1].name == "search"
@pytest.mark.asyncio
async def test_list_tools_empty(self):
client = MCPClient("https://mcp.example.com")
with patch.object(client, "_send_request", return_value={"tools": []}):
tools = await client.list_tools()
assert tools == []
@pytest.mark.asyncio
async def test_list_tools_none_result(self):
client = MCPClient("https://mcp.example.com")
with patch.object(client, "_send_request", return_value=None):
tools = await client.list_tools()
assert tools == []
@pytest.mark.asyncio
async def test_call_tool_success(self):
client = MCPClient("https://mcp.example.com")
mock_result = {
"content": [
{"type": "text", "text": json.dumps({"temp": 20, "city": "London"})}
],
"isError": False,
}
with patch.object(client, "_send_request", return_value=mock_result):
result = await client.call_tool("get_weather", {"city": "London"})
assert not result.is_error
assert len(result.content) == 1
assert result.content[0]["type"] == "text"
@pytest.mark.asyncio
async def test_call_tool_error(self):
client = MCPClient("https://mcp.example.com")
mock_result = {
"content": [{"type": "text", "text": "City not found"}],
"isError": True,
}
with patch.object(client, "_send_request", return_value=mock_result):
result = await client.call_tool("get_weather", {"city": "???"})
assert result.is_error
@pytest.mark.asyncio
async def test_call_tool_none_result(self):
client = MCPClient("https://mcp.example.com")
with patch.object(client, "_send_request", return_value=None):
result = await client.call_tool("get_weather", {"city": "London"})
assert result.is_error
@pytest.mark.asyncio
async def test_initialize(self):
client = MCPClient("https://mcp.example.com")
mock_result = {
"protocolVersion": "2025-03-26",
"capabilities": {"tools": {}},
"serverInfo": {"name": "test-server", "version": "1.0.0"},
}
with (
patch.object(client, "_send_request", return_value=mock_result) as mock_req,
patch.object(client, "_send_notification") as mock_notif,
):
result = await client.initialize()
mock_req.assert_called_once()
mock_notif.assert_called_once_with("notifications/initialized")
assert result["protocolVersion"] == "2025-03-26"
# ── MCPToolBlock unit tests ──────────────────────────────────────────
MOCK_USER_ID = "test-user-123"
class TestMCPToolBlock:
"""Tests for the MCPToolBlock."""
def test_block_instantiation(self):
block = MCPToolBlock()
assert block.id == "a0a4b1c2-d3e4-4f56-a7b8-c9d0e1f2a3b4"
assert block.name == "MCPToolBlock"
def test_input_schema_has_required_fields(self):
block = MCPToolBlock()
schema = block.input_schema.jsonschema()
props = schema.get("properties", {})
assert "server_url" in props
assert "selected_tool" in props
assert "tool_arguments" in props
assert "credentials" in props
def test_output_schema(self):
block = MCPToolBlock()
schema = block.output_schema.jsonschema()
props = schema.get("properties", {})
assert "result" in props
assert "error" in props
def test_get_input_schema_with_tool_schema(self):
tool_schema = {
"type": "object",
"properties": {"query": {"type": "string"}},
"required": ["query"],
}
data = {"tool_input_schema": tool_schema}
result = MCPToolBlock.Input.get_input_schema(data)
assert result == tool_schema
def test_get_input_schema_without_tool_schema(self):
result = MCPToolBlock.Input.get_input_schema({})
assert result == {}
def test_get_input_defaults(self):
data = {"tool_arguments": {"city": "London"}}
result = MCPToolBlock.Input.get_input_defaults(data)
assert result == {"city": "London"}
def test_get_missing_input(self):
data = {
"tool_input_schema": {
"type": "object",
"properties": {
"city": {"type": "string"},
"units": {"type": "string"},
},
"required": ["city", "units"],
},
"tool_arguments": {"city": "London"},
}
missing = MCPToolBlock.Input.get_missing_input(data)
assert missing == {"units"}
def test_get_missing_input_all_present(self):
data = {
"tool_input_schema": {
"type": "object",
"properties": {"city": {"type": "string"}},
"required": ["city"],
},
"tool_arguments": {"city": "London"},
}
missing = MCPToolBlock.Input.get_missing_input(data)
assert missing == set()
@pytest.mark.asyncio
async def test_run_with_mock(self):
"""Test the block using the built-in test infrastructure."""
block = MCPToolBlock()
await execute_block_test(block)
@pytest.mark.asyncio
async def test_run_missing_server_url(self):
block = MCPToolBlock()
input_data = MCPToolBlock.Input(
server_url="",
selected_tool="test",
)
outputs = []
async for name, data in block.run(input_data, user_id=MOCK_USER_ID):
outputs.append((name, data))
assert outputs == [("error", "MCP server URL is required")]
@pytest.mark.asyncio
async def test_run_missing_tool(self):
block = MCPToolBlock()
input_data = MCPToolBlock.Input(
server_url="https://mcp.example.com/mcp",
selected_tool="",
)
outputs = []
async for name, data in block.run(input_data, user_id=MOCK_USER_ID):
outputs.append((name, data))
assert outputs == [
("error", "No tool selected. Please select a tool from the dropdown.")
]
@pytest.mark.asyncio
async def test_run_success(self):
block = MCPToolBlock()
input_data = MCPToolBlock.Input(
server_url="https://mcp.example.com/mcp",
selected_tool="get_weather",
tool_input_schema={
"type": "object",
"properties": {"city": {"type": "string"}},
},
tool_arguments={"city": "London"},
)
async def mock_call(*args, **kwargs):
return {"temp": 20, "city": "London"}
block._call_mcp_tool = mock_call # type: ignore
outputs = []
async for name, data in block.run(input_data, user_id=MOCK_USER_ID):
outputs.append((name, data))
assert len(outputs) == 1
assert outputs[0][0] == "result"
assert outputs[0][1] == {"temp": 20, "city": "London"}
@pytest.mark.asyncio
async def test_run_mcp_error(self):
block = MCPToolBlock()
input_data = MCPToolBlock.Input(
server_url="https://mcp.example.com/mcp",
selected_tool="bad_tool",
)
async def mock_call(*args, **kwargs):
raise MCPClientError("Tool not found")
block._call_mcp_tool = mock_call # type: ignore
outputs = []
async for name, data in block.run(input_data, user_id=MOCK_USER_ID):
outputs.append((name, data))
assert outputs[0][0] == "error"
assert "Tool not found" in outputs[0][1]
@pytest.mark.asyncio
async def test_call_mcp_tool_parses_json_text(self):
block = MCPToolBlock()
mock_result = MCPCallResult(
content=[
{"type": "text", "text": '{"temp": 20}'},
],
is_error=False,
)
async def mock_init(self):
return {}
async def mock_call(self, name, args):
return mock_result
with (
patch.object(MCPClient, "initialize", mock_init),
patch.object(MCPClient, "call_tool", mock_call),
):
result = await block._call_mcp_tool(
"https://mcp.example.com", "test_tool", {}
)
assert result == {"temp": 20}
@pytest.mark.asyncio
async def test_call_mcp_tool_plain_text(self):
block = MCPToolBlock()
mock_result = MCPCallResult(
content=[
{"type": "text", "text": "Hello, world!"},
],
is_error=False,
)
async def mock_init(self):
return {}
async def mock_call(self, name, args):
return mock_result
with (
patch.object(MCPClient, "initialize", mock_init),
patch.object(MCPClient, "call_tool", mock_call),
):
result = await block._call_mcp_tool(
"https://mcp.example.com", "test_tool", {}
)
assert result == "Hello, world!"
@pytest.mark.asyncio
async def test_call_mcp_tool_multiple_content(self):
block = MCPToolBlock()
mock_result = MCPCallResult(
content=[
{"type": "text", "text": "Part 1"},
{"type": "text", "text": '{"part": 2}'},
],
is_error=False,
)
async def mock_init(self):
return {}
async def mock_call(self, name, args):
return mock_result
with (
patch.object(MCPClient, "initialize", mock_init),
patch.object(MCPClient, "call_tool", mock_call),
):
result = await block._call_mcp_tool(
"https://mcp.example.com", "test_tool", {}
)
assert result == ["Part 1", {"part": 2}]
@pytest.mark.asyncio
async def test_call_mcp_tool_error_result(self):
block = MCPToolBlock()
mock_result = MCPCallResult(
content=[{"type": "text", "text": "Something went wrong"}],
is_error=True,
)
async def mock_init(self):
return {}
async def mock_call(self, name, args):
return mock_result
with (
patch.object(MCPClient, "initialize", mock_init),
patch.object(MCPClient, "call_tool", mock_call),
):
with pytest.raises(MCPClientError, match="returned an error"):
await block._call_mcp_tool("https://mcp.example.com", "test_tool", {})
@pytest.mark.asyncio
async def test_call_mcp_tool_image_content(self):
block = MCPToolBlock()
mock_result = MCPCallResult(
content=[
{
"type": "image",
"data": "base64data==",
"mimeType": "image/png",
}
],
is_error=False,
)
async def mock_init(self):
return {}
async def mock_call(self, name, args):
return mock_result
with (
patch.object(MCPClient, "initialize", mock_init),
patch.object(MCPClient, "call_tool", mock_call),
):
result = await block._call_mcp_tool(
"https://mcp.example.com", "test_tool", {}
)
assert result == {
"type": "image",
"data": "base64data==",
"mimeType": "image/png",
}
@pytest.mark.asyncio
async def test_run_with_credentials(self):
"""Verify the block uses OAuth2Credentials and passes auth token."""
from pydantic import SecretStr
from backend.data.model import OAuth2Credentials
block = MCPToolBlock()
input_data = MCPToolBlock.Input(
server_url="https://mcp.example.com/mcp",
selected_tool="test_tool",
)
captured_tokens: list[str | None] = []
async def mock_call(server_url, tool_name, arguments, auth_token=None):
captured_tokens.append(auth_token)
return "ok"
block._call_mcp_tool = mock_call # type: ignore
test_creds = OAuth2Credentials(
id="cred-123",
provider="mcp",
access_token=SecretStr("resolved-token"),
refresh_token=SecretStr(""),
scopes=[],
title="Test MCP credential",
)
async for _ in block.run(
input_data, user_id=MOCK_USER_ID, credentials=test_creds
):
pass
assert captured_tokens == ["resolved-token"]
@pytest.mark.asyncio
async def test_run_without_credentials(self):
"""Verify the block works without credentials (public server)."""
block = MCPToolBlock()
input_data = MCPToolBlock.Input(
server_url="https://mcp.example.com/mcp",
selected_tool="test_tool",
)
captured_tokens: list[str | None] = []
async def mock_call(server_url, tool_name, arguments, auth_token=None):
captured_tokens.append(auth_token)
return "ok"
block._call_mcp_tool = mock_call # type: ignore
outputs = []
async for name, data in block.run(input_data, user_id=MOCK_USER_ID):
outputs.append((name, data))
assert captured_tokens == [None]
assert outputs == [("result", "ok")]

View File

@@ -0,0 +1,242 @@
"""
Tests for MCP OAuth handler.
"""
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from pydantic import SecretStr
from backend.blocks.mcp.client import MCPClient
from backend.blocks.mcp.oauth import MCPOAuthHandler
from backend.data.model import OAuth2Credentials
def _mock_response(json_data: dict, status: int = 200) -> MagicMock:
"""Create a mock Response with synchronous json() (matching Requests.Response)."""
resp = MagicMock()
resp.status = status
resp.ok = 200 <= status < 300
resp.json.return_value = json_data
return resp
class TestMCPOAuthHandler:
"""Tests for the MCPOAuthHandler."""
def _make_handler(self, **overrides) -> MCPOAuthHandler:
defaults = {
"client_id": "test-client-id",
"client_secret": "test-client-secret",
"redirect_uri": "https://app.example.com/callback",
"authorize_url": "https://auth.example.com/authorize",
"token_url": "https://auth.example.com/token",
}
defaults.update(overrides)
return MCPOAuthHandler(**defaults)
def test_get_login_url_basic(self):
handler = self._make_handler()
url = handler.get_login_url(
scopes=["read", "write"],
state="random-state-token",
code_challenge="S256-challenge-value",
)
assert "https://auth.example.com/authorize?" in url
assert "response_type=code" in url
assert "client_id=test-client-id" in url
assert "state=random-state-token" in url
assert "code_challenge=S256-challenge-value" in url
assert "code_challenge_method=S256" in url
assert "scope=read+write" in url
def test_get_login_url_with_resource(self):
handler = self._make_handler(resource_url="https://mcp.example.com/mcp")
url = handler.get_login_url(
scopes=[], state="state", code_challenge="challenge"
)
assert "resource=https" in url
def test_get_login_url_without_pkce(self):
handler = self._make_handler()
url = handler.get_login_url(scopes=["read"], state="state", code_challenge=None)
assert "code_challenge" not in url
assert "code_challenge_method" not in url
@pytest.mark.asyncio
async def test_exchange_code_for_tokens(self):
handler = self._make_handler()
resp = _mock_response(
{
"access_token": "new-access-token",
"refresh_token": "new-refresh-token",
"expires_in": 3600,
"token_type": "Bearer",
}
)
with patch("backend.blocks.mcp.oauth.Requests") as MockRequests:
instance = MockRequests.return_value
instance.post = AsyncMock(return_value=resp)
creds = await handler.exchange_code_for_tokens(
code="auth-code",
scopes=["read"],
code_verifier="pkce-verifier",
)
assert isinstance(creds, OAuth2Credentials)
assert creds.access_token.get_secret_value() == "new-access-token"
assert creds.refresh_token is not None
assert creds.refresh_token.get_secret_value() == "new-refresh-token"
assert creds.scopes == ["read"]
assert creds.access_token_expires_at is not None
@pytest.mark.asyncio
async def test_refresh_tokens(self):
handler = self._make_handler()
existing_creds = OAuth2Credentials(
id="existing-id",
provider="mcp",
access_token=SecretStr("old-token"),
refresh_token=SecretStr("old-refresh"),
scopes=["read"],
title="test",
)
resp = _mock_response(
{
"access_token": "refreshed-token",
"refresh_token": "new-refresh",
"expires_in": 3600,
}
)
with patch("backend.blocks.mcp.oauth.Requests") as MockRequests:
instance = MockRequests.return_value
instance.post = AsyncMock(return_value=resp)
refreshed = await handler._refresh_tokens(existing_creds)
assert refreshed.id == "existing-id"
assert refreshed.access_token.get_secret_value() == "refreshed-token"
assert refreshed.refresh_token is not None
assert refreshed.refresh_token.get_secret_value() == "new-refresh"
@pytest.mark.asyncio
async def test_refresh_tokens_no_refresh_token(self):
handler = self._make_handler()
creds = OAuth2Credentials(
provider="mcp",
access_token=SecretStr("token"),
scopes=["read"],
title="test",
)
with pytest.raises(ValueError, match="No refresh token"):
await handler._refresh_tokens(creds)
@pytest.mark.asyncio
async def test_revoke_tokens_no_url(self):
handler = self._make_handler(revoke_url=None)
creds = OAuth2Credentials(
provider="mcp",
access_token=SecretStr("token"),
scopes=[],
title="test",
)
result = await handler.revoke_tokens(creds)
assert result is False
@pytest.mark.asyncio
async def test_revoke_tokens_with_url(self):
handler = self._make_handler(revoke_url="https://auth.example.com/revoke")
creds = OAuth2Credentials(
provider="mcp",
access_token=SecretStr("token"),
scopes=[],
title="test",
)
resp = _mock_response({}, status=200)
with patch("backend.blocks.mcp.oauth.Requests") as MockRequests:
instance = MockRequests.return_value
instance.post = AsyncMock(return_value=resp)
result = await handler.revoke_tokens(creds)
assert result is True
class TestMCPClientDiscovery:
"""Tests for MCPClient OAuth metadata discovery."""
@pytest.mark.asyncio
async def test_discover_auth_found(self):
client = MCPClient("https://mcp.example.com/mcp")
metadata = {
"authorization_servers": ["https://auth.example.com"],
"resource": "https://mcp.example.com/mcp",
}
resp = _mock_response(metadata, status=200)
with patch("backend.blocks.mcp.client.Requests") as MockRequests:
instance = MockRequests.return_value
instance.get = AsyncMock(return_value=resp)
result = await client.discover_auth()
assert result is not None
assert result["authorization_servers"] == ["https://auth.example.com"]
@pytest.mark.asyncio
async def test_discover_auth_not_found(self):
client = MCPClient("https://mcp.example.com/mcp")
resp = _mock_response({}, status=404)
with patch("backend.blocks.mcp.client.Requests") as MockRequests:
instance = MockRequests.return_value
instance.get = AsyncMock(return_value=resp)
result = await client.discover_auth()
assert result is None
@pytest.mark.asyncio
async def test_discover_auth_server_metadata(self):
client = MCPClient("https://mcp.example.com/mcp")
server_metadata = {
"issuer": "https://auth.example.com",
"authorization_endpoint": "https://auth.example.com/authorize",
"token_endpoint": "https://auth.example.com/token",
"registration_endpoint": "https://auth.example.com/register",
"code_challenge_methods_supported": ["S256"],
}
resp = _mock_response(server_metadata, status=200)
with patch("backend.blocks.mcp.client.Requests") as MockRequests:
instance = MockRequests.return_value
instance.get = AsyncMock(return_value=resp)
result = await client.discover_auth_server_metadata(
"https://auth.example.com"
)
assert result is not None
assert result["authorization_endpoint"] == "https://auth.example.com/authorize"
assert result["token_endpoint"] == "https://auth.example.com/token"

View File

@@ -0,0 +1,162 @@
"""
Minimal MCP server for integration testing.
Implements the MCP Streamable HTTP transport (JSON-RPC 2.0 over HTTP POST)
with a few sample tools. Runs on localhost with a random available port.
"""
import json
import logging
from aiohttp import web
logger = logging.getLogger(__name__)
# Sample tools this test server exposes
TEST_TOOLS = [
{
"name": "get_weather",
"description": "Get current weather for a city",
"inputSchema": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "City name",
},
},
"required": ["city"],
},
},
{
"name": "add_numbers",
"description": "Add two numbers together",
"inputSchema": {
"type": "object",
"properties": {
"a": {"type": "number", "description": "First number"},
"b": {"type": "number", "description": "Second number"},
},
"required": ["a", "b"],
},
},
{
"name": "echo",
"description": "Echo back the input message",
"inputSchema": {
"type": "object",
"properties": {
"message": {"type": "string", "description": "Message to echo"},
},
"required": ["message"],
},
},
]
def _handle_initialize(params: dict) -> dict:
return {
"protocolVersion": "2025-03-26",
"capabilities": {"tools": {"listChanged": False}},
"serverInfo": {"name": "test-mcp-server", "version": "1.0.0"},
}
def _handle_tools_list(params: dict) -> dict:
return {"tools": TEST_TOOLS}
def _handle_tools_call(params: dict) -> dict:
tool_name = params.get("name", "")
arguments = params.get("arguments", {})
if tool_name == "get_weather":
city = arguments.get("city", "Unknown")
return {
"content": [
{
"type": "text",
"text": json.dumps(
{"city": city, "temperature": 22, "condition": "sunny"}
),
}
],
}
elif tool_name == "add_numbers":
a = arguments.get("a", 0)
b = arguments.get("b", 0)
return {
"content": [{"type": "text", "text": json.dumps({"result": a + b})}],
}
elif tool_name == "echo":
message = arguments.get("message", "")
return {
"content": [{"type": "text", "text": message}],
}
else:
return {
"content": [{"type": "text", "text": f"Unknown tool: {tool_name}"}],
"isError": True,
}
HANDLERS = {
"initialize": _handle_initialize,
"tools/list": _handle_tools_list,
"tools/call": _handle_tools_call,
}
async def handle_mcp_request(request: web.Request) -> web.Response:
"""Handle incoming MCP JSON-RPC 2.0 requests."""
# Check auth if configured
expected_token = request.app.get("auth_token")
if expected_token:
auth_header = request.headers.get("Authorization", "")
if auth_header != f"Bearer {expected_token}":
return web.json_response(
{
"jsonrpc": "2.0",
"error": {"code": -32001, "message": "Unauthorized"},
"id": None,
},
status=401,
)
body = await request.json()
# Handle notifications (no id field) — just acknowledge
if "id" not in body:
return web.Response(status=202)
method = body.get("method", "")
params = body.get("params", {})
request_id = body.get("id")
handler = HANDLERS.get(method)
if not handler:
return web.json_response(
{
"jsonrpc": "2.0",
"error": {
"code": -32601,
"message": f"Method not found: {method}",
},
"id": request_id,
}
)
result = handler(params)
return web.json_response({"jsonrpc": "2.0", "result": result, "id": request_id})
def create_test_mcp_app(auth_token: str | None = None) -> web.Application:
"""Create an aiohttp app that acts as an MCP server."""
app = web.Application()
app.router.add_post("/mcp", handle_mcp_request)
if auth_token:
app["auth_token"] = auth_token
return app
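
One plausible way to mount this app on a random free port from a test fixture, using only standard aiohttp runner APIs (this fixture is illustrative and not part of the PR):

# Sketch: serve create_test_mcp_app() on a random localhost port.
import pytest_asyncio

@pytest_asyncio.fixture
async def mcp_server_url():
    app = create_test_mcp_app(auth_token="test-token")
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "127.0.0.1", 0)  # port 0 = pick a free port
    await site.start()
    host, port = runner.addresses[0][:2]
    yield f"http://{host}:{port}/mcp"
    await runner.cleanup()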

View File

@@ -1,7 +1,7 @@
 import logging
 import os

-import pytest_asyncio
+import pytest
 from dotenv import load_dotenv

 from backend.util.logging import configure_logging
@@ -19,7 +19,7 @@ if not os.getenv("PRISMA_DEBUG"):
     prisma_logger.setLevel(logging.INFO)


-@pytest_asyncio.fixture(scope="session", loop_scope="session")
+@pytest.fixture(scope="session")
 async def server():
     from backend.util.test import SpinTestServer
@@ -27,7 +27,7 @@ async def server():
     yield server


-@pytest_asyncio.fixture(scope="session", loop_scope="session", autouse=True)
+@pytest.fixture(scope="session", autouse=True)
 async def graph_cleanup(server):
     created_graph_ids = []
     original_create_graph = server.agent_server.test_create_graph

View File

@@ -3,8 +3,6 @@
 import queue
 import threading

-import pytest
-
 from backend.data.execution import ExecutionQueue

View File

@@ -39,6 +39,7 @@ from backend.util import type as type_utils
 from backend.util.exceptions import GraphNotAccessibleError, GraphNotInLibraryError
 from backend.util.json import SafeJson
 from backend.util.models import Pagination
+from backend.util.request import parse_url

 from .block import (
     AnyBlockSchema,
@@ -462,6 +463,9 @@ class GraphModel(Graph, GraphMeta):
                 continue
             if ProviderName.HTTP in field.provider:
                 continue
+            # MCP credentials are intentionally split by server URL
+            if ProviderName.MCP in field.provider:
+                continue

             # If this happens, that means a block implementation probably needs
             # to be updated.
@@ -518,6 +522,18 @@ class GraphModel(Graph, GraphMeta):
                 "required": ["id", "provider", "type"],
             }

+            # Add a descriptive display title when URL-based discriminator values
+            # are present (e.g. "mcp.sentry.dev" instead of just "Mcp")
+            if (
+                field_info.discriminator
+                and not field_info.discriminator_mapping
+                and field_info.discriminator_values
+            ):
+                hostnames = sorted(
+                    parse_url(str(v)).netloc for v in field_info.discriminator_values
+                )
+                field_schema["display_name"] = ", ".join(hostnames)
+
             # Add other (optional) field info items
             field_schema.update(
                 field_info.model_dump(
@@ -562,8 +578,17 @@ class GraphModel(Graph, GraphMeta):
         for graph in [self] + self.sub_graphs:
             for node in graph.nodes:
-                # Track if this node requires credentials (credentials_optional=False means required)
-                node_required_map[node.id] = not node.credentials_optional
+                # A node's credentials are optional if either:
+                # 1. The node metadata says so (credentials_optional=True), or
+                # 2. All credential fields on the block have defaults (not required by schema)
+                block_required = node.block.input_schema.get_required_fields()
+                creds_required_by_schema = any(
+                    fname in block_required
+                    for fname in node.block.input_schema.get_credentials_fields()
+                )
+                node_required_map[node.id] = (
+                    not node.credentials_optional and creds_required_by_schema
+                )

                 for (
                     field_name,
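
The display_name logic above boils down to extracting hostnames from the discriminator URLs. A quick illustration using urllib.parse.urlparse as a stand-in for backend.util.request.parse_url (assumed to expose a compatible .netloc):

from urllib.parse import urlparse

discriminator_values = {"https://mcp.sentry.dev/mcp", "https://mcp.linear.app/mcp"}
hostnames = sorted(urlparse(str(v)).netloc for v in discriminator_values)
print(", ".join(hostnames))  # -> "mcp.linear.app, mcp.sentry.dev"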

View File

@@ -463,3 +463,120 @@ def test_node_credentials_optional_with_other_metadata():
     assert node.credentials_optional is True
     assert node.metadata["position"] == {"x": 100, "y": 200}
     assert node.metadata["customized_name"] == "My Custom Node"

# ============================================================================
# Tests for MCP Credential Deduplication
# ============================================================================
def test_mcp_credential_combine_different_servers():
"""Two MCP credential fields with different server URLs should produce
separate entries when combined (not merged into one)."""
from backend.data.model import CredentialsFieldInfo, CredentialsType
from backend.integrations.providers import ProviderName
oauth2_types: frozenset[CredentialsType] = frozenset(["oauth2"])
field_sentry = CredentialsFieldInfo(
credentials_provider=frozenset([ProviderName.MCP]),
credentials_types=oauth2_types,
credentials_scopes=None,
discriminator="server_url",
discriminator_values={"https://mcp.sentry.dev/mcp"},
)
field_linear = CredentialsFieldInfo(
credentials_provider=frozenset([ProviderName.MCP]),
credentials_types=oauth2_types,
credentials_scopes=None,
discriminator="server_url",
discriminator_values={"https://mcp.linear.app/mcp"},
)
combined = CredentialsFieldInfo.combine(
(field_sentry, ("node-sentry", "credentials")),
(field_linear, ("node-linear", "credentials")),
)
# Should produce 2 separate credential entries
assert len(combined) == 2, (
f"Expected 2 credential entries for 2 MCP blocks with different servers, "
f"got {len(combined)}: {list(combined.keys())}"
)
# Each entry should contain the server hostname in its key
keys = list(combined.keys())
assert any(
"mcp.sentry.dev" in k for k in keys
), f"Expected 'mcp.sentry.dev' in one key, got {keys}"
assert any(
"mcp.linear.app" in k for k in keys
), f"Expected 'mcp.linear.app' in one key, got {keys}"
def test_mcp_credential_combine_same_server():
"""Two MCP credential fields with the same server URL should be combined
into one credential entry."""
from backend.data.model import CredentialsFieldInfo, CredentialsType
from backend.integrations.providers import ProviderName
oauth2_types: frozenset[CredentialsType] = frozenset(["oauth2"])
field_a = CredentialsFieldInfo(
credentials_provider=frozenset([ProviderName.MCP]),
credentials_types=oauth2_types,
credentials_scopes=None,
discriminator="server_url",
discriminator_values={"https://mcp.sentry.dev/mcp"},
)
field_b = CredentialsFieldInfo(
credentials_provider=frozenset([ProviderName.MCP]),
credentials_types=oauth2_types,
credentials_scopes=None,
discriminator="server_url",
discriminator_values={"https://mcp.sentry.dev/mcp"},
)
combined = CredentialsFieldInfo.combine(
(field_a, ("node-a", "credentials")),
(field_b, ("node-b", "credentials")),
)
# Should produce 1 credential entry (same server URL)
assert len(combined) == 1, (
f"Expected 1 credential entry for 2 MCP blocks with same server, "
f"got {len(combined)}: {list(combined.keys())}"
)
def test_mcp_credential_combine_no_discriminator_values():
"""MCP credential fields without discriminator_values should be merged
into a single entry (backwards compat for blocks without server_url set)."""
from backend.data.model import CredentialsFieldInfo, CredentialsType
from backend.integrations.providers import ProviderName
oauth2_types: frozenset[CredentialsType] = frozenset(["oauth2"])
field_a = CredentialsFieldInfo(
credentials_provider=frozenset([ProviderName.MCP]),
credentials_types=oauth2_types,
credentials_scopes=None,
discriminator="server_url",
)
field_b = CredentialsFieldInfo(
credentials_provider=frozenset([ProviderName.MCP]),
credentials_types=oauth2_types,
credentials_scopes=None,
discriminator="server_url",
)
combined = CredentialsFieldInfo.combine(
(field_a, ("node-a", "credentials")),
(field_b, ("node-b", "credentials")),
)
# Should produce 1 entry (no URL differentiation)
assert len(combined) == 1, (
f"Expected 1 credential entry for MCP blocks without discriminator_values, "
f"got {len(combined)}: {list(combined.keys())}"
)

View File

@@ -29,6 +29,7 @@ from pydantic import (
     GetCoreSchemaHandler,
     SecretStr,
     field_serializer,
+    model_validator,
 )
 from pydantic_core import (
     CoreSchema,
@@ -499,6 +500,25 @@ class CredentialsMetaInput(BaseModel, Generic[CP, CT]):
     provider: CP
     type: CT

+    @model_validator(mode="before")
+    @classmethod
+    def _normalize_legacy_provider(cls, data: Any) -> Any:
+        """Fix ``ProviderName.X`` format from Python 3.13 ``str(Enum)`` bug.
+
+        Python 3.13 changed ``str(StrEnum)`` to return ``"ClassName.MEMBER"``
+        instead of the plain value. Old stored credential references may have
+        ``provider: "ProviderName.MCP"`` instead of ``"mcp"``.
+        """
+        if isinstance(data, dict):
+            prov = data.get("provider", "")
+            if isinstance(prov, str) and prov.startswith("ProviderName."):
+                member = prov.removeprefix("ProviderName.")
+                try:
+                    data = {**data, "provider": ProviderName[member].value}
+                except KeyError:
+                    pass
+        return data
+
     @classmethod
     def allowed_providers(cls) -> tuple[ProviderName, ...] | None:
         return get_args(cls.model_fields["provider"].annotation)
@@ -603,11 +623,18 @@
         ] = defaultdict(list)

         for field, key in fields:
-            if field.provider == frozenset([ProviderName.HTTP]):
-                # HTTP host-scoped credentials can have different hosts that reqires different credential sets.
-                # Group by host extracted from the URL
+            if (
+                field.discriminator
+                and not field.discriminator_mapping
+                and field.discriminator_values
+            ):
+                # URL-based discrimination (e.g. HTTP host-scoped, MCP server URL):
+                # Each unique host gets its own credential entry.
+                provider_prefix = next(iter(field.provider))
+                # Use .value for enum types to get the plain string (e.g. "mcp" not "ProviderName.MCP")
+                prefix_str = getattr(provider_prefix, "value", str(provider_prefix))
                 providers = frozenset(
-                    [cast(CP, "http")]
+                    [cast(CP, prefix_str)]
                     + [
                         cast(CP, parse_url(str(value)).netloc)
                         for value in field.discriminator_values
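
The normalization in _normalize_legacy_provider is a plain name-to-value enum lookup. Stand-alone, assuming a ProviderName StrEnum with MCP = "mcp" as added elsewhere in this PR:

prov = "ProviderName.MCP"  # legacy value produced by Python 3.13's str(Enum)
if prov.startswith("ProviderName."):
    member = prov.removeprefix("ProviderName.")  # -> "MCP"
    prov = ProviderName[member].value            # name lookup -> "mcp"
assert prov == "mcp"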

View File

@@ -1,3 +1,4 @@
+import asyncio
 import logging
 from abc import ABC, abstractmethod
 from enum import Enum
@@ -225,6 +226,10 @@ class SyncRabbitMQ(RabbitMQBase):
 class AsyncRabbitMQ(RabbitMQBase):
     """Asynchronous RabbitMQ client"""

+    def __init__(self, config: RabbitMQConfig):
+        super().__init__(config)
+        self._reconnect_lock: asyncio.Lock | None = None
+
     @property
     def is_connected(self) -> bool:
         return bool(self._connection and not self._connection.is_closed)
@@ -235,7 +240,17 @@
     @conn_retry("AsyncRabbitMQ", "Acquiring async connection")
     async def connect(self):
-        if self.is_connected:
+        if self.is_connected and self._channel and not self._channel.is_closed:
+            return
+        if (
+            self.is_connected
+            and self._connection
+            and (self._channel is None or self._channel.is_closed)
+        ):
+            self._channel = await self._connection.channel()
+            await self._channel.set_qos(prefetch_count=1)
+            await self.declare_infrastructure()
             return

         self._connection = await aio_pika.connect_robust(
@@ -291,24 +306,46 @@
             exchange, routing_key=queue.routing_key or queue.name
         )

-    @func_retry
-    async def publish_message(
+    @property
+    def _lock(self) -> asyncio.Lock:
+        if self._reconnect_lock is None:
+            self._reconnect_lock = asyncio.Lock()
+        return self._reconnect_lock
+
+    async def _ensure_channel(self) -> aio_pika.abc.AbstractChannel:
+        """Get a valid channel, reconnecting if the current one is stale.
+
+        Uses a lock to prevent concurrent reconnection attempts from racing.
+        """
+        if self.is_ready:
+            return self._channel  # type: ignore # is_ready guarantees non-None
+        async with self._lock:
+            # Double-check after acquiring lock
+            if self.is_ready:
+                return self._channel  # type: ignore
+            self._channel = None
+            await self.connect()
+            if self._channel is None:
+                raise RuntimeError("Channel should be established after connect")
+            return self._channel
+
+    async def _publish_once(
         self,
         routing_key: str,
         message: str,
         exchange: Optional[Exchange] = None,
         persistent: bool = True,
     ) -> None:
-        if not self.is_ready:
-            await self.connect()
-        if self._channel is None:
-            raise RuntimeError("Channel should be established after connect")
+        channel = await self._ensure_channel()

         if exchange:
-            exchange_obj = await self._channel.get_exchange(exchange.name)
+            exchange_obj = await channel.get_exchange(exchange.name)
         else:
-            exchange_obj = self._channel.default_exchange
+            exchange_obj = channel.default_exchange

         await exchange_obj.publish(
             aio_pika.Message(
@@ -322,9 +359,23 @@
             routing_key=routing_key,
         )

+    @func_retry
+    async def publish_message(
+        self,
+        routing_key: str,
+        message: str,
+        exchange: Optional[Exchange] = None,
+        persistent: bool = True,
+    ) -> None:
+        try:
+            await self._publish_once(routing_key, message, exchange, persistent)
+        except aio_pika.exceptions.ChannelInvalidStateError:
+            logger.warning(
+                "RabbitMQ channel invalid, forcing reconnect and retrying publish"
+            )
+            async with self._lock:
+                self._channel = None
+            await self._publish_once(routing_key, message, exchange, persistent)
+
     async def get_channel(self) -> aio_pika.abc.AbstractChannel:
-        if not self.is_ready:
-            await self.connect()
-        if self._channel is None:
-            raise RuntimeError("Channel should be established after connect")
-        return self._channel
+        return await self._ensure_channel()
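
_ensure_channel above is a textbook double-checked lock: a lock-free fast path, then a re-check under the lock so only one task rebuilds the channel. The same pattern in isolation (names here are illustrative, not from the PR):

import asyncio

class LazyResource:
    def __init__(self):
        self._resource = None
        self._lock = asyncio.Lock()

    async def get(self):
        if self._resource is not None:  # fast path, no lock
            return self._resource
        async with self._lock:
            if self._resource is None:  # re-check: another task may have won
                self._resource = await self._connect()
            return self._resource

    async def _connect(self):
        await asyncio.sleep(0)  # placeholder for real connection I/O
        return object()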

View File

@@ -18,6 +18,7 @@ from redis.asyncio.lock import Lock as AsyncRedisLock
 from backend.blocks.agent import AgentExecutorBlock
 from backend.blocks.io import AgentOutputBlock
+from backend.blocks.mcp.block import MCPToolBlock
 from backend.data import redis_client as redis
 from backend.data.block import (
     BlockInput,
@@ -229,6 +230,10 @@ async def execute_node(
         _input_data.nodes_input_masks = nodes_input_masks
         _input_data.user_id = user_id
         input_data = _input_data.model_dump()
+    elif isinstance(node_block, MCPToolBlock):
+        _mcp_data = MCPToolBlock.Input(**node.input_default)
+        _mcp_data.tool_arguments = input_data
+        input_data = _mcp_data.model_dump()
     data.inputs = input_data

     # Execute the node
@@ -265,8 +270,34 @@
     # Handle regular credentials fields
     for field_name, input_type in input_model.get_credentials_fields().items():
-        credentials_meta = input_type(**input_data[field_name])
-        credentials, lock = await creds_manager.acquire(user_id, credentials_meta.id)
+        field_value = input_data.get(field_name)
+        if not field_value or (
+            isinstance(field_value, dict) and not field_value.get("id")
+        ):
+            # No credentials configured — nullify so JSON schema validation
+            # doesn't choke on the empty default `{}`.
+            input_data[field_name] = None
+            continue  # Block runs without credentials
+
+        credentials_meta = input_type(**field_value)
+        # Write normalized values back so JSON schema validation also passes
+        # (model_validator may have fixed legacy formats like "ProviderName.MCP")
+        input_data[field_name] = credentials_meta.model_dump(mode="json")
+        try:
+            credentials, lock = await creds_manager.acquire(
+                user_id, credentials_meta.id
+            )
+        except ValueError:
+            # Credential was deleted or doesn't exist.
+            # If the field has a default, run without credentials.
+            if input_model.model_fields[field_name].default is not None:
+                log_metadata.warning(
+                    f"Credentials #{credentials_meta.id} not found, "
+                    "running without (field has default)"
+                )
+                input_data[field_name] = input_model.model_fields[field_name].default
+                continue
+            raise
         creds_locks.append(lock)
         extra_exec_kwargs[field_name] = credentials
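
The "credentials configured?" guard treats an absent field, an empty dict, and a dict without an "id" identically. A minimal demonstration of that truthiness check:

for field_value in (None, {}, {"id": ""}, {"id": "cred-123"}):
    missing = not field_value or (
        isinstance(field_value, dict) and not field_value.get("id")
    )
    print(field_value, "->", "run without creds" if missing else "acquire creds")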

View File

@@ -265,7 +265,13 @@ async def _validate_node_input_credentials(
         # Track if any credential field is missing for this node
         has_missing_credentials = False

+        # A credential field is optional if the node metadata says so, or if
+        # the block schema declares a default for the field.
+        required_fields = block.input_schema.get_required_fields()
+        is_creds_optional = node.credentials_optional
+
         for field_name, credentials_meta_type in credentials_fields.items():
+            field_is_optional = is_creds_optional or field_name not in required_fields
             try:
                 # Check nodes_input_masks first, then input_default
                 field_value = None
@@ -278,7 +284,7 @@
                 elif field_name in node.input_default:
                     # For optional credentials, don't use input_default - treat as missing
                     # This prevents stale credential IDs from failing validation
-                    if node.credentials_optional:
+                    if field_is_optional:
                         field_value = None
                     else:
                         field_value = node.input_default[field_name]
@@ -288,8 +294,8 @@
                     isinstance(field_value, dict) and not field_value.get("id")
                 ):
                     has_missing_credentials = True
-                    # If node has credentials_optional flag, mark for skipping instead of error
-                    if node.credentials_optional:
+                    # If credential field is optional, skip instead of error
+                    if field_is_optional:
                         continue  # Don't add error, will be marked for skip after loop
                     else:
                         credential_errors[node.id][
@@ -339,16 +345,16 @@
                 ] = "Invalid credentials: type/provider mismatch"
                 continue

-        # If node has optional credentials and any are missing, mark for skipping
-        # But only if there are no other errors for this node
+        # If node has optional credentials and any are missing, allow running without.
+        # The executor will pass credentials=None to the block's run().
         if (
             has_missing_credentials
-            and node.credentials_optional
+            and is_creds_optional
             and node.id not in credential_errors
         ):
-            nodes_to_skip.add(node.id)
             logger.info(
-                f"Node #{node.id} will be skipped: optional credentials not configured"
+                f"Node #{node.id}: optional credentials not configured, "
+                "running without"
             )

     return credential_errors, nodes_to_skip

View File

@@ -495,6 +495,7 @@ async def test_validate_node_input_credentials_returns_nodes_to_skip(
     mock_block.input_schema.get_credentials_fields.return_value = {
         "credentials": mock_credentials_field_type
     }
+    mock_block.input_schema.get_required_fields.return_value = {"credentials"}
     mock_node.block = mock_block

     # Create mock graph
@@ -508,8 +509,8 @@
         nodes_input_masks=None,
     )

-    # Node should be in nodes_to_skip, not in errors
-    assert mock_node.id in nodes_to_skip
+    # Node should NOT be in nodes_to_skip (runs without credentials) and not in errors
+    assert mock_node.id not in nodes_to_skip
     assert mock_node.id not in errors
@@ -535,6 +536,7 @@ async def test_validate_node_input_credentials_required_missing_creds_error(
     mock_block.input_schema.get_credentials_fields.return_value = {
         "credentials": mock_credentials_field_type
     }
+    mock_block.input_schema.get_required_fields.return_value = {"credentials"}
     mock_node.block = mock_block

     # Create mock graph

View File

@@ -22,6 +22,27 @@ from backend.util.settings import Settings
 settings = Settings()

+def _provider_matches(stored: str, expected: str) -> bool:
+    """Compare provider strings, handling Python 3.13 ``str(StrEnum)`` bug.
+
+    On Python 3.13, ``str(ProviderName.MCP)`` returns ``"ProviderName.MCP"``
+    instead of ``"mcp"``. OAuth states persisted with the buggy format need
+    to match when ``expected`` is the canonical value (e.g. ``"mcp"``).
+    """
+    if stored == expected:
+        return True
+    if stored.startswith("ProviderName."):
+        member = stored.removeprefix("ProviderName.")
+
+        from backend.integrations.providers import ProviderName
+
+        try:
+            return ProviderName[member].value == expected
+        except KeyError:
+            pass
+    return False
+
 # This is an overrride since ollama doesn't actually require an API key, but the creddential system enforces one be attached
 ollama_credentials = APIKeyCredentials(
     id="744fdc56-071a-4761-b5a5-0af0ce10a2b5",
@@ -389,7 +410,7 @@
         self, user_id: str, provider: str
     ) -> list[Credentials]:
         credentials = await self.get_all_creds(user_id)
-        return [c for c in credentials if c.provider == provider]
+        return [c for c in credentials if _provider_matches(c.provider, provider)]

     async def get_authorized_providers(self, user_id: str) -> list[str]:
         credentials = await self.get_all_creds(user_id)
@@ -485,17 +506,6 @@
         async with self.edit_user_integrations(user_id) as user_integrations:
             user_integrations.oauth_states.append(state)

-        async with await self.locked_user_integrations(user_id):
-            user_integrations = await self._get_user_integrations(user_id)
-            oauth_states = user_integrations.oauth_states
-            oauth_states.append(state)
-            user_integrations.oauth_states = oauth_states
-
-            await self.db_manager.update_user_integrations(
-                user_id=user_id, data=user_integrations
-            )
-
         return token, code_challenge

     def _generate_code_challenge(self) -> tuple[str, str]:
@@ -521,7 +531,7 @@
                 state
                 for state in oauth_states
                 if secrets.compare_digest(state.token, token)
-                and state.provider == provider
+                and _provider_matches(state.provider, provider)
                 and state.expires_at > now.timestamp()
             ),
             None,
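
Expected behavior of _provider_matches, derived from its docstring and body (an unknown enum member falls through to False):

assert _provider_matches("mcp", "mcp")               # exact match
assert _provider_matches("ProviderName.MCP", "mcp")  # legacy Python 3.13 format
assert not _provider_matches("github", "mcp")        # different provider
assert not _provider_matches("ProviderName.NOPE", "mcp")  # unknown member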

View File

@@ -137,7 +137,10 @@ class IntegrationCredentialsManager:
         self, user_id: str, credentials: OAuth2Credentials, lock: bool = True
     ) -> OAuth2Credentials:
         async with self._locked(user_id, credentials.id, "refresh"):
-            oauth_handler = await _get_provider_oauth_handler(credentials.provider)
+            if credentials.provider == ProviderName.MCP.value:
+                oauth_handler = _create_mcp_oauth_handler(credentials)
+            else:
+                oauth_handler = await _get_provider_oauth_handler(credentials.provider)
             if oauth_handler.needs_refresh(credentials):
                 logger.debug(
                     f"Refreshing '{credentials.provider}' "
@@ -236,3 +239,25 @@ async def _get_provider_oauth_handler(provider_name_str: str) -> "BaseOAuthHandler":
         client_secret=client_secret,
         redirect_uri=f"{frontend_base_url}/auth/integrations/oauth_callback",
     )

+def _create_mcp_oauth_handler(
+    credentials: OAuth2Credentials,
+) -> "BaseOAuthHandler":
+    """Create an MCPOAuthHandler from credential metadata for token refresh.
+
+    MCP OAuth handlers have dynamic endpoints discovered per-server, so they
+    can't be registered as singletons in HANDLERS_BY_NAME. Instead, the handler
+    is reconstructed from metadata stored on the credential during initial auth.
+    """
+    from backend.blocks.mcp.oauth import MCPOAuthHandler
+
+    meta = credentials.metadata or {}
+    return MCPOAuthHandler(
+        client_id=meta.get("mcp_client_id", ""),
+        client_secret=meta.get("mcp_client_secret", ""),
+        redirect_uri="",  # Not needed for token refresh
+        authorize_url="",  # Not needed for token refresh
+        token_url=meta.get("mcp_token_url", ""),
+        resource_url=meta.get("mcp_resource_url"),
+    )

View File

@@ -30,6 +30,7 @@ class ProviderName(str, Enum):
     IDEOGRAM = "ideogram"
     JINA = "jina"
     LLAMA_API = "llama_api"
+    MCP = "mcp"
     MEDIUM = "medium"
     MEM0 = "mem0"
     NOTION = "notion"

View File

@@ -50,6 +50,21 @@ async def _on_graph_activate(graph: "BaseGraph | GraphModel", user_id: str):
             if (
                 creds_meta := new_node.input_default.get(creds_field_name)
             ) and not await get_credentials(creds_meta["id"]):
+                # If the credential field is optional (has a default in the
+                # schema, or node metadata marks it optional), clear the stale
+                # reference instead of blocking the save.
+                creds_field_optional = (
+                    new_node.credentials_optional
+                    or creds_field_name not in block_input_schema.get_required_fields()
+                )
+                if creds_field_optional:
+                    new_node.input_default[creds_field_name] = {}
+                    logger.warning(
+                        f"Node #{new_node.id}: cleared stale optional "
+                        f"credentials #{creds_meta['id']} for "
+                        f"'{creds_field_name}'"
+                    )
+                    continue
                 raise ValueError(
                     f"Node #{new_node.id} input '{creds_field_name}' updated with "
                     f"non-existent credentials #{creds_meta['id']}"

View File

@@ -342,6 +342,14 @@ async def store_media_file(
         if not target_path.is_file():
             raise ValueError(f"Local file does not exist: {target_path}")

+        # Virus scan the local file before any further processing
+        local_content = target_path.read_bytes()
+        if len(local_content) > MAX_FILE_SIZE_BYTES:
+            raise ValueError(
+                f"File too large: {len(local_content)} bytes > {MAX_FILE_SIZE_BYTES} bytes"
+            )
+        await scan_content_safe(local_content, filename=sanitized_file)
+
         # Return based on requested format
         if return_format == "for_local_processing":
             # Use when processing files locally with tools like ffmpeg, MoviePy, PIL

View File

@@ -247,3 +247,100 @@ class TestFileCloudIntegration:
                 execution_context=make_test_context(graph_exec_id=graph_exec_id),
                 return_format="for_local_processing",
             )

@pytest.mark.asyncio
async def test_store_media_file_local_path_scanned(self):
"""Test that local file paths are scanned for viruses."""
graph_exec_id = "test-exec-123"
local_file = "test_video.mp4"
file_content = b"fake video content"
with patch(
"backend.util.file.get_cloud_storage_handler"
) as mock_handler_getter, patch(
"backend.util.file.scan_content_safe"
) as mock_scan, patch(
"backend.util.file.Path"
) as mock_path_class:
# Mock cloud storage handler - not a cloud path
mock_handler = MagicMock()
mock_handler.is_cloud_path.return_value = False
mock_handler_getter.return_value = mock_handler
# Mock virus scanner
mock_scan.return_value = None
# Mock file system operations
mock_base_path = MagicMock()
mock_target_path = MagicMock()
mock_resolved_path = MagicMock()
mock_path_class.return_value = mock_base_path
mock_base_path.mkdir = MagicMock()
mock_base_path.__truediv__ = MagicMock(return_value=mock_target_path)
mock_target_path.resolve.return_value = mock_resolved_path
mock_resolved_path.is_relative_to.return_value = True
mock_resolved_path.is_file.return_value = True
mock_resolved_path.read_bytes.return_value = file_content
mock_resolved_path.relative_to.return_value = Path(local_file)
mock_resolved_path.name = local_file
result = await store_media_file(
file=MediaFileType(local_file),
execution_context=make_test_context(graph_exec_id=graph_exec_id),
return_format="for_local_processing",
)
# Verify virus scan was called for local file
mock_scan.assert_called_once_with(file_content, filename=local_file)
# Result should be the relative path
assert str(result) == local_file
@pytest.mark.asyncio
async def test_store_media_file_local_path_virus_detected(self):
"""Test that infected local files raise VirusDetectedError."""
from backend.api.features.store.exceptions import VirusDetectedError
graph_exec_id = "test-exec-123"
local_file = "infected.exe"
file_content = b"malicious content"
with patch(
"backend.util.file.get_cloud_storage_handler"
) as mock_handler_getter, patch(
"backend.util.file.scan_content_safe"
) as mock_scan, patch(
"backend.util.file.Path"
) as mock_path_class:
# Mock cloud storage handler - not a cloud path
mock_handler = MagicMock()
mock_handler.is_cloud_path.return_value = False
mock_handler_getter.return_value = mock_handler
# Mock virus scanner to detect virus
mock_scan.side_effect = VirusDetectedError(
"EICAR-Test-File", "File rejected due to virus detection"
)
# Mock file system operations
mock_base_path = MagicMock()
mock_target_path = MagicMock()
mock_resolved_path = MagicMock()
mock_path_class.return_value = mock_base_path
mock_base_path.mkdir = MagicMock()
mock_base_path.__truediv__ = MagicMock(return_value=mock_target_path)
mock_target_path.resolve.return_value = mock_resolved_path
mock_resolved_path.is_relative_to.return_value = True
mock_resolved_path.is_file.return_value = True
mock_resolved_path.read_bytes.return_value = file_content
with pytest.raises(VirusDetectedError):
await store_media_file(
file=MediaFileType(local_file),
execution_context=make_test_context(graph_exec_id=graph_exec_id),
return_format="for_local_processing",
)

View File

@@ -101,7 +101,7 @@ class HostResolver(abc.AbstractResolver):
     def __init__(self, ssl_hostname: str, ip_addresses: list[str]):
         self.ssl_hostname = ssl_hostname
         self.ip_addresses = ip_addresses
-        self._default = aiohttp.AsyncResolver()
+        self._default = aiohttp.ThreadedResolver()

     async def resolve(self, host, port=0, family=socket.AF_INET):
         if host == self.ssl_hostname:
@@ -467,7 +467,7 @@ class Requests:
         resolver = HostResolver(ssl_hostname=hostname, ip_addresses=ip_addresses)
         ssl_context = ssl.create_default_context()
         connector = aiohttp.TCPConnector(resolver=resolver, ssl=ssl_context)
-        session_kwargs = {}
+        session_kwargs: dict = {}
         if connector:
             session_kwargs["connector"] = connector

View File

@@ -269,19 +269,20 @@ files = [

 [[package]]
 name = "anthropic"
-version = "0.59.0"
+version = "0.79.0"
 description = "The official Python library for the anthropic API"
 optional = false
-python-versions = ">=3.8"
+python-versions = ">=3.9"
 groups = ["main"]
 files = [
-    {file = "anthropic-0.59.0-py3-none-any.whl", hash = "sha256:cbc8b3dccef66ad6435c4fa1d317e5ebb092399a4b88b33a09dc4bf3944c3183"},
-    {file = "anthropic-0.59.0.tar.gz", hash = "sha256:d710d1ef0547ebbb64b03f219e44ba078e83fc83752b96a9b22e9726b523fd8f"},
+    {file = "anthropic-0.79.0-py3-none-any.whl", hash = "sha256:04cbd473b6bbda4ca2e41dd670fe2f829a911530f01697d0a1e37321eb75f3cf"},
+    {file = "anthropic-0.79.0.tar.gz", hash = "sha256:8707aafb3b1176ed6c13e2b1c9fb3efddce90d17aee5d8b83a86c70dcdcca871"},
 ]

 [package.dependencies]
 anyio = ">=3.5.0,<5"
 distro = ">=1.7.0,<2"
+docstring-parser = ">=0.15,<1"
 httpx = ">=0.25.0,<1"
 jiter = ">=0.4.0,<1"
 pydantic = ">=1.9.0,<3"
@@ -289,7 +290,7 @@ sniffio = "*"
 typing-extensions = ">=4.10,<5"

 [package.extras]
-aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.8)"]
+aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.9)"]
 bedrock = ["boto3 (>=1.28.57)", "botocore (>=1.31.57)"]
 vertex = ["google-auth[requests] (>=2,<3)"]
@@ -438,7 +439,7 @@ develop = true

 [package.dependencies]
 colorama = "^0.4.6"
-cryptography = "^45.0"
+cryptography = "^46.0"
 expiringdict = "^1.2.2"
 fastapi = "^0.128.0"
 google-cloud-logging = "^3.13.0"
@@ -970,62 +971,75 @@ pytz = ">2021.1"

 [[package]]
 name = "cryptography"
-version = "45.0.7"
+version = "46.0.4"
 description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
 optional = false
-python-versions = "!=3.9.0,!=3.9.1,>=3.7"
+python-versions = "!=3.9.0,!=3.9.1,>=3.8"
 groups = ["main"]
 files = [
-    {file = "cryptography-45.0.7-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:3be4f21c6245930688bd9e162829480de027f8bf962ede33d4f8ba7d67a00cee"},
-    {file = "cryptography-45.0.7-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:67285f8a611b0ebc0857ced2081e30302909f571a46bfa7a3cc0ad303fe015c6"},
-    {file = "cryptography-45.0.7-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:577470e39e60a6cd7780793202e63536026d9b8641de011ed9d8174da9ca5339"},
-    {file = "cryptography-45.0.7-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:4bd3e5c4b9682bc112d634f2c6ccc6736ed3635fc3319ac2bb11d768cc5a00d8"},
-    {file = "cryptography-45.0.7-cp311-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:465ccac9d70115cd4de7186e60cfe989de73f7bb23e8a7aa45af18f7412e75bf"},
-    {file = "cryptography-45.0.7-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:16ede8a4f7929b4b7ff3642eba2bf79aa1d71f24ab6ee443935c0d269b6bc513"},
-    {file = "cryptography-45.0.7-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:8978132287a9d3ad6b54fcd1e08548033cc09dc6aacacb6c004c73c3eb5d3ac3"},
-    {file = "cryptography-45.0.7-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:b6a0e535baec27b528cb07a119f321ac024592388c5681a5ced167ae98e9fff3"},
-    {file = "cryptography-45.0.7-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:a24ee598d10befaec178efdff6054bc4d7e883f615bfbcd08126a0f4931c83a6"},
-    {file = "cryptography-45.0.7-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:fa26fa54c0a9384c27fcdc905a2fb7d60ac6e47d14bc2692145f2b3b1e2cfdbd"},
-    {file = "cryptography-45.0.7-cp311-abi3-win32.whl", hash = "sha256:bef32a5e327bd8e5af915d3416ffefdbe65ed975b646b3805be81b23580b57b8"},
-    {file = "cryptography-45.0.7-cp311-abi3-win_amd64.whl", hash = "sha256:3808e6b2e5f0b46d981c24d79648e5c25c35e59902ea4391a0dcb3e667bf7443"},
-    {file = "cryptography-45.0.7-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:bfb4c801f65dd61cedfc61a83732327fafbac55a47282e6f26f073ca7a41c3b2"},
-    {file = "cryptography-45.0.7-cp37-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:81823935e2f8d476707e85a78a405953a03ef7b7b4f55f93f7c2d9680e5e0691"},
-    {file = "cryptography-45.0.7-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3994c809c17fc570c2af12c9b840d7cea85a9fd3e5c0e0491f4fa3c029216d59"},
-    {file = "cryptography-45.0.7-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:dad43797959a74103cb59c5dac71409f9c27d34c8a05921341fb64ea8ccb1dd4"},
-    {file = "cryptography-45.0.7-cp37-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:ce7a453385e4c4693985b4a4a3533e041558851eae061a58a5405363b098fcd3"},
-    {file = "cryptography-45.0.7-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:b04f85ac3a90c227b6e5890acb0edbaf3140938dbecf07bff618bf3638578cf1"},
-    {file = "cryptography-45.0.7-cp37-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:48c41a44ef8b8c2e80ca4527ee81daa4c527df3ecbc9423c41a420a9559d0e27"},
-    {file = "cryptography-45.0.7-cp37-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:f3df7b3d0f91b88b2106031fd995802a2e9ae13e02c36c1fc075b43f420f3a17"},
-    {file = "cryptography-45.0.7-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:dd342f085542f6eb894ca00ef70236ea46070c8a13824c6bde0dfdcd36065b9b"},
-    {file = "cryptography-45.0.7-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:1993a1bb7e4eccfb922b6cd414f072e08ff5816702a0bdb8941c247a6b1b287c"},
-    {file = "cryptography-45.0.7-cp37-abi3-win32.whl", hash = "sha256:18fcf70f243fe07252dcb1b268a687f2358025ce32f9f88028ca5c364b123ef5"},
-    {file = "cryptography-45.0.7-cp37-abi3-win_amd64.whl", hash = "sha256:7285a89df4900ed3bfaad5679b1e668cb4b38a8de1ccbfc84b05f34512da0a90"},
-    {file = "cryptography-45.0.7-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:de58755d723e86175756f463f2f0bddd45cc36fbd62601228a3f8761c9f58252"},
-    {file = "cryptography-45.0.7-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:a20e442e917889d1a6b3c570c9e3fa2fdc398c20868abcea268ea33c024c4083"},
-    {file = "cryptography-45.0.7-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:258e0dff86d1d891169b5af222d362468a9570e2532923088658aa866eb11130"},
-    {file = "cryptography-45.0.7-pp310-pypy310_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:d97cf502abe2ab9eff8bd5e4aca274da8d06dd3ef08b759a8d6143f4ad65d4b4"},
-    {file = "cryptography-45.0.7-pp310-pypy310_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:c987dad82e8c65ebc985f5dae5e74a3beda9d0a2a4daf8a1115f3772b59e5141"},
-    {file = "cryptography-45.0.7-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:c13b1e3afd29a5b3b2656257f14669ca8fa8d7956d509926f0b130b600b50ab7"},
-    {file = "cryptography-45.0.7-pp311-pypy311_pp73-macosx_10_9_x86_64.whl", hash = "sha256:4a862753b36620af6fc54209264f92c716367f2f0ff4624952276a6bbd18cbde"},
-    {file = "cryptography-45.0.7-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:06ce84dc14df0bf6ea84666f958e6080cdb6fe1231be2a51f3fc1267d9f3fb34"},
-    {file = "cryptography-45.0.7-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:d0c5c6bac22b177bf8da7435d9d27a6834ee130309749d162b26c3105c0795a9"},
-    {file = "cryptography-45.0.7-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:2f641b64acc00811da98df63df7d59fd4706c0df449da71cb7ac39a0732b40ae"},
-    {file = "cryptography-45.0.7-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:f5414a788ecc6ee6bc58560e85ca624258a55ca434884445440a810796ea0e0b"},
-    {file = "cryptography-45.0.7-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:1f3d56f73595376f4244646dd5c5870c14c196949807be39e79e7bd9bac3da63"},
-    {file = "cryptography-45.0.7.tar.gz", hash = "sha256:4b1654dfc64ea479c242508eb8c724044f1e964a47d1d1cacc5132292d851971"},
+    {file = "cryptography-46.0.4-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:281526e865ed4166009e235afadf3a4c4cba6056f99336a99efba65336fd5485"},
+    {file = "cryptography-46.0.4-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:5f14fba5bf6f4390d7ff8f086c566454bff0411f6d8aa7af79c88b6f9267aecc"},
+    {file = "cryptography-46.0.4-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:47bcd19517e6389132f76e2d5303ded6cf3f78903da2158a671be8de024f4cd0"},
+    {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:01df4f50f314fbe7009f54046e908d1754f19d0c6d3070df1e6268c5a4af09fa"},
+    {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:5aa3e463596b0087b3da0dbe2b2487e9fc261d25da85754e30e3b40637d61f81"},
+    {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:0a9ad24359fee86f131836a9ac3bffc9329e956624a2d379b613f8f8abaf5255"},
+    {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:dc1272e25ef673efe72f2096e92ae39dea1a1a450dd44918b15351f72c5a168e"},
+    {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:de0f5f4ec8711ebc555f54735d4c673fc34b65c44283895f1a08c2b49d2fd99c"},
+    {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:eeeb2e33d8dbcccc34d64651f00a98cb41b2dc69cef866771a5717e6734dfa32"},
+    {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:3d425eacbc9aceafd2cb429e42f4e5d5633c6f873f5e567077043ef1b9bbf616"},
+    {file = "cryptography-46.0.4-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:91627ebf691d1ea3976a031b61fb7bac1ccd745afa03602275dda443e11c8de0"},
+    {file = "cryptography-46.0.4-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:2d08bc22efd73e8854b0b7caff402d735b354862f1145d7be3b9c0f740fef6a0"},
+    {file = "cryptography-46.0.4-cp311-abi3-win32.whl", hash = "sha256:82a62483daf20b8134f6e92898da70d04d0ef9a75829d732ea1018678185f4f5"},
+    {file = "cryptography-46.0.4-cp311-abi3-win_amd64.whl", hash = "sha256:6225d3ebe26a55dbc8ead5ad1265c0403552a63336499564675b29eb3184c09b"},
+    {file = "cryptography-46.0.4-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:485e2b65d25ec0d901bca7bcae0f53b00133bf3173916d8e421f6fddde103908"},
+    {file = "cryptography-46.0.4-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:078e5f06bd2fa5aea5a324f2a09f914b1484f1d0c2a4d6a8a28c74e72f65f2da"},
+    {file = "cryptography-46.0.4-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:dce1e4f068f03008da7fa51cc7abc6ddc5e5de3e3d1550334eaf8393982a5829"},
+    {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:2067461c80271f422ee7bdbe79b9b4be54a5162e90345f86a23445a0cf3fd8a2"},
+    {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:c92010b58a51196a5f41c3795190203ac52edfd5dc3ff99149b4659eba9d2085"},
+    {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:829c2b12bbc5428ab02d6b7f7e9bbfd53e33efd6672d21341f2177470171ad8b"},
+    {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:62217ba44bf81b30abaeda1488686a04a702a261e26f87db51ff61d9d3510abd"},
+    {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:9c2da296c8d3415b93e6053f5a728649a87a48ce084a9aaf51d6e46c87c7f2d2"},
+    {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:9b34d8ba84454641a6bf4d6762d15847ecbd85c1316c0a7984e6e4e9f748ec2e"},
+    {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:df4a817fa7138dd0c96c8c8c20f04b8aaa1fac3bbf610913dcad8ea82e1bfd3f"},
+    {file = "cryptography-46.0.4-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:b1de0ebf7587f28f9190b9cb526e901bf448c9e6a99655d2b07fff60e8212a82"},
+    {file = "cryptography-46.0.4-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:9b4d17bc7bd7cdd98e3af40b441feaea4c68225e2eb2341026c84511ad246c0c"},
+    {file = "cryptography-46.0.4-cp314-cp314t-win32.whl", hash = "sha256:c411f16275b0dea722d76544a61d6421e2cc829ad76eec79280dbdc9ddf50061"},
+    {file = "cryptography-46.0.4-cp314-cp314t-win_amd64.whl", hash = "sha256:728fedc529efc1439eb6107b677f7f7558adab4553ef8669f0d02d42d7b959a7"},
+    {file = "cryptography-46.0.4-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:a9556ba711f7c23f77b151d5798f3ac44a13455cc68db7697a1096e6d0563cab"},
+    {file = "cryptography-46.0.4-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:8bf75b0259e87fa70bddc0b8b4078b76e7fd512fd9afae6c1193bcf440a4dbef"},
+    {file = "cryptography-46.0.4-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3c268a3490df22270955966ba236d6bc4a8f9b6e4ffddb78aac535f1a5ea471d"},
+    {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:812815182f6a0c1d49a37893a303b44eaac827d7f0d582cecfc81b6427f22973"},
+    {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:a90e43e3ef65e6dcf969dfe3bb40cbf5aef0d523dff95bfa24256be172a845f4"},
+    {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:a05177ff6296644ef2876fce50518dffb5bcdf903c85250974fc8bc85d54c0af"},
+    {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:daa392191f626d50f1b136c9b4cf08af69ca8279d110ea24f5c2700054d2e263"},
+    {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:e07ea39c5b048e085f15923511d8121e4a9dc45cee4e3b970ca4f0d338f23095"},
+    {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:d5a45ddc256f492ce42a4e35879c5e5528c09cd9ad12420828c972951d8e016b"},
+    {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:6bb5157bf6a350e5b28aee23beb2d84ae6f5be390b2f8ee7ea179cda077e1019"},
+    {file = "cryptography-46.0.4-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:dd5aba870a2c40f87a3af043e0dee7d9eb02d4aff88a797b48f2b43eff8c3ab4"},
+    {file = "cryptography-46.0.4-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:93d8291da8d71024379ab2cb0b5c57915300155ad42e07f76bea6ad838d7e59b"},
+    {file = "cryptography-46.0.4-cp38-abi3-win32.whl", hash = "sha256:0563655cb3c6d05fb2afe693340bc050c30f9f34e15763361cf08e94749401fc"},
+    {file = "cryptography-46.0.4-cp38-abi3-win_amd64.whl", hash = "sha256:fa0900b9ef9c49728887d1576fd8d9e7e3ea872fa9b25ef9b64888adc434e976"},
+    {file = "cryptography-46.0.4-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:766330cce7416c92b5e90c3bb71b1b79521760cdcfc3a6a1a182d4c9fab23d2b"},
+    {file = "cryptography-46.0.4-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c236a44acfb610e70f6b3e1c3ca20ff24459659231ef2f8c48e879e2d32b73da"},
+    {file = "cryptography-46.0.4-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:8a15fb869670efa8f83cbffbc8753c1abf236883225aed74cd179b720ac9ec80"},
+    {file = "cryptography-46.0.4-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:fdc3daab53b212472f1524d070735b2f0c214239df131903bae1d598016fa822"},
+    {file = "cryptography-46.0.4-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:44cc0675b27cadb71bdbb96099cca1fa051cd11d2ade09e5cd3a2edb929ed947"},
+    {file = "cryptography-46.0.4-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:be8c01a7d5a55f9a47d1888162b76c8f49d62b234d88f0ff91a9fbebe32ffbc3"},
+    {file = "cryptography-46.0.4.tar.gz", hash = "sha256:bfd019f60f8abc2ed1b9be4ddc21cfef059c841d86d710bb69909a688cbb8f59"},
 ]

 [package.dependencies]
-cffi = {version = ">=1.14", markers = "platform_python_implementation != \"PyPy\""}
+cffi = {version = ">=2.0.0", markers = "python_full_version >= \"3.9.0\" and platform_python_implementation != \"PyPy\""}
+typing-extensions = {version = ">=4.13.2", markers = "python_full_version < \"3.11.0\""}

 [package.extras]
-docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs ; python_full_version >= \"3.8.0\"", "sphinx-rtd-theme (>=3.0.0) ; python_full_version >= \"3.8.0\""]
+docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs", "sphinx-rtd-theme (>=3.0.0)"]
 docstest = ["pyenchant (>=3)", "readme-renderer (>=30.0)", "sphinxcontrib-spelling (>=7.3.1)"]
-nox = ["nox (>=2024.4.15)", "nox[uv] (>=2024.3.2) ; python_full_version >= \"3.8.0\""]
-pep8test = ["check-sdist ; python_full_version >= \"3.8.0\"", "click (>=8.0.1)", "mypy (>=1.4)", "ruff (>=0.3.6)"]
+nox = ["nox[uv] (>=2024.4.15)"]
+pep8test = ["check-sdist", "click (>=8.0.1)", "mypy (>=1.14)", "ruff (>=0.11.11)"]
 sdist = ["build (>=1.0.0)"]
 ssh = ["bcrypt (>=3.1.5)"]
-test = ["certifi (>=2024)", "cryptography-vectors (==45.0.7)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
+test = ["certifi (>=2024)", "cryptography-vectors (==46.0.4)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
 test-randomorder = ["pytest-randomly"]

 [[package]]
@@ -1135,6 +1149,23 @@ idna = ["idna (>=3.10)"]
 trio = ["trio (>=0.30)"]
 wmi = ["wmi (>=1.5.1) ; platform_system == \"Windows\""]

+[[package]]
+name = "docstring-parser"
+version = "0.17.0"
+description = "Parse Python docstrings in reST, Google and Numpydoc format"
+optional = false
+python-versions = ">=3.8"
+groups = ["main"]
+files = [
+    {file = "docstring_parser-0.17.0-py3-none-any.whl", hash = "sha256:cf2569abd23dce8099b300f9b4fa8191e9582dda731fd533daf54c4551658708"},
+    {file = "docstring_parser-0.17.0.tar.gz", hash = "sha256:583de4a309722b3315439bb31d64ba3eebada841f2e2cee23b99df001434c912"},
+]
+
+[package.extras]
+dev = ["pre-commit (>=2.16.0) ; python_version >= \"3.9\"", "pydoctor (>=25.4.0)", "pytest"]
+docs = ["pydoctor (>=25.4.0)"]
+test = ["pytest"]
+
 [[package]]
 name = "dulwich"
 version = "0.22.8"
@@ -1351,14 +1382,14 @@ tzdata = "*"

 [[package]]
 name = "fastapi"
-version = "0.128.3"
+version = "0.128.5"
 description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
 optional = false
 python-versions = ">=3.9"
 groups = ["main"]
 files = [
-    {file = "fastapi-0.128.3-py3-none-any.whl", hash = "sha256:c8cdf7c2182c9a06bf9cfa3329819913c189dc86389b90d5709892053582db29"},
-    {file = "fastapi-0.128.3.tar.gz", hash = "sha256:ed99383fd96063447597d5aa2a9ec3973be198e3b4fc10c55f15c62efdb21c60"},
+    {file = "fastapi-0.128.5-py3-none-any.whl", hash = "sha256:bceec0de8aa6564599c5bcc0593b0d287703562c848271fca8546fd2c87bf4dd"},
+    {file = "fastapi-0.128.5.tar.gz", hash = "sha256:a7173579fc162d6471e3c6fbd9a4b7610c7a3b367bcacf6c4f90d5d022cab711"},
 ]

 [package.dependencies]
@@ -3932,14 +3963,14 @@ signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]] [[package]]
name = "ollama" name = "ollama"
version = "0.5.4" version = "0.6.1"
description = "The official Python client for Ollama." description = "The official Python client for Ollama."
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
groups = ["main"] groups = ["main"]
files = [ files = [
{file = "ollama-0.5.4-py3-none-any.whl", hash = "sha256:6374c9bb4f2a371b3583c09786112ba85b006516745689c172a7e28af4d4d1a2"}, {file = "ollama-0.6.1-py3-none-any.whl", hash = "sha256:fc4c984b345735c5486faeee67d8a265214a31cbb828167782dc642ce0a2bf8c"},
{file = "ollama-0.5.4.tar.gz", hash = "sha256:75857505a5d42e5e58114a1b78cc8c24596d8866863359d8a2329946a9b6d6f3"}, {file = "ollama-0.6.1.tar.gz", hash = "sha256:478c67546836430034b415ed64fa890fd3d1ff91781a9d548b3325274e69d7c6"},
] ]
[package.dependencies] [package.dependencies]
@@ -4609,20 +4640,20 @@ testing = ["coverage", "pytest", "pytest-benchmark"]
[[package]] [[package]]
name = "poethepoet" name = "poethepoet"
version = "0.37.0" version = "0.41.0"
description = "A task runner that works well with poetry and uv." description = "A task runner that works well with poetry and uv."
optional = false optional = false
python-versions = ">=3.9" python-versions = ">=3.10"
groups = ["dev"] groups = ["dev"]
files = [ files = [
{file = "poethepoet-0.37.0-py3-none-any.whl", hash = "sha256:861790276315abcc8df1b4bd60e28c3d48a06db273edd3092f3c94e1a46e5e22"}, {file = "poethepoet-0.41.0-py3-none-any.whl", hash = "sha256:4bab9fd8271664c5d21407e8f12827daeb6aa484dc6cc7620f0c3b4e62b42ee4"},
{file = "poethepoet-0.37.0.tar.gz", hash = "sha256:73edf458707c674a079baa46802e21455bda3a7f82a408e58c31b9f4fe8e933d"}, {file = "poethepoet-0.41.0.tar.gz", hash = "sha256:dcaad621dc061f6a90b17d091bebb9ca043d67bfe9bd6aa4185aea3ebf7ff3e6"},
] ]
[package.dependencies] [package.dependencies]
pastel = ">=0.2.1,<0.3.0" pastel = ">=0.2.1,<0.3.0"
pyyaml = ">=6.0.2,<7.0" pyyaml = ">=6.0.3,<7.0"
tomli = {version = ">=1.2.2", markers = "python_version < \"3.11\""} tomli = {version = ">=1.3.0", markers = "python_version < \"3.11\""}
[package.extras] [package.extras]
poetry-plugin = ["poetry (>=1.2.0,<3.0.0) ; python_version < \"4.0\""] poetry-plugin = ["poetry (>=1.2.0,<3.0.0) ; python_version < \"4.0\""]
@@ -4697,14 +4728,14 @@ tests = ["coverage-conditional-plugin (>=0.9.0)", "portalocker[redis]", "pytest
[[package]] [[package]]
name = "postgrest" name = "postgrest"
version = "2.27.2" version = "2.27.3"
description = "PostgREST client for Python. This library provides an ORM interface to PostgREST." description = "PostgREST client for Python. This library provides an ORM interface to PostgREST."
optional = false optional = false
python-versions = ">=3.9" python-versions = ">=3.9"
groups = ["main"] groups = ["main"]
files = [ files = [
{file = "postgrest-2.27.2-py3-none-any.whl", hash = "sha256:1666fef3de05ca097a314433dd5ae2f2d71c613cb7b233d0f468c4ffe37277da"}, {file = "postgrest-2.27.3-py3-none-any.whl", hash = "sha256:ed79123af7127edd78d538bfe8351d277e45b1a36994a4dbf57ae27dde87a7b7"},
{file = "postgrest-2.27.2.tar.gz", hash = "sha256:55407d530b5af3d64e883a71fec1f345d369958f723ce4a8ab0b7d169e313242"}, {file = "postgrest-2.27.3.tar.gz", hash = "sha256:c2e2679addfc8eaab23197bad7ddaee6cbb4cbe8c483ebd2d2e5219543037cc3"},
] ]
[package.dependencies] [package.dependencies]
@@ -4862,17 +4893,19 @@ tqdm = "*"
[[package]] [[package]]
name = "prometheus-client" name = "prometheus-client"
version = "0.22.1" version = "0.24.1"
description = "Python client for the Prometheus monitoring system." description = "Python client for the Prometheus monitoring system."
optional = false optional = false
python-versions = ">=3.9" python-versions = ">=3.9"
groups = ["main"] groups = ["main"]
files = [ files = [
{file = "prometheus_client-0.22.1-py3-none-any.whl", hash = "sha256:cca895342e308174341b2cbf99a56bef291fbc0ef7b9e5412a0f26d653ba7094"}, {file = "prometheus_client-0.24.1-py3-none-any.whl", hash = "sha256:150db128af71a5c2482b36e588fc8a6b95e498750da4b17065947c16070f4055"},
{file = "prometheus_client-0.22.1.tar.gz", hash = "sha256:190f1331e783cf21eb60bca559354e0a4d4378facecf78f5428c39b675d20d28"}, {file = "prometheus_client-0.24.1.tar.gz", hash = "sha256:7e0ced7fbbd40f7b84962d5d2ab6f17ef88a72504dcf7c0b40737b43b2a461f9"},
] ]
[package.extras] [package.extras]
aiohttp = ["aiohttp"]
django = ["django"]
twisted = ["twisted"] twisted = ["twisted"]
[[package]] [[package]]
@@ -5886,18 +5919,18 @@ pytest = ">=3.0.0"
[[package]] [[package]]
name = "pytest-watcher" name = "pytest-watcher"
version = "0.4.3" version = "0.6.3"
description = "Automatically rerun your tests on file modifications" description = "Automatically rerun your tests on file modifications"
optional = false optional = false
python-versions = "<4.0.0,>=3.7.0" python-versions = ">=3.9"
groups = ["dev"] groups = ["dev"]
files = [ files = [
{file = "pytest_watcher-0.4.3-py3-none-any.whl", hash = "sha256:d59b1e1396f33a65ea4949b713d6884637755d641646960056a90b267c3460f9"}, {file = "pytest_watcher-0.6.3-py3-none-any.whl", hash = "sha256:83e7748c933087e8276edb6078663e6afa9926434b4fd8b85cf6b32b1d5bec89"},
{file = "pytest_watcher-0.4.3.tar.gz", hash = "sha256:0cb0e4661648c8c0ff2b2d25efa5a8e421784b9e4c60fcecbf9b7c30b2d731b3"}, {file = "pytest_watcher-0.6.3.tar.gz", hash = "sha256:842dc904264df0ad2d5264153a66bb452fccfa46598cd6e0a5ef1d19afed9b13"},
] ]
[package.dependencies] [package.dependencies]
tomli = {version = ">=2.0.1,<3.0.0", markers = "python_version < \"3.11\""} tomli = {version = ">=2.0.1", markers = "python_version < \"3.11\""}
watchdog = ">=2.0.0" watchdog = ">=2.0.0"
[[package]] [[package]]
@@ -5932,14 +5965,14 @@ cli = ["click (>=5.0)"]
[[package]] [[package]]
name = "python-multipart" name = "python-multipart"
version = "0.0.20" version = "0.0.22"
description = "A streaming multipart parser for Python" description = "A streaming multipart parser for Python"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.10"
groups = ["main"] groups = ["main"]
files = [ files = [
{file = "python_multipart-0.0.20-py3-none-any.whl", hash = "sha256:8a62d3a8335e06589fe01f2a3e178cdcc632f3fbe0d492ad9ee0ec35aab1f104"}, {file = "python_multipart-0.0.22-py3-none-any.whl", hash = "sha256:2b2cd894c83d21bf49d702499531c7bafd057d730c201782048f7945d82de155"},
{file = "python_multipart-0.0.20.tar.gz", hash = "sha256:8dd0cab45b8e23064ae09147625994d090fa46f5b0d1e13af944c331a7fa9d13"}, {file = "python_multipart-0.0.22.tar.gz", hash = "sha256:7340bef99a7e0032613f56dc36027b959fd3b30a787ed62d310e951f7c3a3a58"},
] ]
[[package]] [[package]]
@@ -6227,14 +6260,14 @@ all = ["numpy"]
[[package]] [[package]]
name = "realtime" name = "realtime"
version = "2.27.2" version = "2.27.3"
description = "" description = ""
optional = false optional = false
python-versions = ">=3.9" python-versions = ">=3.9"
groups = ["main"] groups = ["main"]
files = [ files = [
{file = "realtime-2.27.2-py3-none-any.whl", hash = "sha256:34a9cbb26a274e707e8fc9e3ee0a66de944beac0fe604dc336d1e985db2c830f"}, {file = "realtime-2.27.3-py3-none-any.whl", hash = "sha256:f571115f86988e33c41c895cb3fba2eaa1b693aeaede3617288f44274ca90f43"},
{file = "realtime-2.27.2.tar.gz", hash = "sha256:b960a90294d2cea1b3f1275ecb89204304728e08fff1c393cc1b3150739556b3"}, {file = "realtime-2.27.3.tar.gz", hash = "sha256:02b082243107656a5ef3fb63e8e2ab4c40bc199abb45adb8a42ed63f089a1041"},
] ]
[package.dependencies] [package.dependencies]
@@ -6639,31 +6672,30 @@ pyasn1 = ">=0.1.3"
[[package]] [[package]]
name = "ruff" name = "ruff"
version = "0.14.14" version = "0.15.0"
description = "An extremely fast Python linter and code formatter, written in Rust." description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false optional = false
python-versions = ">=3.7" python-versions = ">=3.7"
groups = ["dev"] groups = ["dev"]
files = [ files = [
{file = "ruff-0.14.14-py3-none-linux_armv6l.whl", hash = "sha256:7cfe36b56e8489dee8fbc777c61959f60ec0f1f11817e8f2415f429552846aed"}, {file = "ruff-0.15.0-py3-none-linux_armv6l.whl", hash = "sha256:aac4ebaa612a82b23d45964586f24ae9bc23ca101919f5590bdb368d74ad5455"},
{file = "ruff-0.14.14-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:6006a0082336e7920b9573ef8a7f52eec837add1265cc74e04ea8a4368cd704c"}, {file = "ruff-0.15.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:dcd4be7cc75cfbbca24a98d04d0b9b36a270d0833241f776b788d59f4142b14d"},
{file = "ruff-0.14.14-py3-none-macosx_11_0_arm64.whl", hash = "sha256:026c1d25996818f0bf498636686199d9bd0d9d6341c9c2c3b62e2a0198b758de"}, {file = "ruff-0.15.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d747e3319b2bce179c7c1eaad3d884dc0a199b5f4d5187620530adf9105268ce"},
{file = "ruff-0.14.14-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f666445819d31210b71e0a6d1c01e24447a20b85458eea25a25fe8142210ae0e"}, {file = "ruff-0.15.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:650bd9c56ae03102c51a5e4b554d74d825ff3abe4db22b90fd32d816c2e90621"},
{file = "ruff-0.14.14-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:3c0f18b922c6d2ff9a5e6c3ee16259adc513ca775bcf82c67ebab7cbd9da5bc8"}, {file = "ruff-0.15.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a6664b7eac559e3048223a2da77769c2f92b43a6dfd4720cef42654299a599c9"},
{file = "ruff-0.14.14-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1629e67489c2dea43e8658c3dba659edbfd87361624b4040d1df04c9740ae906"}, {file = "ruff-0.15.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f811f97b0f092b35320d1556f3353bf238763420ade5d9e62ebd2b73f2ff179"},
{file = "ruff-0.14.14-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:27493a2131ea0f899057d49d303e4292b2cae2bb57253c1ed1f256fbcd1da480"}, {file = "ruff-0.15.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:761ec0a66680fab6454236635a39abaf14198818c8cdf691e036f4bc0f406b2d"},
{file = "ruff-0.14.14-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:01ff589aab3f5b539e35db38425da31a57521efd1e4ad1ae08fc34dbe30bd7df"}, {file = "ruff-0.15.0-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:940f11c2604d317e797b289f4f9f3fa5555ffe4fb574b55ed006c3d9b6f0eb78"},
{file = "ruff-0.14.14-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1cc12d74eef0f29f51775f5b755913eb523546b88e2d733e1d701fe65144e89b"}, {file = "ruff-0.15.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bcbca3d40558789126da91d7ef9a7c87772ee107033db7191edefa34e2c7f1b4"},
{file = "ruff-0.14.14-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb8481604b7a9e75eff53772496201690ce2687067e038b3cc31aaf16aa0b974"}, {file = "ruff-0.15.0-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:9a121a96db1d75fa3eb39c4539e607f628920dd72ff1f7c5ee4f1b768ac62d6e"},
{file = "ruff-0.14.14-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:14649acb1cf7b5d2d283ebd2f58d56b75836ed8c6f329664fa91cdea19e76e66"}, {file = "ruff-0.15.0-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:5298d518e493061f2eabd4abd067c7e4fb89e2f63291c94332e35631c07c3662"},
{file = "ruff-0.14.14-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:e8058d2145566510790eab4e2fad186002e288dec5e0d343a92fe7b0bc1b3e13"}, {file = "ruff-0.15.0-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:afb6e603d6375ff0d6b0cee563fa21ab570fd15e65c852cb24922cef25050cf1"},
{file = "ruff-0.14.14-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:e651e977a79e4c758eb807f0481d673a67ffe53cfa92209781dfa3a996cf8412"}, {file = "ruff-0.15.0-py3-none-musllinux_1_2_i686.whl", hash = "sha256:77e515f6b15f828b94dc17d2b4ace334c9ddb7d9468c54b2f9ed2b9c1593ef16"},
{file = "ruff-0.14.14-py3-none-musllinux_1_2_i686.whl", hash = "sha256:cc8b22da8d9d6fdd844a68ae937e2a0adf9b16514e9a97cc60355e2d4b219fc3"}, {file = "ruff-0.15.0-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:6f6e80850a01eb13b3e42ee0ebdf6e4497151b48c35051aab51c101266d187a3"},
{file = "ruff-0.14.14-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:16bc890fb4cc9781bb05beb5ab4cd51be9e7cb376bf1dd3580512b24eb3fda2b"}, {file = "ruff-0.15.0-py3-none-win32.whl", hash = "sha256:238a717ef803e501b6d51e0bdd0d2c6e8513fe9eec14002445134d3907cd46c3"},
{file = "ruff-0.14.14-py3-none-win32.whl", hash = "sha256:b530c191970b143375b6a68e6f743800b2b786bbcf03a7965b06c4bf04568167"}, {file = "ruff-0.15.0-py3-none-win_amd64.whl", hash = "sha256:dd5e4d3301dc01de614da3cdffc33d4b1b96fb89e45721f1598e5532ccf78b18"},
{file = "ruff-0.14.14-py3-none-win_amd64.whl", hash = "sha256:3dde1435e6b6fe5b66506c1dff67a421d0b7f6488d466f651c07f4cab3bf20fd"}, {file = "ruff-0.15.0-py3-none-win_arm64.whl", hash = "sha256:c480d632cc0ca3f0727acac8b7d053542d9e114a462a145d0b00e7cd658c515a"},
{file = "ruff-0.14.14-py3-none-win_arm64.whl", hash = "sha256:56e6981a98b13a32236a72a8da421d7839221fa308b223b9283312312e5ac76c"}, {file = "ruff-0.15.0.tar.gz", hash = "sha256:6bdea47cdbea30d40f8f8d7d69c0854ba7c15420ec75a26f463290949d7f7e9a"},
{file = "ruff-0.14.14.tar.gz", hash = "sha256:2d0f819c9a90205f3a867dbbd0be083bee9912e170fd7d9704cc8ae45824896b"},
] ]
[[package]] [[package]]
@@ -6992,14 +7024,14 @@ full = ["httpx (>=0.27.0,<0.29.0)", "itsdangerous", "jinja2", "python-multipart
[[package]] [[package]]
name = "storage3" name = "storage3"
version = "2.27.2" version = "2.27.3"
description = "Supabase Storage client for Python." description = "Supabase Storage client for Python."
optional = false optional = false
python-versions = ">=3.9" python-versions = ">=3.9"
groups = ["main"] groups = ["main"]
files = [ files = [
{file = "storage3-2.27.2-py3-none-any.whl", hash = "sha256:e6f16e7a260729e7b1f46e9bf61746805a02e30f5e419ee1291007c432e3ec63"}, {file = "storage3-2.27.3-py3-none-any.whl", hash = "sha256:11a05b7da84bccabeeea12d940bca3760cf63fe6ca441868677335cfe4fdfbe0"},
{file = "storage3-2.27.2.tar.gz", hash = "sha256:cb4807b7f86b4bb1272ac6fdd2f3cfd8ba577297046fa5f88557425200275af5"}, {file = "storage3-2.27.3.tar.gz", hash = "sha256:dc1a4a010cf36d5482c5cb6c1c28fc5f00e23284342b89e4ae43b5eae8501ddb"},
] ]
[package.dependencies] [package.dependencies]
@@ -7059,35 +7091,35 @@ typing-extensions = {version = ">=4.5.0", markers = "python_version >= \"3.7\""}
[[package]] [[package]]
name = "supabase" name = "supabase"
version = "2.27.2" version = "2.27.3"
description = "Supabase client for Python." description = "Supabase client for Python."
optional = false optional = false
python-versions = ">=3.9" python-versions = ">=3.9"
groups = ["main"] groups = ["main"]
files = [ files = [
{file = "supabase-2.27.2-py3-none-any.whl", hash = "sha256:d4dce00b3a418ee578017ec577c0e5be47a9a636355009c76f20ed2faa15bc54"}, {file = "supabase-2.27.3-py3-none-any.whl", hash = "sha256:082a74642fcf9954693f1ce8c251baf23e4bda26ffdbc8dcd4c99c82e60d69ff"},
{file = "supabase-2.27.2.tar.gz", hash = "sha256:2aed40e4f3454438822442a1e94a47be6694c2c70392e7ae99b51a226d4293f7"}, {file = "supabase-2.27.3.tar.gz", hash = "sha256:5e5a348232ac4315c1032ddd687278f0b982465471f0cbb52bca7e6a66495ff3"},
] ]
[package.dependencies] [package.dependencies]
httpx = ">=0.26,<0.29" httpx = ">=0.26,<0.29"
postgrest = "2.27.2" postgrest = "2.27.3"
realtime = "2.27.2" realtime = "2.27.3"
storage3 = "2.27.2" storage3 = "2.27.3"
supabase-auth = "2.27.2" supabase-auth = "2.27.3"
supabase-functions = "2.27.2" supabase-functions = "2.27.3"
yarl = ">=1.22.0" yarl = ">=1.22.0"
[[package]] [[package]]
name = "supabase-auth" name = "supabase-auth"
version = "2.27.2" version = "2.27.3"
description = "Python Client Library for Supabase Auth" description = "Python Client Library for Supabase Auth"
optional = false optional = false
python-versions = ">=3.9" python-versions = ">=3.9"
groups = ["main"] groups = ["main"]
files = [ files = [
{file = "supabase_auth-2.27.2-py3-none-any.whl", hash = "sha256:78ec25b11314d0a9527a7205f3b1c72560dccdc11b38392f80297ef98664ee91"}, {file = "supabase_auth-2.27.3-py3-none-any.whl", hash = "sha256:82a4262eaad85383319d394dab0eea11fcf3ebd774062aef8ea3874ae2f02579"},
{file = "supabase_auth-2.27.2.tar.gz", hash = "sha256:0f5bcc79b3677cb42e9d321f3c559070cfa40d6a29a67672cc8382fb7dc2fe97"}, {file = "supabase_auth-2.27.3.tar.gz", hash = "sha256:39894d4bc60b6f23b5cff4d0d7d4c1659e5d69563cadf014d4896f780ca8ca78"},
] ]
[package.dependencies] [package.dependencies]
@@ -7097,14 +7129,14 @@ pyjwt = {version = ">=2.10.1", extras = ["crypto"]}
[[package]] [[package]]
name = "supabase-functions" name = "supabase-functions"
version = "2.27.2" version = "2.27.3"
description = "Library for Supabase Functions" description = "Library for Supabase Functions"
optional = false optional = false
python-versions = ">=3.9" python-versions = ">=3.9"
groups = ["main"] groups = ["main"]
files = [ files = [
{file = "supabase_functions-2.27.2-py3-none-any.whl", hash = "sha256:db480efc669d0bca07605b9b6f167312af43121adcc842a111f79bea416ef754"}, {file = "supabase_functions-2.27.3-py3-none-any.whl", hash = "sha256:9d14a931d49ede1c6cf5fbfceb11c44061535ba1c3f310f15384964d86a83d9e"},
{file = "supabase_functions-2.27.2.tar.gz", hash = "sha256:d0c8266207a94371cb3fd35ad3c7f025b78a97cf026861e04ccd35ac1775f80b"}, {file = "supabase_functions-2.27.3.tar.gz", hash = "sha256:e954f1646da8ca6e7e16accef58d0884a5f97b25956ee98e7d4927a210ed92f9"},
] ]
[package.dependencies] [package.dependencies]
@@ -7114,14 +7146,14 @@ yarl = ">=1.20.1"
[[package]] [[package]]
name = "tenacity" name = "tenacity"
version = "9.1.3" version = "9.1.4"
description = "Retry code until it succeeds" description = "Retry code until it succeeds"
optional = false optional = false
python-versions = ">=3.10" python-versions = ">=3.10"
groups = ["main"] groups = ["main"]
files = [ files = [
{file = "tenacity-9.1.3-py3-none-any.whl", hash = "sha256:51171cfc6b8a7826551e2f029426b10a6af189c5ac6986adcd7eb36d42f17954"}, {file = "tenacity-9.1.4-py3-none-any.whl", hash = "sha256:6095a360c919085f28c6527de529e76a06ad89b23659fa881ae0649b867a9d55"},
{file = "tenacity-9.1.3.tar.gz", hash = "sha256:a6724c947aa717087e2531f883bde5c9188f603f6669a9b8d54eb998e604c12a"}, {file = "tenacity-9.1.4.tar.gz", hash = "sha256:adb31d4c263f2bd041081ab33b498309a57c77f9acf2db65aadf0898179cf93a"},
] ]
[package.extras] [package.extras]
@@ -7130,43 +7162,69 @@ test = ["pytest", "tornado (>=4.5)", "typeguard"]
[[package]] [[package]]
name = "tiktoken" name = "tiktoken"
version = "0.9.0" version = "0.12.0"
description = "tiktoken is a fast BPE tokeniser for use with OpenAI's models" description = "tiktoken is a fast BPE tokeniser for use with OpenAI's models"
optional = false optional = false
python-versions = ">=3.9" python-versions = ">=3.9"
groups = ["main"] groups = ["main"]
files = [ files = [
{file = "tiktoken-0.9.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:586c16358138b96ea804c034b8acf3f5d3f0258bd2bc3b0227af4af5d622e382"}, {file = "tiktoken-0.12.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:3de02f5a491cfd179aec916eddb70331814bd6bf764075d39e21d5862e533970"},
{file = "tiktoken-0.9.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d9c59ccc528c6c5dd51820b3474402f69d9a9e1d656226848ad68a8d5b2e5108"}, {file = "tiktoken-0.12.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b6cfb6d9b7b54d20af21a912bfe63a2727d9cfa8fbda642fd8322c70340aad16"},
{file = "tiktoken-0.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f0968d5beeafbca2a72c595e8385a1a1f8af58feaebb02b227229b69ca5357fd"}, {file = "tiktoken-0.12.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:cde24cdb1b8a08368f709124f15b36ab5524aac5fa830cc3fdce9c03d4fb8030"},
{file = "tiktoken-0.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:92a5fb085a6a3b7350b8fc838baf493317ca0e17bd95e8642f95fc69ecfed1de"}, {file = "tiktoken-0.12.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:6de0da39f605992649b9cfa6f84071e3f9ef2cec458d08c5feb1b6f0ff62e134"},
{file = "tiktoken-0.9.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:15a2752dea63d93b0332fb0ddb05dd909371ededa145fe6a3242f46724fa7990"}, {file = "tiktoken-0.12.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:6faa0534e0eefbcafaccb75927a4a380463a2eaa7e26000f0173b920e98b720a"},
{file = "tiktoken-0.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:26113fec3bd7a352e4b33dbaf1bd8948de2507e30bd95a44e2b1156647bc01b4"}, {file = "tiktoken-0.12.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:82991e04fc860afb933efb63957affc7ad54f83e2216fe7d319007dab1ba5892"},
{file = "tiktoken-0.9.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:f32cc56168eac4851109e9b5d327637f15fd662aa30dd79f964b7c39fbadd26e"}, {file = "tiktoken-0.12.0-cp310-cp310-win_amd64.whl", hash = "sha256:6fb2995b487c2e31acf0a9e17647e3b242235a20832642bb7a9d1a181c0c1bb1"},
{file = "tiktoken-0.9.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:45556bc41241e5294063508caf901bf92ba52d8ef9222023f83d2483a3055348"}, {file = "tiktoken-0.12.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:6e227c7f96925003487c33b1b32265fad2fbcec2b7cf4817afb76d416f40f6bb"},
{file = "tiktoken-0.9.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03935988a91d6d3216e2ec7c645afbb3d870b37bcb67ada1943ec48678e7ee33"}, {file = "tiktoken-0.12.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c06cf0fcc24c2cb2adb5e185c7082a82cba29c17575e828518c2f11a01f445aa"},
{file = "tiktoken-0.9.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8b3d80aad8d2c6b9238fc1a5524542087c52b860b10cbf952429ffb714bc1136"}, {file = "tiktoken-0.12.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:f18f249b041851954217e9fd8e5c00b024ab2315ffda5ed77665a05fa91f42dc"},
{file = "tiktoken-0.9.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:b2a21133be05dc116b1d0372af051cd2c6aa1d2188250c9b553f9fa49301b336"}, {file = "tiktoken-0.12.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:47a5bc270b8c3db00bb46ece01ef34ad050e364b51d406b6f9730b64ac28eded"},
{file = "tiktoken-0.9.0-cp311-cp311-win_amd64.whl", hash = "sha256:11a20e67fdf58b0e2dea7b8654a288e481bb4fc0289d3ad21291f8d0849915fb"}, {file = "tiktoken-0.12.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:508fa71810c0efdcd1b898fda574889ee62852989f7c1667414736bcb2b9a4bd"},
{file = "tiktoken-0.9.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:e88f121c1c22b726649ce67c089b90ddda8b9662545a8aeb03cfef15967ddd03"}, {file = "tiktoken-0.12.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:a1af81a6c44f008cba48494089dd98cccb8b313f55e961a52f5b222d1e507967"},
{file = "tiktoken-0.9.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a6600660f2f72369acb13a57fb3e212434ed38b045fd8cc6cdd74947b4b5d210"}, {file = "tiktoken-0.12.0-cp311-cp311-win_amd64.whl", hash = "sha256:3e68e3e593637b53e56f7237be560f7a394451cb8c11079755e80ae64b9e6def"},
{file = "tiktoken-0.9.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:95e811743b5dfa74f4b227927ed86cbc57cad4df859cb3b643be797914e41794"}, {file = "tiktoken-0.12.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:b97f74aca0d78a1ff21b8cd9e9925714c15a9236d6ceacf5c7327c117e6e21e8"},
{file = "tiktoken-0.9.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:99376e1370d59bcf6935c933cb9ba64adc29033b7e73f5f7569f3aad86552b22"}, {file = "tiktoken-0.12.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:2b90f5ad190a4bb7c3eb30c5fa32e1e182ca1ca79f05e49b448438c3e225a49b"},
{file = "tiktoken-0.9.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:badb947c32739fb6ddde173e14885fb3de4d32ab9d8c591cbd013c22b4c31dd2"}, {file = "tiktoken-0.12.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:65b26c7a780e2139e73acc193e5c63ac754021f160df919add909c1492c0fb37"},
{file = "tiktoken-0.9.0-cp312-cp312-win_amd64.whl", hash = "sha256:5a62d7a25225bafed786a524c1b9f0910a1128f4232615bf3f8257a73aaa3b16"}, {file = "tiktoken-0.12.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:edde1ec917dfd21c1f2f8046b86348b0f54a2c0547f68149d8600859598769ad"},
{file = "tiktoken-0.9.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:2b0e8e05a26eda1249e824156d537015480af7ae222ccb798e5234ae0285dbdb"}, {file = "tiktoken-0.12.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:35a2f8ddd3824608b3d650a000c1ef71f730d0c56486845705a8248da00f9fe5"},
{file = "tiktoken-0.9.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:27d457f096f87685195eea0165a1807fae87b97b2161fe8c9b1df5bd74ca6f63"}, {file = "tiktoken-0.12.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:83d16643edb7fa2c99eff2ab7733508aae1eebb03d5dfc46f5565862810f24e3"},
{file = "tiktoken-0.9.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2cf8ded49cddf825390e36dd1ad35cd49589e8161fdcb52aa25f0583e90a3e01"}, {file = "tiktoken-0.12.0-cp312-cp312-win_amd64.whl", hash = "sha256:ffc5288f34a8bc02e1ea7047b8d041104791d2ddbf42d1e5fa07822cbffe16bd"},
{file = "tiktoken-0.9.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cc156cb314119a8bb9748257a2eaebd5cc0753b6cb491d26694ed42fc7cb3139"}, {file = "tiktoken-0.12.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:775c2c55de2310cc1bc9a3ad8826761cbdc87770e586fd7b6da7d4589e13dab3"},
{file = "tiktoken-0.9.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:cd69372e8c9dd761f0ab873112aba55a0e3e506332dd9f7522ca466e817b1b7a"}, {file = "tiktoken-0.12.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a01b12f69052fbe4b080a2cfb867c4de12c704b56178edf1d1d7b273561db160"},
{file = "tiktoken-0.9.0-cp313-cp313-win_amd64.whl", hash = "sha256:5ea0edb6f83dc56d794723286215918c1cde03712cbbafa0348b33448faf5b95"}, {file = "tiktoken-0.12.0-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:01d99484dc93b129cd0964f9d34eee953f2737301f18b3c7257bf368d7615baa"},
{file = "tiktoken-0.9.0-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:c6386ca815e7d96ef5b4ac61e0048cd32ca5a92d5781255e13b31381d28667dc"}, {file = "tiktoken-0.12.0-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:4a1a4fcd021f022bfc81904a911d3df0f6543b9e7627b51411da75ff2fe7a1be"},
{file = "tiktoken-0.9.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:75f6d5db5bc2c6274b674ceab1615c1778e6416b14705827d19b40e6355f03e0"}, {file = "tiktoken-0.12.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:981a81e39812d57031efdc9ec59fa32b2a5a5524d20d4776574c4b4bd2e9014a"},
{file = "tiktoken-0.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e15b16f61e6f4625a57a36496d28dd182a8a60ec20a534c5343ba3cafa156ac7"}, {file = "tiktoken-0.12.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9baf52f84a3f42eef3ff4e754a0db79a13a27921b457ca9832cf944c6be4f8f3"},
{file = "tiktoken-0.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ebcec91babf21297022882344c3f7d9eed855931466c3311b1ad6b64befb3df"}, {file = "tiktoken-0.12.0-cp313-cp313-win_amd64.whl", hash = "sha256:b8a0cd0c789a61f31bf44851defbd609e8dd1e2c8589c614cc1060940ef1f697"},
{file = "tiktoken-0.9.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:e5fd49e7799579240f03913447c0cdfa1129625ebd5ac440787afc4345990427"}, {file = "tiktoken-0.12.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:d5f89ea5680066b68bcb797ae85219c72916c922ef0fcdd3480c7d2315ffff16"},
{file = "tiktoken-0.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:26242ca9dc8b58e875ff4ca078b9a94d2f0813e6a535dcd2205df5d49d927cc7"}, {file = "tiktoken-0.12.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:b4e7ed1c6a7a8a60a3230965bdedba8cc58f68926b835e519341413370e0399a"},
{file = "tiktoken-0.9.0.tar.gz", hash = "sha256:d02a5ca6a938e0490e1ff957bc48c8b078c88cb83977be1625b1fd8aac792c5d"}, {file = "tiktoken-0.12.0-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:fc530a28591a2d74bce821d10b418b26a094bf33839e69042a6e86ddb7a7fb27"},
{file = "tiktoken-0.12.0-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:06a9f4f49884139013b138920a4c393aa6556b2f8f536345f11819389c703ebb"},
{file = "tiktoken-0.12.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:04f0e6a985d95913cabc96a741c5ffec525a2c72e9df086ff17ebe35985c800e"},
{file = "tiktoken-0.12.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:0ee8f9ae00c41770b5f9b0bb1235474768884ae157de3beb5439ca0fd70f3e25"},
{file = "tiktoken-0.12.0-cp313-cp313t-win_amd64.whl", hash = "sha256:dc2dd125a62cb2b3d858484d6c614d136b5b848976794edfb63688d539b8b93f"},
{file = "tiktoken-0.12.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:a90388128df3b3abeb2bfd1895b0681412a8d7dc644142519e6f0a97c2111646"},
{file = "tiktoken-0.12.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:da900aa0ad52247d8794e307d6446bd3cdea8e192769b56276695d34d2c9aa88"},
{file = "tiktoken-0.12.0-cp314-cp314-manylinux_2_28_aarch64.whl", hash = "sha256:285ba9d73ea0d6171e7f9407039a290ca77efcdb026be7769dccc01d2c8d7fff"},
{file = "tiktoken-0.12.0-cp314-cp314-manylinux_2_28_x86_64.whl", hash = "sha256:d186a5c60c6a0213f04a7a802264083dea1bbde92a2d4c7069e1a56630aef830"},
{file = "tiktoken-0.12.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:604831189bd05480f2b885ecd2d1986dc7686f609de48208ebbbddeea071fc0b"},
{file = "tiktoken-0.12.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:8f317e8530bb3a222547b85a58583238c8f74fd7a7408305f9f63246d1a0958b"},
{file = "tiktoken-0.12.0-cp314-cp314-win_amd64.whl", hash = "sha256:399c3dd672a6406719d84442299a490420b458c44d3ae65516302a99675888f3"},
{file = "tiktoken-0.12.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:c2c714c72bc00a38ca969dae79e8266ddec999c7ceccd603cc4f0d04ccd76365"},
{file = "tiktoken-0.12.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:cbb9a3ba275165a2cb0f9a83f5d7025afe6b9d0ab01a22b50f0e74fee2ad253e"},
{file = "tiktoken-0.12.0-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:dfdfaa5ffff8993a3af94d1125870b1d27aed7cb97aa7eb8c1cefdbc87dbee63"},
{file = "tiktoken-0.12.0-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:584c3ad3d0c74f5269906eb8a659c8bfc6144a52895d9261cdaf90a0ae5f4de0"},
{file = "tiktoken-0.12.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:54c891b416a0e36b8e2045b12b33dd66fb34a4fe7965565f1b482da50da3e86a"},
{file = "tiktoken-0.12.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:5edb8743b88d5be814b1a8a8854494719080c28faaa1ccbef02e87354fe71ef0"},
{file = "tiktoken-0.12.0-cp314-cp314t-win_amd64.whl", hash = "sha256:f61c0aea5565ac82e2ec50a05e02a6c44734e91b51c10510b084ea1b8e633a71"},
{file = "tiktoken-0.12.0-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:d51d75a5bffbf26f86554d28e78bfb921eae998edc2675650fd04c7e1f0cdc1e"},
{file = "tiktoken-0.12.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:09eb4eae62ae7e4c62364d9ec3a57c62eea707ac9a2b2c5d6bd05de6724ea179"},
{file = "tiktoken-0.12.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:df37684ace87d10895acb44b7f447d4700349b12197a526da0d4a4149fde074c"},
{file = "tiktoken-0.12.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:4c9614597ac94bb294544345ad8cf30dac2129c05e2db8dc53e082f355857af7"},
{file = "tiktoken-0.12.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:20cf97135c9a50de0b157879c3c4accbb29116bcf001283d26e073ff3b345946"},
{file = "tiktoken-0.12.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:15d875454bbaa3728be39880ddd11a5a2a9e548c29418b41e8fd8a767172b5ec"},
{file = "tiktoken-0.12.0-cp39-cp39-win_amd64.whl", hash = "sha256:2cff3688ba3c639ebe816f8d58ffbbb0aa7433e23e08ab1cade5d175fc973fb3"},
{file = "tiktoken-0.12.0.tar.gz", hash = "sha256:b18ba7ee2b093863978fcb14f74b3707cdc8d4d4d3836853ce7ec60772139931"},
] ]
[package.dependencies] [package.dependencies]
@@ -8382,4 +8440,4 @@ cffi = ["cffi (>=1.17,<2.0) ; platform_python_implementation != \"PyPy\" and pyt
[metadata] [metadata]
lock-version = "2.1" lock-version = "2.1"
python-versions = ">=3.10,<3.14" python-versions = ">=3.10,<3.14"
content-hash = "40b2c87c3c86bd10214bd30ad291cead75da5060ab894105025ee4c0a3b3828e" content-hash = "14686ee0e2dc446a75d0db145b08dc410dc31c357e25085bb0f9b0174711c4b1"


@@ -12,16 +12,16 @@ python = ">=3.10,<3.14"
aio-pika = "^9.5.5" aio-pika = "^9.5.5"
aiohttp = "^3.10.0" aiohttp = "^3.10.0"
aiodns = "^3.5.0" aiodns = "^3.5.0"
anthropic = "^0.59.0" anthropic = "^0.79.0"
apscheduler = "^3.11.1" apscheduler = "^3.11.1"
autogpt-libs = { path = "../autogpt_libs", develop = true } autogpt-libs = { path = "../autogpt_libs", develop = true }
bleach = { extras = ["css"], version = "^6.2.0" } bleach = { extras = ["css"], version = "^6.2.0" }
click = "^8.2.0" click = "^8.2.0"
cryptography = "^45.0" cryptography = "^46.0"
discord-py = "^2.5.2" discord-py = "^2.5.2"
e2b-code-interpreter = "^1.5.2" e2b-code-interpreter = "^1.5.2"
elevenlabs = "^1.50.0" elevenlabs = "^1.50.0"
fastapi = "^0.128.0" fastapi = "^0.128.5"
feedparser = "^6.0.11" feedparser = "^6.0.11"
flake8 = "^7.3.0" flake8 = "^7.3.0"
google-api-python-client = "^2.177.0" google-api-python-client = "^2.177.0"
@@ -38,7 +38,7 @@ langfuse = "^3.11.0"
launchdarkly-server-sdk = "^9.14.1" launchdarkly-server-sdk = "^9.14.1"
mem0ai = "^0.1.115" mem0ai = "^0.1.115"
moviepy = "^2.1.2" moviepy = "^2.1.2"
ollama = "^0.5.1" ollama = "^0.6.1"
openai = "^1.97.1" openai = "^1.97.1"
orjson = "^3.10.0" orjson = "^3.10.0"
pika = "^1.3.2" pika = "^1.3.2"
@@ -48,7 +48,7 @@ postmarker = "^1.0"
praw = "~7.8.1" praw = "~7.8.1"
prisma = "^0.15.0" prisma = "^0.15.0"
rank-bm25 = "^0.2.2" rank-bm25 = "^0.2.2"
prometheus-client = "^0.22.1" prometheus-client = "^0.24.1"
prometheus-fastapi-instrumentator = "^7.0.0" prometheus-fastapi-instrumentator = "^7.0.0"
psutil = "^7.0.0" psutil = "^7.0.0"
psycopg2-binary = "^2.9.10" psycopg2-binary = "^2.9.10"
@@ -57,7 +57,7 @@ pydantic-settings = "^2.12.0"
pytest = "^8.4.1" pytest = "^8.4.1"
pytest-asyncio = "^1.1.0" pytest-asyncio = "^1.1.0"
python-dotenv = "^1.1.1" python-dotenv = "^1.1.1"
python-multipart = "^0.0.20" python-multipart = "^0.0.22"
redis = "^6.2.0" redis = "^6.2.0"
regex = "^2025.9.18" regex = "^2025.9.18"
replicate = "^1.0.6" replicate = "^1.0.6"
@@ -65,8 +65,8 @@ sentry-sdk = {extras = ["anthropic", "fastapi", "launchdarkly", "openai", "sqlal
sqlalchemy = "^2.0.40" sqlalchemy = "^2.0.40"
strenum = "^0.4.9" strenum = "^0.4.9"
stripe = "^11.5.0" stripe = "^11.5.0"
supabase = "2.27.2" supabase = "2.27.3"
tenacity = "^9.1.2" tenacity = "^9.1.4"
todoist-api-python = "^2.1.7" todoist-api-python = "^2.1.7"
tweepy = "^4.16.0" tweepy = "^4.16.0"
uvicorn = { extras = ["standard"], version = "^0.40.0" } uvicorn = { extras = ["standard"], version = "^0.40.0" }
@@ -77,7 +77,7 @@ zerobouncesdk = "^1.1.2"
# NOTE: please insert new dependencies in their alphabetical location # NOTE: please insert new dependencies in their alphabetical location
pytest-snapshot = "^0.9.0" pytest-snapshot = "^0.9.0"
aiofiles = "^24.1.0" aiofiles = "^24.1.0"
tiktoken = "^0.9.0" tiktoken = "^0.12.0"
aioclamd = "^1.0.0" aioclamd = "^1.0.0"
setuptools = "^80.9.0" setuptools = "^80.9.0"
gcloud-aio-storage = "^9.5.0" gcloud-aio-storage = "^9.5.0"
@@ -95,13 +95,13 @@ black = "^24.10.0"
faker = "^38.2.0" faker = "^38.2.0"
httpx = "^0.28.1" httpx = "^0.28.1"
isort = "^5.13.2" isort = "^5.13.2"
poethepoet = "^0.37.0" poethepoet = "^0.41.0"
pre-commit = "^4.4.0" pre-commit = "^4.4.0"
pyright = "^1.1.407" pyright = "^1.1.407"
pytest-mock = "^3.15.1" pytest-mock = "^3.15.1"
pytest-watcher = "^0.4.2" pytest-watcher = "^0.6.3"
requests = "^2.32.5" requests = "^2.32.5"
ruff = "^0.14.5" ruff = "^0.15.0"
# NOTE: please insert new dependencies in their alphabetical location # NOTE: please insert new dependencies in their alphabetical location
[build-system] [build-system]


@@ -102,7 +102,7 @@
"react-markdown": "9.0.3", "react-markdown": "9.0.3",
"react-modal": "3.16.3", "react-modal": "3.16.3",
"react-shepherd": "6.1.9", "react-shepherd": "6.1.9",
"react-window": "1.8.11", "react-window": "2.2.0",
"recharts": "3.3.0", "recharts": "3.3.0",
"rehype-autolink-headings": "7.1.0", "rehype-autolink-headings": "7.1.0",
"rehype-highlight": "7.0.2", "rehype-highlight": "7.0.2",
@@ -140,7 +140,7 @@
"@types/react": "18.3.17", "@types/react": "18.3.17",
"@types/react-dom": "18.3.5", "@types/react-dom": "18.3.5",
"@types/react-modal": "3.16.3", "@types/react-modal": "3.16.3",
"@types/react-window": "1.8.8", "@types/react-window": "2.0.0",
"@vitejs/plugin-react": "5.1.2", "@vitejs/plugin-react": "5.1.2",
"axe-playwright": "2.2.2", "axe-playwright": "2.2.2",
"chromatic": "13.3.3", "chromatic": "13.3.3",


@@ -228,8 +228,8 @@ importers:
specifier: 6.1.9 specifier: 6.1.9
version: 6.1.9(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(typescript@5.9.3) version: 6.1.9(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(typescript@5.9.3)
react-window: react-window:
specifier: 1.8.11 specifier: 2.2.0
version: 1.8.11(react-dom@18.3.1(react@18.3.1))(react@18.3.1) version: 2.2.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
recharts: recharts:
specifier: 3.3.0 specifier: 3.3.0
version: 3.3.0(@types/react@18.3.17)(react-dom@18.3.1(react@18.3.1))(react-is@18.3.1)(react@18.3.1)(redux@5.0.1) version: 3.3.0(@types/react@18.3.17)(react-dom@18.3.1(react@18.3.1))(react-is@18.3.1)(react@18.3.1)(redux@5.0.1)
@@ -337,8 +337,8 @@ importers:
specifier: 3.16.3 specifier: 3.16.3
version: 3.16.3 version: 3.16.3
'@types/react-window': '@types/react-window':
specifier: 1.8.8 specifier: 2.0.0
version: 1.8.8 version: 2.0.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
'@vitejs/plugin-react': '@vitejs/plugin-react':
specifier: 5.1.2 specifier: 5.1.2
version: 5.1.2(vite@7.3.1(@types/node@24.10.0)(jiti@2.6.1)(terser@5.44.1)(yaml@2.8.2)) version: 5.1.2(vite@7.3.1(@types/node@24.10.0)(jiti@2.6.1)(terser@5.44.1)(yaml@2.8.2))
@@ -3469,8 +3469,9 @@ packages:
'@types/react-modal@3.16.3': '@types/react-modal@3.16.3':
resolution: {integrity: sha512-xXuGavyEGaFQDgBv4UVm8/ZsG+qxeQ7f77yNrW3n+1J6XAstUy5rYHeIHPh1KzsGc6IkCIdu6lQ2xWzu1jBTLg==} resolution: {integrity: sha512-xXuGavyEGaFQDgBv4UVm8/ZsG+qxeQ7f77yNrW3n+1J6XAstUy5rYHeIHPh1KzsGc6IkCIdu6lQ2xWzu1jBTLg==}
'@types/react-window@1.8.8': '@types/react-window@2.0.0':
resolution: {integrity: sha512-8Ls660bHR1AUA2kuRvVG9D/4XpRC6wjAaPT9dil7Ckc76eP9TKWZwwmgfq8Q1LANX3QNDnoU4Zp48A3w+zK69Q==} resolution: {integrity: sha512-E8hMDtImEpMk1SjswSvqoSmYvk7GEtyVaTa/GJV++FdDNuMVVEzpAClyJ0nqeKYBrMkGiyH6M1+rPLM0Nu1exQ==}
deprecated: This is a stub types definition. react-window provides its own type definitions, so you do not need this installed.
'@types/react@18.3.17': '@types/react@18.3.17':
resolution: {integrity: sha512-opAQ5no6LqJNo9TqnxBKsgnkIYHozW9KSTlFVoSUJYh1Fl/sswkEoqIugRSm7tbh6pABtYjGAjW+GOS23j8qbw==} resolution: {integrity: sha512-opAQ5no6LqJNo9TqnxBKsgnkIYHozW9KSTlFVoSUJYh1Fl/sswkEoqIugRSm7tbh6pABtYjGAjW+GOS23j8qbw==}
@@ -5976,9 +5977,6 @@ packages:
resolution: {integrity: sha512-UERzLsxzllchadvbPs5aolHh65ISpKpM+ccLbOJ8/vvpBKmAWf+la7dXFy7Mr0ySHbdHrFv5kGFCUHHe6GFEmw==} resolution: {integrity: sha512-UERzLsxzllchadvbPs5aolHh65ISpKpM+ccLbOJ8/vvpBKmAWf+la7dXFy7Mr0ySHbdHrFv5kGFCUHHe6GFEmw==}
engines: {node: '>= 4.0.0'} engines: {node: '>= 4.0.0'}
memoize-one@5.2.1:
resolution: {integrity: sha512-zYiwtZUcYyXKo/np96AGZAckk+FWWsUdJ3cHGGmld7+AhvcWmQyGCYUh1hc4Q/pkOhb65dQR/pqCyK0cOaHz4Q==}
merge-stream@2.0.0: merge-stream@2.0.0:
resolution: {integrity: sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==} resolution: {integrity: sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==}
@@ -6891,12 +6889,11 @@ packages:
'@types/react': '@types/react':
optional: true optional: true
react-window@1.8.11: react-window@2.2.0:
resolution: {integrity: sha512-+SRbUVT2scadgFSWx+R1P754xHPEqvcfSfVX10QYg6POOz+WNgkN48pS+BtZNIMGiL1HYrSEiCkwsMS15QogEQ==} resolution: {integrity: sha512-Y2L7yonHq6K1pQA2P98wT5QdIsEcjBTB7T8o6Mub12hH9eYppXoYu6vgClmcjlh3zfNcW2UrXiJJJqDxUY7GVw==}
engines: {node: '>8.0.0'}
peerDependencies: peerDependencies:
react: ^15.0.0 || ^16.0.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 react: ^18.0.0 || ^19.0.0
react-dom: ^15.0.0 || ^16.0.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 react-dom: ^18.0.0 || ^19.0.0
react@18.3.1: react@18.3.1:
resolution: {integrity: sha512-wS+hAgJShR0KhEvPJArfuPVN1+Hz1t0Y6n5jLrGQbkb4urgPE/0Rve+1kMB1v/oWgHgm4WIcV+i7F2pTVj+2iQ==} resolution: {integrity: sha512-wS+hAgJShR0KhEvPJArfuPVN1+Hz1t0Y6n5jLrGQbkb4urgPE/0Rve+1kMB1v/oWgHgm4WIcV+i7F2pTVj+2iQ==}
@@ -11603,9 +11600,12 @@ snapshots:
dependencies: dependencies:
'@types/react': 18.3.17 '@types/react': 18.3.17
'@types/react-window@1.8.8': '@types/react-window@2.0.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1)':
dependencies: dependencies:
'@types/react': 18.3.17 react-window: 2.2.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1)
transitivePeerDependencies:
- react
- react-dom
'@types/react@18.3.17': '@types/react@18.3.17':
dependencies: dependencies:
@@ -14545,8 +14545,6 @@ snapshots:
dependencies: dependencies:
fs-monkey: 1.1.0 fs-monkey: 1.1.0
memoize-one@5.2.1: {}
merge-stream@2.0.0: {} merge-stream@2.0.0: {}
merge2@1.4.1: {} merge2@1.4.1: {}
@@ -15592,10 +15590,8 @@ snapshots:
optionalDependencies: optionalDependencies:
'@types/react': 18.3.17 '@types/react': 18.3.17
react-window@1.8.11(react-dom@18.3.1(react@18.3.1))(react@18.3.1): react-window@2.2.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1):
dependencies: dependencies:
'@babel/runtime': 7.28.4
memoize-one: 5.2.1
react: 18.3.1 react: 18.3.1
react-dom: 18.3.1(react@18.3.1) react-dom: 18.3.1(react@18.3.1)


@@ -0,0 +1,96 @@
import { NextResponse } from "next/server";
/**
* Safely encode a value as JSON for embedding in a script tag.
* Escapes characters that could break out of the script context to prevent XSS.
*/
function safeJsonStringify(value: unknown): string {
return JSON.stringify(value)
.replace(/</g, "\\u003c")
.replace(/>/g, "\\u003e")
.replace(/&/g, "\\u0026");
}
// MCP-specific OAuth callback route.
//
// Unlike the generic oauth_callback which relies on window.opener.postMessage,
// this route uses BroadcastChannel as the PRIMARY communication method.
// This is critical because cross-origin OAuth flows (e.g. Sentry → localhost)
// often lose window.opener due to COOP (Cross-Origin-Opener-Policy) headers.
//
// BroadcastChannel works across all same-origin tabs/popups regardless of opener.
export async function GET(request: Request) {
const { searchParams } = new URL(request.url);
const code = searchParams.get("code");
const state = searchParams.get("state");
const success = Boolean(code && state);
const message = success
? { success: true, code, state }
: {
success: false,
message: `Missing parameters: ${searchParams.toString()}`,
};
return new NextResponse(
`<!DOCTYPE html>
<html>
<head><title>MCP Sign-in</title></head>
<body style="font-family: system-ui, -apple-system, sans-serif; display: flex; align-items: center; justify-content: center; min-height: 100vh; margin: 0; background: #f9fafb;">
<div style="text-align: center; max-width: 400px; padding: 2rem;">
<div id="spinner" style="margin: 0 auto 1rem; width: 32px; height: 32px; border: 3px solid #e5e7eb; border-top-color: #3b82f6; border-radius: 50%; animation: spin 0.8s linear infinite;"></div>
<p id="status" style="color: #374151; font-size: 16px;">Completing sign-in...</p>
</div>
<style>@keyframes spin { to { transform: rotate(360deg); } }</style>
<script>
(function() {
var msg = ${safeJsonStringify(message)};
var sent = false;
// Method 1: BroadcastChannel (reliable across tabs/popups, no opener needed)
try {
var bc = new BroadcastChannel("mcp_oauth");
bc.postMessage({ type: "mcp_oauth_result", success: msg.success, code: msg.code, state: msg.state, message: msg.message });
bc.close();
sent = true;
} catch(e) { /* BroadcastChannel not supported */ }
// Method 2: window.opener.postMessage (fallback for same-origin popups)
try {
if (window.opener && !window.opener.closed) {
window.opener.postMessage(
{ message_type: "mcp_oauth_result", success: msg.success, code: msg.code, state: msg.state, message: msg.message },
window.location.origin
);
sent = true;
}
} catch(e) { /* opener not available (COOP) */ }
// Method 3: localStorage (most reliable cross-tab fallback)
try {
localStorage.setItem("mcp_oauth_result", JSON.stringify(msg));
sent = true;
} catch(e) { /* localStorage not available */ }
var statusEl = document.getElementById("status");
var spinnerEl = document.getElementById("spinner");
spinnerEl.style.display = "none";
if (msg.success && sent) {
statusEl.textContent = "Sign-in complete! This window will close.";
statusEl.style.color = "#059669";
setTimeout(function() { window.close(); }, 1500);
} else if (msg.success) {
statusEl.textContent = "Sign-in successful! You can close this tab and return to the builder.";
statusEl.style.color = "#059669";
} else {
statusEl.textContent = "Sign-in failed: " + (msg.message || "Unknown error");
statusEl.style.color = "#dc2626";
}
})();
</script>
</body>
</html>`,
{ headers: { "Content-Type": "text/html" } },
);
}
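For context, a minimal sketch of how an opener page could consume this callback. It mirrors the three channels the route writes to; the function name and cleanup wiring are illustrative assumptions, not the repository's actual listener. Note that the route sends `type` on the BroadcastChannel message but `message_type` via postMessage, so each handler checks its own key.

// Hypothetical consumer sketch (assumed names; not the actual builder code).
type MCPOAuthResult = {
  success: boolean;
  code?: string;
  state?: string;
  message?: string;
};

function listenForMCPOAuthResult(
  onResult: (result: MCPOAuthResult) => void,
): () => void {
  // Method 1: BroadcastChannel, using the same channel name as the route above.
  const bc = new BroadcastChannel("mcp_oauth");
  bc.onmessage = (e) => {
    if (e.data?.type === "mcp_oauth_result") onResult(e.data);
  };

  // Method 2: postMessage from the popup (only works when window.opener
  // survives COOP and the popup is same-origin).
  const onMessage = (e: MessageEvent) => {
    if (
      e.origin === window.location.origin &&
      e.data?.message_type === "mcp_oauth_result"
    ) {
      onResult(e.data);
    }
  };
  window.addEventListener("message", onMessage);

  // Method 3: the "storage" event fires in other same-origin tabs when the
  // callback page writes localStorage["mcp_oauth_result"].
  const onStorage = (e: StorageEvent) => {
    if (e.key === "mcp_oauth_result" && e.newValue) {
      onResult(JSON.parse(e.newValue));
      localStorage.removeItem("mcp_oauth_result");
    }
  };
  window.addEventListener("storage", onStorage);

  // Return a disposer so callers can tear everything down once a result arrives.
  return () => {
    bc.close();
    window.removeEventListener("message", onMessage);
    window.removeEventListener("storage", onStorage);
  };
}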


@@ -47,7 +47,10 @@ export type CustomNode = XYNode<CustomNodeData, "custom">;
export const CustomNode: React.FC<NodeProps<CustomNode>> = React.memo( export const CustomNode: React.FC<NodeProps<CustomNode>> = React.memo(
({ data, id: nodeId, selected }) => { ({ data, id: nodeId, selected }) => {
const { inputSchema, outputSchema } = useCustomNode({ data, nodeId }); const { inputSchema, outputSchema, isMCPWithTool } = useCustomNode({
data,
nodeId,
});
const isAgent = data.uiType === BlockUIType.AGENT; const isAgent = data.uiType === BlockUIType.AGENT;
@@ -98,6 +101,7 @@ export const CustomNode: React.FC<NodeProps<CustomNode>> = React.memo(
jsonSchema={preprocessInputSchema(inputSchema)} jsonSchema={preprocessInputSchema(inputSchema)}
nodeId={nodeId} nodeId={nodeId}
uiType={data.uiType} uiType={data.uiType}
isMCPWithTool={isMCPWithTool}
className={cn( className={cn(
"bg-white px-4", "bg-white px-4",
isWebhook && "pointer-events-none opacity-50", isWebhook && "pointer-events-none opacity-50",


@@ -6,6 +6,7 @@ import {
TooltipProvider, TooltipProvider,
TooltipTrigger, TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip"; } from "@/components/atoms/Tooltip/BaseTooltip";
import { SpecialBlockID } from "@/lib/autogpt-server-api";
import { beautifyString, cn } from "@/lib/utils"; import { beautifyString, cn } from "@/lib/utils";
import { useState } from "react"; import { useState } from "react";
import { CustomNodeData } from "../CustomNode"; import { CustomNodeData } from "../CustomNode";
@@ -20,8 +21,29 @@ type Props = {
export const NodeHeader = ({ data, nodeId }: Props) => { export const NodeHeader = ({ data, nodeId }: Props) => {
const updateNodeData = useNodeStore((state) => state.updateNodeData); const updateNodeData = useNodeStore((state) => state.updateNodeData);
const isMCPWithTool =
data.block_id === SpecialBlockID.MCP_TOOL &&
!!data.hardcodedValues?.selected_tool;
// Derive MCP server label: prefer server_name, fall back to URL hostname.
let mcpServerLabel = "MCP";
if (isMCPWithTool) {
mcpServerLabel =
data.hardcodedValues.server_name ||
(() => {
try {
return new URL(data.hardcodedValues.server_url).hostname;
} catch {
return "MCP";
}
})();
}
const title = const title =
(data.metadata?.customized_name as string) || (data.metadata?.customized_name as string) ||
(isMCPWithTool
? `${mcpServerLabel}: ${beautifyString(data.hardcodedValues.selected_tool)}`
: null) ||
data.hardcodedValues?.agent_name || data.hardcodedValues?.agent_name ||
data.title; data.title;
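For example, with no customized_name and no server_name, a node whose hardcodedValues hold selected_tool "get_issues" and server_url "https://mcp.example.com/sse" (a hypothetical URL) would be titled "mcp.example.com: Get Issues", assuming beautifyString title-cases the snake_case tool name; if the URL fails to parse, the label degrades gracefully to "MCP".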


@@ -3,6 +3,36 @@ import { CustomNodeData } from "./CustomNode";
import { BlockUIType } from "../../../types"; import { BlockUIType } from "../../../types";
import { useMemo } from "react"; import { useMemo } from "react";
import { mergeSchemaForResolution } from "./helpers"; import { mergeSchemaForResolution } from "./helpers";
import { SpecialBlockID } from "@/lib/autogpt-server-api";
/**
* Build a dynamic input schema for MCP blocks.
*
* When a tool has been selected (tool_input_schema is populated), the block
* renders the selected tool's input parameters *plus* the credentials field
* so users can select/change the OAuth credential used for execution.
*
* Static fields like server_url, selected_tool, available_tools, and
* tool_arguments are hidden because they're pre-configured from the dialog.
*/
function buildMCPInputSchema(
toolInputSchema: Record<string, any>,
blockInputSchema: Record<string, any>,
): Record<string, any> {
// Extract the credentials field from the block's original input schema
const credentialsSchema =
blockInputSchema?.properties?.credentials ?? undefined;
return {
type: "object",
properties: {
// Credentials field first so the dropdown appears at the top
...(credentialsSchema ? { credentials: credentialsSchema } : {}),
...(toolInputSchema.properties ?? {}),
},
required: [...(toolInputSchema.required ?? [])],
};
}
export const useCustomNode = ({ export const useCustomNode = ({
data, data,
@@ -19,10 +49,18 @@ export const useCustomNode = ({
); );
const isAgent = data.uiType === BlockUIType.AGENT; const isAgent = data.uiType === BlockUIType.AGENT;
const isMCPWithTool =
data.block_id === SpecialBlockID.MCP_TOOL &&
!!data.hardcodedValues?.tool_input_schema?.properties;
const currentInputSchema = isAgent const currentInputSchema = isAgent
? (data.hardcodedValues.input_schema ?? {}) ? (data.hardcodedValues.input_schema ?? {})
: data.inputSchema; : isMCPWithTool
? buildMCPInputSchema(
data.hardcodedValues.tool_input_schema,
data.inputSchema,
)
: data.inputSchema;
const currentOutputSchema = isAgent const currentOutputSchema = isAgent
? (data.hardcodedValues.output_schema ?? {}) ? (data.hardcodedValues.output_schema ?? {})
: data.outputSchema; : data.outputSchema;
@@ -54,5 +92,6 @@ export const useCustomNode = ({
return { return {
inputSchema, inputSchema,
outputSchema, outputSchema,
isMCPWithTool,
}; };
}; };
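To make the schema merge above concrete, here is what buildMCPInputSchema returns for a hypothetical tool schema. The query field is invented for illustration; credentials is whatever the block's original input schema defines:

const toolInputSchema = {
  type: "object",
  properties: { query: { type: "string", description: "Search query" } },
  required: ["query"],
};

// buildMCPInputSchema(toolInputSchema, data.inputSchema) then produces:
// {
//   type: "object",
//   properties: {
//     credentials: { ...data.inputSchema.properties.credentials },
//     query: { type: "string", description: "Search query" },
//   },
//   required: ["query"],  // credentials is not added to required
// }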


@@ -9,39 +9,72 @@ interface FormCreatorProps {
jsonSchema: RJSFSchema; jsonSchema: RJSFSchema;
nodeId: string; nodeId: string;
uiType: BlockUIType; uiType: BlockUIType;
/** When true the block is an MCP Tool with a selected tool. */
isMCPWithTool?: boolean;
showHandles?: boolean; showHandles?: boolean;
className?: string; className?: string;
} }
export const FormCreator: React.FC<FormCreatorProps> = React.memo( export const FormCreator: React.FC<FormCreatorProps> = React.memo(
({ jsonSchema, nodeId, uiType, showHandles = true, className }) => { ({
jsonSchema,
nodeId,
uiType,
isMCPWithTool = false,
showHandles = true,
className,
}) => {
const updateNodeData = useNodeStore((state) => state.updateNodeData); const updateNodeData = useNodeStore((state) => state.updateNodeData);
const getHardCodedValues = useNodeStore( const getHardCodedValues = useNodeStore(
(state) => state.getHardCodedValues, (state) => state.getHardCodedValues,
); );
const isAgent = uiType === BlockUIType.AGENT;
const handleChange = ({ formData }: any) => { const handleChange = ({ formData }: any) => {
if ("credentials" in formData && !formData.credentials?.id) { if ("credentials" in formData && !formData.credentials?.id) {
delete formData.credentials; delete formData.credentials;
} }
const updatedValues = let updatedValues;
uiType === BlockUIType.AGENT if (isAgent) {
? { updatedValues = {
...getHardCodedValues(nodeId), ...getHardCodedValues(nodeId),
inputs: formData, inputs: formData,
} };
: formData; } else if (isMCPWithTool) {
// Separate credentials from tool arguments — credentials are stored
// at the top level of hardcodedValues, not inside tool_arguments.
const { credentials, ...toolArgs } = formData;
updatedValues = {
...getHardCodedValues(nodeId),
tool_arguments: toolArgs,
...(credentials?.id ? { credentials } : {}),
};
} else {
updatedValues = formData;
}
updateNodeData(nodeId, { hardcodedValues: updatedValues }); updateNodeData(nodeId, { hardcodedValues: updatedValues });
}; };
const hardcodedValues = getHardCodedValues(nodeId); const hardcodedValues = getHardCodedValues(nodeId);
const initialValues =
uiType === BlockUIType.AGENT let initialValues;
? (hardcodedValues.inputs ?? {}) if (isAgent) {
: hardcodedValues; initialValues = hardcodedValues.inputs ?? {};
} else if (isMCPWithTool) {
// Merge tool arguments with credentials for the form
initialValues = {
...(hardcodedValues.tool_arguments ?? {}),
...(hardcodedValues.credentials?.id
? { credentials: hardcodedValues.credentials }
: {}),
};
} else {
initialValues = hardcodedValues;
}
return ( return (
<div <div
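Taken together with the Block.tsx changes below, the hardcodedValues for a configured MCP node end up with roughly this shape; the concrete values are invented for illustration:

// Illustrative hardcodedValues layout for an MCP Tool node (values invented):
// {
//   server_url: "https://mcp.example.com/sse",
//   server_name: "mcp.example.com",
//   selected_tool: "get_issues",
//   tool_input_schema: { ... },       // drives the dynamic form
//   available_tools: [ ... ],
//   tool_arguments: { query: "..." }, // form data minus credentials
//   credentials: { id: "..." },       // kept at the top level, not in tool_arguments
// }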


@@ -1,7 +1,7 @@
import { Button } from "@/components/__legacy__/ui/button"; import { Button } from "@/components/__legacy__/ui/button";
import { Skeleton } from "@/components/__legacy__/ui/skeleton"; import { Skeleton } from "@/components/__legacy__/ui/skeleton";
import { beautifyString, cn } from "@/lib/utils"; import { beautifyString, cn } from "@/lib/utils";
import React, { ButtonHTMLAttributes } from "react"; import React, { ButtonHTMLAttributes, useCallback, useState } from "react";
import { highlightText } from "./helpers"; import { highlightText } from "./helpers";
import { PlusIcon } from "@phosphor-icons/react"; import { PlusIcon } from "@phosphor-icons/react";
import { BlockInfo } from "@/app/api/__generated__/models/blockInfo"; import { BlockInfo } from "@/app/api/__generated__/models/blockInfo";
@@ -9,6 +9,12 @@ import { useControlPanelStore } from "../../../stores/controlPanelStore";
import { blockDragPreviewStyle } from "./style"; import { blockDragPreviewStyle } from "./style";
import { useReactFlow } from "@xyflow/react"; import { useReactFlow } from "@xyflow/react";
import { useNodeStore } from "../../../stores/nodeStore"; import { useNodeStore } from "../../../stores/nodeStore";
import { SpecialBlockID } from "@/lib/autogpt-server-api";
import {
MCPToolDialog,
type MCPToolDialogResult,
} from "@/app/(platform)/build/components/legacy-builder/MCPToolDialog";
interface Props extends ButtonHTMLAttributes<HTMLButtonElement> { interface Props extends ButtonHTMLAttributes<HTMLButtonElement> {
title?: string; title?: string;
description?: string; description?: string;
@@ -33,22 +39,76 @@ export const Block: BlockComponent = ({
); );
const { setViewport } = useReactFlow(); const { setViewport } = useReactFlow();
const { addBlock } = useNodeStore(); const { addBlock } = useNodeStore();
const [mcpDialogOpen, setMcpDialogOpen] = useState(false);
const isMCPBlock = blockData.id === SpecialBlockID.MCP_TOOL;
const addBlockAndCenter = useCallback(
(block: BlockInfo, hardcodedValues?: Record<string, any>) => {
const customNode = addBlock(block, hardcodedValues);
setTimeout(() => {
setViewport(
{
x: -customNode.position.x * 0.8 + window.innerWidth / 2,
y: -customNode.position.y * 0.8 + (window.innerHeight - 400) / 2,
zoom: 0.8,
},
{ duration: 500 },
);
}, 50);
return customNode;
},
[addBlock, setViewport],
);
const updateNodeData = useNodeStore((state) => state.updateNodeData);
const handleMCPToolConfirm = useCallback(
(result: MCPToolDialogResult) => {
// Derive a display label: prefer server name, fall back to URL hostname.
let serverLabel = result.serverName;
if (!serverLabel) {
try {
serverLabel = new URL(result.serverUrl).hostname;
} catch {
serverLabel = "MCP";
}
}
const customNode = addBlockAndCenter(blockData, {
server_url: result.serverUrl,
server_name: serverLabel,
selected_tool: result.selectedTool,
tool_input_schema: result.toolInputSchema,
available_tools: result.availableTools,
credentials: result.credentials ?? undefined,
});
// Persist the MCP title as customized_name in metadata so it survives
// save/load even if server_name is pruned from input_default.
if (customNode && result.selectedTool) {
const title = `${serverLabel}: ${beautifyString(result.selectedTool)}`;
updateNodeData(customNode.id, {
metadata: {
...customNode.data.metadata,
customized_name: title,
},
});
}
setMcpDialogOpen(false);
},
[addBlockAndCenter, blockData, updateNodeData],
);
const handleClick = () => { const handleClick = () => {
const customNode = addBlock(blockData); if (isMCPBlock) {
setTimeout(() => { setMcpDialogOpen(true);
setViewport( return;
{ }
x: -customNode.position.x * 0.8 + window.innerWidth / 2, addBlockAndCenter(blockData);
y: -customNode.position.y * 0.8 + (window.innerHeight - 400) / 2,
zoom: 0.8,
},
{ duration: 500 },
);
}, 50);
}; };
const handleDragStart = (e: React.DragEvent<HTMLButtonElement>) => { const handleDragStart = (e: React.DragEvent<HTMLButtonElement>) => {
if (isMCPBlock) return;
e.dataTransfer.effectAllowed = "copy"; e.dataTransfer.effectAllowed = "copy";
e.dataTransfer.setData("application/reactflow", JSON.stringify(blockData)); e.dataTransfer.setData("application/reactflow", JSON.stringify(blockData));
@@ -71,46 +131,56 @@ export const Block: BlockComponent = ({
      : undefined;

  return (
    <>
      <Button
        draggable={!isMCPBlock}
        data-id={blockDataId}
        className={cn(
          "group flex h-16 w-full min-w-[7.5rem] items-center justify-start space-x-3 whitespace-normal rounded-[0.75rem] bg-zinc-50 px-[0.875rem] py-[0.625rem] text-start shadow-none",
          "hover:cursor-default hover:bg-zinc-100 focus:ring-0 active:bg-zinc-100 active:ring-1 active:ring-zinc-300 disabled:cursor-not-allowed",
          isMCPBlock && "hover:cursor-pointer",
          className,
        )}
        onDragStart={handleDragStart}
        onClick={handleClick}
        {...rest}
      >
        <div className="flex flex-1 flex-col items-start gap-0.5">
          {title && (
            <span
              className={cn(
                "line-clamp-1 font-sans text-sm font-medium leading-[1.375rem] text-zinc-800 group-disabled:text-zinc-400",
              )}
            >
              {highlightText(beautifyString(title), highlightedText)}
            </span>
          )}
          {description && (
            <span
              className={cn(
                "line-clamp-1 font-sans text-xs font-normal leading-5 text-zinc-500 group-disabled:text-zinc-400",
              )}
            >
              {highlightText(description, highlightedText)}
            </span>
          )}
        </div>
        <div
          className={cn(
            "flex h-7 w-7 items-center justify-center rounded-[0.5rem] bg-zinc-700 group-disabled:bg-zinc-400",
          )}
        >
          <PlusIcon className="h-5 w-5 text-zinc-50" />
        </div>
      </Button>
      {isMCPBlock && (
        <MCPToolDialog
          open={mcpDialogOpen}
          onClose={() => setMcpDialogOpen(false)}
          onConfirm={handleMCPToolConfirm}
        />
      )}
    </>
  );
};

View File
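The server-label fallback in handleMCPToolConfirm relies on the standard WHATWG URL parser. A minimal sketch of its behavior, with illustrative URLs that are not from the codebase:

// Illustrative sketch of the label derivation used above.
function labelFor(serverUrl: string, serverName: string | null): string {
  if (serverName) return serverName;
  try {
    return new URL(serverUrl).hostname; // e.g. "mcp.example.com"
  } catch {
    return "MCP"; // unparseable URL: fall back to a generic label
  }
}

labelFor("https://mcp.example.com/mcp", null); // "mcp.example.com"
labelFor("not-a-url", null); // "MCP"
labelFor("https://mcp.example.com/mcp", "My Server"); // "My Server"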

@@ -29,6 +29,10 @@ import {
  TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { GraphMeta } from "@/lib/autogpt-server-api";
import {
MCPToolDialog,
type MCPToolDialogResult,
} from "@/app/(platform)/build/components/legacy-builder/MCPToolDialog";
import jaro from "jaro-winkler";
import { getV1GetSpecificGraph } from "@/app/api/__generated__/endpoints/graphs/graphs";
import { okData } from "@/app/api/helpers";
@@ -94,6 +98,7 @@ export function BlocksControl({
  const [searchQuery, setSearchQuery] = useState("");
  const deferredSearchQuery = useDeferredValue(searchQuery);
  const [selectedCategory, setSelectedCategory] = useState<string | null>(null);
const [mcpDialogOpen, setMcpDialogOpen] = useState(false);
  const blocks = useSearchableBlocks(_blocks);
@@ -186,11 +191,32 @@ export function BlocksControl({
    setSelectedCategory(null);
  }, []);
const handleMCPToolConfirm = useCallback(
(result: MCPToolDialogResult) => {
addBlock(SpecialBlockID.MCP_TOOL, "MCPToolBlock", {
server_url: result.serverUrl,
server_name: result.serverName,
selected_tool: result.selectedTool,
tool_input_schema: result.toolInputSchema,
available_tools: result.availableTools,
credentials: result.credentials ?? undefined,
});
setMcpDialogOpen(false);
},
[addBlock],
);
  // Handler to add a block, fetching graph data on-demand for agent blocks
  const handleAddBlock = useCallback(
    async (block: _Block & { notAvailable: string | null }) => {
      if (block.notAvailable) return;
// For MCP blocks, open the configuration dialog instead of placing directly
if (block.id === SpecialBlockID.MCP_TOOL) {
setMcpDialogOpen(true);
return;
}
      // For agent blocks, fetch the full graph to get schemas
      if (block.uiType === BlockUIType.AGENT && block.hardcodedValues) {
        const graphID = block.hardcodedValues.graph_id as string;
@@ -230,162 +256,179 @@ export function BlocksControl({
  }, [blocks]);

  return (
    <>
      <Popover
        open={pinBlocksPopover ? true : undefined}
        onOpenChange={(open) => open || resetFilters()}
      >
        <Tooltip delayDuration={500}>
          <TooltipTrigger asChild>
            <PopoverTrigger asChild>
              <Button
                variant="ghost"
                size="icon"
                data-id="blocks-control-popover-trigger"
                data-testid="blocks-control-blocks-button"
                name="Blocks"
                className="dark:hover:bg-slate-800"
              >
                <IconToyBrick />
              </Button>
            </PopoverTrigger>
          </TooltipTrigger>
          <TooltipContent side="right">Blocks</TooltipContent>
        </Tooltip>
        <PopoverContent
          side="right"
          sideOffset={22}
          align="start"
          className="absolute -top-3 w-[17rem] rounded-xl border-none p-0 shadow-none md:w-[30rem]"
          data-id="blocks-control-popover-content"
        >
          <Card className="p-3 pb-0 dark:bg-slate-900">
            <CardHeader className="flex flex-col gap-x-8 gap-y-1 p-3 px-2">
              <div className="items-center justify-between">
                <Label
                  htmlFor="search-blocks"
                  className="whitespace-nowrap text-base font-bold text-black dark:text-white 2xl:text-xl"
                  data-id="blocks-control-label"
                  data-testid="blocks-control-blocks-label"
                >
                  Blocks
                </Label>
              </div>
              <div className="relative flex items-center">
                <MagnifyingGlassIcon className="absolute m-2 h-5 w-5 text-gray-500 dark:text-gray-400" />
                <Input
                  id="search-blocks"
                  type="text"
                  placeholder="Search blocks"
                  value={searchQuery}
                  onChange={(e) => setSearchQuery(e.target.value)}
                  className="rounded-lg px-8 py-5 dark:bg-slate-800 dark:text-white"
                  data-id="blocks-control-search-input"
                  autoComplete="off"
                />
              </div>
              <div
                className="mt-2 flex flex-wrap gap-2"
                data-testid="blocks-categories-list"
              >
                {categories.map((category) => {
                  const color = getPrimaryCategoryColor([
                    { category: category || "All", description: "" },
                  ]);
                  const colorClass =
                    selectedCategory === category ? `${color}` : "";
                  return (
                    <div
                      key={category}
                      data-testid="blocks-category"
                      role="button"
                      className={`cursor-pointer rounded-xl border px-2 py-2 text-xs font-medium dark:border-slate-700 dark:text-white ${colorClass}`}
                      onClick={() =>
                        setSelectedCategory(
                          selectedCategory === category ? null : category,
                        )
                      }
                    >
                      {beautifyString((category || "All").toLowerCase())}
                    </div>
                  );
                })}
              </div>
            </CardHeader>
            <CardContent className="overflow-scroll border-t border-t-gray-200 p-0 dark:border-t-slate-700">
              <ScrollArea
                className="h-[60vh] w-full"
                data-id="blocks-control-scroll-area"
              >
                {filteredAvailableBlocks.map((block) => (
                  <Card
                    key={block.uiKey || block.id}
                    className={`m-2 my-4 flex h-20 shadow-none dark:border-slate-700 dark:bg-slate-800 dark:text-slate-100 dark:hover:bg-slate-700 ${
                      block.notAvailable
                        ? "cursor-not-allowed opacity-50"
                        : block.id === SpecialBlockID.MCP_TOOL
                          ? "cursor-pointer hover:shadow-lg"
                          : "cursor-move hover:shadow-lg"
                    }`}
                    data-id={`block-card-${block.id}`}
                    draggable={
                      !block.notAvailable &&
                      block.id !== SpecialBlockID.MCP_TOOL
                    }
                    onDragStart={(e) => {
                      if (
                        block.notAvailable ||
                        block.id === SpecialBlockID.MCP_TOOL
                      )
                        return;
                      e.dataTransfer.effectAllowed = "copy";
                      e.dataTransfer.setData(
                        "application/reactflow",
                        JSON.stringify({
                          blockId: block.id,
                          blockName: block.name,
                          hardcodedValues: block?.hardcodedValues || {},
                        }),
                      );
                    }}
                    onClick={() => handleAddBlock(block)}
                    title={block.notAvailable ?? undefined}
                  >
                    <div
                      className={`-ml-px h-full w-3 rounded-l-xl ${getPrimaryCategoryColor(block.categories)}`}
                    ></div>
                    <div className="mx-3 flex flex-1 items-center justify-between">
                      <div className="mr-2 min-w-0">
                        <span
                          className="block truncate pb-1 text-sm font-semibold dark:text-white"
                          data-id={`block-name-${block.id}`}
                          data-type={block.uiType}
                          data-testid={`block-name-${block.id}`}
                        >
                          <TextRenderer
                            value={beautifyString(block.name).replace(
                              / Block$/,
                              "",
                            )}
                            truncateLengthLimit={45}
                          />
                        </span>
                        <span
                          className="block break-all text-xs font-normal text-gray-500 dark:text-gray-400"
                          data-testid={`block-description-${block.id}`}
                        >
                          <TextRenderer
                            value={block.description}
                            truncateLengthLimit={165}
                          />
                        </span>
                      </div>
                      <div
                        className="flex flex-shrink-0 items-center gap-1"
                        data-id={`block-tooltip-${block.id}`}
                        data-testid={`block-add`}
                      >
                        <PlusIcon className="h-6 w-6 rounded-lg bg-gray-200 stroke-black stroke-[0.5px] p-1 dark:bg-gray-700 dark:stroke-white" />
                      </div>
                    </div>
                  </Card>
                ))}
              </ScrollArea>
            </CardContent>
          </Card>
        </PopoverContent>
      </Popover>
      <MCPToolDialog
        open={mcpDialogOpen}
        onClose={() => setMcpDialogOpen(false)}
        onConfirm={handleMCPToolConfirm}
      />
    </>
  );
}

View File

@@ -21,6 +21,7 @@ import {
  GraphInputSchema,
  GraphOutputSchema,
  NodeExecutionResult,
SpecialBlockID,
} from "@/lib/autogpt-server-api"; } from "@/lib/autogpt-server-api";
import { import {
beautifyString, beautifyString,
@@ -215,6 +216,26 @@ export const CustomNode = React.memo(
      }
    }
// MCP Tool block: display the selected tool's dynamic schema
const isMCPWithTool =
data.block_id === SpecialBlockID.MCP_TOOL &&
!!data.hardcodedValues?.tool_input_schema?.properties;
if (isMCPWithTool) {
// Show only the tool's input parameters. Credentials are NOT included
// because authentication is handled by the MCP dialog's OAuth flow
// and stored server-side.
const toolSchema = data.hardcodedValues.tool_input_schema;
data.inputSchema = {
type: "object",
properties: {
...(toolSchema.properties ?? {}),
},
required: [...(toolSchema.required ?? [])],
} as BlockIORootSchema;
}
    const setHardcodedValues = useCallback(
      (values: any) => {
        updateNodeData(id, { hardcodedValues: values });
@@ -375,7 +396,9 @@ export const CustomNode = React.memo(
    const displayTitle =
      customTitle ||
      (isMCPWithTool
        ? `${data.hardcodedValues.server_name || "MCP"}: ${beautifyString(data.hardcodedValues.selected_tool || "")}`
        : beautifyString(data.blockType?.replace(/Block$/, "") || data.title));
    useEffect(() => {
      isInitialSetup.current = false;
@@ -389,6 +412,15 @@ export const CustomNode = React.memo(
            data.inputSchema,
          ),
        });
} else if (isMCPWithTool) {
// MCP dialog already configured server_url, selected_tool, etc.
// Just ensure tool_arguments is initialized.
if (!data.hardcodedValues.tool_arguments) {
setHardcodedValues({
...data.hardcodedValues,
tool_arguments: {},
});
}
      } else {
        setHardcodedValues(
          fillObjectDefaultsFromSchema(data.hardcodedValues, data.inputSchema),
@@ -525,8 +557,11 @@ export const CustomNode = React.memo(
          );
        default:
          const getInputPropKey = (key: string) => {
            if (nodeType == BlockUIType.AGENT) return `inputs.${key}`;
            if (isMCPWithTool) return `tool_arguments.${key}`;
            return key;
          };
          return keys.map(([propKey, propSchema]) => {
            const isRequired = data.inputSchema.required?.includes(propKey);

View File
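To make the schema mapping concrete, a sketch with a hypothetical MCP tool; the query/limit fields are invented for illustration:

// Hypothetical tool schema as stored in hardcodedValues.tool_input_schema:
const toolSchema = {
  type: "object",
  properties: {
    query: { type: "string", description: "Search query" },
    limit: { type: "number", description: "Max results" },
  },
  required: ["query"],
};

// Resulting node input schema, per the isMCPWithTool branch above:
const inputSchema = {
  type: "object",
  properties: { ...(toolSchema.properties ?? {}) },
  required: [...(toolSchema.required ?? [])],
};
// getInputPropKey then keys each rendered field as `tool_arguments.<name>`,
// so values entered on the node land under hardcodedValues.tool_arguments.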

@@ -748,6 +748,10 @@ const FlowEditor: React.FC<{
          block_id: blockID,
          isOutputStatic: nodeSchema.staticOutput,
          uiType: nodeSchema.uiType,
// MCP blocks have optional credentials (public servers don't need auth)
...(blockID === SpecialBlockID.MCP_TOOL && {
metadata: { credentials_optional: true },
}),
        },
      };

View File

@@ -0,0 +1,534 @@
"use client";
import React, {
useState,
useCallback,
useRef,
useEffect,
useContext,
} from "react";
import {
Dialog,
DialogContent,
DialogDescription,
DialogFooter,
DialogHeader,
DialogTitle,
} from "@/components/__legacy__/ui/dialog";
import { Button } from "@/components/__legacy__/ui/button";
import { Input } from "@/components/__legacy__/ui/input";
import { Label } from "@/components/__legacy__/ui/label";
import { LoadingSpinner } from "@/components/__legacy__/ui/loading";
import { Badge } from "@/components/__legacy__/ui/badge";
import { ScrollArea } from "@/components/__legacy__/ui/scroll-area";
import { useBackendAPI } from "@/lib/autogpt-server-api/context";
import type { CredentialsMetaInput, MCPTool } from "@/lib/autogpt-server-api";
import { CaretDown } from "@phosphor-icons/react";
import { openOAuthPopup } from "@/lib/oauth-popup";
import { CredentialsProvidersContext } from "@/providers/agent-credentials/credentials-provider";
export type MCPToolDialogResult = {
serverUrl: string;
serverName: string | null;
selectedTool: string;
toolInputSchema: Record<string, any>;
availableTools: Record<string, any>;
/** Credentials meta from OAuth flow, null for public servers. */
credentials: CredentialsMetaInput | null;
};
interface MCPToolDialogProps {
open: boolean;
onClose: () => void;
onConfirm: (result: MCPToolDialogResult) => void;
}
type DialogStep = "url" | "tool";
export function MCPToolDialog({
open,
onClose,
onConfirm,
}: MCPToolDialogProps) {
const api = useBackendAPI();
const allProviders = useContext(CredentialsProvidersContext);
const [step, setStep] = useState<DialogStep>("url");
const [serverUrl, setServerUrl] = useState("");
const [tools, setTools] = useState<MCPTool[]>([]);
const [serverName, setServerName] = useState<string | null>(null);
const [loading, setLoading] = useState(false);
const [error, setError] = useState<string | null>(null);
const [authRequired, setAuthRequired] = useState(false);
const [oauthLoading, setOauthLoading] = useState(false);
const [showManualToken, setShowManualToken] = useState(false);
const [manualToken, setManualToken] = useState("");
const [selectedTool, setSelectedTool] = useState<MCPTool | null>(null);
const [credentials, setCredentials] = useState<CredentialsMetaInput | null>(
null,
);
const startOAuthRef = useRef(false);
const oauthAbortRef = useRef<((reason?: string) => void) | null>(null);
// Clean up on unmount
useEffect(() => {
return () => {
oauthAbortRef.current?.();
};
}, []);
const reset = useCallback(() => {
oauthAbortRef.current?.();
oauthAbortRef.current = null;
setStep("url");
setServerUrl("");
setManualToken("");
setTools([]);
setServerName(null);
setLoading(false);
setError(null);
setAuthRequired(false);
setOauthLoading(false);
setShowManualToken(false);
setSelectedTool(null);
setCredentials(null);
}, []);
const handleClose = useCallback(() => {
reset();
onClose();
}, [reset, onClose]);
const discoverTools = useCallback(
async (url: string, authToken?: string) => {
setLoading(true);
setError(null);
try {
const result = await api.mcpDiscoverTools(url, authToken);
setTools(result.tools);
setServerName(result.server_name);
setAuthRequired(false);
setShowManualToken(false);
setStep("tool");
} catch (e: any) {
if (e?.status === 401 || e?.status === 403) {
setAuthRequired(true);
setError(null);
// Automatically start OAuth sign-in instead of requiring a second click
setLoading(false);
startOAuthRef.current = true;
return;
} else {
const message =
e?.message || e?.detail || "Failed to connect to MCP server";
setError(
typeof message === "string" ? message : JSON.stringify(message),
);
}
} finally {
setLoading(false);
}
},
[api],
);
const handleDiscoverTools = useCallback(() => {
if (!serverUrl.trim()) return;
discoverTools(serverUrl.trim(), manualToken.trim() || undefined);
}, [serverUrl, manualToken, discoverTools]);
const handleOAuthSignIn = useCallback(async () => {
if (!serverUrl.trim()) return;
setError(null);
// Abort any previous OAuth flow
oauthAbortRef.current?.();
setOauthLoading(true);
try {
const { login_url, state_token } = await api.mcpOAuthLogin(
serverUrl.trim(),
);
const { promise, cleanup } = openOAuthPopup(login_url, {
stateToken: state_token,
useCrossOriginListeners: true,
});
oauthAbortRef.current = cleanup.abort;
const result = await promise;
// Exchange code for tokens via the credentials provider (updates cache)
setLoading(true);
setOauthLoading(false);
const mcpProvider = allProviders?.["mcp"];
const callbackResult = mcpProvider
? await mcpProvider.mcpOAuthCallback(result.code, state_token)
: await api.mcpOAuthCallback(result.code, state_token);
setCredentials({
id: callbackResult.id,
provider: callbackResult.provider,
type: callbackResult.type,
title: callbackResult.title,
});
setAuthRequired(false);
// Discover tools now that we're authenticated
const toolsResult = await api.mcpDiscoverTools(serverUrl.trim());
setTools(toolsResult.tools);
setServerName(toolsResult.server_name);
setStep("tool");
} catch (e: any) {
// If server doesn't support OAuth → show manual token entry
if (e?.status === 400) {
setShowManualToken(true);
setError(
"This server does not support OAuth sign-in. Please enter a token manually.",
);
} else if (e?.message === "OAuth flow timed out") {
setError("OAuth sign-in timed out. Please try again.");
} else {
const status = e?.status;
let message: string;
if (status === 401 || status === 403) {
message =
"Authentication succeeded but the server still rejected the request. " +
"The token audience may not match. Please try again.";
} else {
message = e?.message || e?.detail || "Failed to complete sign-in";
}
setError(
typeof message === "string" ? message : JSON.stringify(message),
);
}
} finally {
setOauthLoading(false);
setLoading(false);
oauthAbortRef.current = null;
}
}, [api, serverUrl, allProviders]);
// Auto-start OAuth sign-in when server returns 401/403
useEffect(() => {
if (authRequired && startOAuthRef.current) {
startOAuthRef.current = false;
handleOAuthSignIn();
}
}, [authRequired, handleOAuthSignIn]);
const handleConfirm = useCallback(() => {
if (!selectedTool) return;
const availableTools: Record<string, any> = {};
for (const t of tools) {
availableTools[t.name] = {
description: t.description,
input_schema: t.input_schema,
};
}
onConfirm({
serverUrl: serverUrl.trim(),
serverName,
selectedTool: selectedTool.name,
toolInputSchema: selectedTool.input_schema,
availableTools,
credentials,
});
reset();
}, [
selectedTool,
tools,
serverUrl,
serverName,
credentials,
onConfirm,
reset,
]);
return (
<Dialog open={open} onOpenChange={(isOpen) => !isOpen && handleClose()}>
<DialogContent className="max-w-lg">
<DialogHeader>
<DialogTitle>
{step === "url"
? "Connect to MCP Server"
: `Select a Tool${serverName ? ` from ${serverName}` : ""}`}
</DialogTitle>
<DialogDescription>
{step === "url"
? "Enter the URL of an MCP server to discover its available tools."
: `Found ${tools.length} tool${tools.length !== 1 ? "s" : ""}. Select one to add to your agent.`}
</DialogDescription>
</DialogHeader>
{step === "url" && (
<div className="flex flex-col gap-4 py-2">
<div className="flex flex-col gap-2">
<Label htmlFor="mcp-server-url">Server URL</Label>
<Input
id="mcp-server-url"
type="url"
placeholder="https://mcp.example.com/mcp"
value={serverUrl}
onChange={(e) => setServerUrl(e.target.value)}
onKeyDown={(e) => e.key === "Enter" && handleDiscoverTools()}
autoFocus
/>
</div>
{/* Auth required: show manual token option */}
{authRequired && !showManualToken && (
<button
onClick={() => setShowManualToken(true)}
className="text-xs text-gray-500 underline hover:text-gray-700 dark:text-gray-400 dark:hover:text-gray-300"
>
or enter a token manually
</button>
)}
{/* Manual token entry — only visible when expanded */}
{showManualToken && (
<div className="flex flex-col gap-2">
<Label htmlFor="mcp-auth-token" className="text-sm">
Bearer Token
</Label>
<Input
id="mcp-auth-token"
type="password"
placeholder="Paste your auth token here"
value={manualToken}
onChange={(e) => setManualToken(e.target.value)}
onKeyDown={(e) => e.key === "Enter" && handleDiscoverTools()}
autoFocus
/>
</div>
)}
{error && <p className="text-sm text-red-500">{error}</p>}
</div>
)}
{step === "tool" && (
<ScrollArea className="max-h-[50vh] py-2">
<div className="flex flex-col gap-2 pr-3">
{tools.map((tool) => (
<MCPToolCard
key={tool.name}
tool={tool}
selected={selectedTool?.name === tool.name}
onSelect={() => setSelectedTool(tool)}
/>
))}
</div>
</ScrollArea>
)}
<DialogFooter>
{step === "tool" && (
<Button
variant="outline"
onClick={() => {
setStep("url");
setSelectedTool(null);
}}
>
Back
</Button>
)}
<Button variant="outline" onClick={handleClose}>
Cancel
</Button>
{step === "url" && (
<Button
onClick={
authRequired && !showManualToken
? handleOAuthSignIn
: handleDiscoverTools
}
disabled={!serverUrl.trim() || loading || oauthLoading}
>
{loading || oauthLoading ? (
<span className="flex items-center gap-2">
<LoadingSpinner className="size-4" />
{oauthLoading ? "Waiting for sign-in..." : "Connecting..."}
</span>
) : authRequired && !showManualToken ? (
"Sign in & Connect"
) : (
"Discover Tools"
)}
</Button>
)}
{step === "tool" && (
<Button onClick={handleConfirm} disabled={!selectedTool}>
Add Block
</Button>
)}
</DialogFooter>
</DialogContent>
</Dialog>
);
}
// --------------- Tool Card Component --------------- //
/** Truncate a description to a reasonable length for the collapsed view. */
function truncateDescription(text: string, maxLen = 120): string {
if (text.length <= maxLen) return text;
return text.slice(0, maxLen).trimEnd() + "…";
}
/** Pretty-print a JSON Schema type for a parameter. */
function schemaTypeLabel(schema: Record<string, any>): string {
if (schema.type) return schema.type;
if (schema.anyOf)
return schema.anyOf.map((s: any) => s.type ?? "any").join(" | ");
if (schema.oneOf)
return schema.oneOf.map((s: any) => s.type ?? "any").join(" | ");
return "any";
}
function MCPToolCard({
tool,
selected,
onSelect,
}: {
tool: MCPTool;
selected: boolean;
onSelect: () => void;
}) {
const [expanded, setExpanded] = useState(false);
const properties = tool.input_schema?.properties ?? {};
const required = new Set<string>(tool.input_schema?.required ?? []);
const paramNames = Object.keys(properties);
// Strip XML-like tags from description for cleaner display.
// Loop to handle nested tags like <scr<script>ipt> (CodeQL fix).
let cleanDescription = tool.description ?? "";
let prev = "";
while (prev !== cleanDescription) {
prev = cleanDescription;
cleanDescription = cleanDescription.replace(/<[^>]*>/g, "");
}
cleanDescription = cleanDescription.trim();
return (
<button
onClick={onSelect}
className={`group flex flex-col rounded-lg border text-left transition-colors ${
selected
? "border-blue-500 bg-blue-50 dark:border-blue-400 dark:bg-blue-950"
: "border-gray-200 hover:border-gray-300 hover:bg-gray-50 dark:border-slate-700 dark:hover:border-slate-600 dark:hover:bg-slate-800"
}`}
>
{/* Header */}
<div className="flex items-center gap-2 px-3 pb-1 pt-3">
<span className="flex-1 text-sm font-semibold dark:text-white">
{tool.name}
</span>
{paramNames.length > 0 && (
<Badge variant="secondary" className="text-[10px]">
{paramNames.length} param{paramNames.length !== 1 ? "s" : ""}
</Badge>
)}
</div>
{/* Description (collapsed: truncated) */}
{cleanDescription && (
<p className="px-3 pb-1 text-xs leading-relaxed text-gray-500 dark:text-gray-400">
{expanded ? cleanDescription : truncateDescription(cleanDescription)}
</p>
)}
{/* Parameter badges (collapsed view) */}
{!expanded && paramNames.length > 0 && (
<div className="flex flex-wrap gap-1 px-3 pb-2">
{paramNames.slice(0, 6).map((name) => (
<Badge
key={name}
variant="outline"
className="text-[10px] font-normal"
>
{name}
{required.has(name) && (
<span className="ml-0.5 text-red-400">*</span>
)}
</Badge>
))}
{paramNames.length > 6 && (
<Badge variant="outline" className="text-[10px] font-normal">
+{paramNames.length - 6} more
</Badge>
)}
</div>
)}
{/* Expanded: full parameter details */}
{expanded && paramNames.length > 0 && (
<div className="mx-3 mb-2 rounded border border-gray-100 bg-gray-50/50 dark:border-slate-700 dark:bg-slate-800/50">
<table className="w-full text-xs">
<thead>
<tr className="border-b border-gray-100 dark:border-slate-700">
<th className="px-2 py-1 text-left font-medium text-gray-500 dark:text-gray-400">
Parameter
</th>
<th className="px-2 py-1 text-left font-medium text-gray-500 dark:text-gray-400">
Type
</th>
<th className="px-2 py-1 text-left font-medium text-gray-500 dark:text-gray-400">
Description
</th>
</tr>
</thead>
<tbody>
{paramNames.map((name) => {
const prop = properties[name] ?? {};
return (
<tr
key={name}
className="border-b border-gray-50 last:border-0 dark:border-slate-700/50"
>
<td className="px-2 py-1 font-mono text-[11px] text-gray-700 dark:text-gray-300">
{name}
{required.has(name) && (
<span className="ml-0.5 text-red-400">*</span>
)}
</td>
<td className="px-2 py-1 text-gray-500 dark:text-gray-400">
{schemaTypeLabel(prop)}
</td>
<td className="max-w-[200px] truncate px-2 py-1 text-gray-500 dark:text-gray-400">
{prop.description ?? "—"}
</td>
</tr>
);
})}
</tbody>
</table>
</div>
)}
{/* Toggle details */}
{(paramNames.length > 0 || cleanDescription.length > 120) && (
<button
type="button"
onClick={(e) => {
e.stopPropagation();
setExpanded((prev) => !prev);
}}
className="flex w-full items-center justify-center gap-1 border-t border-gray-100 py-1.5 text-[10px] text-gray-400 hover:text-gray-600 dark:border-slate-700 dark:text-gray-500 dark:hover:text-gray-300"
>
{expanded ? "Hide details" : "Show details"}
<CaretDown
className={`h-3 w-3 transition-transform ${expanded ? "rotate-180" : ""}`}
/>
</button>
)}
</button>
);
}

View File
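A minimal sketch of how a host component might consume MCPToolDialog, assuming only the props and MCPToolDialogResult shape defined above; the button and logging are illustrative:

import { useState } from "react";
import {
  MCPToolDialog,
  type MCPToolDialogResult,
} from "@/app/(platform)/build/components/legacy-builder/MCPToolDialog";

function AddMCPBlockButton() {
  const [open, setOpen] = useState(false);

  const handleConfirm = (result: MCPToolDialogResult) => {
    // result carries everything needed to configure an MCP block:
    // serverUrl, serverName (nullable), selectedTool, toolInputSchema,
    // availableTools, and credentials (null for public servers).
    console.log("Adding tool", result.selectedTool, "from", result.serverUrl);
    setOpen(false);
  };

  return (
    <>
      <button onClick={() => setOpen(true)}>Add MCP tool</button>
      <MCPToolDialog
        open={open}
        onClose={() => setOpen(false)}
        onConfirm={handleConfirm}
      />
    </>
  );
}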

@@ -4237,6 +4237,128 @@
        }
      }
    },
"/api/mcp/discover-tools": {
"post": {
"tags": ["v2", "mcp", "mcp"],
"summary": "Discover available tools on an MCP server",
"description": "Connect to an MCP server and return its available tools.\n\nIf the user has a stored MCP credential for this server URL, it will be\nused automatically — no need to pass an explicit auth token.",
"operationId": "postV2Discover available tools on an mcp server",
"requestBody": {
"content": {
"application/json": {
"schema": { "$ref": "#/components/schemas/DiscoverToolsRequest" }
}
},
"required": true
},
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/DiscoverToolsResponse"
}
}
}
},
"401": {
"$ref": "#/components/responses/HTTP401NotAuthenticatedError"
},
"422": {
"description": "Validation Error",
"content": {
"application/json": {
"schema": { "$ref": "#/components/schemas/HTTPValidationError" }
}
}
}
},
"security": [{ "HTTPBearerJWT": [] }]
}
},
"/api/mcp/oauth/callback": {
"post": {
"tags": ["v2", "mcp", "mcp"],
"summary": "Exchange OAuth code for MCP tokens",
"description": "Exchange the authorization code for tokens and store the credential.\n\nThe frontend calls this after receiving the OAuth code from the popup.\nOn success, subsequent ``/discover-tools`` calls for the same server URL\nwill automatically use the stored credential.",
"operationId": "postV2Exchange oauth code for mcp tokens",
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/MCPOAuthCallbackRequest"
}
}
},
"required": true
},
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CredentialsMetaResponse"
}
}
}
},
"401": {
"$ref": "#/components/responses/HTTP401NotAuthenticatedError"
},
"422": {
"description": "Validation Error",
"content": {
"application/json": {
"schema": { "$ref": "#/components/schemas/HTTPValidationError" }
}
}
}
},
"security": [{ "HTTPBearerJWT": [] }]
}
},
"/api/mcp/oauth/login": {
"post": {
"tags": ["v2", "mcp", "mcp"],
"summary": "Initiate OAuth login for an MCP server",
"description": "Discover OAuth metadata from the MCP server and return a login URL.\n\n1. Discovers the protected-resource metadata (RFC 9728)\n2. Fetches the authorization server metadata (RFC 8414)\n3. Performs Dynamic Client Registration (RFC 7591) if available\n4. Returns the authorization URL for the frontend to open in a popup",
"operationId": "postV2Initiate oauth login for an mcp server",
"requestBody": {
"content": {
"application/json": {
"schema": { "$ref": "#/components/schemas/MCPOAuthLoginRequest" }
}
},
"required": true
},
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/MCPOAuthLoginResponse"
}
}
}
},
"401": {
"$ref": "#/components/responses/HTTP401NotAuthenticatedError"
},
"422": {
"description": "Validation Error",
"content": {
"application/json": {
"schema": { "$ref": "#/components/schemas/HTTPValidationError" }
}
}
}
},
"security": [{ "HTTPBearerJWT": [] }]
}
},
"/api/oauth/app/{client_id}": { "/api/oauth/app/{client_id}": {
"get": { "get": {
"tags": ["oauth"], "tags": ["oauth"],
@@ -7175,7 +7297,7 @@
"host": { "host": {
"anyOf": [{ "type": "string" }, { "type": "null" }], "anyOf": [{ "type": "string" }, { "type": "null" }],
"title": "Host", "title": "Host",
"description": "Host pattern for host-scoped credentials" "description": "Host pattern for host-scoped or MCP server URL for MCP credentials"
} }
}, },
"type": "object", "type": "object",
@@ -7195,6 +7317,45 @@
"required": ["version_counts"], "required": ["version_counts"],
"title": "DeleteGraphResponse" "title": "DeleteGraphResponse"
}, },
"DiscoverToolsRequest": {
"properties": {
"server_url": {
"type": "string",
"title": "Server Url",
"description": "URL of the MCP server"
},
"auth_token": {
"anyOf": [{ "type": "string" }, { "type": "null" }],
"title": "Auth Token",
"description": "Optional Bearer token for authenticated MCP servers"
}
},
"type": "object",
"required": ["server_url"],
"title": "DiscoverToolsRequest",
"description": "Request to discover tools on an MCP server."
},
"DiscoverToolsResponse": {
"properties": {
"tools": {
"items": { "$ref": "#/components/schemas/MCPToolResponse" },
"type": "array",
"title": "Tools"
},
"server_name": {
"anyOf": [{ "type": "string" }, { "type": "null" }],
"title": "Server Name"
},
"protocol_version": {
"anyOf": [{ "type": "string" }, { "type": "null" }],
"title": "Protocol Version"
}
},
"type": "object",
"required": ["tools"],
"title": "DiscoverToolsResponse",
"description": "Response containing the list of tools available on an MCP server."
},
"Document": { "Document": {
"properties": { "properties": {
"url": { "type": "string", "title": "Url" }, "url": { "type": "string", "title": "Url" },
@@ -8562,6 +8723,62 @@
"required": ["login_url", "state_token"], "required": ["login_url", "state_token"],
"title": "LoginResponse" "title": "LoginResponse"
}, },
"MCPOAuthCallbackRequest": {
"properties": {
"code": {
"type": "string",
"title": "Code",
"description": "Authorization code from OAuth callback"
},
"state_token": {
"type": "string",
"title": "State Token",
"description": "State token for CSRF verification"
}
},
"type": "object",
"required": ["code", "state_token"],
"title": "MCPOAuthCallbackRequest",
"description": "Request to exchange an OAuth code for tokens."
},
"MCPOAuthLoginRequest": {
"properties": {
"server_url": {
"type": "string",
"title": "Server Url",
"description": "URL of the MCP server that requires OAuth"
}
},
"type": "object",
"required": ["server_url"],
"title": "MCPOAuthLoginRequest",
"description": "Request to start an OAuth flow for an MCP server."
},
"MCPOAuthLoginResponse": {
"properties": {
"login_url": { "type": "string", "title": "Login Url" },
"state_token": { "type": "string", "title": "State Token" }
},
"type": "object",
"required": ["login_url", "state_token"],
"title": "MCPOAuthLoginResponse",
"description": "Response with the OAuth login URL for the user to authenticate."
},
"MCPToolResponse": {
"properties": {
"name": { "type": "string", "title": "Name" },
"description": { "type": "string", "title": "Description" },
"input_schema": {
"additionalProperties": true,
"type": "object",
"title": "Input Schema"
}
},
"type": "object",
"required": ["name", "description", "input_schema"],
"title": "MCPToolResponse",
"description": "A single MCP tool returned by discovery."
},
"MarketplaceListing": { "MarketplaceListing": {
"properties": { "properties": {
"id": { "type": "string", "title": "Id" }, "id": { "type": "string", "title": "Id" },

View File
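For reference, a sketch of calling the new discover-tools endpoint directly from TypeScript; the jwt variable is a placeholder for however the client obtains its bearer token, and the response type mirrors DiscoverToolsResponse above:

async function discoverTools(serverUrl: string, jwt: string) {
  const res = await fetch("/api/mcp/discover-tools", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${jwt}`,
    },
    // auth_token is optional; a stored MCP credential for this server URL
    // is used automatically on the backend.
    body: JSON.stringify({ server_url: serverUrl }),
  });
  if (!res.ok) throw new Error(`Discovery failed: ${res.status}`);
  return res.json() as Promise<{
    tools: { name: string; description: string; input_schema: object }[];
    server_name?: string | null;
    protocol_version?: string | null;
  }>;
}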

@@ -104,7 +104,31 @@ export function FileInput(props: Props) {
    return false;
  }

  const getFileLabelFromValue = (val: unknown): string => {
// Handle object format from external API: { name, type, size, data }
if (val && typeof val === "object") {
const obj = val as Record<string, unknown>;
if (typeof obj.name === "string") {
return getFileLabel(
obj.name,
typeof obj.type === "string" ? obj.type : "",
);
}
if (typeof obj.type === "string") {
const mimeParts = obj.type.split("/");
if (mimeParts.length > 1) {
return `${mimeParts[1].toUpperCase()} file`;
}
return `${obj.type} file`;
}
return "File";
}
// Handle string values (data URIs or file paths)
if (typeof val !== "string") {
return "File";
}
if (val.startsWith("data:")) { if (val.startsWith("data:")) {
const matches = val.match(/^data:([^;]+);/); const matches = val.match(/^data:([^;]+);/);
if (matches?.[1]) { if (matches?.[1]) {

View File

@@ -86,7 +86,7 @@ export function CredentialsInput({
    handleCredentialSelect,
  } = hookData;

  const displayName = (schema as any).display_name || toDisplayName(provider);
  const selectedCredentialIsSystem =
    selectedCredential && isSystemCredential(selectedCredential);

View File

@@ -38,13 +38,8 @@ export function CredentialsGroupedView({
  const allProviders = useContext(CredentialsProvidersContext);

  const { userCredentialFields, systemCredentialFields } = useMemo(
    () => splitCredentialFieldsBySystem(credentialFields, allProviders),
    [credentialFields, allProviders],
  );

  const hasSystemCredentials = systemCredentialFields.length > 0;
@@ -86,11 +81,13 @@ export function CredentialsGroupedView({
    const providerNames = schema.credentials_provider || [];
    const credentialTypes = schema.credentials_types || [];
    const requiredScopes = schema.credentials_scopes;
    const discriminatorValues = schema.discriminator_values;

    const savedCredential = findSavedCredentialByProviderAndType(
      providerNames,
      credentialTypes,
      requiredScopes,
      allProviders,
      discriminatorValues,
    );

    if (savedCredential) {

View File

@@ -23,10 +23,35 @@ function hasRequiredScopes(
  return true;
}
/** Check if a credential matches the discriminator values (e.g. MCP server URL). */
function matchesDiscriminatorValues(
credential: { host?: string | null; provider: string; type: string },
discriminatorValues?: string[],
) {
// MCP OAuth2 credentials must match by server URL
if (credential.type === "oauth2" && credential.provider === "mcp") {
if (!discriminatorValues || discriminatorValues.length === 0) return false;
return (
credential.host != null && discriminatorValues.includes(credential.host)
);
}
// Host-scoped credentials match by host
if (credential.type === "host_scoped" && credential.host) {
if (!discriminatorValues || discriminatorValues.length === 0) return true;
return discriminatorValues.some((v) => {
try {
return new URL(v).hostname === credential.host;
} catch {
return false;
}
});
}
return true;
}
export function splitCredentialFieldsBySystem(
  credentialFields: CredentialField[],
  allProviders: CredentialsProvidersContextType | null,
) {
  if (!allProviders || credentialFields.length === 0) {
    return {
@@ -52,17 +77,9 @@
    }
  }

  return {
    userCredentialFields: userFields,
    systemCredentialFields: systemFields,
  };
}
@@ -160,6 +177,7 @@ export function findSavedCredentialByProviderAndType(
  credentialTypes: string[],
  requiredScopes: string[] | undefined,
  allProviders: CredentialsProvidersContextType | null,
  discriminatorValues?: string[],
): SavedCredential | undefined {
  for (const providerName of providerNames) {
    const providerData = allProviders?.[providerName];
@@ -176,9 +194,14 @@ export function findSavedCredentialByProviderAndType(
        credentialTypes.length === 0 ||
        credentialTypes.includes(credential.type);
      const scopesMatch = hasRequiredScopes(credential, requiredScopes);
      const hostMatches = matchesDiscriminatorValues(
        credential,
        discriminatorValues,
      );

      if (!typeMatches) continue;
      if (!scopesMatch) continue;
      if (!hostMatches) continue;

      matchingCredentials.push(credential as SavedCredential);
    }
@@ -190,9 +213,14 @@ export function findSavedCredentialByProviderAndType(
        credentialTypes.length === 0 ||
        credentialTypes.includes(credential.type);
      const scopesMatch = hasRequiredScopes(credential, requiredScopes);
      const hostMatches = matchesDiscriminatorValues(
        credential,
        discriminatorValues,
      );

      if (!typeMatches) continue;
      if (!scopesMatch) continue;
      if (!hostMatches) continue;

      matchingCredentials.push(credential as SavedCredential);
    }
@@ -214,6 +242,7 @@ export function findSavedUserCredentialByProviderAndType(
  credentialTypes: string[],
  requiredScopes: string[] | undefined,
  allProviders: CredentialsProvidersContextType | null,
  discriminatorValues?: string[],
): SavedCredential | undefined {
  for (const providerName of providerNames) {
    const providerData = allProviders?.[providerName];
@@ -230,9 +259,14 @@ export function findSavedUserCredentialByProviderAndType(
        credentialTypes.length === 0 ||
        credentialTypes.includes(credential.type);
      const scopesMatch = hasRequiredScopes(credential, requiredScopes);
      const hostMatches = matchesDiscriminatorValues(
        credential,
        discriminatorValues,
      );

      if (!typeMatches) continue;
      if (!scopesMatch) continue;
      if (!hostMatches) continue;

      matchingCredentials.push(credential as SavedCredential);
    }

View File
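To illustrate the matching rules above (inputs are invented; the function is module-private):

// MCP OAuth2 credentials store the full server URL in `host` and match
// only by exact equality against a discriminator value:
matchesDiscriminatorValues(
  { type: "oauth2", provider: "mcp", host: "https://mcp.example.com/mcp" },
  ["https://mcp.example.com/mcp"],
); // true
matchesDiscriminatorValues(
  { type: "oauth2", provider: "mcp", host: "https://mcp.example.com/mcp" },
  [],
); // false: MCP credentials never match without a server URL to compare

// Host-scoped credentials match by hostname parsed from each value
// (the provider name here is hypothetical):
matchesDiscriminatorValues(
  { type: "host_scoped", provider: "http", host: "api.example.com" },
  ["https://api.example.com/v1"],
); // true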

@@ -1,4 +1,5 @@
import { CredentialsMetaInput } from "@/app/api/__generated__/models/credentialsMetaInput";
import { useEffect, useRef } from "react";
import { getCredentialDisplayName } from "../../helpers";
import { CredentialRow } from "../CredentialRow/CredentialRow";
@@ -45,13 +46,18 @@ export function CredentialsSelect({
    ? credentials.find((c) => c.id === selectedCredentials.id)
    : null;

  // When credentials exist and nothing is explicitly selected,
  // default to the first credential instead of "None"
  const effectiveCredential =
    selectedCredential ?? (credentials.length > 0 ? credentials[0] : null);

  const displayCredential = effectiveCredential
    ? {
        id: effectiveCredential.id,
        title: effectiveCredential.title,
        username: effectiveCredential.username,
        type: effectiveCredential.type,
        provider: effectiveCredential.provider,
      }
    : allowNone
      ? {
@@ -67,16 +73,37 @@ export function CredentialsSelect({
          provider: provider,
        };
// Default to the first credential when any are available
const defaultValue =
selectedCredentials?.id ??
(credentials.length > 0 ? credentials[0].id : "__none__");
// Notify parent when defaulting to a credential so the value is captured on submit
const hasNotifiedDefault = useRef(false);
useEffect(() => {
if (hasNotifiedDefault.current) return;
if (selectedCredentials?.id) return;
if (credentials.length > 0) {
hasNotifiedDefault.current = true;
onSelectCredential(credentials[0].id);
}
}, [credentials, selectedCredentials?.id, onSelectCredential]);
  return (
    <div className="mb-4 w-full">
      <div className="relative">
        <select
          value={defaultValue}
          onChange={handleValueChange}
          disabled={readOnly}
          className="absolute inset-0 z-10 cursor-pointer opacity-0"
          aria-label={`Select ${displayName} credential`}
        >
          {credentials.map((credential) => (
            <option key={credential.id} value={credential.id}>
              {getCredentialDisplayName(credential, displayName)}
            </option>
          ))}
          {allowNone ? (
            <option value="__none__">None (skip this credential)</option>
          ) : (
@@ -84,11 +111,6 @@
              Select a credential
            </option>
          )}
        </select>
        <div className="rounded-medium border border-zinc-200 bg-white">
          <CredentialRow

View File

@@ -5,14 +5,13 @@ import {
  BlockIOCredentialsSubSchema,
  CredentialsMetaInput,
} from "@/lib/autogpt-server-api/types";
import { openOAuthPopup } from "@/lib/oauth-popup";
import { useQueryClient } from "@tanstack/react-query";
import { useEffect, useRef, useState } from "react";
import {
  filterSystemCredentials,
  getActionButtonText,
  getSystemCredentials,
} from "./helpers";

export type CredentialsInputState = ReturnType<typeof useCredentialsInput>;
@@ -57,6 +56,14 @@ export function useCredentialsInput({
  const queryClient = useQueryClient();
  const credentials = useCredentials(schema, siblingInputs);
  const hasAttemptedAutoSelect = useRef(false);
const oauthAbortRef = useRef<((reason?: string) => void) | null>(null);
// Clean up on unmount
useEffect(() => {
return () => {
oauthAbortRef.current?.();
};
}, []);
  const deleteCredentialsMutation = useDeleteV1DeleteCredentials({
    mutation: {
@@ -81,11 +88,14 @@ export function useCredentialsInput({
    }
  }, [credentials, onLoaded]);

  // Unselect credential if not available in the loaded credential list.
  // Skip when no credentials have been loaded yet (empty list could mean
  // the provider data hasn't finished loading, not that the credential is invalid).
  useEffect(() => {
    if (readOnly) return;
    if (!credentials || !("savedCredentials" in credentials)) return;
    const availableCreds = credentials.savedCredentials;
    if (availableCreds.length === 0) return;
    if (
      selectedCredential &&
      !availableCreds.some((c) => c.id === selectedCredential.id)
@@ -110,7 +120,9 @@ export function useCredentialsInput({
    if (hasAttemptedAutoSelect.current) return;
    hasAttemptedAutoSelect.current = true;

    // Auto-select if exactly one credential matches.
    // For optional fields with multiple options, let the user choose.
    if (isOptional && savedCreds.length > 1) return;

    const cred = savedCreds[0];
    onSelectCredential({
@@ -148,7 +160,9 @@ export function useCredentialsInput({
    supportsHostScoped,
    savedCredentials,
    oAuthCallback,
    mcpOAuthCallback,
    isSystemProvider,
    discriminatorValue,
  } = credentials;

  // Split credentials into user and system
@@ -157,72 +171,64 @@ export function useCredentialsInput({
  async function handleOAuthLogin() {
    setOAuthError(null);

    // Abort any previous OAuth flow
    oauthAbortRef.current?.();

    // MCP uses dynamic OAuth discovery per server URL
    const isMCP = provider === "mcp" && !!discriminatorValue;

    try {
      let login_url: string;
      let state_token: string;
      if (isMCP) {
        ({ login_url, state_token } = await api.mcpOAuthLogin(
          discriminatorValue!,
        ));
      } else {
        ({ login_url, state_token } = await api.oAuthLogin(
          provider,
          schema.credentials_scopes,
        ));
      }
      setOAuth2FlowInProgress(true);

      const { promise, cleanup } = openOAuthPopup(login_url, {
        stateToken: state_token,
        useCrossOriginListeners: isMCP,
        // Standard OAuth uses "oauth_popup_result", MCP uses "mcp_oauth_result"
        acceptMessageTypes: isMCP
          ? ["mcp_oauth_result"]
          : ["oauth_popup_result"],
      });
      oauthAbortRef.current = cleanup.abort;

      // Expose abort signal for the waiting modal's cancel button
      const controller = new AbortController();
      cleanup.signal.addEventListener("abort", () =>
        controller.abort("completed"),
      );
      setOAuthPopupController(controller);

      const result = await promise;

      // Exchange code for tokens via the provider (updates credential cache)
      const credentialResult = isMCP
        ? await mcpOAuthCallback(result.code, state_token)
        : await oAuthCallback(result.code, result.state);

      // Check if the credential's scopes match the required scopes (skip for MCP)
      if (!isMCP) {
        const requiredScopes = schema.credentials_scopes;
        if (requiredScopes && requiredScopes.length > 0) {
          const grantedScopes = new Set(credentialResult.scopes || []);
          const hasAllRequiredScopes = new Set(requiredScopes).isSubsetOf(
            grantedScopes,
          );
          if (!hasAllRequiredScopes) {
            setOAuthError(
              "Connection failed: the granted permissions don't match what's required. " +
                "Please contact the application administrator.",
            );
            return;
          }
        }
      }

      onSelectCredential({
        id: credentialResult.id,
        type: "oauth2",
        title: credentialResult.title,
        provider,
      });
    } catch (error) {
      if (error instanceof Error && error.message === "OAuth flow timed out") {
        setOAuthError("OAuth flow timed out");
      } else {
        setOAuthError(
          `OAuth error: ${
            error instanceof Error ? error.message : String(error)
          }`,
        );
      }
    } finally {
      setOAuth2FlowInProgress(false);
      oauthAbortRef.current = null;
    }
  }

  function handleActionButtonClick() {

View File

@@ -4,7 +4,7 @@ import { Button } from "@/components/atoms/Button/Button";
 import { Input } from "@/components/atoms/Input/Input";
 import { Text } from "@/components/atoms/Text/Text";
 import { Bell, MagnifyingGlass, X } from "@phosphor-icons/react";
-import { FixedSizeList as List } from "react-window";
+import { List, type RowComponentProps } from "react-window";
 import { AgentExecutionWithInfo } from "../../helpers";
 import { ActivityItem } from "../ActivityItem";
 import styles from "./styles.module.css";
@@ -19,14 +19,16 @@ interface Props {
   recentFailures: AgentExecutionWithInfo[];
 }

-interface VirtualizedItemProps {
-  index: number;
-  style: React.CSSProperties;
-  data: AgentExecutionWithInfo[];
+interface ActivityRowProps {
+  executions: AgentExecutionWithInfo[];
 }

-function VirtualizedActivityItem({ index, style, data }: VirtualizedItemProps) {
-  const execution = data[index];
+function VirtualizedActivityItem({
+  index,
+  style,
+  executions,
+}: RowComponentProps<ActivityRowProps>) {
+  const execution = executions[index];
   return (
     <div style={style}>
       <ActivityItem execution={execution} />
@@ -129,14 +131,13 @@ export function ActivityDropdown({
         >
           {filteredExecutions.length > 0 ? (
             <List
-              height={listHeight}
-              width={320} // Match dropdown width (w-80 = 20rem = 320px)
-              itemCount={filteredExecutions.length}
-              itemSize={itemHeight}
-              itemData={filteredExecutions}
-            >
-              {VirtualizedActivityItem}
-            </List>
+              defaultHeight={listHeight}
+              rowCount={filteredExecutions.length}
+              rowHeight={itemHeight}
+              rowProps={{ executions: filteredExecutions }}
+              rowComponent={VirtualizedActivityItem}
+              style={{ width: 320, height: listHeight }}
+            />
           ) : (
             <div className="flex h-full flex-col items-center justify-center gap-5 pb-8 pt-6">
               <div className="mx-auto inline-flex flex-col items-center justify-center rounded-full bg-bgLightGrey p-6">

View File

@@ -100,6 +100,11 @@ export default function useCredentials(
       return false;
     }

+    // Filter MCP OAuth2 credentials by server URL matching
+    if (c.type === "oauth2" && c.provider === "mcp") {
+      return discriminatorValue != null && c.host === discriminatorValue;
+    }
+
     // Filter by OAuth credentials that have sufficient scopes for this block
     if (c.type === "oauth2") {
       const requiredScopes = credsInputSchema.credentials_scopes;
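
In effect, MCP OAuth credentials are matched on the exact server URL (passed through as the discriminator value) rather than on scopes. A toy sketch of the predicate, with an invented credential object:

```typescript
// Illustrative only: `cred` and the URL are invented for this sketch.
const cred = {
  type: "oauth2",
  provider: "mcp",
  host: "https://mcp.example.com/mcp",
};
const discriminatorValue = "https://mcp.example.com/mcp"; // the block's server_url

const isMatch =
  cred.type === "oauth2" &&
  cred.provider === "mcp" &&
  discriminatorValue != null &&
  cred.host === discriminatorValue; // true → this credential is offered for the block
```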

View File

@@ -33,6 +33,7 @@ import type {
   GraphMeta,
   GraphUpdateable,
   HostScopedCredentials,
+  MCPDiscoverToolsResponse,
   LibraryAgent,
   LibraryAgentID,
   LibraryAgentPreset,
@@ -792,6 +793,38 @@ export default class BackendAPI {
     return this._request("POST", "/otto/ask", query);
   }

+  ////////////////////////////////////////
+  ///////////// MCP FUNCTIONS ////////////
+  ////////////////////////////////////////
+
+  async mcpDiscoverTools(
+    serverUrl: string,
+    authToken?: string,
+  ): Promise<MCPDiscoverToolsResponse> {
+    return this._request("POST", "/mcp/discover-tools", {
+      server_url: serverUrl,
+      auth_token: authToken || null,
+    });
+  }
+
+  async mcpOAuthLogin(
+    serverUrl: string,
+  ): Promise<{ login_url: string; state_token: string }> {
+    return this._request("POST", "/mcp/oauth/login", {
+      server_url: serverUrl,
+    });
+  }
+
+  async mcpOAuthCallback(
+    code: string,
+    stateToken: string,
+  ): Promise<CredentialsMetaResponse> {
+    return this._request("POST", "/mcp/oauth/callback", {
+      code,
+      state_token: stateToken,
+    });
+  }
+
   ////////////////////////////////////////
   ////////// INTERNAL FUNCTIONS //////////
   ////////////////////////////////////////
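
Together these three endpoints cover the client side of the MCP flow: discover tools, start an OAuth login, and exchange the callback code for a stored credential. A hedged sketch of how they might chain (the server URL is a placeholder; the popup step is elided, see openOAuthPopup further down):

```typescript
// Sketch under assumptions: the URL is a placeholder, and BackendAPI's
// import path is not shown in this diff.
async function connectToMCPServer(api: BackendAPI) {
  const serverUrl = "https://mcp.example.com/mcp";

  // 1. Discover tools; auth_token is optional for unauthenticated servers.
  const { tools, server_name } = await api.mcpDiscoverTools(serverUrl);
  console.debug(server_name, tools.map((t) => t.name));

  // 2. If the server requires OAuth, fetch a login URL and a state token.
  const { login_url, state_token } = await api.mcpOAuthLogin(serverUrl);
  // (login_url would be opened via openOAuthPopup; see the utility below.)
  void login_url;

  // 3. Exchange the code returned by the popup ("<code>" stands in for it).
  return await api.mcpOAuthCallback("<code>", state_token);
}
```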

View File

@@ -753,10 +753,23 @@ export enum BlockUIType {
 export enum SpecialBlockID {
   AGENT = "e189baac-8c20-45a1-94a7-55177ea42565",
+  MCP_TOOL = "a0a4b1c2-d3e4-4f56-a7b8-c9d0e1f2a3b4",
   SMART_DECISION = "3b191d9f-356f-482d-8238-ba04b6d18381",
   OUTPUT = "363ae599-353e-4804-937e-b2ee3cef3da4",
 }

+export type MCPTool = {
+  name: string;
+  description: string;
+  input_schema: Record<string, any>;
+};
+
+export type MCPDiscoverToolsResponse = {
+  tools: MCPTool[];
+  server_name: string | null;
+  protocol_version: string | null;
+};
+
 export type AnalyticsMetrics = {
   metric_name: string;
   metric_value: number;

View File

@@ -0,0 +1,177 @@
/**
* Shared utility for OAuth popup flows with cross-origin support.
*
* Handles BroadcastChannel, postMessage, and localStorage polling
* to reliably receive OAuth callback results even when COOP headers
* sever the window.opener relationship.
*/
const DEFAULT_TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes
export type OAuthPopupResult = {
code: string;
state: string;
};
export type OAuthPopupOptions = {
/** State token to validate against incoming messages */
stateToken: string;
/**
* Use BroadcastChannel + localStorage polling for cross-origin OAuth (MCP).
* Standard OAuth only uses postMessage via window.opener.
*/
useCrossOriginListeners?: boolean;
/** BroadcastChannel name (default: "mcp_oauth") */
broadcastChannelName?: string;
/** localStorage key for cross-origin fallback (default: "mcp_oauth_result") */
localStorageKey?: string;
/** Message types to accept (default: ["oauth_popup_result", "mcp_oauth_result"]) */
acceptMessageTypes?: string[];
/** Timeout in ms (default: 5 minutes) */
timeout?: number;
};
type Cleanup = {
/** Abort the OAuth flow and close the popup */
abort: (reason?: string) => void;
/** The AbortController signal */
signal: AbortSignal;
};
/**
* Opens an OAuth popup and sets up listeners for the callback result.
*
* Opens a blank popup synchronously (to avoid popup blockers), then navigates
* it to the login URL. Returns a promise that resolves with the OAuth code/state.
*
* @param loginUrl - The OAuth authorization URL to navigate to
* @param options - Configuration for message handling
* @returns Object with `promise` (resolves with OAuth result) and `abort` (cancels flow)
*/
export function openOAuthPopup(
loginUrl: string,
options: OAuthPopupOptions,
): { promise: Promise<OAuthPopupResult>; cleanup: Cleanup } {
const {
stateToken,
useCrossOriginListeners = false,
broadcastChannelName = "mcp_oauth",
localStorageKey = "mcp_oauth_result",
acceptMessageTypes = ["oauth_popup_result", "mcp_oauth_result"],
timeout = DEFAULT_TIMEOUT_MS,
} = options;
const controller = new AbortController();
// Open popup synchronously (before any async work) to avoid browser popup blockers
const width = 500;
const height = 700;
const left = window.screenX + (window.outerWidth - width) / 2;
const top = window.screenY + (window.outerHeight - height) / 2;
const popup = window.open(
"about:blank",
"_blank",
`width=${width},height=${height},left=${left},top=${top},popup=true,scrollbars=yes`,
);
if (popup && !popup.closed) {
popup.location.href = loginUrl;
} else {
// Popup was blocked — open in new tab as fallback
window.open(loginUrl, "_blank");
}
// Close popup on abort
controller.signal.addEventListener("abort", () => {
if (popup && !popup.closed) popup.close();
});
// Clear any stale localStorage entry
if (useCrossOriginListeners) {
try {
localStorage.removeItem(localStorageKey);
} catch {}
}
const promise = new Promise<OAuthPopupResult>((resolve, reject) => {
let handled = false;
const handleResult = (data: any) => {
if (handled) return; // Prevent double-handling
// Validate message type
const messageType = data?.message_type ?? data?.type;
if (!messageType || !acceptMessageTypes.includes(messageType)) return;
// Validate state token
if (data.state !== stateToken) {
// State mismatch — this message is for a different listener. Ignore silently.
return;
}
handled = true;
if (!data.success) {
reject(new Error(data.message || "OAuth authentication failed"));
} else {
resolve({ code: data.code, state: data.state });
}
controller.abort("completed");
};
// Listener: postMessage (works for same-origin popups)
window.addEventListener(
"message",
(event: MessageEvent) => {
if (typeof event.data === "object") {
handleResult(event.data);
}
},
{ signal: controller.signal },
);
// Cross-origin listeners for MCP OAuth
if (useCrossOriginListeners) {
// Listener: BroadcastChannel (works across tabs/popups without opener)
try {
const bc = new BroadcastChannel(broadcastChannelName);
bc.onmessage = (event) => handleResult(event.data);
controller.signal.addEventListener("abort", () => bc.close());
} catch {}
// Listener: localStorage polling (most reliable cross-tab fallback)
const pollInterval = setInterval(() => {
try {
const stored = localStorage.getItem(localStorageKey);
if (stored) {
const data = JSON.parse(stored);
localStorage.removeItem(localStorageKey);
handleResult(data);
}
} catch {}
}, 500);
controller.signal.addEventListener("abort", () =>
clearInterval(pollInterval),
);
}
// Timeout
const timeoutId = setTimeout(() => {
if (!handled) {
handled = true;
reject(new Error("OAuth flow timed out"));
controller.abort("timeout");
}
}, timeout);
controller.signal.addEventListener("abort", () => clearTimeout(timeoutId));
});
return {
promise,
cleanup: {
abort: (reason?: string) => controller.abort(reason || "canceled"),
signal: controller.signal,
},
};
}
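
For reference, a caller might drive this utility roughly as follows. This is a sketch that assumes the mcpOAuthLogin/mcpOAuthCallback methods from the BackendAPI diff above, not the hook's exact wiring:

```typescript
// Sketch only: `api` is an instance of the BackendAPI class extended earlier.
async function runMCPOAuthFlow(api: BackendAPI, serverUrl: string) {
  const { login_url, state_token } = await api.mcpOAuthLogin(serverUrl);

  const { promise, cleanup } = openOAuthPopup(login_url, {
    stateToken: state_token,
    useCrossOriginListeners: true, // MCP callbacks may arrive from another origin
  });

  try {
    const { code } = await promise; // resolves once any listener reports back
    return await api.mcpOAuthCallback(code, state_token);
  } finally {
    cleanup.abort("done"); // no-op if the flow already completed or timed out
  }
}
```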

View File

@@ -18,6 +18,6 @@ export const config = {
    * Note: /auth/authorize and /auth/integrations/* ARE protected and need
    * middleware to run for authentication checks.
    */
-    "/((?!_next/static|_next/image|favicon.ico|auth/callback|.*\\.(?:svg|png|jpg|jpeg|gif|webp)$).*)",
+    "/((?!_next/static|_next/image|favicon.ico|auth/callback|auth/integrations/mcp_callback|.*\\.(?:svg|png|jpg|jpeg|gif|webp)$).*)",
   ],
 };

View File

@@ -38,6 +38,11 @@ export type CredentialsProviderData = {
     code: string,
     state_token: string,
   ) => Promise<CredentialsMetaResponse>;
+  /** MCP-specific OAuth callback that uses dynamic per-server OAuth discovery. */
+  mcpOAuthCallback: (
+    code: string,
+    state_token: string,
+  ) => Promise<CredentialsMetaResponse>;
   createAPIKeyCredentials: (
     credentials: APIKeyCredentialsCreatable,
   ) => Promise<CredentialsMetaResponse>;
@@ -120,6 +125,24 @@ export default function CredentialsProvider({
     [api, addCredentials, onFailToast],
   );

+  /** Wraps `BackendAPI.mcpOAuthCallback`, and adds the result to the internal credentials store. */
+  const mcpOAuthCallback = useCallback(
+    async (
+      code: string,
+      state_token: string,
+    ): Promise<CredentialsMetaResponse> => {
+      try {
+        const credsMeta = await api.mcpOAuthCallback(code, state_token);
+        addCredentials("mcp", credsMeta);
+        return credsMeta;
+      } catch (error) {
+        onFailToast("complete MCP OAuth authentication")(error);
+        throw error;
+      }
+    },
+    [api, addCredentials, onFailToast],
+  );
+
   /** Wraps `BackendAPI.createAPIKeyCredentials`, and adds the result to the internal credentials store. */
   const createAPIKeyCredentials = useCallback(
     async (
@@ -258,6 +281,7 @@ export default function CredentialsProvider({
         isSystemProvider: systemProviders.has(provider),
         oAuthCallback: (code: string, state_token: string) =>
           oAuthCallback(provider, code, state_token),
+        mcpOAuthCallback,
         createAPIKeyCredentials: (
           credentials: APIKeyCredentialsCreatable,
         ) => createAPIKeyCredentials(provider, credentials),
@@ -286,6 +310,7 @@ export default function CredentialsProvider({
     createHostScopedCredentials,
     deleteCredentials,
     oAuthCallback,
+    mcpOAuthCallback,
     onFailToast,
   ]);
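
A consumer holding one of these CredentialsProviderData entries (how it is obtained is outside this diff) could then finish the MCP handshake like so; a sketch, not the app's actual wiring:

```typescript
// Sketch: `provider` is a CredentialsProviderData entry from the context value.
async function finishMCPOAuth(
  provider: CredentialsProviderData,
  code: string,
  stateToken: string,
) {
  // Exchanges the code server-side and caches the new credential locally.
  const meta = await provider.mcpOAuthCallback(code, stateToken);
  console.debug("Connected MCP credential:", meta.id);
  return meta;
}
```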

View File

@@ -520,6 +520,9 @@ export class BuildPage extends BasePage {
   async getBlocksToSkip(): Promise<string[]> {
     return [
       (await this.getGithubTriggerBlockDetails()).map((b) => b.id),
+      // MCP Tool block requires an interactive dialog (server URL + OAuth) before
+      // it can be placed, so it can't be tested via the standard "add block" flow.
+      "a0a4b1c2-d3e4-4f56-a7b8-c9d0e1f2a3b4",
     ].flat();
   }

View File

@@ -3,45 +3,45 @@ const MANIFEST = 'flutter-app-manifest';
 const TEMP = 'flutter-temp-cache';
 const CACHE_NAME = 'flutter-app-cache';

-const RESOURCES = {"flutter.js": "6fef97aeca90b426343ba6c5c9dc5d4a",
-"icons/Icon-512.png": "96e752610906ba2a93c65f8abe1645f1",
-"icons/Icon-maskable-512.png": "301a7604d45b3e739efc881eb04896ea",
-"icons/Icon-192.png": "ac9a721a12bbc803b44f645561ecb1e1",
-"icons/Icon-maskable-192.png": "c457ef57daa1d16f64b27b786ec2ea3c",
-"manifest.json": "0fa552613b8ec0fda5cda565914e3b16",
-"index.html": "723819cb0266142234583e428fb84ed9",
-"/": "723819cb0266142234583e428fb84ed9",
-"assets/shaders/ink_sparkle.frag": "f8b80e740d33eb157090be4e995febdf",
-"assets/assets/tree_structure.json": "cda9b1a239f956c547411efad9f7c794",
-"assets/assets/coding_tree_structure.json": "017a857cf3e274346a0a7eab4ce02eed",
-"assets/assets/general_tree_structure.json": "41dfbcdc2349dcdda2b082e597c6d5ee",
-"assets/assets/github_logo.svg.png": "ba087b073efdc4996b035d3a12bad0e4",
-"assets/assets/images/discord_logo.png": "0e4a4162c5de8665a7d63ae9665405ae",
-"assets/assets/images/github_logo.svg.png": "ba087b073efdc4996b035d3a12bad0e4",
-"assets/assets/images/twitter_logo.png": "af6c11b96a5e732b8dfda86a2351ecab",
-"assets/assets/images/google_logo.svg.png": "0e29f8e1acfb8996437dbb2b0f591f19",
-"assets/assets/images/autogpt_logo.png": "6a5362a7d1f2f840e43ee259e733476c",
-"assets/assets/google_logo.svg.png": "0e29f8e1acfb8996437dbb2b0f591f19",
-"assets/assets/scrape_synthesize_tree_structure.json": "a9665c1b465bb0cb939c7210f2bf0b13",
-"assets/assets/data_tree_structure.json": "5f9627548304155821968182f3883ca7",
-"assets/fonts/MaterialIcons-Regular.otf": "245e0462249d95ad589a087f1c9f58e1",
-"assets/NOTICES": "28ba0c63fc6e4d1ef829af7441e27f78",
-"assets/packages/fluttertoast/assets/toastify.css": "a85675050054f179444bc5ad70ffc635",
-"assets/packages/fluttertoast/assets/toastify.js": "56e2c9cedd97f10e7e5f1cebd85d53e3",
-"assets/packages/cupertino_icons/assets/CupertinoIcons.ttf": "055d9e87e4a40dbf72b2af1a20865d57",
-"assets/FontManifest.json": "dc3d03800ccca4601324923c0b1d6d57",
-"assets/AssetManifest.bin": "791447d17744ac2ade3999c1672fdbe8",
-"assets/AssetManifest.json": "1b1e4a4276722b65eb1ef765e2991840",
-"canvaskit/chromium/canvaskit.wasm": "393ec8fb05d94036734f8104fa550a67",
-"canvaskit/chromium/canvaskit.js": "ffb2bb6484d5689d91f393b60664d530",
-"canvaskit/skwasm.worker.js": "51253d3321b11ddb8d73fa8aa87d3b15",
+const RESOURCES = {"canvaskit/skwasm.worker.js": "51253d3321b11ddb8d73fa8aa87d3b15",
 "canvaskit/skwasm.js": "95f16c6690f955a45b2317496983dbe9",
 "canvaskit/canvaskit.wasm": "d9f69e0f428f695dc3d66b3a83a4aa8e",
-"canvaskit/canvaskit.js": "5caccb235fad20e9b72ea6da5a0094e6",
 "canvaskit/skwasm.wasm": "d1fde2560be92c0b07ad9cf9acb10d05",
+"canvaskit/canvaskit.js": "5caccb235fad20e9b72ea6da5a0094e6",
+"canvaskit/chromium/canvaskit.wasm": "393ec8fb05d94036734f8104fa550a67",
+"canvaskit/chromium/canvaskit.js": "ffb2bb6484d5689d91f393b60664d530",
+"icons/Icon-maskable-192.png": "c457ef57daa1d16f64b27b786ec2ea3c",
+"icons/Icon-maskable-512.png": "301a7604d45b3e739efc881eb04896ea",
+"icons/Icon-512.png": "96e752610906ba2a93c65f8abe1645f1",
+"icons/Icon-192.png": "ac9a721a12bbc803b44f645561ecb1e1",
+"manifest.json": "0fa552613b8ec0fda5cda565914e3b16",
 "favicon.png": "5dcef449791fa27946b3d35ad8803796",
 "version.json": "46a52461e018faa623d9196334aa3f50",
-"main.dart.js": "6fcbf8bbcb0a76fae9029f72ac7fbdc3"};
+"index.html": "e6981504a32bf86f892909c1875df208",
+"/": "e6981504a32bf86f892909c1875df208",
+"main.dart.js": "6fcbf8bbcb0a76fae9029f72ac7fbdc3",
+"assets/AssetManifest.json": "1b1e4a4276722b65eb1ef765e2991840",
+"assets/packages/cupertino_icons/assets/CupertinoIcons.ttf": "055d9e87e4a40dbf72b2af1a20865d57",
+"assets/packages/fluttertoast/assets/toastify.js": "56e2c9cedd97f10e7e5f1cebd85d53e3",
+"assets/packages/fluttertoast/assets/toastify.css": "a85675050054f179444bc5ad70ffc635",
+"assets/shaders/ink_sparkle.frag": "f8b80e740d33eb157090be4e995febdf",
+"assets/fonts/MaterialIcons-Regular.otf": "245e0462249d95ad589a087f1c9f58e1",
+"assets/assets/images/twitter_logo.png": "af6c11b96a5e732b8dfda86a2351ecab",
+"assets/assets/images/discord_logo.png": "0e4a4162c5de8665a7d63ae9665405ae",
+"assets/assets/images/google_logo.svg.png": "0e29f8e1acfb8996437dbb2b0f591f19",
+"assets/assets/images/autogpt_logo.png": "6a5362a7d1f2f840e43ee259e733476c",
+"assets/assets/images/github_logo.svg.png": "ba087b073efdc4996b035d3a12bad0e4",
+"assets/assets/scrape_synthesize_tree_structure.json": "a9665c1b465bb0cb939c7210f2bf0b13",
+"assets/assets/coding_tree_structure.json": "017a857cf3e274346a0a7eab4ce02eed",
+"assets/assets/general_tree_structure.json": "41dfbcdc2349dcdda2b082e597c6d5ee",
+"assets/assets/google_logo.svg.png": "0e29f8e1acfb8996437dbb2b0f591f19",
+"assets/assets/tree_structure.json": "cda9b1a239f956c547411efad9f7c794",
+"assets/assets/data_tree_structure.json": "5f9627548304155821968182f3883ca7",
+"assets/assets/github_logo.svg.png": "ba087b073efdc4996b035d3a12bad0e4",
+"assets/NOTICES": "28ba0c63fc6e4d1ef829af7441e27f78",
+"assets/AssetManifest.bin": "791447d17744ac2ade3999c1672fdbe8",
+"assets/FontManifest.json": "dc3d03800ccca4601324923c0b1d6d57",
+"flutter.js": "6fef97aeca90b426343ba6c5c9dc5d4a"};
 // The application shell files that are downloaded before a service worker can
 // start.
 const CORE = ["main.dart.js",

View File

@@ -35,7 +35,7 @@
   <script>
     // The value below is injected by flutter build, do not touch.
-    const serviceWorkerVersion = "767375486";
+    const serviceWorkerVersion = "726743092";
   </script>
   <!-- This script adds the flutter initialization JS code -->
   <script src="flutter.js" defer></script>

View File

@@ -467,6 +467,7 @@ Below is a comprehensive list of all available blocks, categorized by their prim
 | [Github Update Comment](block-integrations/github/issues.md#github-update-comment) | A block that updates an existing comment on a GitHub issue or pull request |
 | [Github Update File](block-integrations/github/repo.md#github-update-file) | This block updates an existing file in a GitHub repository |
 | [Instantiate Code Sandbox](block-integrations/misc.md#instantiate-code-sandbox) | Instantiate a sandbox environment with internet access in which you can execute code with the Execute Code Step block |
+| [MCP Tool](block-integrations/mcp/block.md#mcp-tool) | Connect to any MCP server and execute its tools |
 | [Slant3D Order Webhook](block-integrations/slant3d/webhook.md#slant3d-order-webhook) | This block triggers on Slant3D order status updates and outputs the event details, including tracking information when orders are shipped |

 ## Media Generation

View File

@@ -84,6 +84,7 @@
 * [Linear Projects](block-integrations/linear/projects.md)
 * [LLM](block-integrations/llm.md)
 * [Logic](block-integrations/logic.md)
+* [Mcp Block](block-integrations/mcp/block.md)
 * [Misc](block-integrations/misc.md)
 * [Notion Create Page](block-integrations/notion/create_page.md)
 * [Notion Read Database](block-integrations/notion/read_database.md)

View File

@@ -0,0 +1,36 @@
# Mcp Block
<!-- MANUAL: file_description -->
_Add a description of this category of blocks._
<!-- END MANUAL -->
## MCP Tool
### What it is
Connect to any MCP server and execute its tools. Provide a server URL, select a tool, and pass arguments dynamically.
### How it works
<!-- MANUAL: how_it_works -->
_Add technical explanation here._
<!-- END MANUAL -->
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| server_url | URL of the MCP server (Streamable HTTP endpoint) | str | Yes |
| selected_tool | The MCP tool to execute | str | No |
| tool_arguments | Arguments to pass to the selected MCP tool. The fields here are defined by the tool's input schema. | Dict[str, Any] | No |
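
As a hedged illustration (the tool name and argument shape below are invented, not taken from a real server), the block's inputs for a hypothetical `search_docs` tool might look like:

```typescript
// Hypothetical values; the tool_arguments shape comes from the tool's input schema.
const exampleInputs = {
  server_url: "https://mcp.example.com/mcp", // Streamable HTTP endpoint
  selected_tool: "search_docs",
  tool_arguments: { query: "rate limits", max_results: 5 },
};
```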
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if the tool call failed | str |
| result | The result returned by the MCP tool | Result |
### Possible use case
<!-- MANUAL: use_case -->
_Add practical use case examples here._
<!-- END MANUAL -->
---