Compare commits


7 Commits

Author SHA1 Message Date
Otto-AGPT
bacbf1f0ab style(backend): Run poetry run format (ruff + black + isort) 2026-02-11 19:50:25 +00:00
Otto-AGPT
88d365b27d style(backend): Run black and isort formatting 2026-02-11 19:48:16 +00:00
Otto-AGPT
5ef820c473 fix(backend): Fix auth helpers tests and remove unnecessary comment
- Fixed 3 failing tests in helpers_test.py that were incorrectly trying
  to mock 'backend.api.auth.helpers.get_openapi' (which doesn't exist)
- Tests now properly mock the app's openapi method directly
- Removed unnecessary comment in dependabot.yml per review feedback
2026-02-11 19:14:22 +00:00
Otto-AGPT
40f51f4ac1 refactor(backend): Integrate autogpt_libs into backend structure (OPEN-2998)
Properly integrates autogpt_libs modules into the backend's existing
structure instead of just moving them wholesale.

Structure changes:
- auth/ → backend/api/auth/ (FastAPI auth dependencies)
- api_key/ → backend/api/auth/api_key/ (API key auth)
- logging/ → backend/logging/ (structured logging config)
- utils/synchronize → backend/util/synchronize.py

Removed (unused):
- rate_limit/ - backend has its own rate limiting
- supabase_integration_credentials_store/ - not imported anywhere

Import path changes:
- autogpt_libs.auth.* → backend.api.auth.*
- autogpt_libs.api_key.* → backend.api.auth.api_key.*
- autogpt_libs.logging.* → backend.logging.*
- autogpt_libs.utils.synchronize → backend.util.synchronize

Also updates:
- pyproject.toml (merged deps, removed path ref)
- Dockerfile (removed autogpt_libs copy)
- CI workflow (removed autogpt_libs paths)
- dependabot.yml (removed autogpt_libs entry)
- Docs (CLAUDE.md, TESTING.md)

Ticket: https://linear.app/autogpt/issue/OPEN-2998
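
At a call site the migration is a one-line import swap; a minimal sketch, assuming a hypothetical `/me` endpoint (the auth helpers and the `Security(...)` usage pattern are taken from the diffs below):

```python
# Before (old shared-library path):
# from autogpt_libs.auth import get_user_id, requires_user

# After (integrated backend path):
from fastapi import APIRouter, Security

from backend.api.auth import get_user_id, requires_user

router = APIRouter(dependencies=[Security(requires_user)])


@router.get("/me")
async def read_me(user_id: str = Security(get_user_id)):
    # get_user_id resolves the caller's user ID from the verified JWT
    return {"user_id": user_id}
```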
2026-02-11 16:56:35 +00:00
Otto
36aeb0b2b3 docs(blocks): clarify HumanInTheLoop output descriptions for agent builder (#12069)
## Problem

The agent builder (LLM) misinterprets the HumanInTheLoop block outputs.
It thinks `approved_data` and `rejected_data` will yield status strings
like "APPROVED" or "REJECTED" instead of understanding that the actual
input data passes through.

This leads to unnecessary complexity - the agent builder adds comparison
blocks to check for status strings that don't exist.

## Solution

Enriched the block docstring and all input/output field descriptions to
make it explicit that:
1. The output is the actual data itself, not a status string
2. The routing is determined by which output pin fires
3. How to use the block correctly (connect downstream blocks to
appropriate output pins)
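
The shape of the clarified descriptions, as a minimal sketch: plain Pydantic stands in for the platform's actual schema helpers, the field names come from this PR, and the exact wording is illustrative rather than the merged text.

```python
from typing import Any

from pydantic import BaseModel, Field


class HumanInTheLoopOutput(BaseModel):
    approved_data: Any = Field(
        description=(
            "The original input data, passed through unchanged when the reviewer "
            "approves. NOT a status string like 'APPROVED'; routing is determined "
            "by which output pin fires."
        )
    )
    rejected_data: Any = Field(
        description=(
            "The original input data, passed through unchanged when the reviewer "
            "rejects. NOT a status string like 'REJECTED'."
        )
    )
    review_message: str = Field(
        description="Optional message the reviewer attached to their decision."
    )
```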

## Changes

- Updated block docstring with clear "How it works" and "Example usage"
sections
- Enhanced `data` input description to explain data flow
- Enhanced `name` input description for reviewer context
- Enhanced `approved_data` output to explicitly state it's NOT a status
string
- Enhanced `rejected_data` output to explicitly state it's NOT a status
string
- Enhanced `review_message` output for clarity

## Testing

Documentation-only change to schema descriptions. No functional changes.

Fixes SECRT-1930


<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Enhanced documentation for the `HumanInTheLoopBlock` to clarify how
output pins work. The key improvement explicitly states that output pins
(`approved_data` and `rejected_data`) yield the actual input data, not
status strings like "APPROVED" or "REJECTED". This prevents the agent
builder (LLM) from misinterpreting the block's behavior and adding
unnecessary comparison blocks.

**Key changes:**
- Added "How it works" and "Example usage" sections to the block
docstring
- Clarified that routing is determined by which output pin fires, not by
comparing output values
- Enhanced all input/output field descriptions with explicit data flow
explanations
- Emphasized that downstream blocks should be connected to the
appropriate output pin based on desired workflow path

This is a documentation-only change with no functional modifications to
the code logic.
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- This PR is safe to merge with no risk
- Documentation-only change that accurately reflects the existing code
behavior. No functional changes, no runtime impact, and the enhanced
descriptions correctly explain how the block outputs work based on
verification of the implementation code.
- No files require special attention
</details>



Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2026-02-11 15:43:58 +00:00
Ubbe
2a189c44c4 fix(frontend): API stream issues leaking into prompt (#12063)
## Changes 🏗️

<img width="800" height="621" alt="Screenshot 2026-02-11 at 19 32 39"
src="https://github.com/user-attachments/assets/e97be1a7-972e-4ae0-8dfa-6ade63cf287b"
/>

When the BE API has an error, prevent it from leaking into the stream
and instead handle it gracefully via toast.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and trust the changes


<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

This PR fixes an issue where backend API stream errors were leaking into
the chat prompt instead of being handled gracefully. The fix involves
both backend and frontend changes to ensure error events conform to the
AI SDK's strict schema.

**Key Changes:**
- **Backend (`response_model.py`)**: Added custom `to_sse()` method for
`StreamError` that only emits `type` and `errorText` fields, stripping
extra fields like `code` and `details` that cause AI SDK validation
failures
- **Backend (`prompt.py`)**: Added validation step after context
compression to remove orphaned tool responses without matching tool
calls, preventing "unexpected tool_use_id" API errors
- **Frontend (`route.ts`)**: Implemented SSE stream normalization with
`normalizeSSEStream()` and `normalizeSSEEvent()` functions to strip
non-conforming fields from error events before they reach the AI SDK
- **Frontend (`ChatMessagesContainer.tsx`)**: Added toast notifications
for errors and improved error display UI with deduplication logic

The changes ensure a clean separation between internal error metadata
(useful for logging/debugging) and the strict schema required by the AI
SDK on the frontend.
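
The real normalization lives in TypeScript (`route.ts`); below is a minimal Python sketch of the same idea, assuming error events arrive as `data: {...}` SSE lines (the function name here is hypothetical):

```python
import json

# Fields the AI SDK's strict error schema accepts, per this PR.
AI_SDK_ERROR_FIELDS = {"type", "errorText"}


def normalize_sse_event(raw_event: str) -> str:
    """Drop non-conforming fields from SSE error events before forwarding."""
    prefix = "data: "
    if not raw_event.startswith(prefix):
        return raw_event
    try:
        payload = json.loads(raw_event[len(prefix):])
    except json.JSONDecodeError:
        return raw_event  # forward non-JSON lines untouched
    if payload.get("type") == "error":
        payload = {k: v for k, v in payload.items() if k in AI_SDK_ERROR_FIELDS}
    return prefix + json.dumps(payload)
```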
</details>


<details><summary><h3>Confidence Score: 4/5</h3></summary>

- This PR is safe to merge with low risk
- The changes are well-structured and address a specific bug with proper
error handling. The dual-layer approach (backend filtering in `to_sse()`
+ frontend normalization) provides defense-in-depth. However, the lack
of automated tests for the new error normalization logic and the
potential for edge cases in SSE parsing prevent a perfect score.
- Pay close attention to
`autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts`
- the SSE normalization logic should be tested with various error
scenarios
</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant Frontend as ChatMessagesContainer
    participant Proxy as /api/chat/.../stream
    participant Backend as Backend API
    participant AISDK as AI SDK

    User->>Frontend: Send message
    Frontend->>Proxy: POST with message
    Proxy->>Backend: Forward request with auth
    Backend->>Backend: Process message
    
    alt Success Path
        Backend->>Proxy: SSE stream (text-delta, etc.)
        Proxy->>Proxy: normalizeSSEStream (pass through)
        Proxy->>AISDK: Forward SSE events
        AISDK->>Frontend: Update messages
        Frontend->>User: Display response
    else Error Path
        Backend->>Backend: StreamError.to_sse()
        Note over Backend: Only emit {type, errorText}
        Backend->>Proxy: SSE error event
        Proxy->>Proxy: normalizeSSEEvent()
        Note over Proxy: Strip extra fields (code, details)
        Proxy->>AISDK: {type: "error", errorText: "..."}
        AISDK->>Frontend: error state updated
        Frontend->>Frontend: Toast notification (deduplicated)
        Frontend->>User: Show error UI + toast
    end
```
</details>



---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Otto-AGPT <otto@agpt.co>
2026-02-11 22:46:37 +08:00
Abhimanyu Yadav
508759610f fix(frontend): add min-width-0 to ContentCard to prevent overflow (#12060)
### Changes 🏗️

Added `min-w-0` class to the ContentCard component in the ToolAccordion
to prevent content overflow issues. This CSS fix ensures that the card
properly respects its container width constraints and allows text
truncation to work correctly when content is too wide.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified that tool content displays correctly in the accordion
  - [x] Confirmed that long content properly truncates instead of overflowing
  - [x] Tested with various screen sizes to ensure responsive behavior

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes


<h2>Greptile Overview</h2>

<details><summary><h3>Greptile Summary</h3></summary>

Added `min-w-0` class to `ContentCard` component to fix text truncation
overflow in grid layouts. This is a standard CSS fix that allows grid
items to shrink below their content size, enabling `truncate` classes on
child elements (`ContentCardTitle`, `ContentCardSubtitle`) to work
correctly. The fix follows the same pattern already used in
`ContentCardHeader` (line 54) and `ToolAccordion` (line 54).
</details>


<details><summary><h3>Confidence Score: 5/5</h3></summary>

- Safe to merge with no risk
- Single-line CSS fix that addresses a well-known flexbox/grid layout
issue. The change follows existing patterns in the codebase and is
thoroughly tested. No logic changes, no breaking changes, no side
effects.
- No files require special attention
</details>


2026-02-11 21:09:21 +08:00
174 changed files with 2140 additions and 4847 deletions

View File

@@ -1,29 +1,5 @@
version: 2
updates:
  # autogpt_libs (Poetry project)
  - package-ecosystem: "pip"
    directory: "autogpt_platform/autogpt_libs"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10
    target-branch: "dev"
    commit-message:
      prefix: "chore(libs/deps)"
      prefix-development: "chore(libs/deps-dev)"
    ignore:
      - dependency-name: "poetry"
    groups:
      production-dependencies:
        dependency-type: "production"
        update-types:
          - "minor"
          - "patch"
      development-dependencies:
        dependency-type: "development"
        update-types:
          - "minor"
          - "patch"
  # backend (Poetry project)
  - package-ecosystem: "pip"
    directory: "autogpt_platform/backend"

View File

@@ -6,13 +6,11 @@ on:
    paths:
      - ".github/workflows/platform-backend-ci.yml"
      - "autogpt_platform/backend/**"
      - "autogpt_platform/autogpt_libs/**"
  pull_request:
    branches: [master, dev, release-*]
    paths:
      - ".github/workflows/platform-backend-ci.yml"
      - "autogpt_platform/backend/**"
      - "autogpt_platform/autogpt_libs/**"
  merge_group:

concurrency:

View File

@@ -8,7 +8,7 @@ AutoGPT Platform is a monorepo containing:
- **Backend** (`backend`): Python FastAPI server with async support
- **Frontend** (`frontend`): Next.js React application
- **Shared Libraries** (`autogpt_libs`): Common Python utilities
- **Shared Libraries** (`backend/api/auth`, `backend/logging`): Auth, logging, and common utilities integrated into backend

## Component Documentation

View File

@@ -1,3 +0,0 @@
# AutoGPT Libs

This is a new project to store shared functionality across different services in the AutoGPT Platform (e.g. authentication)

View File

@@ -1,33 +0,0 @@
from typing import Optional

from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict


class RateLimitSettings(BaseSettings):
    redis_host: str = Field(
        default="redis://localhost:6379",
        description="Redis host",
        validation_alias="REDIS_HOST",
    )
    redis_port: str = Field(
        default="6379", description="Redis port", validation_alias="REDIS_PORT"
    )
    redis_password: Optional[str] = Field(
        default=None,
        description="Redis password",
        validation_alias="REDIS_PASSWORD",
    )
    requests_per_minute: int = Field(
        default=60,
        description="Maximum number of requests allowed per minute per API key",
        validation_alias="RATE_LIMIT_REQUESTS_PER_MINUTE",
    )

    model_config = SettingsConfigDict(case_sensitive=True, extra="ignore")


RATE_LIMIT_SETTINGS = RateLimitSettings()

View File

@@ -1,51 +0,0 @@
import time
from typing import Tuple

from redis import Redis

from .config import RATE_LIMIT_SETTINGS


class RateLimiter:
    def __init__(
        self,
        redis_host: str = RATE_LIMIT_SETTINGS.redis_host,
        redis_port: str = RATE_LIMIT_SETTINGS.redis_port,
        redis_password: str | None = RATE_LIMIT_SETTINGS.redis_password,
        requests_per_minute: int = RATE_LIMIT_SETTINGS.requests_per_minute,
    ):
        self.redis = Redis(
            host=redis_host,
            port=int(redis_port),
            password=redis_password,
            decode_responses=True,
        )
        self.window = 60
        self.max_requests = requests_per_minute

    async def check_rate_limit(self, api_key_id: str) -> Tuple[bool, int, int]:
        """
        Check if request is within rate limits.

        Args:
            api_key_id: The API key identifier to check

        Returns:
            Tuple of (is_allowed, remaining_requests, reset_time)
        """
        now = time.time()
        window_start = now - self.window
        key = f"ratelimit:{api_key_id}:1min"

        pipe = self.redis.pipeline()
        pipe.zremrangebyscore(key, 0, window_start)
        pipe.zadd(key, {str(now): now})
        pipe.zcount(key, window_start, now)
        pipe.expire(key, self.window)
        _, _, request_count, _ = pipe.execute()

        remaining = max(0, self.max_requests - request_count)
        reset_time = int(now + self.window)

        return request_count <= self.max_requests, remaining, reset_time
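
For reference, the deleted limiter was consumed roughly like this; a hedged sketch with a made-up API key ID, assuming a reachable Redis:

```python
import asyncio


async def main() -> None:
    limiter = RateLimiter()  # connects to Redis using RATE_LIMIT_SETTINGS

    # One-minute sliding window per API key, tracked in a Redis sorted set.
    is_allowed, remaining, reset_time = await limiter.check_rate_limit("key-123")
    if not is_allowed:
        print(f"Throttled; retry after unix time {reset_time}")
    else:
        print(f"{remaining} requests left in the current window")


asyncio.run(main())
```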

View File

@@ -1,32 +0,0 @@
from fastapi import HTTPException, Request
from starlette.middleware.base import RequestResponseEndpoint

from .limiter import RateLimiter


async def rate_limit_middleware(request: Request, call_next: RequestResponseEndpoint):
    """FastAPI middleware for rate limiting API requests."""
    limiter = RateLimiter()

    if not request.url.path.startswith("/api"):
        return await call_next(request)

    api_key = request.headers.get("Authorization")
    if not api_key:
        return await call_next(request)

    api_key = api_key.replace("Bearer ", "")
    is_allowed, remaining, reset_time = await limiter.check_rate_limit(api_key)

    if not is_allowed:
        raise HTTPException(
            status_code=429, detail="Rate limit exceeded. Please try again later."
        )

    response = await call_next(request)
    response.headers["X-RateLimit-Limit"] = str(limiter.max_requests)
    response.headers["X-RateLimit-Remaining"] = str(remaining)
    response.headers["X-RateLimit-Reset"] = str(reset_time)
    return response
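
Registration of this middleware is not shown in the diff; one plausible wiring uses Starlette's `BaseHTTPMiddleware` with a `dispatch` callable (an assumption, not the platform's confirmed setup):

```python
from fastapi import FastAPI
from starlette.middleware.base import BaseHTTPMiddleware

app = FastAPI()
# Each incoming request is handed to rate_limit_middleware as (request, call_next)
app.add_middleware(BaseHTTPMiddleware, dispatch=rate_limit_middleware)
```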

View File

@@ -1,76 +0,0 @@
from typing import Annotated, Any, Literal, Optional, TypedDict
from uuid import uuid4

from pydantic import BaseModel, Field, SecretStr, field_serializer


class _BaseCredentials(BaseModel):
    id: str = Field(default_factory=lambda: str(uuid4()))
    provider: str
    title: Optional[str]

    @field_serializer("*")
    def dump_secret_strings(value: Any, _info):
        if isinstance(value, SecretStr):
            return value.get_secret_value()
        return value


class OAuth2Credentials(_BaseCredentials):
    type: Literal["oauth2"] = "oauth2"
    username: Optional[str]
    """Username of the third-party service user that these credentials belong to"""
    access_token: SecretStr
    access_token_expires_at: Optional[int]
    """Unix timestamp (seconds) indicating when the access token expires (if at all)"""
    refresh_token: Optional[SecretStr]
    refresh_token_expires_at: Optional[int]
    """Unix timestamp (seconds) indicating when the refresh token expires (if at all)"""
    scopes: list[str]
    metadata: dict[str, Any] = Field(default_factory=dict)

    def bearer(self) -> str:
        return f"Bearer {self.access_token.get_secret_value()}"


class APIKeyCredentials(_BaseCredentials):
    type: Literal["api_key"] = "api_key"
    api_key: SecretStr
    expires_at: Optional[int]
    """Unix timestamp (seconds) indicating when the API key expires (if at all)"""

    def bearer(self) -> str:
        return f"Bearer {self.api_key.get_secret_value()}"


Credentials = Annotated[
    OAuth2Credentials | APIKeyCredentials,
    Field(discriminator="type"),
]
CredentialsType = Literal["api_key", "oauth2"]


class OAuthState(BaseModel):
    token: str
    provider: str
    expires_at: int
    code_verifier: Optional[str] = None
    scopes: list[str]
    """Unix timestamp (seconds) indicating when this OAuth state expires"""


class UserMetadata(BaseModel):
    integration_credentials: list[Credentials] = Field(default_factory=list)
    integration_oauth_states: list[OAuthState] = Field(default_factory=list)


class UserMetadataRaw(TypedDict, total=False):
    integration_credentials: list[dict]
    integration_oauth_states: list[dict]


class UserIntegrations(BaseModel):
    credentials: list[Credentials] = Field(default_factory=list)
    oauth_states: list[OAuthState] = Field(default_factory=list)
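
A quick usage sketch of the removed models; the provider name and key value are made up:

```python
from pydantic import SecretStr

creds = APIKeyCredentials(
    provider="github",  # hypothetical provider
    title="CI token",
    api_key=SecretStr("ghp_example"),
    expires_at=None,  # no expiry
)
assert creds.bearer() == "Bearer ghp_example"
# model_dump() emits the raw secret because of the field_serializer above
print(creds.model_dump()["api_key"])
```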

File diff suppressed because it is too large

View File

@@ -1,40 +0,0 @@
[tool.poetry]
name = "autogpt-libs"
version = "0.2.0"
description = "Shared libraries across AutoGPT Platform"
authors = ["AutoGPT team <info@agpt.co>"]
readme = "README.md"
packages = [{ include = "autogpt_libs" }]

[tool.poetry.dependencies]
python = ">=3.10,<4.0"
colorama = "^0.4.6"
cryptography = "^46.0"
expiringdict = "^1.2.2"
fastapi = "^0.128.0"
google-cloud-logging = "^3.13.0"
launchdarkly-server-sdk = "^9.14.1"
pydantic = "^2.12.5"
pydantic-settings = "^2.12.0"
pyjwt = { version = "^2.11.0", extras = ["crypto"] }
redis = "^6.2.0"
supabase = "^2.27.2"
uvicorn = "^0.40.0"

[tool.poetry.group.dev.dependencies]
pyright = "^1.1.408"
pytest = "^8.4.1"
pytest-asyncio = "^1.3.0"
pytest-mock = "^3.15.1"
pytest-cov = "^7.0.0"
ruff = "^0.15.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

[tool.ruff]
line-length = 88

[tool.ruff.lint]
extend-select = ["I"] # sort dependencies

View File

@@ -39,8 +39,7 @@ ENV PATH=/opt/poetry/bin:$PATH
RUN pip3 install poetry --break-system-packages
# Copy and install dependencies
COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs
# Copy and install dependencies (autogpt_libs merged into backend - OPEN-2998)
COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.toml /app/autogpt_platform/backend/
WORKDIR /app/autogpt_platform/backend
RUN poetry install --no-ansi --no-root
@@ -83,11 +82,9 @@ COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-pyth
ENV PATH="/app/autogpt_platform/backend/.venv/bin:$PATH"
RUN mkdir -p /app/autogpt_platform/autogpt_libs
# autogpt_libs merged into backend (OPEN-2998)
RUN mkdir -p /app/autogpt_platform/backend
COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs
COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.toml /app/autogpt_platform/backend/
WORKDIR /app/autogpt_platform/backend

View File

@@ -132,7 +132,7 @@ def test_endpoint_success(snapshot: Snapshot):
### Testing with Authentication
For the main API routes that use JWT authentication, auth is provided by the `autogpt_libs.auth` module. If the test actually uses the `user_id`, the recommended approach for testing is to mock the `get_jwt_payload` function, which underpins all higher-level auth functions used in the API (`requires_user`, `requires_admin_user`, `get_user_id`).
For the main API routes that use JWT authentication, auth is provided by the `backend.api.auth` module. If the test actually uses the `user_id`, the recommended approach for testing is to mock the `get_jwt_payload` function, which underpins all higher-level auth functions used in the API (`requires_user`, `requires_admin_user`, `get_user_id`).
If the test doesn't need the `user_id` specifically, mocking is not necessary as during tests auth is disabled anyway (see `conftest.py`).
@@ -158,7 +158,7 @@ client = fastapi.testclient.TestClient(app)
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module"""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from backend.api.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user['get_jwt_payload']
yield
@@ -171,7 +171,7 @@ For admin-only endpoints, use `mock_jwt_admin` instead:
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_admin):
"""Setup auth overrides for admin tests"""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from backend.api.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_admin['get_jwt_payload']
yield

View File

@@ -1,6 +1,6 @@
import hashlib
from autogpt_libs.api_key.keysmith import APIKeySmith
from backend.api.auth.api_key.keysmith import APIKeySmith
def test_generate_api_key():

View File

@@ -9,7 +9,7 @@ import os
import pytest
from pytest_mock import MockerFixture
from autogpt_libs.auth.config import AuthConfigError, Settings
from backend.api.auth.config import AuthConfigError, Settings
def test_environment_variable_precedence(mocker: MockerFixture):
@@ -228,7 +228,7 @@ def test_no_crypto_warning(mocker: MockerFixture, caplog: pytest.LogCaptureFixtu
    mocker.patch.dict(os.environ, {"JWT_VERIFY_KEY": secret}, clear=True)

    # Mock has_crypto to return False
    mocker.patch("autogpt_libs.auth.config.has_crypto", False)
    mocker.patch("backend.api.auth.config.has_crypto", False)

    with caplog.at_level(logging.WARNING):
        Settings()

View File

@@ -43,7 +43,7 @@ def get_optional_user_id(
    try:
        # Parse JWT token to get user ID
        from autogpt_libs.auth.jwt_utils import parse_jwt_token
        from backend.api.auth.jwt_utils import parse_jwt_token

        payload = parse_jwt_token(credentials.credentials)
        return payload.get("sub")

View File

@@ -11,12 +11,12 @@ from fastapi import FastAPI, HTTPException, Request, Security
from fastapi.testclient import TestClient
from pytest_mock import MockerFixture
from autogpt_libs.auth.dependencies import (
from backend.api.auth.dependencies import (
get_user_id,
requires_admin_user,
requires_user,
)
from autogpt_libs.auth.models import User
from backend.api.auth.models import User
class TestAuthDependencies:
@@ -53,7 +53,7 @@ class TestAuthDependencies:
        # Mock get_jwt_payload to return our test payload
        mocker.patch(
            "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
            "backend.api.auth.dependencies.get_jwt_payload", return_value=jwt_payload
        )

        user = await requires_user(jwt_payload)
        assert isinstance(user, User)
@@ -70,7 +70,7 @@ class TestAuthDependencies:
        }

        mocker.patch(
            "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
            "backend.api.auth.dependencies.get_jwt_payload", return_value=jwt_payload
        )

        user = await requires_user(jwt_payload)
        assert user.user_id == "admin-456"
@@ -105,7 +105,7 @@ class TestAuthDependencies:
        }

        mocker.patch(
            "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
            "backend.api.auth.dependencies.get_jwt_payload", return_value=jwt_payload
        )

        user = await requires_admin_user(jwt_payload)
        assert user.user_id == "admin-789"
@@ -137,7 +137,7 @@ class TestAuthDependencies:
jwt_payload = {"sub": "user-id-xyz", "role": "user"}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
"backend.api.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = await get_user_id(request, jwt_payload)
assert user_id == "user-id-xyz"
@@ -344,7 +344,7 @@ class TestAuthDependenciesEdgeCases:
    ):
        """Test that errors propagate correctly through dependencies."""
        # Import verify_user to test it directly since dependencies use FastAPI Security
        from autogpt_libs.auth.jwt_utils import verify_user
        from backend.api.auth.jwt_utils import verify_user

        with pytest.raises(HTTPException) as exc_info:
            verify_user(payload, admin_only=admin_only)
@@ -354,7 +354,7 @@ class TestAuthDependenciesEdgeCases:
    async def test_dependency_valid_user(self):
        """Test valid user case for dependency."""
        # Import verify_user to test it directly since dependencies use FastAPI Security
        from autogpt_libs.auth.jwt_utils import verify_user
        from backend.api.auth.jwt_utils import verify_user

        # Valid case
        user = verify_user({"sub": "user", "role": "user"}, admin_only=False)
@@ -376,16 +376,16 @@ class TestAdminImpersonation:
        }

        # Mock verify_user to return admin user data
        mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user")
        mock_verify_user = mocker.patch("backend.api.auth.dependencies.verify_user")
        mock_verify_user.return_value = Mock(
            user_id="admin-456", email="admin@example.com", role="admin"
        )

        # Mock logger to verify audit logging
        mock_logger = mocker.patch("autogpt_libs.auth.dependencies.logger")
        mock_logger = mocker.patch("backend.api.auth.dependencies.logger")

        mocker.patch(
            "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
            "backend.api.auth.dependencies.get_jwt_payload", return_value=jwt_payload
        )

        user_id = await get_user_id(request, jwt_payload)
@@ -412,13 +412,13 @@ class TestAdminImpersonation:
        }

        # Mock verify_user to return regular user data
        mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user")
        mock_verify_user = mocker.patch("backend.api.auth.dependencies.verify_user")
        mock_verify_user.return_value = Mock(
            user_id="regular-user", email="user@example.com", role="user"
        )

        mocker.patch(
            "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
            "backend.api.auth.dependencies.get_jwt_payload", return_value=jwt_payload
        )

        with pytest.raises(HTTPException) as exc_info:
@@ -439,7 +439,7 @@ class TestAdminImpersonation:
        }

        mocker.patch(
            "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
            "backend.api.auth.dependencies.get_jwt_payload", return_value=jwt_payload
        )

        user_id = await get_user_id(request, jwt_payload)
@@ -459,7 +459,7 @@ class TestAdminImpersonation:
        }

        mocker.patch(
            "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
            "backend.api.auth.dependencies.get_jwt_payload", return_value=jwt_payload
        )

        user_id = await get_user_id(request, jwt_payload)
@@ -479,16 +479,16 @@ class TestAdminImpersonation:
        }

        # Mock verify_user to return admin user data
        mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user")
        mock_verify_user = mocker.patch("backend.api.auth.dependencies.verify_user")
        mock_verify_user.return_value = Mock(
            user_id="admin-999", email="superadmin@company.com", role="admin"
        )

        # Mock logger to capture audit trail
        mock_logger = mocker.patch("autogpt_libs.auth.dependencies.logger")
        mock_logger = mocker.patch("backend.api.auth.dependencies.logger")

        mocker.patch(
            "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
            "backend.api.auth.dependencies.get_jwt_payload", return_value=jwt_payload
        )

        user_id = await get_user_id(request, jwt_payload)
@@ -515,7 +515,7 @@ class TestAdminImpersonation:
        }

        mocker.patch(
            "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
            "backend.api.auth.dependencies.get_jwt_payload", return_value=jwt_payload
        )

        user_id = await get_user_id(request, jwt_payload)
@@ -535,16 +535,16 @@ class TestAdminImpersonation:
        }

        # Mock verify_user to return admin user data
        mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user")
        mock_verify_user = mocker.patch("backend.api.auth.dependencies.verify_user")
        mock_verify_user.return_value = Mock(
            user_id="admin-456", email="admin@example.com", role="admin"
        )

        # Mock logger
        mock_logger = mocker.patch("autogpt_libs.auth.dependencies.logger")
        mock_logger = mocker.patch("backend.api.auth.dependencies.logger")

        mocker.patch(
            "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
            "backend.api.auth.dependencies.get_jwt_payload", return_value=jwt_payload
        )

        user_id = await get_user_id(request, jwt_payload)

View File

@@ -3,13 +3,11 @@ Comprehensive tests for auth helpers module to achieve 100% coverage.
Tests OpenAPI schema generation and authentication response handling.
"""

from unittest import mock

from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi

from autogpt_libs.auth.helpers import add_auth_responses_to_openapi
from autogpt_libs.auth.jwt_utils import bearer_jwt_auth
from backend.api.auth.helpers import add_auth_responses_to_openapi
from backend.api.auth.jwt_utils import bearer_jwt_auth


def test_add_auth_responses_to_openapi_basic():
@@ -19,7 +17,7 @@ def test_add_auth_responses_to_openapi_basic():
    # Add some test endpoints with authentication
    from fastapi import Depends

    from autogpt_libs.auth.dependencies import requires_user
    from backend.api.auth.dependencies import requires_user

    @app.get("/protected", dependencies=[Depends(requires_user)])
    def protected_endpoint():
@@ -64,7 +62,7 @@ def test_add_auth_responses_to_openapi_with_security():
    # Mock endpoint with security
    from fastapi import Security

    from autogpt_libs.auth.dependencies import get_user_id
    from backend.api.auth.dependencies import get_user_id

    @app.get("/secured")
    def secured_endpoint(user_id: str = Security(get_user_id)):
@@ -130,7 +128,7 @@ def test_add_auth_responses_to_openapi_existing_responses():
    from fastapi import Security

    from autogpt_libs.auth.jwt_utils import get_jwt_payload
    from backend.api.auth.jwt_utils import get_jwt_payload

    @app.get(
        "/with-responses",
@@ -197,8 +195,8 @@ def test_add_auth_responses_to_openapi_multiple_security_schemes():
    from fastapi import Security

    from autogpt_libs.auth.dependencies import requires_admin_user, requires_user
    from autogpt_libs.auth.models import User
    from backend.api.auth.dependencies import requires_admin_user, requires_user
    from backend.api.auth.models import User

    @app.get("/multi-auth")
    def multi_auth(
@@ -227,26 +225,29 @@ def test_add_auth_responses_to_openapi_empty_components():
"""Test when OpenAPI schema has no components section initially."""
app = FastAPI()
# Mock get_openapi to return schema without components
original_get_openapi = get_openapi
def mock_get_openapi(*args, **kwargs):
schema = original_get_openapi(*args, **kwargs)
# Remove components if it exists
def mock_openapi():
schema = get_openapi(
title=app.title,
version=app.version,
routes=app.routes,
)
# Remove components if it exists to test component creation
if "components" in schema:
del schema["components"]
return schema
with mock.patch("autogpt_libs.auth.helpers.get_openapi", mock_get_openapi):
# Apply customization
add_auth_responses_to_openapi(app)
# Replace app's openapi method
app.openapi = mock_openapi
schema = app.openapi()
# Apply customization (this wraps our mock)
add_auth_responses_to_openapi(app)
# Components should be created
assert "components" in schema
assert "responses" in schema["components"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
schema = app.openapi()
# Components should be created
assert "components" in schema
assert "responses" in schema["components"]
assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
def test_add_auth_responses_to_openapi_all_http_methods():
@@ -255,7 +256,7 @@ def test_add_auth_responses_to_openapi_all_http_methods():
    from fastapi import Security

    from autogpt_libs.auth.jwt_utils import get_jwt_payload
    from backend.api.auth.jwt_utils import get_jwt_payload

    @app.get("/resource")
    def get_resource(jwt: dict = Security(get_jwt_payload)):
@@ -333,53 +334,59 @@ def test_endpoint_without_responses_section():
    app = FastAPI()

    from fastapi import Security
    from fastapi.openapi.utils import get_openapi as original_get_openapi

    from autogpt_libs.auth.jwt_utils import get_jwt_payload
    from backend.api.auth.jwt_utils import get_jwt_payload

    # Create endpoint
    @app.get("/no-responses")
    def endpoint_without_responses(jwt: dict = Security(get_jwt_payload)):
        return {"data": "test"}

    # Mock get_openapi to remove responses from the endpoint
    def mock_get_openapi(*args, **kwargs):
        schema = original_get_openapi(*args, **kwargs)
        # Remove responses from our endpoint to trigger line 40
    # Create a mock openapi method that removes responses from the endpoint
    def mock_openapi():
        schema = get_openapi(
            title=app.title,
            version=app.version,
            routes=app.routes,
        )
        # Remove responses from our endpoint to test response creation
        if "/no-responses" in schema.get("paths", {}):
            if "get" in schema["paths"]["/no-responses"]:
                # Delete responses to force the code to create it
                if "responses" in schema["paths"]["/no-responses"]["get"]:
                    del schema["paths"]["/no-responses"]["get"]["responses"]
        return schema

    with mock.patch("autogpt_libs.auth.helpers.get_openapi", mock_get_openapi):
        # Apply customization
        add_auth_responses_to_openapi(app)
    # Replace app's openapi method
    app.openapi = mock_openapi

        # Get schema and verify 401 was added
        schema = app.openapi()
    # Apply customization (this wraps our mock)
    add_auth_responses_to_openapi(app)

        # The endpoint should now have 401 response
        if "/no-responses" in schema["paths"]:
            if "get" in schema["paths"]["/no-responses"]:
                responses = schema["paths"]["/no-responses"]["get"].get("responses", {})
                assert "401" in responses
                assert (
                    responses["401"]["$ref"]
                    == "#/components/responses/HTTP401NotAuthenticatedError"
                )
    # Get schema and verify 401 was added
    schema = app.openapi()

    # The endpoint should now have 401 response
    if "/no-responses" in schema["paths"]:
        if "get" in schema["paths"]["/no-responses"]:
            responses = schema["paths"]["/no-responses"]["get"].get("responses", {})
            assert "401" in responses
            assert (
                responses["401"]["$ref"]
                == "#/components/responses/HTTP401NotAuthenticatedError"
            )
def test_components_with_existing_responses():
"""Test when components already has a responses section."""
app = FastAPI()
# Mock get_openapi to return schema with existing components/responses
from fastapi.openapi.utils import get_openapi as original_get_openapi
def mock_get_openapi(*args, **kwargs):
schema = original_get_openapi(*args, **kwargs)
# Create a mock openapi method that adds existing components/responses
def mock_openapi():
schema = get_openapi(
title=app.title,
version=app.version,
routes=app.routes,
)
# Add existing components/responses
if "components" not in schema:
schema["components"] = {}
@@ -388,21 +395,21 @@ def test_components_with_existing_responses():
        }
        return schema

    with mock.patch("autogpt_libs.auth.helpers.get_openapi", mock_get_openapi):
        # Apply customization
        add_auth_responses_to_openapi(app)
    # Replace app's openapi method
    app.openapi = mock_openapi

        schema = app.openapi()
    # Apply customization (this wraps our mock)
    add_auth_responses_to_openapi(app)

        # Both responses should exist
        assert "ExistingResponse" in schema["components"]["responses"]
        assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]
    schema = app.openapi()

        # Verify our 401 response structure
        error_response = schema["components"]["responses"][
            "HTTP401NotAuthenticatedError"
        ]
        assert error_response["description"] == "Authentication required"
    # Both responses should exist
    assert "ExistingResponse" in schema["components"]["responses"]
    assert "HTTP401NotAuthenticatedError" in schema["components"]["responses"]

    # Verify our 401 response structure
    error_response = schema["components"]["responses"]["HTTP401NotAuthenticatedError"]
    assert error_response["description"] == "Authentication required"
def test_openapi_schema_persistence():
@@ -411,7 +418,7 @@ def test_openapi_schema_persistence():
    from fastapi import Security

    from autogpt_libs.auth.jwt_utils import get_jwt_payload
    from backend.api.auth.jwt_utils import get_jwt_payload

    @app.get("/test")
    def test_endpoint(jwt: dict = Security(get_jwt_payload)):

View File

@@ -12,9 +12,9 @@ from fastapi import HTTPException
from fastapi.security import HTTPAuthorizationCredentials
from pytest_mock import MockerFixture
from autogpt_libs.auth import config, jwt_utils
from autogpt_libs.auth.config import Settings
from autogpt_libs.auth.models import User
from backend.api.auth import config, jwt_utils
from backend.api.auth.config import Settings
from backend.api.auth.models import User
MOCK_JWT_SECRET = "test-secret-key-with-at-least-32-characters"
TEST_USER_PAYLOAD = {

View File

@@ -1,10 +1,10 @@
import logging
import typing
from autogpt_libs.auth import get_user_id, requires_admin_user
from fastapi import APIRouter, Body, Security
from prisma.enums import CreditTransactionType
from backend.api.auth import get_user_id, requires_admin_user
from backend.data.credit import admin_get_user_history, get_user_credit_model
from backend.util.json import SafeJson

View File

@@ -6,9 +6,9 @@ import fastapi.testclient
import prisma.enums
import pytest
import pytest_mock
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from pytest_snapshot.plugin import Snapshot
from backend.api.auth.jwt_utils import get_jwt_payload
from backend.data.model import UserTransaction
from backend.util.json import SafeJson
from backend.util.models import Pagination

View File

@@ -3,10 +3,10 @@ import logging
from datetime import datetime
from typing import Optional
from autogpt_libs.auth import get_user_id, requires_admin_user
from fastapi import APIRouter, HTTPException, Security
from pydantic import BaseModel, Field
from backend.api.auth import get_user_id, requires_admin_user
from backend.blocks.llm import LlmModel
from backend.data.analytics import (
    AccuracyTrendsResponse,

View File

@@ -2,11 +2,11 @@ import logging
import tempfile
import typing
import autogpt_libs.auth
import fastapi
import fastapi.responses
import prisma.enums
import backend.api.auth
import backend.api.features.store.cache as store_cache
import backend.api.features.store.db as store_db
import backend.api.features.store.model as store_model
@@ -17,7 +17,7 @@ logger = logging.getLogger(__name__)
router = fastapi.APIRouter(
    prefix="/admin",
    tags=["store", "admin"],
    dependencies=[fastapi.Security(autogpt_libs.auth.requires_admin_user)],
    dependencies=[fastapi.Security(backend.api.auth.requires_admin_user)],
)
@@ -73,7 +73,7 @@ async def get_admin_listings_with_versions(
async def review_submission(
    store_listing_version_id: str,
    request: store_model.ReviewSubmissionRequest,
    user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
    user_id: str = fastapi.Security(backend.api.auth.get_user_id),
):
    """
    Review a store listing submission.
@@ -117,7 +117,7 @@ async def review_submission(
tags=["store", "admin"],
)
async def admin_download_agent_file(
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
user_id: str = fastapi.Security(backend.api.auth.get_user_id),
store_listing_version_id: str = fastapi.Path(
..., description="The ID of the agent to download"
),

View File

@@ -5,10 +5,10 @@ from typing import Annotated
import fastapi
import pydantic
from autogpt_libs.auth import get_user_id
from autogpt_libs.auth.dependencies import requires_user
import backend.data.analytics
from backend.api.auth import get_user_id
from backend.api.auth.dependencies import requires_user
router = fastapi.APIRouter(dependencies=[fastapi.Security(requires_user)])
logger = logging.getLogger(__name__)

View File

@@ -20,7 +20,7 @@ client = fastapi.testclient.TestClient(app)
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module."""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from backend.api.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield

View File

@@ -2,8 +2,8 @@ import logging
from typing import Annotated, Sequence
import fastapi
from autogpt_libs.auth.dependencies import get_user_id, requires_user
from backend.api.auth.dependencies import get_user_id, requires_user
from backend.integrations.providers import ProviderName
from backend.util.models import Pagination

View File

@@ -2,7 +2,7 @@ import asyncio
import logging
import uuid
from datetime import UTC, datetime
from typing import Any, cast
from typing import Any
from weakref import WeakValueDictionary
from openai.types.chat import (
@@ -104,26 +104,6 @@ class ChatSession(BaseModel):
    successful_agent_runs: dict[str, int] = {}
    successful_agent_schedules: dict[str, int] = {}

    def add_tool_call_to_current_turn(self, tool_call: dict) -> None:
        """Attach a tool_call to the current turn's assistant message.

        Searches backwards for the most recent assistant message (stopping at
        any user message boundary). If found, appends the tool_call to it.
        Otherwise creates a new assistant message with the tool_call.
        """
        for msg in reversed(self.messages):
            if msg.role == "user":
                break
            if msg.role == "assistant":
                if not msg.tool_calls:
                    msg.tool_calls = []
                msg.tool_calls.append(tool_call)
                return
        self.messages.append(
            ChatMessage(role="assistant", content="", tool_calls=[tool_call])
        )

    @staticmethod
    def new(user_id: str) -> "ChatSession":
        return ChatSession(
@@ -192,47 +172,6 @@ class ChatSession(BaseModel):
            successful_agent_schedules=successful_agent_schedules,
        )

    @staticmethod
    def _merge_consecutive_assistant_messages(
        messages: list[ChatCompletionMessageParam],
    ) -> list[ChatCompletionMessageParam]:
        """Merge consecutive assistant messages into single messages.

        Long-running tool flows can create split assistant messages: one with
        text content and another with tool_calls. Anthropic's API requires
        tool_result blocks to reference a tool_use in the immediately preceding
        assistant message, so these splits cause 400 errors via OpenRouter.
        """
        if len(messages) < 2:
            return messages

        result: list[ChatCompletionMessageParam] = [messages[0]]
        for msg in messages[1:]:
            prev = result[-1]
            if prev.get("role") != "assistant" or msg.get("role") != "assistant":
                result.append(msg)
                continue

            prev = cast(ChatCompletionAssistantMessageParam, prev)
            curr = cast(ChatCompletionAssistantMessageParam, msg)

            curr_content = curr.get("content") or ""
            if curr_content:
                prev_content = prev.get("content") or ""
                prev["content"] = (
                    f"{prev_content}\n{curr_content}" if prev_content else curr_content
                )

            curr_tool_calls = curr.get("tool_calls")
            if curr_tool_calls:
                prev_tool_calls = prev.get("tool_calls")
                prev["tool_calls"] = (
                    list(prev_tool_calls) + list(curr_tool_calls)
                    if prev_tool_calls
                    else list(curr_tool_calls)
                )
        return result

    def to_openai_messages(self) -> list[ChatCompletionMessageParam]:
        messages = []
        for message in self.messages:
@@ -319,7 +258,7 @@ class ChatSession(BaseModel):
name=message.name or "",
)
)
return self._merge_consecutive_assistant_messages(messages)
return messages
async def _get_session_from_cache(session_id: str) -> ChatSession | None:

View File

@@ -1,16 +1,4 @@
from typing import cast

import pytest
from openai.types.chat import (
    ChatCompletionAssistantMessageParam,
    ChatCompletionMessageParam,
    ChatCompletionToolMessageParam,
    ChatCompletionUserMessageParam,
)
from openai.types.chat.chat_completion_message_tool_call_param import (
    ChatCompletionMessageToolCallParam,
    Function,
)

from .model import (
    ChatMessage,
@@ -129,205 +117,3 @@ async def test_chatsession_db_storage(setup_test_user, test_user_id):
            loaded.tool_calls is not None
        ), f"Tool calls missing for {orig.role} message"
        assert len(orig.tool_calls) == len(loaded.tool_calls)


# --------------------------------------------------------------------------- #
#                  _merge_consecutive_assistant_messages                       #
# --------------------------------------------------------------------------- #

_tc = ChatCompletionMessageToolCallParam(
    id="tc1", type="function", function=Function(name="do_stuff", arguments="{}")
)
_tc2 = ChatCompletionMessageToolCallParam(
    id="tc2", type="function", function=Function(name="other", arguments="{}")
)


def test_merge_noop_when_no_consecutive_assistants():
    """Messages without consecutive assistants are returned unchanged."""
    msgs = [
        ChatCompletionUserMessageParam(role="user", content="hi"),
        ChatCompletionAssistantMessageParam(role="assistant", content="hello"),
        ChatCompletionUserMessageParam(role="user", content="bye"),
    ]
    merged = ChatSession._merge_consecutive_assistant_messages(msgs)
    assert len(merged) == 3
    assert [m["role"] for m in merged] == ["user", "assistant", "user"]


def test_merge_splits_text_and_tool_calls():
    """The exact bug scenario: text-only assistant followed by tool_calls-only assistant."""
    msgs = [
        ChatCompletionUserMessageParam(role="user", content="build agent"),
        ChatCompletionAssistantMessageParam(
            role="assistant", content="Let me build that"
        ),
        ChatCompletionAssistantMessageParam(
            role="assistant", content="", tool_calls=[_tc]
        ),
        ChatCompletionToolMessageParam(role="tool", content="ok", tool_call_id="tc1"),
    ]
    merged = ChatSession._merge_consecutive_assistant_messages(msgs)
    assert len(merged) == 3
    assert merged[0]["role"] == "user"
    assert merged[2]["role"] == "tool"
    a = cast(ChatCompletionAssistantMessageParam, merged[1])
    assert a["role"] == "assistant"
    assert a.get("content") == "Let me build that"
    assert a.get("tool_calls") == [_tc]


def test_merge_combines_tool_calls_from_both():
    """Both consecutive assistants have tool_calls — they get merged."""
    msgs: list[ChatCompletionAssistantMessageParam] = [
        ChatCompletionAssistantMessageParam(
            role="assistant", content="text", tool_calls=[_tc]
        ),
        ChatCompletionAssistantMessageParam(
            role="assistant", content="", tool_calls=[_tc2]
        ),
    ]
    merged = ChatSession._merge_consecutive_assistant_messages(msgs)  # type: ignore[arg-type]
    assert len(merged) == 1
    a = cast(ChatCompletionAssistantMessageParam, merged[0])
    assert a.get("tool_calls") == [_tc, _tc2]
    assert a.get("content") == "text"


def test_merge_three_consecutive_assistants():
    """Three consecutive assistants collapse into one."""
    msgs: list[ChatCompletionAssistantMessageParam] = [
        ChatCompletionAssistantMessageParam(role="assistant", content="a"),
        ChatCompletionAssistantMessageParam(role="assistant", content="b"),
        ChatCompletionAssistantMessageParam(
            role="assistant", content="", tool_calls=[_tc]
        ),
    ]
    merged = ChatSession._merge_consecutive_assistant_messages(msgs)  # type: ignore[arg-type]
    assert len(merged) == 1
    a = cast(ChatCompletionAssistantMessageParam, merged[0])
    assert a.get("content") == "a\nb"
    assert a.get("tool_calls") == [_tc]


def test_merge_empty_and_single_message():
    """Edge cases: empty list and single message."""
    assert ChatSession._merge_consecutive_assistant_messages([]) == []
    single: list[ChatCompletionMessageParam] = [
        ChatCompletionUserMessageParam(role="user", content="hi")
    ]
    assert ChatSession._merge_consecutive_assistant_messages(single) == single


# --------------------------------------------------------------------------- #
#                        add_tool_call_to_current_turn                         #
# --------------------------------------------------------------------------- #

_raw_tc = {
    "id": "tc1",
    "type": "function",
    "function": {"name": "f", "arguments": "{}"},
}
_raw_tc2 = {
    "id": "tc2",
    "type": "function",
    "function": {"name": "g", "arguments": "{}"},
}


def test_add_tool_call_appends_to_existing_assistant():
    """When the last assistant is from the current turn, tool_call is added to it."""
    session = ChatSession.new(user_id="u")
    session.messages = [
        ChatMessage(role="user", content="hi"),
        ChatMessage(role="assistant", content="working on it"),
    ]
    session.add_tool_call_to_current_turn(_raw_tc)
    assert len(session.messages) == 2  # no new message created
    assert session.messages[1].tool_calls == [_raw_tc]


def test_add_tool_call_creates_assistant_when_none_exists():
    """When there's no current-turn assistant, a new one is created."""
    session = ChatSession.new(user_id="u")
    session.messages = [
        ChatMessage(role="user", content="hi"),
    ]
    session.add_tool_call_to_current_turn(_raw_tc)
    assert len(session.messages) == 2
    assert session.messages[1].role == "assistant"
    assert session.messages[1].tool_calls == [_raw_tc]


def test_add_tool_call_does_not_cross_user_boundary():
    """A user message acts as a boundary — previous assistant is not modified."""
    session = ChatSession.new(user_id="u")
    session.messages = [
        ChatMessage(role="assistant", content="old turn"),
        ChatMessage(role="user", content="new message"),
    ]
    session.add_tool_call_to_current_turn(_raw_tc)
    assert len(session.messages) == 3  # new assistant was created
    assert session.messages[0].tool_calls is None  # old assistant untouched
    assert session.messages[2].role == "assistant"
    assert session.messages[2].tool_calls == [_raw_tc]


def test_add_tool_call_multiple_times():
    """Multiple long-running tool calls accumulate on the same assistant."""
    session = ChatSession.new(user_id="u")
    session.messages = [
        ChatMessage(role="user", content="hi"),
        ChatMessage(role="assistant", content="doing stuff"),
    ]
    session.add_tool_call_to_current_turn(_raw_tc)
    # Simulate a pending tool result in between (like _yield_tool_call does)
    session.messages.append(
        ChatMessage(role="tool", content="pending", tool_call_id="tc1")
    )
    session.add_tool_call_to_current_turn(_raw_tc2)
    assert len(session.messages) == 3  # user, assistant, tool — no extra assistant
    assert session.messages[1].tool_calls == [_raw_tc, _raw_tc2]


def test_to_openai_messages_merges_split_assistants():
    """End-to-end: session with split assistants produces valid OpenAI messages."""
    session = ChatSession.new(user_id="u")
    session.messages = [
        ChatMessage(role="user", content="build agent"),
        ChatMessage(role="assistant", content="Let me build that"),
        ChatMessage(
            role="assistant",
            content="",
            tool_calls=[
                {
                    "id": "tc1",
                    "type": "function",
                    "function": {"name": "create_agent", "arguments": "{}"},
                }
            ],
        ),
        ChatMessage(role="tool", content="done", tool_call_id="tc1"),
        ChatMessage(role="assistant", content="Saved!"),
        ChatMessage(role="user", content="show me an example run"),
    ]
    openai_msgs = session.to_openai_messages()
    # The two consecutive assistants at index 1,2 should be merged
    roles = [m["role"] for m in openai_msgs]
    assert roles == ["user", "assistant", "tool", "assistant", "user"]
    # The merged assistant should have both content and tool_calls
    merged = cast(ChatCompletionAssistantMessageParam, openai_msgs[1])
    assert merged.get("content") == "Let me build that"
    tc_list = merged.get("tool_calls")
    assert tc_list is not None and len(list(tc_list)) == 1
    assert list(tc_list)[0]["id"] == "tc1"

View File

@@ -10,6 +10,8 @@ from typing import Any
from pydantic import BaseModel, Field

from backend.util.json import dumps as json_dumps


class ResponseType(str, Enum):
    """Types of streaming responses following AI SDK protocol."""
@@ -193,6 +195,18 @@ class StreamError(StreamBaseResponse):
default=None, description="Additional error details"
)
def to_sse(self) -> str:
"""Convert to SSE format, only emitting fields required by AI SDK protocol.
The AI SDK uses z.strictObject({type, errorText}) which rejects
any extra fields like `code` or `details`.
"""
data = {
"type": self.type.value,
"errorText": self.errorText,
}
return f"data: {json_dumps(data)}\n\n"
class StreamHeartbeat(StreamBaseResponse):
"""Heartbeat to keep SSE connection alive during long-running operations.

View File

@@ -5,11 +5,11 @@ import uuid as uuid_module
from collections.abc import AsyncGenerator
from typing import Annotated
from autogpt_libs import auth
from fastapi import APIRouter, Depends, Header, HTTPException, Query, Response, Security
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from backend.api import auth
from backend.util.exceptions import NotFoundError
from . import service as chat_service
@@ -303,7 +303,7 @@ async def stream_chat_post(
    session = await _validate_and_get_session(session_id, user_id)
    logger.info(
        f"[TIMING] session validated in {(time.perf_counter() - stream_start_time)*1000:.1f}ms",
        f"[TIMING] session validated in {(time.perf_counter() - stream_start_time) * 1000:.1f}ms",
        extra={
            "json_fields": {
                **log_meta,
@@ -327,7 +327,7 @@ async def stream_chat_post(
        operation_id=operation_id,
    )
    logger.info(
        f"[TIMING] create_task completed in {(time.perf_counter() - task_create_start)*1000:.1f}ms",
        f"[TIMING] create_task completed in {(time.perf_counter() - task_create_start) * 1000:.1f}ms",
        extra={
            "json_fields": {
                **log_meta,
@@ -377,7 +377,7 @@ async def stream_chat_post(
    gen_end_time = time_module.perf_counter()
    total_time = (gen_end_time - gen_start_time) * 1000
    logger.info(
        f"[TIMING] run_ai_generation FINISHED in {total_time/1000:.1f}s; "
        f"[TIMING] run_ai_generation FINISHED in {total_time / 1000:.1f}s; "
        f"task={task_id}, session={session_id}, "
        f"ttfc={ttfc or -1:.2f}s, n_chunks={chunk_count}",
        extra={

View File

@@ -800,13 +800,9 @@ async def stream_chat_completion(
    # Build the messages list in the correct order
    messages_to_save: list[ChatMessage] = []

    # Add assistant message with tool_calls if any.
    # Use extend (not assign) to preserve tool_calls already added by
    # _yield_tool_call for long-running tools.
    # Add assistant message with tool_calls if any
    if accumulated_tool_calls:
        if not assistant_response.tool_calls:
            assistant_response.tool_calls = []
        assistant_response.tool_calls.extend(accumulated_tool_calls)
        assistant_response.tool_calls = accumulated_tool_calls
        logger.info(
            f"Added {len(accumulated_tool_calls)} tool calls to assistant message"
        )
@@ -1237,7 +1233,7 @@ async def _stream_chat_chunks(
    total_time = (time_module.perf_counter() - stream_chunks_start) * 1000
    logger.info(
        f"[TIMING] _stream_chat_chunks COMPLETED in {total_time/1000:.1f}s; "
        f"[TIMING] _stream_chat_chunks COMPLETED in {total_time / 1000:.1f}s; "
        f"session={session.session_id}, user={session.user_id}",
        extra={"json_fields": {**log_meta, "total_time_ms": total_time}},
    )
@@ -1408,9 +1404,13 @@ async def _yield_tool_call(
        operation_id=operation_id,
    )

    # Attach the tool_call to the current turn's assistant message
    # (or create one if this is a tool-only response with no text).
    session.add_tool_call_to_current_turn(tool_calls[yield_idx])
    # Save assistant message with tool_call FIRST (required by LLM)
    assistant_message = ChatMessage(
        role="assistant",
        content="",
        tool_calls=[tool_calls[yield_idx]],
    )
    session.messages.append(assistant_message)

    # Then save pending tool result
    pending_message = ChatMessage(

View File

@@ -569,7 +569,7 @@ async def _stream_listener(
        if isinstance(chunk, StreamFinish):
            total_time = (time.perf_counter() - start_time) * 1000
            logger.info(
                f"[TIMING] StreamFinish received in {total_time/1000:.1f}s; delivered={messages_delivered}",
                f"[TIMING] StreamFinish received in {total_time / 1000:.1f}s; delivered={messages_delivered}",
                extra={
                    "json_fields": {
                        **log_meta,
@@ -620,7 +620,7 @@ async def _stream_listener(
    # Clean up listener task mapping on exit
    total_time = (time.perf_counter() - start_time) * 1000
    logger.info(
        f"[TIMING] _stream_listener FINISHED in {total_time/1000:.1f}s; task={task_id}, "
        f"[TIMING] _stream_listener FINISHED in {total_time / 1000:.1f}s; task={task_id}, "
        f"delivered={messages_delivered}, xread_count={xread_count}",
        extra={
            "json_fields": {

View File

@@ -151,9 +151,10 @@ class RunBlockTool(BaseTool):
logger.info(f"Executing block {block.name} ({block_id}) for user {user_id}")
creds_manager = IntegrationCredentialsManager()
matched_credentials, missing_credentials = (
await self._resolve_block_credentials(user_id, block, input_data)
)
(
matched_credentials,
missing_credentials,
) = await self._resolve_block_credentials(user_id, block, input_data)
if missing_credentials:
# Return setup requirements response with missing credentials

View File

@@ -25,7 +25,7 @@ FIXED_NOW = datetime.datetime(2023, 1, 1, 0, 0, 0, tzinfo=datetime.timezone.utc)
@pytest_asyncio.fixture(loop_scope="session")
async def client(server, mock_jwt_user) -> AsyncGenerator[httpx.AsyncClient, None]:
"""Create async HTTP client with auth overrides"""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from backend.api.auth.jwt_utils import get_jwt_payload
# Override get_jwt_payload dependency to return our test user
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]

View File

@@ -2,10 +2,10 @@ import asyncio
import logging
from typing import Any, List
import autogpt_libs.auth as autogpt_auth_lib
from fastapi import APIRouter, HTTPException, Query, Security, status
from prisma.enums import ReviewStatus
import backend.api.auth as autogpt_auth_lib
from backend.data.execution import (
    ExecutionContext,
    ExecutionStatus,

View File

@@ -3,7 +3,6 @@ import logging
from datetime import datetime, timedelta, timezone
from typing import TYPE_CHECKING, Annotated, List, Literal
from autogpt_libs.auth import get_user_id
from fastapi import (
    APIRouter,
    Body,
@@ -17,6 +16,7 @@ from fastapi import (
from pydantic import BaseModel, Field, SecretStr
from starlette.status import HTTP_500_INTERNAL_SERVER_ERROR, HTTP_502_BAD_GATEWAY
from backend.api.auth import get_user_id
from backend.api.features.library.db import set_preset_webhook, update_preset
from backend.api.features.library.model import LibraryAgentPreset
from backend.data.graph import NodeModel, get_graph, set_node_webhook

View File

@@ -1,10 +1,10 @@
from typing import Literal, Optional
import autogpt_libs.auth as autogpt_auth_lib
from fastapi import APIRouter, Body, HTTPException, Query, Security, status
from fastapi.responses import Response
from prisma.enums import OnboardingStep
import backend.api.auth as autogpt_auth_lib
from backend.data.onboarding import complete_onboarding_step
from .. import db as library_db

View File

@@ -1,9 +1,9 @@
import logging
from typing import Any, Optional
import autogpt_libs.auth as autogpt_auth_lib
from fastapi import APIRouter, Body, HTTPException, Query, Security, status
import backend.api.auth as autogpt_auth_lib
from backend.data.execution import GraphExecutionMeta
from backend.data.graph import get_graph
from backend.data.integrations import get_webhook

View File

@@ -23,7 +23,7 @@ FIXED_NOW = datetime.datetime(2023, 1, 1, 0, 0, 0)
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module"""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from backend.api.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield

View File

@@ -21,13 +21,13 @@ from datetime import datetime
from typing import Literal, Optional
from urllib.parse import urlencode
from autogpt_libs.auth import get_user_id
from fastapi import APIRouter, Body, HTTPException, Security, UploadFile, status
from gcloud.aio import storage as async_storage
from PIL import Image
from prisma.enums import APIKeyPermission
from pydantic import BaseModel, Field
from backend.api.auth import get_user_id
from backend.data.auth.oauth import (
InvalidClientError,
InvalidGrantError,

View File

@@ -21,7 +21,6 @@ from typing import AsyncGenerator
import httpx
import pytest
import pytest_asyncio
from autogpt_libs.api_key.keysmith import APIKeySmith
from prisma.enums import APIKeyPermission
from prisma.models import OAuthAccessToken as PrismaOAuthAccessToken
from prisma.models import OAuthApplication as PrismaOAuthApplication
@@ -29,6 +28,7 @@ from prisma.models import OAuthAuthorizationCode as PrismaOAuthAuthorizationCode
from prisma.models import OAuthRefreshToken as PrismaOAuthRefreshToken
from prisma.models import User as PrismaUser
from backend.api.auth.api_key.keysmith import APIKeySmith
from backend.api.rest_api import app
keysmith = APIKeySmith()
@@ -134,7 +134,7 @@ async def client(server, test_user: str) -> AsyncGenerator[httpx.AsyncClient, No
Depends on `server` to ensure the DB is connected and `test_user` to ensure
the user exists in the database before running tests.
"""
from autogpt_libs.auth import get_user_id
from backend.api.auth import get_user_id
# Override get_user_id dependency to return our test user
def override_get_user_id():

View File

@@ -1,8 +1,9 @@
import logging
from autogpt_libs.auth import get_user_id, requires_user
from fastapi import APIRouter, HTTPException, Security
from backend.api.auth import get_user_id, requires_user
from .models import ApiResponse, ChatRequest
from .service import OttoService

View File

@@ -19,7 +19,7 @@ client = fastapi.testclient.TestClient(app)
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module"""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from backend.api.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield

View File

@@ -57,7 +57,7 @@ async def postmark_webhook_handler(
webhook: Annotated[
PostmarkWebhook,
Body(discriminator="RecordType"),
]
],
):
logger.info(f"Received webhook from Postmark: {webhook}")
match webhook:

View File

@@ -164,7 +164,7 @@ class BlockHandler(ContentHandler):
block_ids = list(all_blocks.keys())
# Query for existing embeddings
placeholders = ",".join([f"${i+1}" for i in range(len(block_ids))])
placeholders = ",".join([f"${i + 1}" for i in range(len(block_ids))])
existing_result = await query_raw_with_schema(
f"""
SELECT "contentId"
@@ -265,7 +265,7 @@ class BlockHandler(ContentHandler):
return {"total": 0, "with_embeddings": 0, "without_embeddings": 0}
block_ids = enabled_block_ids
placeholders = ",".join([f"${i+1}" for i in range(len(block_ids))])
placeholders = ",".join([f"${i + 1}" for i in range(len(block_ids))])
embedded_result = await query_raw_with_schema(
f"""
@@ -508,7 +508,7 @@ class DocumentationHandler(ContentHandler):
]
# Check which ones have embeddings
placeholders = ",".join([f"${i+1}" for i in range(len(section_content_ids))])
placeholders = ",".join([f"${i + 1}" for i in range(len(section_content_ids))])
existing_result = await query_raw_with_schema(
f"""
SELECT "contentId"

View File

@@ -47,7 +47,7 @@ def mock_storage_client(mocker):
async def test_upload_media_success(mock_settings, mock_storage_client):
# Create test JPEG data with valid signature
test_data = b"\xFF\xD8\xFF" + b"test data"
test_data = b"\xff\xd8\xff" + b"test data"
test_file = fastapi.UploadFile(
filename="laptop.jpeg",
@@ -85,7 +85,7 @@ async def test_upload_media_missing_credentials(monkeypatch):
test_file = fastapi.UploadFile(
filename="laptop.jpeg",
file=io.BytesIO(b"\xFF\xD8\xFF" + b"test data"), # Valid JPEG signature
file=io.BytesIO(b"\xff\xd8\xff" + b"test data"), # Valid JPEG signature
headers=starlette.datastructures.Headers({"content-type": "image/jpeg"}),
)
@@ -110,7 +110,7 @@ async def test_upload_media_video_type(mock_settings, mock_storage_client):
async def test_upload_media_file_too_large(mock_settings, mock_storage_client):
large_data = b"\xFF\xD8\xFF" + b"x" * (
large_data = b"\xff\xd8\xff" + b"x" * (
50 * 1024 * 1024 + 1
) # 50MB + 1 byte with valid JPEG signature
test_file = fastapi.UploadFile(

View File
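
The media-upload test hunks above change only the case of hex escapes, which ruff normalizes to lowercase; the bytes are identical either way. For reference, `FF D8 FF` is the JPEG SOI marker plus the first byte of the next marker segment:

```python
# Same three bytes regardless of escape case; only the source text differs.
assert b"\xFF\xD8\xFF" == b"\xff\xd8\xff" == bytes([0xFF, 0xD8, 0xFF])

def looks_like_jpeg(data: bytes) -> bool:
    """JPEG files begin with the SOI marker FF D8 followed by an FF byte."""
    return data[:3] == b"\xff\xd8\xff"

assert looks_like_jpeg(b"\xff\xd8\xff" + b"test data")
```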

@@ -4,11 +4,11 @@ import typing
import urllib.parse
from typing import Literal
import autogpt_libs.auth
import fastapi
import fastapi.responses
import prisma.enums
import backend.api.auth
import backend.data.graph
import backend.util.json
from backend.util.models import Pagination
@@ -34,11 +34,11 @@ router = fastapi.APIRouter()
"/profile",
summary="Get user profile",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
dependencies=[fastapi.Security(backend.api.auth.requires_user)],
response_model=store_model.ProfileDetails,
)
async def get_profile(
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
user_id: str = fastapi.Security(backend.api.auth.get_user_id),
):
"""
Get the profile details for the authenticated user.
@@ -57,12 +57,12 @@ async def get_profile(
"/profile",
summary="Update user profile",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
dependencies=[fastapi.Security(backend.api.auth.requires_user)],
response_model=store_model.CreatorDetails,
)
async def update_or_create_profile(
profile: store_model.Profile,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
user_id: str = fastapi.Security(backend.api.auth.get_user_id),
):
"""
Update the store profile for the authenticated user.
@@ -169,7 +169,7 @@ async def unified_search(
page: int = 1,
page_size: int = 20,
user_id: str | None = fastapi.Security(
autogpt_libs.auth.get_optional_user_id, use_cache=False
backend.api.auth.get_optional_user_id, use_cache=False
),
):
"""
@@ -274,7 +274,7 @@ async def get_agent(
"/graph/{store_listing_version_id}",
summary="Get agent graph",
tags=["store"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
dependencies=[fastapi.Security(backend.api.auth.requires_user)],
)
async def get_graph_meta_by_store_listing_version_id(
store_listing_version_id: str,
@@ -290,7 +290,7 @@ async def get_graph_meta_by_store_listing_version_id(
"/agents/{store_listing_version_id}",
summary="Get agent by version",
tags=["store"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
dependencies=[fastapi.Security(backend.api.auth.requires_user)],
response_model=store_model.StoreAgentDetails,
)
async def get_store_agent(store_listing_version_id: str):
@@ -306,14 +306,14 @@ async def get_store_agent(store_listing_version_id: str):
"/agents/{username}/{agent_name}/review",
summary="Create agent review",
tags=["store"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
dependencies=[fastapi.Security(backend.api.auth.requires_user)],
response_model=store_model.StoreReview,
)
async def create_review(
username: str,
agent_name: str,
review: store_model.StoreReviewCreate,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
user_id: str = fastapi.Security(backend.api.auth.get_user_id),
):
"""
Create a review for a store agent.
@@ -417,11 +417,11 @@ async def get_creator(
"/myagents",
summary="Get my agents",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
dependencies=[fastapi.Security(backend.api.auth.requires_user)],
response_model=store_model.MyAgentsResponse,
)
async def get_my_agents(
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
user_id: str = fastapi.Security(backend.api.auth.get_user_id),
page: typing.Annotated[int, fastapi.Query(ge=1)] = 1,
page_size: typing.Annotated[int, fastapi.Query(ge=1)] = 20,
):
@@ -436,12 +436,12 @@ async def get_my_agents(
"/submissions/{submission_id}",
summary="Delete store submission",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
dependencies=[fastapi.Security(backend.api.auth.requires_user)],
response_model=bool,
)
async def delete_submission(
submission_id: str,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
user_id: str = fastapi.Security(backend.api.auth.get_user_id),
):
"""
Delete a store listing submission.
@@ -465,11 +465,11 @@ async def delete_submission(
"/submissions",
summary="List my submissions",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
dependencies=[fastapi.Security(backend.api.auth.requires_user)],
response_model=store_model.StoreSubmissionsResponse,
)
async def get_submissions(
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
user_id: str = fastapi.Security(backend.api.auth.get_user_id),
page: int = 1,
page_size: int = 20,
):
@@ -508,12 +508,12 @@ async def get_submissions(
"/submissions",
summary="Create store submission",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
dependencies=[fastapi.Security(backend.api.auth.requires_user)],
response_model=store_model.StoreSubmission,
)
async def create_submission(
submission_request: store_model.StoreSubmissionRequest,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
user_id: str = fastapi.Security(backend.api.auth.get_user_id),
):
"""
Create a new store listing submission.
@@ -552,13 +552,13 @@ async def create_submission(
"/submissions/{store_listing_version_id}",
summary="Edit store submission",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
dependencies=[fastapi.Security(backend.api.auth.requires_user)],
response_model=store_model.StoreSubmission,
)
async def edit_submission(
store_listing_version_id: str,
submission_request: store_model.StoreSubmissionEditRequest,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
user_id: str = fastapi.Security(backend.api.auth.get_user_id),
):
"""
Edit an existing store listing submission.
@@ -596,11 +596,11 @@ async def edit_submission(
"/submissions/media",
summary="Upload submission media",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
dependencies=[fastapi.Security(backend.api.auth.requires_user)],
)
async def upload_submission_media(
file: fastapi.UploadFile,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
user_id: str = fastapi.Security(backend.api.auth.get_user_id),
):
"""
Upload media (images/videos) for a store listing submission.
@@ -623,11 +623,11 @@ async def upload_submission_media(
"/submissions/generate_image",
summary="Generate submission image",
tags=["store", "private"],
dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)],
dependencies=[fastapi.Security(backend.api.auth.requires_user)],
)
async def generate_image(
agent_id: str,
user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id),
user_id: str = fastapi.Security(backend.api.auth.get_user_id),
) -> fastapi.responses.Response:
"""
Generate an image for a store listing submission.

View File
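
The store routes above use the same two FastAPI idioms throughout, just with the new import root. A condensed sketch with hypothetical stand-ins for `requires_user` and `get_user_id`:

```python
import fastapi

app = fastapi.FastAPI()

async def requires_user() -> None:
    """Stand-in guard: would raise HTTP 401 when no valid user is present."""

async def get_user_id() -> str:
    """Stand-in resolver: would extract the user id from the JWT."""
    return "user-123"

@app.get(
    "/profile",
    # Route-level dependency: runs on every request, return value discarded.
    dependencies=[fastapi.Security(requires_user)],
)
async def get_profile(
    # Parameter-level dependency: the resolved value is injected.
    user_id: str = fastapi.Security(get_user_id),
):
    return {"user_id": user_id}
```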

@@ -24,7 +24,7 @@ client = fastapi.testclient.TestClient(app)
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
"""Setup auth overrides for all tests in this module"""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from backend.api.auth.jwt_utils import get_jwt_payload
app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
yield

View File

@@ -9,8 +9,6 @@ from typing import Annotated, Any, Sequence, get_args
import pydantic
import stripe
from autogpt_libs.auth import get_user_id, requires_user
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from fastapi import (
APIRouter,
Body,
@@ -28,6 +26,8 @@ from pydantic import BaseModel
from starlette.status import HTTP_204_NO_CONTENT, HTTP_404_NOT_FOUND
from typing_extensions import Optional, TypedDict
from backend.api.auth import get_user_id, requires_user
from backend.api.auth.jwt_utils import get_jwt_payload
from backend.api.model import (
CreateAPIKeyRequest,
CreateAPIKeyResponse,

View File

@@ -25,7 +25,7 @@ client = fastapi.testclient.TestClient(app)
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user, setup_test_user):
"""Setup auth overrides for all tests in this module"""
from autogpt_libs.auth.jwt_utils import get_jwt_payload
from backend.api.auth.jwt_utils import get_jwt_payload
# setup_test_user fixture already executed and user is created in database
# It returns the user_id which we don't need to await
@@ -499,10 +499,12 @@ async def test_upload_file_success(test_user_id: str):
)
# Mock dependencies
with patch("backend.api.features.v1.scan_content_safe") as mock_scan, patch(
"backend.api.features.v1.get_cloud_storage_handler"
) as mock_handler_getter:
with (
patch("backend.api.features.v1.scan_content_safe") as mock_scan,
patch(
"backend.api.features.v1.get_cloud_storage_handler"
) as mock_handler_getter,
):
mock_scan.return_value = None
mock_handler = AsyncMock()
mock_handler.store_file.return_value = "gcs://test-bucket/uploads/123/test.txt"
@@ -551,10 +553,12 @@ async def test_upload_file_no_filename(test_user_id: str):
),
)
with patch("backend.api.features.v1.scan_content_safe") as mock_scan, patch(
"backend.api.features.v1.get_cloud_storage_handler"
) as mock_handler_getter:
with (
patch("backend.api.features.v1.scan_content_safe") as mock_scan,
patch(
"backend.api.features.v1.get_cloud_storage_handler"
) as mock_handler_getter,
):
mock_scan.return_value = None
mock_handler = AsyncMock()
mock_handler.store_file.return_value = (
@@ -632,10 +636,12 @@ async def test_upload_file_cloud_storage_failure(test_user_id: str):
headers=starlette.datastructures.Headers({"content-type": "text/plain"}),
)
with patch("backend.api.features.v1.scan_content_safe") as mock_scan, patch(
"backend.api.features.v1.get_cloud_storage_handler"
) as mock_handler_getter:
with (
patch("backend.api.features.v1.scan_content_safe") as mock_scan,
patch(
"backend.api.features.v1.get_cloud_storage_handler"
) as mock_handler_getter,
):
mock_scan.return_value = None
mock_handler = AsyncMock()
mock_handler.store_file.side_effect = RuntimeError("Storage error!")
@@ -679,10 +685,12 @@ async def test_upload_file_gcs_not_configured_fallback(test_user_id: str):
headers=starlette.datastructures.Headers({"content-type": "text/plain"}),
)
with patch("backend.api.features.v1.scan_content_safe") as mock_scan, patch(
"backend.api.features.v1.get_cloud_storage_handler"
) as mock_handler_getter:
with (
patch("backend.api.features.v1.scan_content_safe") as mock_scan,
patch(
"backend.api.features.v1.get_cloud_storage_handler"
) as mock_handler_getter,
):
mock_scan.return_value = None
mock_handler = AsyncMock()
mock_handler.config.gcs_bucket_name = "" # Simulate no GCS bucket configured

View File
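
Each of the upload-file test hunks above makes the same mechanical change: multiple `patch(...)` context managers move from one wrapped `with a, b:` statement into the parenthesized form Black emits for Python 3.10+. A minimal sketch with hypothetical patch targets:

```python
from unittest.mock import patch

# Old style: both managers share one over-long, awkwardly wrapped line.
# New style (Python 3.10+): parentheses group them, one manager per line.
with (
    patch("os.getcwd") as mock_cwd,
    patch("os.getpid") as mock_pid,
):
    mock_cwd.return_value = "/tmp"
    mock_pid.return_value = 1234
```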

@@ -8,9 +8,9 @@ from typing import Annotated
from urllib.parse import quote
import fastapi
from autogpt_libs.auth.dependencies import get_user_id, requires_user
from fastapi.responses import Response
from backend.api.auth.dependencies import get_user_id, requires_user
from backend.data.workspace import get_workspace, get_workspace_file
from backend.util.workspace_storage import get_workspace_storage

View File

@@ -9,8 +9,6 @@ import fastapi.responses
import pydantic
import starlette.middleware.cors
import uvicorn
from autogpt_libs.auth import add_auth_responses_to_openapi
from autogpt_libs.auth import verify_settings as verify_auth_settings
from fastapi.exceptions import RequestValidationError
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.routing import APIRoute
@@ -40,6 +38,8 @@ import backend.data.user
import backend.integrations.webhooks.utils
import backend.util.service
import backend.util.settings
from backend.api.auth import add_auth_responses_to_openapi
from backend.api.auth import verify_settings as verify_auth_settings
from backend.api.features.chat.completion_consumer import (
start_completion_consumer,
stop_completion_consumer,
@@ -69,7 +69,7 @@ from .utils.openapi import sort_openapi
settings = backend.util.settings.Settings()
logger = logging.getLogger(__name__)
logging.getLogger("autogpt_libs").setLevel(logging.INFO)
logging.getLogger("backend.api.auth").setLevel(logging.INFO)
@contextlib.contextmanager

View File

@@ -457,7 +457,8 @@ async def test_api_key_with_unicode_characters_normalization_attack(mock_request
"""Test that Unicode normalization doesn't bypass validation."""
# Create auth with composed Unicode character
auth = APIKeyAuthenticator(
header_name="X-API-Key", expected_token="café" # é is composed
header_name="X-API-Key",
expected_token="café", # é is composed
)
# Try with decomposed version (c + a + f + e + ´)
@@ -522,8 +523,8 @@ async def test_api_keys_with_newline_variations(mock_request):
"valid\r\ntoken", # Windows newline
"valid\rtoken", # Mac newline
"valid\x85token", # NEL (Next Line)
"valid\x0Btoken", # Vertical Tab
"valid\x0Ctoken", # Form Feed
"valid\x0btoken", # Vertical Tab
"valid\x0ctoken", # Form Feed
]
for api_key in newline_variations:

View File
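
The API-key tests above lean on the fact that composed and decomposed Unicode are different strings unless explicitly normalized, so a byte-wise token comparison correctly rejects the decomposed variant. A quick illustration:

```python
import hmac
import unicodedata

composed = "caf\u00e9"      # 'café' with precomposed é (U+00E9)
decomposed = "cafe\u0301"   # 'cafe' + combining acute accent (U+0301)

assert composed != decomposed                                # raw strings differ
assert unicodedata.normalize("NFC", decomposed) == composed  # NFC unifies them

# A constant-time byte comparison rejects the decomposed form, which is
# the behavior the authenticator test asserts.
assert not hmac.compare_digest(composed.encode(), decomposed.encode())
```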

@@ -5,10 +5,10 @@ from typing import Protocol
import pydantic
import uvicorn
from autogpt_libs.auth.jwt_utils import parse_jwt_token
from fastapi import Depends, FastAPI, WebSocket, WebSocketDisconnect
from starlette.middleware.cors import CORSMiddleware
from backend.api.auth.jwt_utils import parse_jwt_token
from backend.api.conn_manager import ConnectionManager
from backend.api.model import (
WSMessage,

View File

@@ -44,9 +44,12 @@ def test_websocket_server_uses_cors_helper(mocker) -> None:
"backend.api.ws_api.build_cors_params", return_value=cors_params
)
with override_config(
settings, "backend_cors_allow_origins", cors_params["allow_origins"]
), override_config(settings, "app_env", AppEnvironment.LOCAL):
with (
override_config(
settings, "backend_cors_allow_origins", cors_params["allow_origins"]
),
override_config(settings, "app_env", AppEnvironment.LOCAL),
):
WebsocketServer().run()
build_cors.assert_called_once_with(
@@ -65,9 +68,12 @@ def test_websocket_server_uses_cors_helper(mocker) -> None:
def test_websocket_server_blocks_localhost_in_production(mocker) -> None:
mocker.patch("backend.api.ws_api.uvicorn.run")
with override_config(
settings, "backend_cors_allow_origins", ["http://localhost:3000"]
), override_config(settings, "app_env", AppEnvironment.PRODUCTION):
with (
override_config(
settings, "backend_cors_allow_origins", ["http://localhost:3000"]
),
override_config(settings, "app_env", AppEnvironment.PRODUCTION),
):
with pytest.raises(ValueError):
WebsocketServer().run()

View File

@@ -174,7 +174,9 @@ class AIImageGeneratorBlock(Block):
],
test_mock={
# Return a data URI directly so store_media_file doesn't need to download
"_run_client": lambda *args, **kwargs: "data:image/webp;base64,UklGRiQAAABXRUJQVlA4IBgAAAAwAQCdASoBAAEAAQAcJYgCdAEO"
"_run_client": lambda *args, **kwargs: (
"data:image/webp;base64,UklGRiQAAABXRUJQVlA4IBgAAAAwAQCdASoBAAEAAQAcJYgCdAEO"
)
},
)

View File

@@ -142,7 +142,9 @@ class AIMusicGeneratorBlock(Block):
),
],
test_mock={
"run_model": lambda api_key, music_gen_model_version, prompt, duration, temperature, top_k, top_p, classifier_free_guidance, output_format, normalization_strategy: "https://replicate.com/output/generated-audio-url.wav",
"run_model": lambda api_key, music_gen_model_version, prompt, duration, temperature, top_k, top_p, classifier_free_guidance, output_format, normalization_strategy: (
"https://replicate.com/output/generated-audio-url.wav"
),
},
test_credentials=TEST_CREDENTIALS,
)

View File

@@ -69,12 +69,18 @@ class PostToBlueskyBlock(Block):
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
yield (
"error",
"Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY.",
)
return
# Validate character limit for Bluesky
if len(input_data.post) > 300:
yield "error", f"Post text exceeds Bluesky's 300 character limit ({len(input_data.post)} characters)"
yield (
"error",
f"Post text exceeds Bluesky's 300 character limit ({len(input_data.post)} characters)",
)
return
# Validate media constraints for Bluesky

View File
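
This and the following Ayrshare hunks are the same formatter rewrite over and over: `yield name, value` and `yield (name, value)` produce the identical tuple, and the parentheses only let a long second element wrap. A minimal sketch:

```python
from typing import Iterator

def emit() -> Iterator[tuple[str, str]]:
    # Both statements yield the same 2-tuple; the parentheses are purely layout.
    yield "error", "short message"
    yield (
        "error",
        "a much longer message that would otherwise run past the 88-column limit",
    )

assert [name for name, _ in emit()] == ["error", "error"]
```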

@@ -131,7 +131,10 @@ class PostToFacebookBlock(Block):
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
yield (
"error",
"Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY.",
)
return
# Convert datetime to ISO format if provided

View File

@@ -120,12 +120,18 @@ class PostToGMBBlock(Block):
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
yield (
"error",
"Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY.",
)
return
# Validate GMB constraints
if len(input_data.media_urls) > 1:
yield "error", "Google My Business supports only one image or video per post"
yield (
"error",
"Google My Business supports only one image or video per post",
)
return
# Validate offer coupon code length

View File

@@ -123,16 +123,25 @@ class PostToInstagramBlock(Block):
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
yield (
"error",
"Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY.",
)
return
# Validate Instagram constraints
if len(input_data.post) > 2200:
yield "error", f"Instagram post text exceeds 2,200 character limit ({len(input_data.post)} characters)"
yield (
"error",
f"Instagram post text exceeds 2,200 character limit ({len(input_data.post)} characters)",
)
return
if len(input_data.media_urls) > 10:
yield "error", "Instagram supports a maximum of 10 images/videos in a carousel"
yield (
"error",
"Instagram supports a maximum of 10 images/videos in a carousel",
)
return
if len(input_data.collaborators) > 3:
@@ -147,7 +156,10 @@ class PostToInstagramBlock(Block):
]
if any(reel_options) and not all(reel_options):
yield "error", "When posting a reel, all reel options must be set: share_reels_feed, audio_name, and either thumbnail or thumbnail_offset"
yield (
"error",
"When posting a reel, all reel options must be set: share_reels_feed, audio_name, and either thumbnail or thumbnail_offset",
)
return
# Count hashtags and mentions
@@ -155,11 +167,17 @@ class PostToInstagramBlock(Block):
mention_count = input_data.post.count("@")
if hashtag_count > 30:
yield "error", f"Instagram allows maximum 30 hashtags ({hashtag_count} found)"
yield (
"error",
f"Instagram allows maximum 30 hashtags ({hashtag_count} found)",
)
return
if mention_count > 3:
yield "error", f"Instagram allows maximum 3 @mentions ({mention_count} found)"
yield (
"error",
f"Instagram allows maximum 3 @mentions ({mention_count} found)",
)
return
# Convert datetime to ISO format if provided
@@ -191,7 +209,10 @@ class PostToInstagramBlock(Block):
# Validate alt text length
for i, alt in enumerate(input_data.alt_text):
if len(alt) > 1000:
yield "error", f"Alt text {i+1} exceeds 1,000 character limit ({len(alt)} characters)"
yield (
"error",
f"Alt text {i + 1} exceeds 1,000 character limit ({len(alt)} characters)",
)
return
instagram_options["altText"] = input_data.alt_text
@@ -206,13 +227,19 @@ class PostToInstagramBlock(Block):
try:
tag_obj = InstagramUserTag(**tag)
except Exception as e:
yield "error", f"Invalid user tag: {e}, tages need to be a dictionary with a 3 items: username (str), x (float) and y (float)"
yield (
"error",
f"Invalid user tag: {e}, tages need to be a dictionary with a 3 items: username (str), x (float) and y (float)",
)
return
tag_dict: dict[str, float | str] = {"username": tag_obj.username}
if tag_obj.x is not None and tag_obj.y is not None:
# Validate coordinates
if not (0.0 <= tag_obj.x <= 1.0) or not (0.0 <= tag_obj.y <= 1.0):
yield "error", f"User tag coordinates must be between 0.0 and 1.0 (user: {tag_obj.username})"
yield (
"error",
f"User tag coordinates must be between 0.0 and 1.0 (user: {tag_obj.username})",
)
return
tag_dict["x"] = tag_obj.x
tag_dict["y"] = tag_obj.y

View File

@@ -123,12 +123,18 @@ class PostToLinkedInBlock(Block):
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
yield (
"error",
"Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY.",
)
return
# Validate LinkedIn constraints
if len(input_data.post) > 3000:
yield "error", f"LinkedIn post text exceeds 3,000 character limit ({len(input_data.post)} characters)"
yield (
"error",
f"LinkedIn post text exceeds 3,000 character limit ({len(input_data.post)} characters)",
)
return
if len(input_data.media_urls) > 9:
@@ -136,13 +142,19 @@ class PostToLinkedInBlock(Block):
return
if input_data.document_title and len(input_data.document_title) > 400:
yield "error", f"LinkedIn document title exceeds 400 character limit ({len(input_data.document_title)} characters)"
yield (
"error",
f"LinkedIn document title exceeds 400 character limit ({len(input_data.document_title)} characters)",
)
return
# Validate visibility option
valid_visibility = ["public", "connections", "loggedin"]
if input_data.visibility not in valid_visibility:
yield "error", f"LinkedIn visibility must be one of: {', '.join(valid_visibility)}"
yield (
"error",
f"LinkedIn visibility must be one of: {', '.join(valid_visibility)}",
)
return
# Check for document extensions

View File

@@ -103,20 +103,32 @@ class PostToPinterestBlock(Block):
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
yield (
"error",
"Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY.",
)
return
# Validate Pinterest constraints
if len(input_data.post) > 500:
yield "error", f"Pinterest pin description exceeds 500 character limit ({len(input_data.post)} characters)"
yield (
"error",
f"Pinterest pin description exceeds 500 character limit ({len(input_data.post)} characters)",
)
return
if len(input_data.pin_title) > 100:
yield "error", f"Pinterest pin title exceeds 100 character limit ({len(input_data.pin_title)} characters)"
yield (
"error",
f"Pinterest pin title exceeds 100 character limit ({len(input_data.pin_title)} characters)",
)
return
if len(input_data.link) > 2048:
yield "error", f"Pinterest link URL exceeds 2048 character limit ({len(input_data.link)} characters)"
yield (
"error",
f"Pinterest link URL exceeds 2048 character limit ({len(input_data.link)} characters)",
)
return
if len(input_data.media_urls) == 0:
@@ -141,7 +153,10 @@ class PostToPinterestBlock(Block):
# Validate alt text length
for i, alt in enumerate(input_data.alt_text):
if len(alt) > 500:
yield "error", f"Pinterest alt text {i+1} exceeds 500 character limit ({len(alt)} characters)"
yield (
"error",
f"Pinterest alt text {i + 1} exceeds 500 character limit ({len(alt)} characters)",
)
return
# Convert datetime to ISO format if provided

View File

@@ -73,7 +73,10 @@ class PostToSnapchatBlock(Block):
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
yield (
"error",
"Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY.",
)
return
# Validate Snapchat constraints
@@ -88,7 +91,10 @@ class PostToSnapchatBlock(Block):
# Validate story type
valid_story_types = ["story", "saved_story", "spotlight"]
if input_data.story_type not in valid_story_types:
yield "error", f"Snapchat story type must be one of: {', '.join(valid_story_types)}"
yield (
"error",
f"Snapchat story type must be one of: {', '.join(valid_story_types)}",
)
return
# Convert datetime to ISO format if provided

View File

@@ -68,7 +68,10 @@ class PostToTelegramBlock(Block):
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
yield (
"error",
"Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY.",
)
return
# Validate Telegram constraints

View File

@@ -61,22 +61,34 @@ class PostToThreadsBlock(Block):
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
yield (
"error",
"Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY.",
)
return
# Validate Threads constraints
if len(input_data.post) > 500:
yield "error", f"Threads post text exceeds 500 character limit ({len(input_data.post)} characters)"
yield (
"error",
f"Threads post text exceeds 500 character limit ({len(input_data.post)} characters)",
)
return
if len(input_data.media_urls) > 20:
yield "error", "Threads supports a maximum of 20 images/videos in a carousel"
yield (
"error",
"Threads supports a maximum of 20 images/videos in a carousel",
)
return
# Count hashtags (only 1 allowed)
hashtag_count = input_data.post.count("#")
if hashtag_count > 1:
yield "error", f"Threads allows only 1 hashtag per post ({hashtag_count} found)"
yield (
"error",
f"Threads allows only 1 hashtag per post ({hashtag_count} found)",
)
return
# Convert datetime to ISO format if provided

View File

@@ -123,16 +123,25 @@ class PostToTikTokBlock(Block):
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
yield (
"error",
"Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY.",
)
return
# Validate TikTok constraints
if len(input_data.post) > 2200:
yield "error", f"TikTok post text exceeds 2,200 character limit ({len(input_data.post)} characters)"
yield (
"error",
f"TikTok post text exceeds 2,200 character limit ({len(input_data.post)} characters)",
)
return
if not input_data.media_urls:
yield "error", "TikTok requires at least one media URL (either 1 video or up to 35 images)"
yield (
"error",
"TikTok requires at least one media URL (either 1 video or up to 35 images)",
)
return
# Check for video vs image constraints
@@ -150,7 +159,10 @@ class PostToTikTokBlock(Block):
)
if has_video and has_images:
yield "error", "TikTok does not support mixing video and images in the same post"
yield (
"error",
"TikTok does not support mixing video and images in the same post",
)
return
if has_video and len(input_data.media_urls) > 1:
@@ -163,13 +175,19 @@ class PostToTikTokBlock(Block):
# Validate image cover index
if has_images and input_data.image_cover_index >= len(input_data.media_urls):
yield "error", f"Image cover index {input_data.image_cover_index} is out of range (max: {len(input_data.media_urls) - 1})"
yield (
"error",
f"Image cover index {input_data.image_cover_index} is out of range (max: {len(input_data.media_urls) - 1})",
)
return
# Check for PNG files (not supported)
has_png = any(url.lower().endswith(".png") for url in input_data.media_urls)
if has_png:
yield "error", "TikTok does not support PNG files. Please use JPG, JPEG, or WEBP for images."
yield (
"error",
"TikTok does not support PNG files. Please use JPG, JPEG, or WEBP for images.",
)
return
# Convert datetime to ISO format if provided

View File

@@ -126,16 +126,25 @@ class PostToXBlock(Block):
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
yield (
"error",
"Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY.",
)
return
# Validate X constraints
if not input_data.long_post and len(input_data.post) > 280:
yield "error", f"X post text exceeds 280 character limit ({len(input_data.post)} characters). Enable 'long_post' for Premium accounts."
yield (
"error",
f"X post text exceeds 280 character limit ({len(input_data.post)} characters). Enable 'long_post' for Premium accounts.",
)
return
if input_data.long_post and len(input_data.post) > 25000:
yield "error", f"X long post text exceeds 25,000 character limit ({len(input_data.post)} characters)"
yield (
"error",
f"X long post text exceeds 25,000 character limit ({len(input_data.post)} characters)",
)
return
if len(input_data.media_urls) > 4:
@@ -149,14 +158,20 @@ class PostToXBlock(Block):
return
if input_data.poll_duration < 1 or input_data.poll_duration > 10080:
yield "error", "X poll duration must be between 1 and 10,080 minutes (7 days)"
yield (
"error",
"X poll duration must be between 1 and 10,080 minutes (7 days)",
)
return
# Validate alt text
if input_data.alt_text:
for i, alt in enumerate(input_data.alt_text):
if len(alt) > 1000:
yield "error", f"X alt text {i+1} exceeds 1,000 character limit ({len(alt)} characters)"
yield (
"error",
f"X alt text {i + 1} exceeds 1,000 character limit ({len(alt)} characters)",
)
return
# Validate subtitle settings
@@ -168,7 +183,10 @@ class PostToXBlock(Block):
return
if len(input_data.subtitle_name) > 150:
yield "error", f"Subtitle name exceeds 150 character limit ({len(input_data.subtitle_name)} characters)"
yield (
"error",
f"Subtitle name exceeds 150 character limit ({len(input_data.subtitle_name)} characters)",
)
return
# Convert datetime to ISO format if provided

View File

@@ -149,7 +149,10 @@ class PostToYouTubeBlock(Block):
client = create_ayrshare_client()
if not client:
yield "error", "Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY."
yield (
"error",
"Ayrshare integration is not configured. Please set up the AYRSHARE_API_KEY.",
)
return
# Validate YouTube constraints
@@ -158,11 +161,17 @@ class PostToYouTubeBlock(Block):
return
if len(input_data.title) > 100:
yield "error", f"YouTube title exceeds 100 character limit ({len(input_data.title)} characters)"
yield (
"error",
f"YouTube title exceeds 100 character limit ({len(input_data.title)} characters)",
)
return
if len(input_data.post) > 5000:
yield "error", f"YouTube description exceeds 5,000 character limit ({len(input_data.post)} characters)"
yield (
"error",
f"YouTube description exceeds 5,000 character limit ({len(input_data.post)} characters)",
)
return
# Check for forbidden characters
@@ -186,7 +195,10 @@ class PostToYouTubeBlock(Block):
# Validate visibility option
valid_visibility = ["private", "public", "unlisted"]
if input_data.visibility not in valid_visibility:
yield "error", f"YouTube visibility must be one of: {', '.join(valid_visibility)}"
yield (
"error",
f"YouTube visibility must be one of: {', '.join(valid_visibility)}",
)
return
# Validate thumbnail URL format
@@ -202,12 +214,18 @@ class PostToYouTubeBlock(Block):
if input_data.tags:
total_tag_length = sum(len(tag) for tag in input_data.tags)
if total_tag_length > 500:
yield "error", f"YouTube tags total length exceeds 500 characters ({total_tag_length} characters)"
yield (
"error",
f"YouTube tags total length exceeds 500 characters ({total_tag_length} characters)",
)
return
for tag in input_data.tags:
if len(tag) < 2:
yield "error", f"YouTube tag '{tag}' is too short (minimum 2 characters)"
yield (
"error",
f"YouTube tag '{tag}' is too short (minimum 2 characters)",
)
return
# Validate subtitle URL
@@ -225,12 +243,18 @@ class PostToYouTubeBlock(Block):
return
if input_data.subtitle_name and len(input_data.subtitle_name) > 150:
yield "error", f"YouTube subtitle name exceeds 150 character limit ({len(input_data.subtitle_name)} characters)"
yield (
"error",
f"YouTube subtitle name exceeds 150 character limit ({len(input_data.subtitle_name)} characters)",
)
return
# Validate publish_at format if provided
if input_data.publish_at and input_data.schedule_date:
yield "error", "Cannot use both 'publish_at' and 'schedule_date'. Use 'publish_at' for YouTube-controlled publishing."
yield (
"error",
"Cannot use both 'publish_at' and 'schedule_date'. Use 'publish_at' for YouTube-controlled publishing.",
)
return
# Convert datetime to ISO format if provided (only if not using publish_at)

View File

@@ -59,10 +59,13 @@ class FileStoreBlock(Block):
# for_block_output: smart format - workspace:// in CoPilot, data URI in graphs
return_format = "for_external_api" if input_data.base_64 else "for_block_output"
yield "file_out", await store_media_file(
file=input_data.file_in,
execution_context=execution_context,
return_format=return_format,
yield (
"file_out",
await store_media_file(
file=input_data.file_in,
execution_context=execution_context,
return_format=return_format,
),
)

View File

@@ -728,9 +728,12 @@ class ConcatenateListsBlock(Block):
# Type validation: each item must be a list
# Strings are iterable and would cause extend() to iterate character-by-character
# Non-iterable types would raise TypeError
yield "error", (
f"Invalid input at index {idx}: expected a list, got {type(lst).__name__}. "
f"All items in 'lists' must be lists (e.g., [[1, 2], [3, 4]])."
yield (
"error",
(
f"Invalid input at index {idx}: expected a list, got {type(lst).__name__}. "
f"All items in 'lists' must be lists (e.g., [[1, 2], [3, 4]])."
),
)
return
concatenated.extend(lst)

View File

@@ -110,8 +110,10 @@ class DataForSeoKeywordSuggestionsBlock(Block):
test_output=[
(
"suggestion",
lambda x: hasattr(x, "keyword")
and x.keyword == "digital marketing strategy",
lambda x: (
hasattr(x, "keyword")
and x.keyword == "digital marketing strategy"
),
),
("suggestions", lambda x: isinstance(x, list) and len(x) == 1),
("total_count", 1),

View File

@@ -137,47 +137,71 @@ class SendEmailBlock(Block):
)
yield "status", status
except socket.gaierror:
yield "error", (
f"Cannot connect to SMTP server '{input_data.config.smtp_server}'. "
"Please verify the server address is correct."
yield (
"error",
(
f"Cannot connect to SMTP server '{input_data.config.smtp_server}'. "
"Please verify the server address is correct."
),
)
except socket.timeout:
yield "error", (
f"Connection timeout to '{input_data.config.smtp_server}' "
f"on port {input_data.config.smtp_port}. "
"The server may be down or unreachable."
yield (
"error",
(
f"Connection timeout to '{input_data.config.smtp_server}' "
f"on port {input_data.config.smtp_port}. "
"The server may be down or unreachable."
),
)
except ConnectionRefusedError:
yield "error", (
f"Connection refused to '{input_data.config.smtp_server}' "
f"on port {input_data.config.smtp_port}. "
"Common SMTP ports are: 587 (TLS), 465 (SSL), 25 (plain). "
"Please verify the port is correct."
yield (
"error",
(
f"Connection refused to '{input_data.config.smtp_server}' "
f"on port {input_data.config.smtp_port}. "
"Common SMTP ports are: 587 (TLS), 465 (SSL), 25 (plain). "
"Please verify the port is correct."
),
)
except smtplib.SMTPNotSupportedError:
yield "error", (
f"STARTTLS not supported by server '{input_data.config.smtp_server}'. "
"Try using port 465 for SSL or port 25 for unencrypted connection."
yield (
"error",
(
f"STARTTLS not supported by server '{input_data.config.smtp_server}'. "
"Try using port 465 for SSL or port 25 for unencrypted connection."
),
)
except ssl.SSLError as e:
yield "error", (
f"SSL/TLS error when connecting to '{input_data.config.smtp_server}': {str(e)}. "
"The server may require a different security protocol."
yield (
"error",
(
f"SSL/TLS error when connecting to '{input_data.config.smtp_server}': {str(e)}. "
"The server may require a different security protocol."
),
)
except smtplib.SMTPAuthenticationError:
yield "error", (
"Authentication failed. Please verify your username and password are correct."
yield (
"error",
(
"Authentication failed. Please verify your username and password are correct."
),
)
except smtplib.SMTPRecipientsRefused:
yield "error", (
f"Recipient email address '{input_data.to_email}' was rejected by the server. "
"Please verify the email address is valid."
yield (
"error",
(
f"Recipient email address '{input_data.to_email}' was rejected by the server. "
"Please verify the email address is valid."
),
)
except smtplib.SMTPSenderRefused:
yield "error", (
"Sender email address defined in the credentials that where used"
"was rejected by the server. "
"Please verify your account is authorized to send emails."
yield (
"error",
(
"Sender email address defined in the credentials that where used"
"was rejected by the server. "
"Please verify your account is authorized to send emails."
),
)
except smtplib.SMTPDataError as e:
yield "error", f"Email data rejected by server: {str(e)}"

View File

@@ -490,7 +490,9 @@ class GetLinkedinProfilePictureBlock(Block):
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"_get_profile_picture": lambda *args, **kwargs: "https://media.licdn.com/dms/image/C4D03AQFj-xjuXrLFSQ/profile-displayphoto-shrink_800_800/0/1576881858598?e=1686787200&v=beta&t=zrQC76QwsfQQIWthfOnrKRBMZ5D-qIAvzLXLmWgYvTk",
"_get_profile_picture": lambda *args, **kwargs: (
"https://media.licdn.com/dms/image/C4D03AQFj-xjuXrLFSQ/profile-displayphoto-shrink_800_800/0/1576881858598?e=1686787200&v=beta&t=zrQC76QwsfQQIWthfOnrKRBMZ5D-qIAvzLXLmWgYvTk"
),
},
)

View File

@@ -319,7 +319,7 @@ class CostDollars(BaseModel):
# Helper functions for payload processing
def process_text_field(
text: Union[bool, TextEnabled, TextDisabled, TextAdvanced, None]
text: Union[bool, TextEnabled, TextDisabled, TextAdvanced, None],
) -> Optional[Union[bool, Dict[str, Any]]]:
"""Process text field for API payload."""
if text is None:
@@ -400,7 +400,7 @@ def process_contents_settings(contents: Optional[ContentSettings]) -> Dict[str,
def process_context_field(
context: Union[bool, dict, ContextEnabled, ContextDisabled, ContextAdvanced, None]
context: Union[bool, dict, ContextEnabled, ContextDisabled, ContextAdvanced, None],
) -> Optional[Union[bool, Dict[str, int]]]:
"""Process context field for API payload."""
if context is None:

View File

@@ -566,8 +566,9 @@ class ExaUpdateWebsetBlock(Block):
yield "status", status_str
yield "external_id", sdk_webset.external_id
yield "metadata", sdk_webset.metadata or {}
yield "updated_at", (
sdk_webset.updated_at.isoformat() if sdk_webset.updated_at else ""
yield (
"updated_at",
(sdk_webset.updated_at.isoformat() if sdk_webset.updated_at else ""),
)
@@ -706,11 +707,13 @@ class ExaGetWebsetBlock(Block):
yield "enrichments", enrichments_data
yield "monitors", monitors_data
yield "metadata", sdk_webset.metadata or {}
yield "created_at", (
sdk_webset.created_at.isoformat() if sdk_webset.created_at else ""
yield (
"created_at",
(sdk_webset.created_at.isoformat() if sdk_webset.created_at else ""),
)
yield "updated_at", (
sdk_webset.updated_at.isoformat() if sdk_webset.updated_at else ""
yield (
"updated_at",
(sdk_webset.updated_at.isoformat() if sdk_webset.updated_at else ""),
)

View File

@@ -523,16 +523,20 @@ class ExaWaitForEnrichmentBlock(Block):
items_enriched = 0
if input_data.sample_results and status == "completed":
sample_data, items_enriched = (
await self._get_sample_enrichments(
input_data.webset_id, input_data.enrichment_id, aexa
)
(
sample_data,
items_enriched,
) = await self._get_sample_enrichments(
input_data.webset_id, input_data.enrichment_id, aexa
)
yield "enrichment_id", input_data.enrichment_id
yield "final_status", status
yield "items_enriched", items_enriched
yield "enrichment_title", enrichment.title or enrichment.description or ""
yield (
"enrichment_title",
enrichment.title or enrichment.description or "",
)
yield "elapsed_time", elapsed
if input_data.sample_results:
yield "sample_data", sample_data

View File

@@ -127,7 +127,9 @@ class AIImageEditorBlock(Block):
],
test_mock={
# Use data URI to avoid HTTP requests during tests
"run_model": lambda *args, **kwargs: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg==",
"run_model": lambda *args, **kwargs: (
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg=="
),
},
test_credentials=TEST_CREDENTIALS,
)

View File

@@ -798,7 +798,9 @@ class GithubUnassignIssueBlock(Block):
test_credentials=TEST_CREDENTIALS,
test_output=[("status", "Issue unassigned successfully")],
test_mock={
"unassign_issue": lambda *args, **kwargs: "Issue unassigned successfully"
"unassign_issue": lambda *args, **kwargs: (
"Issue unassigned successfully"
)
},
)

View File

@@ -261,7 +261,9 @@ class GithubReadPullRequestBlock(Block):
"This is the body of the pull request.",
"username",
),
"read_pr_changes": lambda *args, **kwargs: "List of changes made in the pull request.",
"read_pr_changes": lambda *args, **kwargs: (
"List of changes made in the pull request."
),
},
)
@@ -365,7 +367,9 @@ class GithubAssignPRReviewerBlock(Block):
test_credentials=TEST_CREDENTIALS,
test_output=[("status", "Reviewer assigned successfully")],
test_mock={
"assign_reviewer": lambda *args, **kwargs: "Reviewer assigned successfully"
"assign_reviewer": lambda *args, **kwargs: (
"Reviewer assigned successfully"
)
},
)
@@ -432,7 +436,9 @@ class GithubUnassignPRReviewerBlock(Block):
test_credentials=TEST_CREDENTIALS,
test_output=[("status", "Reviewer unassigned successfully")],
test_mock={
"unassign_reviewer": lambda *args, **kwargs: "Reviewer unassigned successfully"
"unassign_reviewer": lambda *args, **kwargs: (
"Reviewer unassigned successfully"
)
},
)

View File

@@ -341,14 +341,17 @@ class GoogleDocsCreateBlock(Block):
)
doc_id = result["document_id"]
doc_url = result["document_url"]
yield "document", GoogleDriveFile(
id=doc_id,
name=input_data.title,
mimeType="application/vnd.google-apps.document",
url=doc_url,
iconUrl="https://www.gstatic.com/images/branding/product/1x/docs_48dp.png",
isFolder=False,
_credentials_id=input_data.credentials.id,
yield (
"document",
GoogleDriveFile(
id=doc_id,
name=input_data.title,
mimeType="application/vnd.google-apps.document",
url=doc_url,
iconUrl="https://www.gstatic.com/images/branding/product/1x/docs_48dp.png",
isFolder=False,
_credentials_id=input_data.credentials.id,
),
)
yield "document_id", doc_id
yield "document_url", doc_url
@@ -815,7 +818,10 @@ class GoogleDocsGetMetadataBlock(Block):
yield "title", result["title"]
yield "document_id", input_data.document.id
yield "revision_id", result["revision_id"]
yield "document_url", f"https://docs.google.com/document/d/{input_data.document.id}/edit"
yield (
"document_url",
f"https://docs.google.com/document/d/{input_data.document.id}/edit",
)
yield "document", _make_document_output(input_data.document)
except Exception as e:
yield "error", f"Failed to get metadata: {str(e)}"

View File

@@ -278,11 +278,13 @@ class GmailBase(Block, ABC):
"""Download attachment content when email body is stored as attachment."""
try:
attachment = await asyncio.to_thread(
lambda: service.users()
.messages()
.attachments()
.get(userId="me", messageId=msg_id, id=attachment_id)
.execute()
lambda: (
service.users()
.messages()
.attachments()
.get(userId="me", messageId=msg_id, id=attachment_id)
.execute()
)
)
return attachment.get("data")
except Exception:
@@ -304,11 +306,13 @@ class GmailBase(Block, ABC):
async def download_attachment(self, service, message_id: str, attachment_id: str):
attachment = await asyncio.to_thread(
lambda: service.users()
.messages()
.attachments()
.get(userId="me", messageId=message_id, id=attachment_id)
.execute()
lambda: (
service.users()
.messages()
.attachments()
.get(userId="me", messageId=message_id, id=attachment_id)
.execute()
)
)
file_data = base64.urlsafe_b64decode(attachment["data"].encode("UTF-8"))
return file_data
@@ -466,10 +470,12 @@ class GmailReadBlock(GmailBase):
else "full"
)
msg = await asyncio.to_thread(
lambda: service.users()
.messages()
.get(userId="me", id=message["id"], format=format_type)
.execute()
lambda: (
service.users()
.messages()
.get(userId="me", id=message["id"], format=format_type)
.execute()
)
)
headers = {
@@ -602,10 +608,12 @@ class GmailSendBlock(GmailBase):
)
raw_message = await create_mime_message(input_data, execution_context)
sent_message = await asyncio.to_thread(
lambda: service.users()
.messages()
.send(userId="me", body={"raw": raw_message})
.execute()
lambda: (
service.users()
.messages()
.send(userId="me", body={"raw": raw_message})
.execute()
)
)
return {"id": sent_message["id"], "status": "sent"}
@@ -699,8 +707,13 @@ class GmailCreateDraftBlock(GmailBase):
input_data,
execution_context,
)
yield "result", GmailDraftResult(
id=result["id"], message_id=result["message"]["id"], status="draft_created"
yield (
"result",
GmailDraftResult(
id=result["id"],
message_id=result["message"]["id"],
status="draft_created",
),
)
async def _create_draft(
@@ -713,10 +726,12 @@ class GmailCreateDraftBlock(GmailBase):
raw_message = await create_mime_message(input_data, execution_context)
draft = await asyncio.to_thread(
lambda: service.users()
.drafts()
.create(userId="me", body={"message": {"raw": raw_message}})
.execute()
lambda: (
service.users()
.drafts()
.create(userId="me", body={"message": {"raw": raw_message}})
.execute()
)
)
return draft
@@ -840,10 +855,12 @@ class GmailAddLabelBlock(GmailBase):
async def _add_label(self, service, message_id: str, label_name: str) -> dict:
label_id = await self._get_or_create_label(service, label_name)
result = await asyncio.to_thread(
lambda: service.users()
.messages()
.modify(userId="me", id=message_id, body={"addLabelIds": [label_id]})
.execute()
lambda: (
service.users()
.messages()
.modify(userId="me", id=message_id, body={"addLabelIds": [label_id]})
.execute()
)
)
if not result.get("labelIds"):
return {
@@ -857,10 +874,12 @@ class GmailAddLabelBlock(GmailBase):
label_id = await self._get_label_id(service, label_name)
if not label_id:
label = await asyncio.to_thread(
lambda: service.users()
.labels()
.create(userId="me", body={"name": label_name})
.execute()
lambda: (
service.users()
.labels()
.create(userId="me", body={"name": label_name})
.execute()
)
)
label_id = label["id"]
return label_id
@@ -927,10 +946,14 @@ class GmailRemoveLabelBlock(GmailBase):
label_id = await self._get_label_id(service, label_name)
if label_id:
result = await asyncio.to_thread(
lambda: service.users()
.messages()
.modify(userId="me", id=message_id, body={"removeLabelIds": [label_id]})
.execute()
lambda: (
service.users()
.messages()
.modify(
userId="me", id=message_id, body={"removeLabelIds": [label_id]}
)
.execute()
)
)
if not result.get("labelIds"):
return {
@@ -1048,10 +1071,12 @@ class GmailGetThreadBlock(GmailBase):
else "full"
)
thread = await asyncio.to_thread(
lambda: service.users()
.threads()
.get(userId="me", id=thread_id, format=format_type)
.execute()
lambda: (
service.users()
.threads()
.get(userId="me", id=thread_id, format=format_type)
.execute()
)
)
parsed_messages = []
@@ -1106,23 +1131,25 @@ async def _build_reply_message(
"""
# Get parent message for reply context
parent = await asyncio.to_thread(
lambda: service.users()
.messages()
.get(
userId="me",
id=input_data.parentMessageId,
format="metadata",
metadataHeaders=[
"Subject",
"References",
"Message-ID",
"From",
"To",
"Cc",
"Reply-To",
],
lambda: (
service.users()
.messages()
.get(
userId="me",
id=input_data.parentMessageId,
format="metadata",
metadataHeaders=[
"Subject",
"References",
"Message-ID",
"From",
"To",
"Cc",
"Reply-To",
],
)
.execute()
)
.execute()
)
# Build headers dictionary, preserving all values for duplicate headers
@@ -1346,10 +1373,12 @@ class GmailReplyBlock(GmailBase):
# Send the message
return await asyncio.to_thread(
lambda: service.users()
.messages()
.send(userId="me", body={"threadId": thread_id, "raw": raw})
.execute()
lambda: (
service.users()
.messages()
.send(userId="me", body={"threadId": thread_id, "raw": raw})
.execute()
)
)
@@ -1459,18 +1488,20 @@ class GmailDraftReplyBlock(GmailBase):
# Create draft with proper thread association
draft = await asyncio.to_thread(
lambda: service.users()
.drafts()
.create(
userId="me",
body={
"message": {
"threadId": thread_id,
"raw": raw,
}
},
lambda: (
service.users()
.drafts()
.create(
userId="me",
body={
"message": {
"threadId": thread_id,
"raw": raw,
}
},
)
.execute()
)
.execute()
)
return draft
@@ -1642,10 +1673,12 @@ class GmailForwardBlock(GmailBase):
# Get the original message
original = await asyncio.to_thread(
lambda: service.users()
.messages()
.get(userId="me", id=input_data.messageId, format="full")
.execute()
lambda: (
service.users()
.messages()
.get(userId="me", id=input_data.messageId, format="full")
.execute()
)
)
headers = {
@@ -1735,8 +1768,10 @@ To: {original_to}
# Send the forwarded message
raw = base64.urlsafe_b64encode(msg.as_bytes()).decode("utf-8")
return await asyncio.to_thread(
lambda: service.users()
.messages()
.send(userId="me", body={"raw": raw})
.execute()
lambda: (
service.users()
.messages()
.send(userId="me", body={"raw": raw})
.execute()
)
)
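
Across the Gmail hunks the formatter merely parenthesizes the lambda body so the fluent chain wraps one call per line; the pattern itself is unchanged: the googleapiclient chain is blocking, so it runs in a worker thread via `asyncio.to_thread`. A minimal sketch with a hypothetical blocking client:

```python
import asyncio

class BlockingClient:
    """Stand-in for a googleapiclient service with a fluent, blocking API."""

    def users(self):
        return self

    def messages(self):
        return self

    def get(self, userId: str, id: str):
        return self

    def execute(self) -> dict:
        return {"id": "msg-1"}  # the real call performs blocking network I/O

async def fetch(service: BlockingClient, msg_id: str) -> dict:
    # The whole chain runs off the event loop in a worker thread; the
    # parenthesized lambda body lets it wrap one method call per line.
    return await asyncio.to_thread(
        lambda: (
            service.users()
            .messages()
            .get(userId="me", id=msg_id)
            .execute()
        )
    )

print(asyncio.run(fetch(BlockingClient(), "msg-1")))
```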

View File

@@ -345,14 +345,17 @@ class GoogleSheetsReadBlock(Block):
)
yield "result", data
# Output the GoogleDriveFile for chaining (preserves credentials_id)
yield "spreadsheet", GoogleDriveFile(
id=spreadsheet_id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{spreadsheet_id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=spreadsheet_id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{spreadsheet_id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", _handle_sheets_api_error(str(e), "read")
@@ -466,9 +469,12 @@ class GoogleSheetsWriteBlock(Block):
if validation_error:
# Customize message for write operations on CSV files
if "CSV file" in validation_error:
yield "error", validation_error.replace(
"Please use a CSV reader block instead, or",
"CSV files are read-only through Google Drive. Please",
yield (
"error",
validation_error.replace(
"Please use a CSV reader block instead, or",
"CSV files are read-only through Google Drive. Please",
),
)
else:
yield "error", validation_error
@@ -485,14 +491,17 @@ class GoogleSheetsWriteBlock(Block):
)
yield "result", result
# Output the GoogleDriveFile for chaining (preserves credentials_id)
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", _handle_sheets_api_error(str(e), "write")
@@ -614,14 +623,17 @@ class GoogleSheetsAppendRowBlock(Block):
input_data.value_input_option,
)
yield "result", result
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to append row: {str(e)}"
@@ -744,14 +756,17 @@ class GoogleSheetsClearBlock(Block):
)
yield "result", result
# Output the GoogleDriveFile for chaining (preserves credentials_id)
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to clear Google Sheet range: {str(e)}"
@@ -854,14 +869,17 @@ class GoogleSheetsMetadataBlock(Block):
)
yield "result", result
# Output the GoogleDriveFile for chaining (preserves credentials_id)
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to get spreadsheet metadata: {str(e)}"
@@ -984,14 +1002,17 @@ class GoogleSheetsManageSheetBlock(Block):
)
yield "result", result
# Output the GoogleDriveFile for chaining (preserves credentials_id)
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to manage sheet: {str(e)}"
@@ -1141,14 +1162,17 @@ class GoogleSheetsBatchOperationsBlock(Block):
)
yield "result", result
# Output the GoogleDriveFile for chaining (preserves credentials_id)
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to perform batch operations: {str(e)}"
@@ -1306,14 +1330,17 @@ class GoogleSheetsFindReplaceBlock(Block):
)
yield "result", result
# Output the GoogleDriveFile for chaining (preserves credentials_id)
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to find/replace in Google Sheet: {str(e)}"
@@ -1488,14 +1515,17 @@ class GoogleSheetsFindBlock(Block):
yield "locations", result["locations"]
yield "result", {"success": True}
# Output the GoogleDriveFile for chaining (preserves credentials_id)
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to find text in Google Sheet: {str(e)}"
@@ -1754,14 +1784,17 @@ class GoogleSheetsFormatBlock(Block):
else:
yield "result", result
# Output the GoogleDriveFile for chaining (preserves credentials_id)
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to format Google Sheet cells: {str(e)}"
@@ -1928,14 +1961,17 @@ class GoogleSheetsCreateSpreadsheetBlock(Block):
spreadsheet_id = result["spreadsheetId"]
spreadsheet_url = result["spreadsheetUrl"]
# Output the GoogleDriveFile for chaining (includes credentials_id)
yield "spreadsheet", GoogleDriveFile(
id=spreadsheet_id,
name=result.get("title", input_data.title),
mimeType="application/vnd.google-apps.spreadsheet",
url=spreadsheet_url,
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.credentials.id, # Preserve credentials for chaining
yield (
"spreadsheet",
GoogleDriveFile(
id=spreadsheet_id,
name=result.get("title", input_data.title),
mimeType="application/vnd.google-apps.spreadsheet",
url=spreadsheet_url,
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.credentials.id, # Preserve credentials for chaining
),
)
yield "spreadsheet_id", spreadsheet_id
yield "spreadsheet_url", spreadsheet_url
@@ -2113,14 +2149,17 @@ class GoogleSheetsUpdateCellBlock(Block):
yield "result", result
# Output the GoogleDriveFile for chaining (preserves credentials_id)
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", _handle_sheets_api_error(str(e), "update")
@@ -2379,14 +2418,17 @@ class GoogleSheetsFilterRowsBlock(Block):
yield "rows", result["rows"]
yield "row_indices", result["row_indices"]
yield "count", result["count"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to filter rows: {str(e)}"
@@ -2596,14 +2638,17 @@ class GoogleSheetsLookupRowBlock(Block):
yield "row_dict", result["row_dict"]
yield "row_index", result["row_index"]
yield "found", result["found"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to lookup row: {str(e)}"
@@ -2817,14 +2862,17 @@ class GoogleSheetsDeleteRowsBlock(Block):
)
yield "result", {"success": True}
yield "deleted_count", result["deleted_count"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to delete rows: {str(e)}"
@@ -2995,14 +3043,17 @@ class GoogleSheetsGetColumnBlock(Block):
yield "values", result["values"]
yield "count", result["count"]
yield "column_index", result["column_index"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to get column: {str(e)}"
@@ -3176,14 +3227,17 @@ class GoogleSheetsSortBlock(Block):
input_data.has_header,
)
yield "result", result
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to sort sheet: {str(e)}"
@@ -3439,14 +3493,17 @@ class GoogleSheetsGetUniqueValuesBlock(Block):
yield "values", result["values"]
yield "counts", result["counts"]
yield "total_unique", result["total_unique"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to get unique values: {str(e)}"
@@ -3620,14 +3677,17 @@ class GoogleSheetsInsertRowBlock(Block):
input_data.value_input_option,
)
yield "result", result
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to insert row: {str(e)}"
@@ -3793,14 +3853,17 @@ class GoogleSheetsAddColumnBlock(Block):
yield "result", {"success": True}
yield "column_letter", result["column_letter"]
yield "column_index", result["column_index"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to add column: {str(e)}"
@@ -3998,14 +4061,17 @@ class GoogleSheetsGetRowCountBlock(Block):
yield "data_rows", result["data_rows"]
yield "last_row", result["last_row"]
yield "column_count", result["column_count"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to get row count: {str(e)}"
@@ -4176,14 +4242,17 @@ class GoogleSheetsRemoveDuplicatesBlock(Block):
yield "result", {"success": True}
yield "removed_count", result["removed_count"]
yield "remaining_rows", result["remaining_rows"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to remove duplicates: {str(e)}"
@@ -4426,14 +4495,17 @@ class GoogleSheetsUpdateRowBlock(Block):
input_data.dict_values,
)
yield "result", result
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to update row: {str(e)}"
@@ -4615,14 +4687,17 @@ class GoogleSheetsGetRowBlock(Block):
)
yield "row", result["row"]
yield "row_dict", result["row_dict"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to get row: {str(e)}"
@@ -4753,14 +4828,17 @@ class GoogleSheetsDeleteColumnBlock(Block):
input_data.column,
)
yield "result", result
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to delete column: {str(e)}"
@@ -4931,14 +5009,17 @@ class GoogleSheetsCreateNamedRangeBlock(Block):
)
yield "result", {"success": True}
yield "named_range_id", result["named_range_id"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to create named range: {str(e)}"
@@ -5104,14 +5185,17 @@ class GoogleSheetsListNamedRangesBlock(Block):
)
yield "named_ranges", result["named_ranges"]
yield "count", result["count"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to list named ranges: {str(e)}"
@@ -5264,14 +5348,17 @@ class GoogleSheetsAddDropdownBlock(Block):
input_data.show_dropdown,
)
yield "result", result
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to add dropdown: {str(e)}"
@@ -5436,14 +5523,17 @@ class GoogleSheetsCopyToSpreadsheetBlock(Block):
yield "result", {"success": True}
yield "new_sheet_id", result["new_sheet_id"]
yield "new_sheet_name", result["new_sheet_name"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.source_spreadsheet.id,
name=input_data.source_spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.source_spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.source_spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.source_spreadsheet.id,
name=input_data.source_spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.source_spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.source_spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to copy sheet: {str(e)}"
@@ -5588,14 +5678,17 @@ class GoogleSheetsProtectRangeBlock(Block):
)
yield "result", {"success": True}
yield "protection_id", result["protection_id"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to protect range: {str(e)}"
@@ -5752,14 +5845,17 @@ class GoogleSheetsExportCsvBlock(Block):
)
yield "csv_data", result["csv_data"]
yield "row_count", result["row_count"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to export CSV: {str(e)}"
@@ -5895,14 +5991,17 @@ class GoogleSheetsImportCsvBlock(Block):
)
yield "result", {"success": True}
yield "rows_imported", result["rows_imported"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to import CSV: {str(e)}"
@@ -6032,14 +6131,17 @@ class GoogleSheetsAddNoteBlock(Block):
input_data.note,
)
yield "result", {"success": True}
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to add note: {str(e)}"
@@ -6185,14 +6287,17 @@ class GoogleSheetsGetNotesBlock(Block):
notes = result["notes"]
yield "notes", notes
yield "count", len(notes)
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to get notes: {str(e)}"
@@ -6347,14 +6452,17 @@ class GoogleSheetsShareSpreadsheetBlock(Block):
)
yield "result", {"success": True}
yield "share_link", result["share_link"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to share spreadsheet: {str(e)}"
@@ -6491,14 +6599,17 @@ class GoogleSheetsSetPublicAccessBlock(Block):
)
yield "result", {"success": True, "is_public": result["is_public"]}
yield "share_link", result["share_link"]
yield "spreadsheet", GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
yield (
"spreadsheet",
GoogleDriveFile(
id=input_data.spreadsheet.id,
name=input_data.spreadsheet.name,
mimeType="application/vnd.google-apps.spreadsheet",
url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit",
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png",
isFolder=False,
_credentials_id=input_data.spreadsheet.credentials_id,
),
)
except Exception as e:
yield "error", f"Failed to set public access: {str(e)}"

View File

@@ -21,43 +21,71 @@ logger = logging.getLogger(__name__)
class HumanInTheLoopBlock(Block):
"""
This block pauses execution and waits for human approval or modification of the data.
Pauses execution and waits for human approval or rejection of the data.
When executed, it creates a pending review entry and sets the node execution status
to REVIEW. The execution will remain paused until a human user either:
- Approves the data (with or without modifications)
- Rejects the data
When executed, this block creates a pending review entry and sets the node execution
status to REVIEW. The execution remains paused until a human user either approves
or rejects the data.
This is useful for workflows that require human validation or intervention before
proceeding to the next steps.
**How it works:**
- The input data is presented to a human reviewer
- The reviewer can approve or reject (and optionally modify the data if editable)
- On approval: the data flows out through the `approved_data` output pin
- On rejection: the data flows out through the `rejected_data` output pin
**Important:** The output pins yield the actual data itself, NOT status strings.
The approval/rejection decision determines WHICH output pin fires, not the value.
You do NOT need to compare the output to "APPROVED" or "REJECTED" - simply connect
downstream blocks to the appropriate output pin for each case.
**Example usage:**
- Connect `approved_data` → next step in your workflow (data was approved)
- Connect `rejected_data` → error handling or notification (data was rejected)
"""
class Input(BlockSchemaInput):
data: Any = SchemaField(description="The data to be reviewed by a human user")
data: Any = SchemaField(
description="The data to be reviewed by a human user. "
"This exact data will be passed through to either approved_data or "
"rejected_data output based on the reviewer's decision."
)
name: str = SchemaField(
description="A descriptive name for what this data represents",
description="A descriptive name for what this data represents. "
"This helps the reviewer understand what they are reviewing.",
)
editable: bool = SchemaField(
description="Whether the human reviewer can edit the data",
description="Whether the human reviewer can edit the data before "
"approving or rejecting it",
default=True,
advanced=True,
)
class Output(BlockSchemaOutput):
approved_data: Any = SchemaField(
description="The data when approved (may be modified by reviewer)"
description="Outputs the input data when the reviewer APPROVES it. "
"The value is the actual data itself (not a status string like 'APPROVED'). "
"If the reviewer edited the data, this contains the modified version. "
"Connect downstream blocks here for the 'approved' workflow path."
)
rejected_data: Any = SchemaField(
description="The data when rejected (may be modified by reviewer)"
description="Outputs the input data when the reviewer REJECTS it. "
"The value is the actual data itself (not a status string like 'REJECTED'). "
"If the reviewer edited the data, this contains the modified version. "
"Connect downstream blocks here for the 'rejected' workflow path."
)
review_message: str = SchemaField(
description="Any message provided by the reviewer", default=""
description="Optional message provided by the reviewer explaining their "
"decision. Only outputs when the reviewer provides a message; "
"this pin does not fire if no message was given.",
default="",
)
def __init__(self):
super().__init__(
id="8b2a7b3c-6e9d-4a5f-8c1b-2e3f4a5b6c7d",
description="Pause execution and wait for human approval or modification of data",
description="Pause execution for human review. Data flows through "
"approved_data or rejected_data output based on the reviewer's decision. "
"Outputs contain the actual data, not status strings.",
categories={BlockCategory.BASIC},
input_schema=HumanInTheLoopBlock.Input,
output_schema=HumanInTheLoopBlock.Output,
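
To make the docstring's routing rule concrete, here is a hedged sketch (not the block's actual implementation) of resolving a review into outputs; `review` is a hypothetical object with `approved`, `data`, and `message` attributes:

```python
def emit_review_outputs(review):
    # The decision selects WHICH pin fires; the value on that pin is the
    # (possibly reviewer-edited) data itself, never "APPROVED"/"REJECTED".
    if review.approved:
        yield "approved_data", review.data
    else:
        yield "rejected_data", review.data
    # review_message only fires when the reviewer actually left a message.
    if review.message:
        yield "review_message", review.message
```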

View File

@@ -195,8 +195,12 @@ class IdeogramModelBlock(Block):
),
],
test_mock={
"run_model": lambda api_key, model_name, prompt, seed, aspect_ratio, magic_prompt_option, style_type, negative_prompt, color_palette_name, custom_colors: "https://ideogram.ai/api/images/test-generated-image-url.png",
"upscale_image": lambda api_key, image_url: "https://ideogram.ai/api/images/test-upscaled-image-url.png",
"run_model": lambda api_key, model_name, prompt, seed, aspect_ratio, magic_prompt_option, style_type, negative_prompt, color_palette_name, custom_colors: (
"https://ideogram.ai/api/images/test-generated-image-url.png"
),
"upscale_image": lambda api_key, image_url: (
"https://ideogram.ai/api/images/test-upscaled-image-url.png"
),
},
test_credentials=TEST_CREDENTIALS,
)
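
The mock entries here are only reflowed: parenthesizing a lambda's body lets a long return value drop to its own line without changing the value. A quick check with a placeholder URL:

```python
# Both mocks return the same placeholder string; the parentheses exist
# only so the long literal can sit on its own line.
one_line = lambda api_key, image_url: "https://example.test/upscaled.png"
wrapped = lambda api_key, image_url: (
    "https://example.test/upscaled.png"
)
assert one_line("k", "u") == wrapped("k", "u")
```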

View File

@@ -210,8 +210,11 @@ class AgentOutputBlock(Block):
if input_data.format:
try:
formatter = TextFormatter(autoescape=input_data.escape_html)
yield "output", formatter.format_string(
input_data.format, {input_data.name: input_data.value}
yield (
"output",
formatter.format_string(
input_data.format, {input_data.name: input_data.value}
),
)
except Exception as e:
yield "output", f"Error: {e}, {input_data.value}"
@@ -474,10 +477,13 @@ class AgentFileInputBlock(AgentInputBlock):
# for_block_output: smart format - workspace:// in CoPilot, data URI in graphs
return_format = "for_external_api" if input_data.base_64 else "for_block_output"
yield "result", await store_media_file(
file=input_data.value,
execution_context=execution_context,
return_format=return_format,
yield (
"result",
await store_media_file(
file=input_data.value,
execution_context=execution_context,
return_format=return_format,
),
)
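
The `return_format` switch comes straight from this hunk: callers that asked for base64 get a payload suitable for external APIs, everyone else gets the context-sensitive default (`workspace://` paths in CoPilot, data URIs in graphs, per the comment). A sketch of just the selection, with the literal format names taken from the diff and `store_media_file` itself assumed:

```python
def pick_return_format(base_64: bool) -> str:
    # Mirrors: "for_external_api" if input_data.base_64 else "for_block_output"
    return "for_external_api" if base_64 else "for_block_output"


assert pick_return_format(True) == "for_external_api"
assert pick_return_format(False) == "for_block_output"
```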

View File

@@ -75,7 +75,6 @@ class LinearClient:
response_data = response.json()
if "errors" in response_data:
error_messages = [
error.get("message", "") for error in response_data["errors"]
]

View File

@@ -692,7 +692,6 @@ async def llm_call(
reasoning=reasoning,
)
elif provider == "anthropic":
an_tools = convert_openai_tool_fmt_to_anthropic(tools)
system_messages = [p["content"] for p in prompt if p["role"] == "system"]

View File

@@ -75,11 +75,14 @@ class PersistInformationBlock(Block):
storage_key = get_storage_key(input_data.key, input_data.scope, graph_id)
# Store the data
yield "value", await self._store_data(
user_id=user_id,
node_exec_id=node_exec_id,
key=storage_key,
data=input_data.value,
yield (
"value",
await self._store_data(
user_id=user_id,
node_exec_id=node_exec_id,
key=storage_key,
data=input_data.value,
),
)
async def _store_data(

View File

@@ -160,10 +160,13 @@ class PineconeQueryBlock(Block):
combined_text = "\n\n".join(texts)
# Return both the raw matches and combined text
yield "results", {
"matches": results["matches"],
"combined_text": combined_text,
}
yield (
"results",
{
"matches": results["matches"],
"combined_text": combined_text,
},
)
yield "combined_results", combined_text
except Exception as e:
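
The query hunk above packages the raw matches together with a pre-joined text blob. A minimal sketch of that combine step, assuming each match stores its source text under a `text` metadata key (the key name is an assumption; the join itself mirrors the diff):

```python
results = {
    "matches": [
        {"id": "a", "score": 0.91, "metadata": {"text": "First chunk."}},
        {"id": "b", "score": 0.87, "metadata": {"text": "Second chunk."}},
    ]
}
# Pull the text out of each match, then join — as in the hunk above.
texts = [m["metadata"].get("text", "") for m in results["matches"]]
combined_text = "\n\n".join(texts)
assert combined_text == "First chunk.\n\nSecond chunk."
```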

View File

@@ -309,10 +309,13 @@ class PostRedditCommentBlock(Block):
async def run(
self, input_data: Input, *, credentials: RedditCredentials, **kwargs
) -> BlockOutput:
yield "comment_id", self.reply_post(
credentials,
post_id=input_data.post_id,
comment=input_data.comment,
yield (
"comment_id",
self.reply_post(
credentials,
post_id=input_data.post_id,
comment=input_data.comment,
),
)
yield "post_id", input_data.post_id

View File

@@ -141,7 +141,9 @@ class ReplicateFluxAdvancedModelBlock(Block):
),
],
test_mock={
"run_model": lambda api_key, model_name, prompt, seed, steps, guidance, interval, aspect_ratio, output_format, output_quality, safety_tolerance: "https://replicate.com/output/generated-image-url.jpg",
"run_model": lambda api_key, model_name, prompt, seed, steps, guidance, interval, aspect_ratio, output_format, output_quality, safety_tolerance: (
"https://replicate.com/output/generated-image-url.jpg"
),
},
test_credentials=TEST_CREDENTIALS,
)

Some files were not shown because too many files have changed in this diff.