Compare commits

182 Commits

Author SHA1 Message Date
Krzysztof Czerwinski
94bbfe12fb Updates 2025-11-01 17:15:03 +09:00
Reinier van der Leer
a02b8d9ad7 fix(backend/scheduler): Bump apscheduler to DST-fixed version 3.11.1 (#11294)
- #11273

### Changes 🏗️

- Bump `apscheduler` to v3.11.1 which contains a fix for the issue

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] "It's a rather ugly solution but the test proves that it works."
~the maintainer
  - [x] CI passes
2025-10-31 21:40:44 +00:00
Lluis Agusti
e6fb649ced Merge 'master' into 'dev' 2025-10-30 20:05:55 +07:00
Zamil Majdy
2f8cdf62ba feat(backend): Standardize error handling with BlockSchemaInput & BlockSchemaOutput base class (#11257)
<!-- Clearly explain the need for these changes: -->

This PR addresses the need for consistent error handling across all
blocks in the AutoGPT platform. Previously, each block had to manually
define an `error` field in their output schema, leading to code
duplication and potential inconsistencies. Some blocks might forget to
include the error field, making error handling unpredictable.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **Created `BlockSchemaOutput` base class**: New base class that
extends `BlockSchema` with a standardized `error` field
- **Created `BlockSchemaInput` base class**: Added for consistency and
future extensibility
- **Updated 140+ block implementations**: Changed all block `Output`
classes from `class Output(BlockSchema):` to `class
Output(BlockSchemaOutput):`
- **Removed manual error field definitions**: Eliminated hundreds of
duplicate `error: str = SchemaField(...)` definitions
- **Updated type annotations**: Changed `Block[BlockSchema,
BlockSchema]` to `Block[BlockSchemaInput, BlockSchemaOutput]` throughout
the codebase
- **Fixed imports**: Added `BlockSchemaInput` and `BlockSchemaOutput`
imports to all relevant files
- **Maintained backward compatibility**: Updated `EmptySchema` to
inherit from `BlockSchemaOutput`

**Key Benefits:**
- Consistent error handling across all blocks
- Reduced code duplication (removed ~200 lines of repetitive error field
definitions)
- Type safety improvements with distinct input/output schema types
- Blocks can still override the error field with more specific
descriptions when needed (see the sketch below)
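
A minimal sketch of the inheritance pattern, using plain Pydantic as a stand-in for the platform's `BlockSchema` and `SchemaField` (the `result` field and its defaults are illustrative):

```python
# Illustrative sketch only: the real BlockSchema and SchemaField live in the
# platform's backend; plain Pydantic stands in for them here.
from pydantic import BaseModel, Field


class BlockSchema(BaseModel):
    """Stand-in for the platform's BlockSchema base class."""


class BlockSchemaInput(BlockSchema):
    """Marker base for block inputs; exists for symmetry and future fields."""


class BlockSchemaOutput(BlockSchema):
    # Every output schema inherits a standardized error field...
    error: str = Field(default="", description="Error message if the block failed")


class Output(BlockSchemaOutput):
    # ...so individual blocks no longer declare `error` themselves, but may
    # override it with a more specific description when needed.
    result: str = Field(default="", description="The block's primary result")


print("error" in Output.model_fields)  # True: inherited automatically
```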

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Verified `poetry run format` passes (all linting, formatting, and
type checking)
- [x] Tested block instantiation works correctly (MediaDurationBlock,
UnrealTextToSpeechBlock)
- [x] Confirmed error fields are automatically present in all updated
blocks
- [x] Verified block loading system works (successfully loads 353+
blocks)
  - [x] Tested backward compatibility with EmptySchema
- [x] Confirmed blocks can still override error field with custom
descriptions
  - [x] Validated core schema inheritance chain works correctly

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

*Note: No configuration changes were needed for this refactoring.*

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-10-30 12:28:08 +00:00
seer-by-sentry[bot]
3dc5208f71 feat(backend): Increase max_field_size in aiohttp requests (#11261)
### Changes 🏗️

- Increased `max_field_size` in `aiohttp.ClientSession` to 16 KB to
handle servers with large headers (e.g., long CSP headers); see the
sketch below.
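
A minimal sketch of the change, assuming aiohttp 3.x where `ClientSession` accepts `max_field_size` (default 8190 bytes):

```python
# Raising max_field_size lets responses with very long individual headers
# (e.g. large Content-Security-Policy values) be parsed instead of erroring.
import asyncio

import aiohttp


async def fetch(url: str) -> str:
    async with aiohttp.ClientSession(max_field_size=16 * 1024) as session:
        async with session.get(url) as resp:
            return await resp.text()


print(asyncio.run(fetch("https://example.com"))[:80])
```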

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x]  Add unit test that checks it can now parse headers over 8k size

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-10-30 10:41:22 +00:00
Ubbe
04493598e2 fix(frontend): more wallet popover fixes (#11285)
## Changes 🏗️

<img width="800" height="547" alt="Screenshot 2025-10-29 at 22 11 35"
src="https://github.com/user-attachments/assets/5c700ddc-d770-48ef-9847-7e652c5dedcb"
/>
<br /><br />

- Use
[`react-currency-input-field`](https://www.npmjs.com/package/react-currency-input-field)
for `<Input type="amount" />` under the hood
  - so it formats numbers nicely with `,` and `.`
- Simplify form logic
- Make the popover cover the trigger button when open
- Re-organize imports
- Show a `$` prefix in front of the amount inputs

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Open the wallet with credits enabled
  - [x] Play with the inputs

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-30 14:44:29 +04:00
seer-by-sentry[bot]
4140331731 fix(blocks/llm): Validate LLM summary responses are strings (#11275)
### Changes 🏗️

- Added validation to ensure that the `summary` and `final_summary`
returned by the LLM are strings.
- Raises a `ValueError` if the LLM returns a list or other non-string
type, providing a descriptive error message to aid debugging.

Fixes
[AUTOGPT-SERVER-6M4](https://sentry.io/organizations/significant-gravitas/issues/6978480131/).
The issue: the LLM returned a list of strings instead of a single string
summary, causing `_combine_summaries` to fail on `join`. A sketch of the
guard follows.

This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2230933

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6978480131/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Added a unit test to verify that a ValueError is raised when the
LLM returns a list instead of a string for summary or final_summary.

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-30 09:52:50 +00:00
Swifty
594b1adcf7 fix(frontend): Fix marketplace sort by (#11284)
The marketplace sort-by functionality was not working on the frontend.
This PR fixes it.

### Changes 🏗️

- Add type hints for sort-by values
- Fix marketplace sort-by dropdowns


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested locally
2025-10-30 08:46:11 +00:00
Swifty
cab6590908 fix(frontend): Safely parse error response body in handleFetchError (#11274)
### Changes 🏗️

- Ensures `handleFetchError` can handle non-JSON error responses (e.g.,
HTML error pages).
- Attempts to parse the response body as JSON, but falls back to text if
JSON parsing fails.
- Logs a warning to the console if JSON parsing fails.
- Sets `responseData` to null if parsing fails.

Fixes
[BUILDER-482](https://sentry.io/organizations/significant-gravitas/issues/6958135748/).
The issue: the frontend error handler unconditionally called
`response.json()` on a non-JSON HTML error page starting with 'A'.

This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2206951

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6958135748/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Created unit tests for the issue that caused the error
  - [x] Created unit tests to ensure responses are parsed gracefully
2025-10-29 16:22:47 +00:00
Swifty
a1ac109356 fix(backend): Further enhance sanitization of SQL raw queries (#11279)
### Changes 🏗️

Enhanced SQL query security in the store search functionality by
implementing proper parameterization to prevent SQL injection
vulnerabilities.

**Security Improvements:**
- Replaced string interpolation with PostgreSQL positional parameters
(`$1`, `$2`, etc.) for all user inputs
- Added ORDER BY whitelist validation to prevent injection via the
`sorted_by` parameter (see the sketch below)
- Parameterized search term, creators array, category, and pagination
values
- Fixed variable naming conflict (`sql_where_clause` vs `where_clause`)
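
A hedged sketch of the pattern (table name, columns, and sort keys are illustrative; `db` is assumed to be a connected `prisma.Prisma` client):

```python
from prisma import Prisma

# Sort expressions come from a fixed whitelist, never from user input
ALLOWED_ORDER_BY = {
    "rating": '"rating" DESC',
    "runs": '"runs" DESC',
    "name": '"name" ASC',
}


async def search_store(db: Prisma, search: str, category: str, sorted_by: str):
    order_by = ALLOWED_ORDER_BY.get(sorted_by, '"rating" DESC')
    return await db.query_raw(
        f'SELECT * FROM "StoreListing" '
        f'WHERE "name" ILIKE $1 AND "category" = $2 '
        f"ORDER BY {order_by} LIMIT $3",
        f"%{search}%",  # $1: search term is a bound parameter
        category,       # $2: category is a bound parameter
        20,             # $3: page size
    )
```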

**Testing:**
- Added 4 comprehensive tests validating SQL injection prevention across
different attack vectors
- Tests verify that malicious input in search queries, filters, sorting,
and categories are safely handled
- All 10 tests in db_test.py pass successfully

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All existing tests pass (10/10 tests passing)
  - [x] New security tests validate SQL injection prevention
  - [x] Verified parameterized queries handle malicious input safely
  - [x] Code formatting passes (`poetry run format`)

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

*Note: No configuration changes required for this security fix*
2025-10-29 15:21:27 +00:00
Zamil Majdy
5506d59da1 fix(backend/executor): make graph execution permission check version-agnostic (#11283)
## Summary
Fix critical issue where pre-execution permission validation broke
execution of graphs that reference older versions of sub-graphs.

## Problem
The `validate_graph_execution_permissions` function was checking for the
specific version of a graph in the user's library. This caused failures
when:
1. A parent graph references an older version of a sub-graph  
2. The user updates the sub-graph to a newer version
3. The older version is no longer in their library
4. Execution of the parent graph fails with `GraphNotInLibraryError`

## Root Cause
In `backend/executor/utils.py` line 523, the function was checking for
the exact version, but sub-graphs legitimately reference older versions
that may no longer be in the library.

## Solution

### 1. Remove Version-Specific Check (backend/executor/utils.py)
- Remove `graph_version=graph.version` parameter from validation call
- Add explanatory comment about version-agnostic behavior
- Now only checks that the graph ID exists in user's library (any
version)

### 2. Enhance Documentation (backend/data/graph.py)  
- Update function docstring to explain version-agnostic behavior
- Document that `None` (now default) allows execution of any version
- Clarify this is important for sub-graph version compatibility

## Technical Details
The `validate_graph_execution_permissions` function was already designed
to handle version-agnostic checks when `graph_version=None`. By omitting
the version parameter, we skip the version check and only verify the
following (sketched below):
- Graph exists in user's library  
- Graph is not deleted/archived
- User has execution permissions
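
A simplified sketch of that check, assuming the generated Prisma models and the `GraphNotInLibraryError` exception described elsewhere in this log:

```python
from typing import Optional

from prisma.models import LibraryAgent


class GraphNotInLibraryError(Exception):
    """Stand-in; the real exception lives in backend.util.exceptions."""


async def validate_graph_execution_permissions(
    user_id: str, graph_id: str, graph_version: Optional[int] = None
) -> None:
    where = {
        "userId": user_id,
        "agentGraphId": graph_id,
        "isDeleted": False,
        "isArchived": False,
    }
    if graph_version is not None:
        # Only pin the version when a caller explicitly asks for it
        where["agentGraphVersion"] = graph_version
    if await LibraryAgent.prisma().count(where=where) == 0:
        raise GraphNotInLibraryError(f"Graph #{graph_id} is not in the user's library")


# The caller in backend/executor/utils.py now omits graph_version entirely:
#   await validate_graph_execution_permissions(user_id, graph.id)
```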

## Impact
- Parent graphs can execute even when they reference older sub-graph versions
- Sub-graph updates don't break existing parent graphs
- Maintains security: still checks library membership and permissions
- No breaking changes: version-specific validation still available when needed

## Example Scenario Fixed
1. User creates parent graph that uses sub-graph v1
2. User updates sub-graph to v2 (v1 removed from library)  
3. Parent graph still references sub-graph v1
4. **Before**: Execution fails with `GraphNotInLibraryError`
5. **After**: Execution succeeds (version-agnostic permission check)

## Testing
- [x] Code formatting and linting passes
- [x] Type checking passes
- [x] No breaking changes to existing functionality
- [x] Security still maintained through library membership checks

## Files Changed
- `backend/executor/utils.py`: Remove version-specific permission check
- `backend/data/graph.py`: Enhanced documentation for version-agnostic
behavior

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-29 14:13:23 +00:00
Ubbe
749341100b fix(frontend): prevent Wallet rendering twice (#11282)
## Changes 🏗️

The `<Wallet />` was being rendered twice (one instance hidden with CSS
`hidden`) because of the Navbar layout, which caused logic issues within
the wallet. I changed it to render conditionally via a single
JavaScript-controlled instance, which is better practice than hiding
with CSS, especially for components with actual logic.

I also moved the component files closer to where they are used (in the
navbar).

I have a Cursor plugin that removes unused imports but annoyingly
re-organizes them, hence the changes around that...

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] There is only 1 Wallet in the DOM
2025-10-29 14:09:13 +00:00
Zamil Majdy
4922f88851 feat(backend/executor): Implement cascading stop for nested graph executions (#11277)
## Summary
Fixes critical issue where child executions spawned by
`AgentExecutorBlock` continue running after parent execution is stopped.
Implements parent-child execution tracking and recursive cascading stop
logic to ensure entire execution trees are terminated together.

## Background
When a parent graph execution containing `AgentExecutorBlock` nodes is
stopped, only the parent was terminated. Child executions continued
running, leading to:
- Orphaned child executions consuming credits
- No user control over execution trees
- Race conditions where children start after parent stops
- Resource leaks from abandoned executions

## Core Changes

### 1. Database Schema (`schema.prisma` + migration)
```sql
-- Add nullable parent tracking field
ALTER TABLE "AgentGraphExecution" ADD COLUMN "parentGraphExecutionId" TEXT;

-- Add self-referential foreign key with graceful deletion
ALTER TABLE "AgentGraphExecution" ADD CONSTRAINT "AgentGraphExecution_parentGraphExecutionId_fkey" 
  FOREIGN KEY ("parentGraphExecutionId") REFERENCES "AgentGraphExecution"("id") 
  ON DELETE SET NULL ON UPDATE CASCADE;

-- Add index for efficient child queries
CREATE INDEX "AgentGraphExecution_parentGraphExecutionId_idx" 
  ON "AgentGraphExecution"("parentGraphExecutionId");
```

### 2. Parent ID Propagation (`backend/blocks/agent.py`)
```python
# Extract current graph execution ID and pass as parent to child
execution = add_graph_execution(
    # ... other params
    parent_graph_exec_id=graph_exec_id,  # NEW: Track parent relationship
)
```

### 3. Data Layer (`backend/data/execution.py`)
```python
async def get_child_graph_executions(parent_exec_id: str) -> list[GraphExecution]:
    """Get all child executions of a parent execution."""
    children = await AgentGraphExecution.prisma().find_many(
        where={"parentGraphExecutionId": parent_exec_id, "isDeleted": False}
    )
    return [GraphExecution.from_db(child) for child in children]
```

### 4. Cascading Stop Logic (`backend/executor/utils.py`)
```python
async def stop_graph_execution(
    user_id: str,
    graph_exec_id: str,
    wait_timeout: float = 15.0,
    cascade: bool = True,  # NEW parameter
):
    # 1. Find all child executions
    if cascade:
        children = await _get_child_executions(graph_exec_id)
        
        # 2. Stop all children recursively in parallel
        if children:
            await asyncio.gather(
                *[stop_graph_execution(user_id, child.id, wait_timeout, True) 
                  for child in children],
                return_exceptions=True,  # Don't fail parent if child fails
            )
    
    # 3. Stop the parent execution
    # ... existing stop logic
```

### 5. Race Condition Prevention (`backend/executor/manager.py`)
```python
# Before executing queued child, check if parent was terminated
if parent_graph_exec_id:
    parent_exec = get_db_client().get_graph_execution_meta(parent_graph_exec_id, user_id)
    if parent_exec and parent_exec.status == ExecutionStatus.TERMINATED:
        # Skip execution, mark child as terminated
        get_db_client().update_graph_execution_stats(
            graph_exec_id=graph_exec_id,
            status=ExecutionStatus.TERMINATED,
        )
        return  # Don't start orphaned child
```

## How It Works

### Before (Broken)
```
User stops parent execution
    ↓
Parent terminates ✓
    ↓
Child executions keep running ✗
    ↓
User cannot stop children ✗
```

### After (Fixed)
```
User stops parent execution
    ↓
Query database for all children
    ↓
Recursively stop all children in parallel
    ↓
Wait for children to terminate
    ↓
Stop parent execution
    ↓
All executions in tree stopped ✓
```

### Race Prevention
```
Child in QUEUED status
    ↓
Parent stopped
    ↓
Child picked up by executor
    ↓
Pre-flight check: parent TERMINATED?
    ↓
Yes → Skip execution, mark child TERMINATED
    ↓
Child never runs ✓
```

## Edge Cases Handled
- **Deep nesting**: Recursive cascading handles multi-level trees
- **Queued children**: Pre-flight check prevents execution
- **Race conditions**: Child spawned during stop operation
- **Partial failures**: `return_exceptions=True` continues on error
- **Multiple children**: Parallel stop via `asyncio.gather()`
- **No parent**: Backward compatible (nullable field)
- **Already completed**: Existing status check handles it

## Performance Impact
- **Stop operation**: O(depth) with parallel execution vs O(1) before
- **Memory**: +36 bytes per execution (one UUID reference)
- **Database**: +1 query per tree level, indexed for efficiency

## API Changes (Backward Compatible)

### `stop_graph_execution()` - New Optional Parameter
```python
# Before
async def stop_graph_execution(user_id: str, graph_exec_id: str, wait_timeout: float = 15.0)

# After  
async def stop_graph_execution(user_id: str, graph_exec_id: str, wait_timeout: float = 15.0, cascade: bool = True)
```
**Default `cascade=True`** means existing callers get the new behavior
automatically.

### `add_graph_execution()` - New Optional Parameter
```python
async def add_graph_execution(..., parent_graph_exec_id: Optional[str] = None)
```

## Security & Safety
- **User verification**: Users can only stop their own executions (parent + children)
- **No cycles**: Self-referential FK prevents infinite loops
- **Graceful degradation**: Errors in child stops don't block the parent stop
- **Rate limits**: Existing execution rate limits still apply

## Testing Checklist

### Database Migration
- [x] Migration runs successfully  
- [x] Prisma client regenerates without errors
- [x] Existing tests pass

### Core Functionality  
- [ ] Manual test: Stop parent with running child → child stops
- [ ] Manual test: Stop parent with queued child → child never starts
- [ ] Unit test: Cascading stop with multiple children
- [ ] Unit test: Deep nesting (3+ levels)
- [ ] Integration test: Race condition prevention

## Breaking Changes
**None** - All changes are backward compatible with existing code.

## Rollback Plan
If issues arise:
1. **Code rollback**: Revert PR, redeploy
2. **Database rollback**: Drop column and constraints (non-destructive)

---

**Note**: This branch contains additional unrelated changes from merging
with `dev`. The core cascading stop feature involves only:
- `schema.prisma` + migration
- `backend/data/execution.py` 
- `backend/executor/utils.py`
- `backend/blocks/agent.py`
- `backend/executor/manager.py`

All other file changes are from dev branch updates and not part of this
feature.

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Nested graph executions: parent-child tracking and retrieval of child
executions

* **Improvements**
* Cascading stop: stopping a parent optionally terminates child
executions
  * Parent execution IDs propagated through runs and surfaced in logs
  * Per-user/graph concurrent execution limits enforced

* **Bug Fixes**
* Skip enqueuing children if parent is terminated; robust handling when
parent-status checks fail

* **Tests**
  * Updated tests to cover parent linkage in graph creation
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-29 11:11:22 +00:00
Zamil Majdy
5fb142c656 fix(backend/executor): ensure cluster lock release on all execution submission failures (#11281)
## Root Cause
During rolling deployment, execution
`97058338-052a-4528-87f4-98c88416bb7f` got stuck in QUEUED state
because:

1. Pod acquired cluster lock successfully during shutdown  
2. Subsequent setup operations failed (ThreadPoolExecutor shutdown,
resource exhaustion, etc.)
3. **No error handling existed** around the critical section after lock
acquisition
4. Cluster lock remained stuck in Redis for 5 minutes (TTL timeout)
5. Other pods couldn't acquire the lock, leaving execution permanently
queued

## The Fix

### Problem: Critical Section Not Protected
The original code had no error handling for the entire critical section
after successful lock acquisition:
```python
# Original code - no error handling after lock acquired
current_owner = cluster_lock.try_acquire()
if current_owner != self.executor_id:
    return  # didn't get lock
    
# CRITICAL SECTION - any failure here leaves lock stuck
self._execution_locks[graph_exec_id] = cluster_lock  # Could fail: memory
logger.info("Acquired cluster lock...")              # Could fail: logging  
cancel_event = threading.Event()                     # Could fail: resources
future = self.executor.submit(...)                   # Could fail: shutdown
self.active_graph_runs[...] = (future, cancel_event) # Could fail: memory
```

### Solution: Wrap Entire Critical Section  
Protect ALL operations after successful lock acquisition:
```python
# Fixed code - comprehensive error handling
current_owner = cluster_lock.try_acquire()
if current_owner != self.executor_id:
    return  # didn't get lock

# Wrap ENTIRE critical section after successful acquisition
try:
    self._execution_locks[graph_exec_id] = cluster_lock
    logger.info("Acquired cluster lock...")
    cancel_event = threading.Event()
    future = self.executor.submit(...)
    self.active_graph_runs[...] = (future, cancel_event)
except Exception:
    # Release cluster lock before requeue; pop() avoids a KeyError
    # if the assignment above never happened
    cluster_lock.release()
    self._execution_locks.pop(graph_exec_id, None)
    _ack_message(reject=True, requeue=True)
    return
```

### Why This Comprehensive Approach Works
- **Complete protection**: Any failure in critical section → lock
released
- **Proper cleanup order**: Lock released → message requeued → another
pod can try
- **Uses existing infrastructure**: Leverages established
`_ack_message()` requeue logic
- **Handles all scenarios**: ThreadPoolExecutor shutdown, resource
exhaustion, memory issues, logging failures

## Protected Failure Scenarios
1. **Memory exhaustion**: `_execution_locks` assignment or
`active_graph_runs` assignment
2. **Resource exhaustion**: `threading.Event()` creation fails
3. **ThreadPoolExecutor shutdown**: `executor.submit()` with "cannot
schedule new futures after shutdown"
4. **Logging system failures**: `logger.info()` calls fail
5. **Any unexpected exceptions**: Network issues, disk problems, etc.

## Validation
- All existing tests pass
- Maintains exact same success path behavior
- Comprehensive error handling for all failure points
- Minimal code change with maximum protection

## Impact
- **Eliminates stuck executions** during pod lifecycle events (rolling
deployments, scaling, crashes)
- **Faster recovery**: Immediate requeue vs 5-minute Redis TTL wait
- **Higher reliability**: Handles ANY failure in the critical section
- **Production-ready**: Comprehensive solution for distributed lock
management

This prevents the exact race condition that caused execution
`97058338-052a-4528-87f4-98c88416bb7f` to be stuck for >300 seconds,
plus many other potential failure scenarios.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-29 08:56:24 +00:00
Pratyush Singh
e14594ff4a fix: handle oversized notifications by sending summary email (#11119) (#11130)
📨 Fix: Handle Oversized Notification Emails

## Summary

This PR adds logic to detect and handle oversized notification emails
exceeding Postmark's 5 MB limit. Instead of retrying indefinitely, the
system now sends a lightweight summary email with key stats and a
dashboard link.

## Changes

- Added a size check in `EmailSender.send_templated()` (sketched below)
- Sends a summary email when the payload exceeds ~4.5 MB
- Prevents infinite retries and queue clogging
- Added logs for oversized detection

Fixes #11119
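
A hypothetical sketch of the guard (the threshold constant and the send/summary callables are illustrative, not the actual `EmailSender` API):

```python
import logging
from typing import Callable

logger = logging.getLogger(__name__)

MAX_EMAIL_BYTES = int(4.5 * 1024 * 1024)  # stay safely under Postmark's 5 MB cap


def send_or_summarize(
    body_html: str, send: Callable[[str], None], send_summary: Callable[[], None]
) -> None:
    size = len(body_html.encode("utf-8"))
    if size > MAX_EMAIL_BYTES:
        # Oversized: log it and fall back to a lightweight summary email
        logger.warning("Notification email is %d bytes; sending summary instead", size)
        send_summary()
    else:
        send(body_html)
```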

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-10-29 00:57:13 +00:00
Zamil Majdy
de70ede54a fix(backend): prevent execution of deleted agents and cleanup orphaned resources (#11243)
## Summary
Fix critical bug where deleted agents continue running scheduled and
triggered executions indefinitely, consuming credits without user
control.

## Problem
When agents are deleted from user libraries, their schedules and webhook
triggers remain active, leading to:
- Uncontrolled resource consumption
- "Unknown agent" executions that charge credits
- No way for users to stop orphaned executions
- Accumulation of orphaned database records

## Solution

### 1. Prevention: Library Validation Before Execution
- Add `is_graph_in_user_library()` function with efficient database
queries
- Validate graph accessibility before all executions in
`validate_and_construct_node_execution_input()`
- Use specific `GraphNotInLibraryError` for clear error handling

### 2. Cleanup: Remove Schedules & Webhooks on Deletion
- Enhanced `delete_library_agent()` to clean up associated schedules and
webhooks
- Comprehensive cleanup functions for both scheduled and triggered
executions
- Proper database transaction handling

### 3. Error-Based Cleanup: Handle Existing Orphaned Resources
- Catch `GraphNotInLibraryError` in scheduler and webhook handlers
- Automatically clean up orphaned resources when execution fails
- Graceful degradation without breaking existing workflows

### 4. Migration: Clean Up Historical Orphans
- SQL migration to remove existing orphaned schedules and webhooks
- Performance index for faster cleanup queries
- Proper logging and error handling

## Key Changes

### Core Library Validation
```python
# backend/data/graph.py - Single source of truth
async def is_graph_in_user_library(graph_id: str, user_id: str, graph_version: Optional[int] = None) -> bool:
    where_clause = {"userId": user_id, "agentGraphId": graph_id, "isDeleted": False, "isArchived": False}
    if graph_version is not None:
        where_clause["agentGraphVersion"] = graph_version
    count = await LibraryAgent.prisma().count(where=where_clause)
    return count > 0
```

### Enhanced Agent Deletion
```python
# backend/server/v2/library/db.py
async def delete_library_agent(library_agent_id: str, user_id: str, soft_delete: bool = True) -> None:
    # ... existing deletion logic ...
    await _cleanup_schedules_for_graph(graph_id=graph_id, user_id=user_id)
    await _cleanup_webhooks_for_graph(graph_id=graph_id, user_id=user_id)
```

### Execution Prevention
```python
# backend/executor/utils.py
if not await gdb.is_graph_in_user_library(graph_id=graph_id, user_id=user_id, graph_version=graph.version):
    raise GraphNotInLibraryError(f"Graph #{graph_id} is not accessible in your library")
```

### Error-Based Cleanup
```python
# backend/executor/scheduler.py & backend/server/integrations/router.py
except GraphNotInLibraryError as e:
    logger.warning(f"Execution blocked for deleted/archived graph {graph_id}")
    await _cleanup_orphaned_resources_for_graph(graph_id, user_id)
```

## Technical Implementation

### Database Efficiency
- Use `count()` instead of `find_first()` for faster queries
- Add performance index: `idx_library_agent_user_graph_active`
- Follow existing `prisma.is_connected()` patterns

### Error Handling Hierarchy
- **`GraphNotInLibraryError`**: Specific exception for deleted/archived
graphs
- **`NotAuthorizedError`**: Generic authorization errors (preserved for
user ID mismatches)
- Clear error messages for better debugging

### Code Organization
- Single source of truth for library validation in
`backend/data/graph.py`
- Import from centralized location to avoid duplication
- Top-level imports following codebase conventions

## Testing & Validation

### Functional Testing
- Library validation prevents execution of deleted agents
- Cleanup functions remove schedules and webhooks properly
- Error-based cleanup handles orphaned resources gracefully
- Migration removes existing orphaned records

### Integration Testing
- All existing tests pass (including `test_store_listing_graph`)
- No breaking changes to existing functionality
- Proper error propagation and handling

### Performance Testing
- Efficient database queries with proper indexing
- Minimal overhead for normal execution flows
- Cleanup operations don't impact performance

## Impact

### User Experience
- 🎯 **Immediate**: Deleted agents stop running automatically
- 🎯 **Ongoing**: No more unexpected credit charges from orphaned
executions
- 🎯 **Cleanup**: Historical orphaned resources are removed

### System Reliability
- 🔒 **Security**: Users can only execute agents they have access to
- 🧹 **Cleanup**: Automatic removal of orphaned database records
- 📈 **Performance**: Efficient validation with minimal overhead

### Developer Experience
- 🎯 **Clear Errors**: Specific exception types for better debugging
- 🔧 **Maintainable**: Centralized library validation logic
- 📚 **Documented**: Comprehensive error handling patterns

## Files Modified
- `backend/data/graph.py` - Library validation function
- `backend/server/v2/library/db.py` - Enhanced agent deletion with
cleanup
- `backend/executor/utils.py` - Execution validation and prevention
- `backend/executor/scheduler.py` - Error-based cleanup for schedules
- `backend/server/integrations/router.py` - Error-based cleanup for
webhooks
- `backend/util/exceptions.py` - Specific error type for deleted graphs
- `migrations/20251023000000_cleanup_orphaned_schedules_and_webhooks/migration.sql`: Historical cleanup

## Breaking Changes
None. All changes are backward compatible and preserve existing
functionality.

## Follow-up Tasks
- [ ] Monitor cleanup effectiveness in production
- [ ] Consider adding metrics for orphaned resource detection
- [ ] Potential optimization of cleanup batch operations

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-28 23:48:35 +00:00
Ubbe
59657eb42e fix(frontend): onboarding step 5 adjustments (#11276)
## Changes 🏗️

A couple of improvements on **Onboarding Step 5**:
- Show a spinner when the page is loading ( better contrast / context
than skeleton in this case )
- Prevent the run button being disabled if credentials failed to load
- while this is good/expected behavior, it will help us debug the issue
in production where credentials failed to load silently, given running
the agent it'll throw an error we can see

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create a new account/signup
  - [x] On Onboarding Step 5 test the above
2025-10-28 13:58:04 +00:00
Reinier van der Leer
5e5f45a713 fix(backend): Fix various warnings (#11252)
- Resolves #11251

This fixes all the warnings mentioned in #11251, reducing noise and
making our logs and error alerts more useful :)

### Changes 🏗️

- Remove "Block {block_name} has multiple credential inputs" warning
(not actually an issue)
- Rename `json` attribute of `MainCodeExecutionResult` to `json_data`;
retain the serialized name through a field alias (sketched after this list)
- Replace `Path(regex=...)` with `Path(pattern=...)` in
`get_shared_execution` endpoint parameter config
- Change Uvicorn's WebSocket module to new Sans-I/O implementation for
WS server
- Disable Uvicorn's WebSocket module for REST server
- Remove deprecated `enable_cleanup_closed=True` argument in
`CloudStorageHandler` implementation
- Replace Prisma transaction timeout `int` argument with a `timedelta`
value
- Update Sentry SDK to latest version (v2.42.1)
- Broaden filter for cleanup warnings from indirect dependency `litellm`
- Fix handling of `MissingConfigError` in REST server endpoints
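
A hedged sketch of the rename-with-alias change (Pydantic v2 syntax shown; the real model lives in the backend):

```python
from pydantic import BaseModel, ConfigDict, Field


class MainCodeExecutionResult(BaseModel):
    model_config = ConfigDict(populate_by_name=True)

    # Attribute renamed to json_data to avoid the warning, while the alias
    # keeps "json" as the serialized field name.
    json_data: dict = Field(default_factory=dict, alias="json")


result = MainCodeExecutionResult(json={"ok": True})
print(result.model_dump(by_alias=True))  # {'json': {'ok': True}}
```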

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Check that the warnings are actually gone
- [x] Deploy to dev environment and run a graph; check for any warnings
  - Test WebSocket server
- [x] Run an agent in the Builder; make sure real-time execution updates
still work
2025-10-28 13:18:45 +00:00
Ubbe
320fb7d83a fix(frontend): waitlist modal copy (#11263)
### Changes 🏗️

### Before

<img width="800" height="649" alt="Screenshot_2025-10-23_at_00 44 59"
src="https://github.com/user-attachments/assets/fd717d39-772a-4331-bc54-4db15a9a3107"
/>

### After

<img width="800" height="555" alt="Screenshot 2025-10-27 at 23 19 10"
src="https://github.com/user-attachments/assets/64878bd0-3a96-4b3a-8344-1a88c89de52e"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Try to signup with a non-approved email
  - [x] You see the modal with an updated copy
2025-10-28 11:08:06 +00:00
Ubbe
54552248f7 fix(frontend): login not visible mobile (#11245)
## Changes 🏗️

The mobile 📱 experience is still a mess but this helps a little.

### Before

<img width="350" height="395" alt="Screenshot 2025-10-24 at 18 26 18"
src="https://github.com/user-attachments/assets/75eab232-8c37-41e7-a51d-dbe07db336a0"
/>

### After

<img width="350" height="406" alt="Screenshot 2025-10-24 at 18 25 54"
src="https://github.com/user-attachments/assets/ecbd8bbd-8a94-4775-b990-c8b51de48cf9"
/>


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Load the app
  - [x] Check the Tally popup button copy
  - [x] The button still works
2025-10-28 14:00:50 +04:00
Ubbe
d8a5780ea2 fix(frontend): feedback button copy (#11246)
## Changes 🏗️

<img width="800" height="827" alt="Screenshot 2025-10-24 at 17 45 48"
src="https://github.com/user-attachments/assets/ab18361e-6c58-43e9-bea6-c9172d06c0e7"
/>

- Shows the text `Give feedback` so the button is more explicit 🏁 
- Refactor the component to stick to [new code
conventions](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/frontend/CONTRIBUTING.md)

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Load the app
  - [x] Check the Tally popup button copy
  - [x] The button still works
2025-10-28 14:00:33 +04:00
seer-by-sentry[bot]
377657f8a1 fix(backend): Extract response from LLM response dictionary (#11262)
### Changes 🏗️

- Modifies the LLM block to extract the actual response from the
dictionary returned by the LLM, instead of yielding the entire
dictionary (sketched below). This addresses
[AUTOGPT-SERVER-6EY](https://sentry.io/organizations/significant-gravitas/issues/6950850822/).
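
A sketch of the extraction (the `"response"` key is taken from the Sentry issue and may not match the block's exact structure):

```python
from typing import Any


def extract_response(llm_output: Any) -> Any:
    # The call can return either a plain string or a dict wrapping the text;
    # return only the actual response instead of the whole dictionary.
    if isinstance(llm_output, dict):
        return llm_output.get("response", llm_output)
    return llm_output


print(extract_response({"response": "hello", "usage": {"tokens": 42}}))  # hello
print(extract_response("hello"))  # hello
```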

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] After applying the fix, I ran the agent that triggered the Sentry
error and confirmed that it now completes successfully without errors.

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-28 08:43:29 +00:00
seer-by-sentry[bot]
ff71c940c9 fix(backend): Properly encode hostname in URL validation (#11259)
Fixes
[AUTOGPT-SERVER-6KZ](https://sentry.io/organizations/significant-gravitas/issues/6976926125/).
The issue: redirect handling stripped the URL scheme, causing
subsequent requests to fail validation and hit a 404.

- Ensures the hostname in the URL is properly IDNA-encoded after
validation.
- Reconstructs the netloc with the encoded hostname and preserves the
port if it exists.

This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2204774

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6976926125/?seerDrawer=true)

### Changes 🏗️

**backend/util/request.py:**
- Fixed URL validation to properly preserve port numbers when
reconstructing the netloc
- Ensures the IDNA-encoded hostname is combined with the port (if
present) before URL reconstruction (see the sketch below)
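
A simplified sketch of the reconstruction (the real validation in `backend/util/request.py` does more, e.g. scheme checks and SSRF guards):

```python
from urllib.parse import urlparse, urlunparse


def encode_hostname(url: str) -> str:
    parsed = urlparse(url)
    # Punycode-encode non-ASCII hostnames via Python's idna codec
    host = (parsed.hostname or "").encode("idna").decode("ascii")
    # Preserve an explicit port when rebuilding the netloc
    netloc = f"{host}:{parsed.port}" if parsed.port else host
    return urlunparse(parsed._replace(netloc=netloc))


print(encode_hostname("https://bücher.example:8443/path"))
# -> https://xn--bcher-kva.example:8443/path
```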

**Test Results:**
- Tested request to https://www.target.com/ (original failing URL from
the Sentry issue)
- Status: 200, content retrieved successfully (339,846 bytes)
- Port preservation verified for URLs with explicit ports

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tested request to https://www.target.com/ (original failing URL)
  - [x] Verified status code 200 and successful content retrieval
  - [x] Verified port preservation in URL validation


#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)


Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-28 08:43:14 +00:00
Reinier van der Leer
9967b3a7ce fix(frontend/builder): Fix unnecessary graph re-saving (#11145)
- Resolves #10980
- 2nd attempt after #11075 broke some things

Fixes unnecessary graph re-saving when no changes were made after
initial save. More specifically, this PR fixes two causes of this issue:
- Frontend node IDs were being compared to backend IDs, which won't
match if the graph has been modified and saved since loading.
- `fillDefaults` was being applied to all nodes (including existing
ones) on element creation, and empty values were being stripped
*post-save* with `removeEmptyStringsAndNulls`. This invisible
auto-modification of node input data meant that in some common cases the
graph would never be in sync with the backend.

### Changes 🏗️

- Fix node ID handling
- Use `node.data.backend_id ?? node.id` instead of `node.id` in
`prepareSaveableGraph`
    - Also map link source/sink IDs to their corresponding backend IDs
  - Add note about `node.data.backend_id` to `_saveAgent`
  - Use `node.data.backend_id || node.id` as display ID in `CustomNode`

- Prevent auto-modification of node input data on existing nodes
- Prune empty values (`undefined`, `null`, `""`) from node input data
*pre-save* instead of post-save
- Related: improve typing and functionality of
`fillObjectDefaultsFromSchema` (moved and renamed from `fillDefaults`)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Node display ID updates on save
- [x] Clicking save a second time (without making more changes) doesn't
cause re-save
- [x] Updating nodes with dynamic input links (e.g. Create Dictionary
Block) doesn't make the links disappear


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Prevented unintended auto-modification of existing nodes during
editing
* Improved consistency of node and connection identifiers in saved
graphs

* **Improvements**
  * Enhanced node title display logic for clearer node identification
* Optimized data cleanup utilities for more robust input processing in
the builder

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-10-27 16:49:02 +00:00
Bently
9db443960a feat(blocks/claude): Remove Claude 3.5 Sonnet and Haiku models (#11260)
Removes CLAUDE_3_5_SONNET and CLAUDE_3_5_HAIKU from the LlmModel enum,
model metadata, and cost configuration, since they are deprecated.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify the models are gone from the LLM blocks
2025-10-27 16:49:02 +00:00
Ubbe
9316100864 fix(frontend): agent activity graph names (#11233)
## Changes 🏗️

We weren't fetching all library agents, just the first 15, to compute
the agent map in the Agent Activity dropdown. We suspect that was causing
some agent executions to show up as `Unknown agent`.

With these changes, I'm fetching all the library agents upfront (without
blocking page load) and caching them in the browser, so we have all the
details needed to render the agent runs. This is re-used in the library
as well for a fast initial load of the agents list page.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] First request populates cache; subsequent identical requests hit
cache
- [x] Editing an agent invalidates relevant cache keys and serves fresh
data
  - [x] Different query params generate distinct cache entries
  - [x] Cache layer gracefully falls back to live data on errors
  - [x] 404 behavior for unknown agents unchanged

### For configuration changes:

None
2025-10-27 20:08:21 +04:00
Ubbe
cbe0cee0fc fix(frontend): Credentials disabling onboarding Run button (#11244)
## Changes 🏗️

The onboarding `Run` button is sometimes disabled when an agent
requiring credentials is selected. We think this is because the
credentials are loaded asynchronously by a sub-component
(`<CredentialsInputs />`), and there wasn't a way for the parent
component to know whether they had loaded or not.

- Refactored **Step 5** of onboarding to adhere to our code conventions
  - split concerns and colocated state
  - used generated API hooks
  - the UI will only render once API calls succeed
- Created a system where `<CredentialsInputs />` notifies the parent
component when it has loaded
- Made minor adjustments here and there

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I will know once I find an agent with credentials that I can
run....


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Added visual agent selection card displaying agent details during
onboarding
  * Introduced credentials input management for agent configuration
  * Added onboarding guidance for initiating agent runs

* **Improvements**
  * Enhanced onboarding flow with improved state management
  * Refined login state handling
  * Adjusted spacing in agent rating display

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-10-27 19:53:14 +04:00
Swifty
b31d60276a fix(backend/store): Sanitize all sql terms (#11228)
Categories and Creators were not sanitized in the full-text search.

- apply sanitization to categories and creators

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] run tests to check it still works
2025-10-27 13:16:37 +01:00
Swifty
7cbb1ed859 fix(backend/store): Sanitize all sql terms (#11228)
Categories and Creators were not sanitized in the full-text search.

### Changes 🏗️

- apply sanitization to categories and creators

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] run tests to check it still works
2025-10-27 12:59:05 +01:00
Toran Bruce Richards
b52e95e1fc fix(blocks): Add missing error output pins to all Firecrawl blocks (#11256)
Added error output pins to all Firecrawl blocks as standard on the
AutoGPT platform. The base block execution code already handles error
yielding, so no try-catch logic was needed.

- FirecrawlScrapeBlock: Added error output pin for scrape failures
- FirecrawlCrawlBlock: Added error output pin for crawl failures
- FirecrawlExtractBlock: Added error output pin for extraction failures
- FirecrawlMapBlock: Added error output pin for map failures
- FirecrawlSearchBlock: Added error output pin for search failures

Resolves #11253

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
2025-10-27 08:36:28 +00:00
Reinier van der Leer
e06e7ff33f fix(backend): Implement graceful shutdown in AppService to prevent RPC errors (#11240)
We're currently seeing errors in the `DatabaseManager` while it's
shutting down, like:

```
WARNING [DatabaseManager] Termination request: SystemExit; 0 executing cleanup.
INFO [DatabaseManager]  Disconnecting Database...
INFO [PID-1|THREAD-29|DatabaseManager|Prisma-82fb1994-4b87-40c1-8869-fbd97bd33fc8] Releasing connection started...
INFO [PID-1|THREAD-29|DatabaseManager|Prisma-82fb1994-4b87-40c1-8869-fbd97bd33fc8] Releasing connection completed successfully.
INFO [DatabaseManager] Terminated.
ERROR POST /create_or_add_to_user_notification_batch failed: Failed to create or add to notification batch for user {user_id} and type AGENT_RUN: NoneType: None
```

This indicates two issues:
- The service doesn't wait for pending RPC calls to finish before
terminating
- We're using `logger.exception` outside an error-handling context,
causing the confusing and not very useful `NoneType: None` to be printed
instead of error info (reproduced below)

### Changes 🏗️

- Implement graceful shutdown in `AppService` so in-flight RPC calls can
finish
  - Add tests for graceful shutdown
  - Prevent `AppService` accepting new requests during shutdown
- Rework `AppService` lifecycle management; add support for async
`lifespan`
- Fix `AppService` endpoint error logging
- Improve logging in `AppProcess` and `AppService`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Deploy to Dev cluster, then `kubectl rollout restart` the different
services a few times
    - [x] -> `DatabaseManager` doesn't break on re-deployment
    - [x] -> `Scheduler` doesn't break on re-deployment
    - [x] -> `NotificationManager` doesn't break on re-deployment
2025-10-25 14:47:19 +00:00
Bently
f4ba02f2f1 feat(blocks/revid): Add cost configs for revid video blocks (#11242)
Updated block costs in `backend/backend/data/block_cost_config.py`:
- **AIShortformVideoCreatorBlock**: Updated from 50 credits to 307
- **AIAdMakerVideoCreatorBlock**: Added cost of 714 credits
- **AIScreenshotToVideoAdBlock**: Added cost of 612 credits

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify AIShortformVideoCreatorBlock costs 307 credits when executed
  - [x] Verify AIAdMakerVideoCreatorBlock costs 714 credits when executed
  - [x] Verify AIScreenshotToVideoAdBlock costs 612 credits when executed
2025-10-24 18:35:37 +01:00
Abhimanyu Yadav
acb946801b feat(frontend): add agent execution functionality in new builder (#11186)
This PR implements real-time agent execution functionality in the new
flow editor, enabling users to run, monitor, and view results of their
agent workflows directly within the builder interface.


https://github.com/user-attachments/assets/8a730e08-f88d-49d4-be31-980e2c7a2f83

#### Key Features Added:

##### 1. **Agent Execution Controls**
- Added "Run Agent" / "Stop Agent" button with gradient styling in the
builder interface
- Implemented execution state management through a new `graphStore` for
tracking running status
- Save graph automatically before execution to ensure latest changes are
persisted

##### 2. **Real-time Execution Monitoring**
- Implemented WebSocket-based real-time updates for node execution
status via `useFlowRealtime` hook
- Subscribe to graph execution events and node execution events for live
status tracking
- Visual execution status badges on nodes showing states: `QUEUED`,
`RUNNING`, `COMPLETED`, `FAILED`, etc.
   - Animated gradient border effect when agent is actively running

##### 3. **Node Execution Results Display**
- New `NodeDataRenderer` component to display input/output data for each
executed node
   - Collapsible result sections with formatted JSON display
- Prepared UI for future functionality: copy, info, and expand actions
for node data

#### Technical Implementation:

- **State Management**: Extended `nodeStore` with execution status and
result tracking methods
- **WebSocket Integration**: Real-time communication for execution
updates without polling
- **Component Architecture**: Modular components for execution controls,
status display, and result rendering
- **Visual Feedback**: Color-coded status badges and animated borders
for clear execution state indication


#### TODO Items for Future PRs:
- Complete implementation of node result action buttons (copy, info,
expand)
- Add agent output display component
- Implement schedule run functionality
- Handle credential and input parameters for graph execution
- Add tooltips for better UX

### Checklist

- [x] Create a new agent with at least 3 blocks and verify execution
starts correctly
- [x] Verify real-time status updates appear on nodes during execution
- [x] Confirm execution results display in the node output sections
- [x] Verify the animated border appears when agent is running
- [x] Check that node status badges show correct states (QUEUED,
RUNNING, COMPLETED, etc.)
- [x] Test WebSocket reconnection after connection loss
- [x] Verify graph is saved before execution begins
2025-10-24 12:05:09 +00:00
Bently
48ff225837 feat(blocks/revid): Add cost configs for revid video blocks (#11242)
Updated block costs in `backend/backend/data/block_cost_config.py`:
- **AIShortformVideoCreatorBlock**: Updated from 50 credits to 307
- **AIAdMakerVideoCreatorBlock**: Added cost of 714 credits
- **AIScreenshotToVideoAdBlock**: Added cost of 612 credits

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify AIShortformVideoCreatorBlock costs 307 credits when executed
  - [x] Verify AIAdMakerVideoCreatorBlock costs 714 credits when executed
  - [x] Verify AIScreenshotToVideoAdBlock costs 612 credits when executed
2025-10-23 09:46:22 +00:00
Nicholas Tindle
e2a9923f30 feat(frontend): Improve waitlist error display & messages (#11206)
Improves the "not on waitlist" error display based on feedback.

- Follow-up to #11198
  - Follow-up to #11196

### Changes 🏗️

- Use standard `ErrorCard`
- Improve text strings
- Merge `isWaitlistError` and `isWaitlistErrorFromParams`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] We need to test in dev because we don't have a waitlist locally,
and will revert if it doesn't work
  - Deploy to dev environment, sign up with a non-approved account, and
see if the error appears
2025-10-22 13:37:42 +00:00
Reinier van der Leer
39792d517e fix(frontend): Filter out undefined query params in API requests (#11238)
Part of our effort to eliminate preventable warnings and errors.

- Resolves #11237

### Changes 🏗️

- Exclude `undefined` query params in API requests

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Open the Builder without a `flowVersion` URL parameter
    - [x] -> `GET /api/library/agents/by-graph/{graph_id}` succeeds
  - Open the builder with a `flowVersion` URL parameter
    - [x] -> version is correctly included in request URL parameters
2025-10-22 13:25:34 +00:00
Bently
a6a2f71458 Merge commit from fork
* Replace urllib with Requests in RSS block to prevent SSRF

* Format
2025-10-22 14:18:34 +01:00
Bently
788b861bb7 Merge commit from fork 2025-10-22 14:17:26 +01:00
Ubbe
e203e65dc4 feat(frontend): setup datafast custom events (#11231)
## Changes 🏗️

- Add [custom events](https://datafa.st/docs/custom-goals) in
**Datafa.st** to track the user journey around core actions
  - track `add_to_library`
  - track `download_agent`
  - track `run_agent`
  - track `schedule_agent` 
- Refactor the analytics service to encapsulate both **GA** and
**Datafa.st**

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Analytics load correctly locally
  - [x] Events fire in production
 
### For configuration changes:

Once deployed to production we need to verify we are receiving analytics
and custom events in [Datafa.st](https://datafa.st/)
2025-10-22 16:56:30 +04:00
Ubbe
bd03697ff2 fix(frontend): URL substring sanitization issue (#11232)
Potential fix for
[https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/145](https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/145)

To fix the issue, rather than using substring matching on the raw URL
string, we need to properly parse the URL and inspect its hostname. We
should confirm that the `hostname` property of the parsed URL is equal
to either `vimeo.com` or explicitly permitted subdomains like
`www.vimeo.com`. We can use the native JavaScript `URL` class for robust
parsing.

**File/Location:**  
- Only change line(s) in
`autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/AgentRunsView/components/OutputRenderers/renderers/MarkdownRenderer.tsx`
- Specifically, update the logic in function `isVideoUrl()` on line 45.

**Methods/Imports/Definitions:**  
- Use the standard `URL` class (no need to add a new import, as this is
available in browsers and in Node.js).
- Provide fallback in case the URL passed in is malformed (wrap in a
try-catch, treat as non-video in this case).
- Check the parsed hostname for equality with `vimeo.com` or,
optionally, specific allowed subdomains (`www.vimeo.com`).

---


_Suggested fixes powered by Copilot Autofix. Review carefully before
merging._

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-10-22 16:56:12 +04:00
Reinier van der Leer
efd37b7a36 fix(frontend): Limit Sentry console capture to warnings and errors (#11223)
Debug and info level messages are currently ending up in Sentry,
polluting our issue feed.

### Changes 🏗️

- Limit Sentry console capture to warnings and worse

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Trivial change, no test needed
2025-10-22 09:49:25 +00:00
Zamil Majdy
bb0b45d7f7 fix(backend): Make Jinja Error on TextFormatter as value error (#11236)
<!-- Clearly explain the need for these changes: -->

This PR converts Jinja2 TemplateError exceptions to ValueError in the
TextFormatter class to ensure proper error handling and HTTP status code
responses (400 instead of 500).

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- Added import for `jinja2.exceptions.TemplateError` in
`backend/util/text.py:6`
- Wrapped template rendering in try-catch block in `format_string`
method (`backend/util/text.py:105-109`)
- Convert `TemplateError` to `ValueError` to ensure proper 400 HTTP
status code for client errors
- Added warning logging for template rendering errors before re-raising
as ValueError
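
A minimal sketch of the wrapping described above, assuming a plain Jinja environment (the real `TextFormatter` may configure Jinja differently):

```python
# Sketch: re-raise Jinja template errors as ValueError so the API layer
# returns 400 (client error) instead of 500 (server error).
import logging

from jinja2 import Environment
from jinja2.exceptions import TemplateError

logger = logging.getLogger(__name__)
env = Environment()

def format_string(template: str, **values) -> str:
    try:
        return env.from_string(template).render(**values)
    except TemplateError as e:
        logger.warning("Template rendering failed: %s", e)
        raise ValueError(f"Invalid template: {e}") from e
```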

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan: -->
- [x] Verified that invalid Jinja2 templates now raise ValueError
instead of TemplateError
  - [x] Confirmed that valid templates continue to work correctly
  - [x] Checked that warning logs are generated for template errors
  - [x] Validated that the exception chain is preserved with `from e`

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-10-22 09:38:02 +00:00
Reinier van der Leer
04df981115 fix(backend): Fix structured logging for cloud environments (#11227)
- Resolves #11226

### Changes 🏗️

- Drop use of `CloudLoggingHandler`, which the docs state isn't for use
in GKE
- For cloud logging, output only structured log entries to `stdout`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test deploy to dev and check logs
2025-10-21 12:48:41 +00:00
Swifty
d25997b4f2 Revert "Merge branch 'swiftyos/secrt-1709-store-provider-names-and-en… (#11225)
Reverts the change that made provider blocks store provider names and env vars in the DB.

### Changes 🏗️

- revert the change

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] I have reverted the merge
2025-10-21 09:12:00 +00:00
Zamil Majdy
11d55f6055 fix(backend/executor): Avoid running direct query in executor (#11224)
## Summary
- Fixes database connection warnings in executor logs: "Client is not
connected to the query engine, you must call `connect()` before
attempting to query data"
- Implements resilient database client pattern already used elsewhere in
the codebase
- Adds caching to reduce database load for user context lookups

## Changes
- Updated `get_user_context()` to check `prisma.is_connected()` and fall
back to database manager client
- Added `@cached(maxsize=1000, ttl_seconds=3600)` decorator for
performance optimization
- Updated database manager to expose `get_user_by_id` method
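
A hedged sketch of the resilient-client pattern (helper names and import paths are illustrative, not the exact backend code):

```python
# Sketch: prefer the local Prisma client when connected; otherwise fall
# back to the database manager service client, and cache the result.
from prisma import Prisma

prisma = Prisma()  # assumed process-wide client

@cached(maxsize=1000, ttl_seconds=3600)  # decorator named in the PR; import path assumed
async def get_user_context(user_id: str) -> dict:
    if prisma.is_connected():
        user = await prisma.user.find_unique(where={"id": user_id})
    else:
        # Executors without a live DB connection go through the
        # database manager client instead (client name assumed).
        user = await db_manager_client.get_user_by_id(user_id)
    return {"timezone": getattr(user, "timezone", "UTC")}
```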

## Test plan
- [x] Verify executor pods no longer show Prisma connection warnings
- [x] Confirm user timezone is still correctly retrieved
- [x] Test fallback behavior when Prisma is disconnected

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-21 08:46:40 +00:00
Ubbe
063dc5cf65 refactor(frontend): standardise with environment service (#11209)
## Changes 🏗️

Standardize all the runtime environment checks on the Front-end and
associated conditions to run against a single environment service where
all the environment config is centralized and hence easier to manage.

This helps prevent typos and bugs when manually asserting against
environment variables (which are typed as `string`); the helper
functions are easier to read and reuse across the codebase.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app and click around
  - [x] Everything is smooth
  - [x] Test on the CI and types are green  

### For configuration changes:

None 🙏🏽
2025-10-21 08:44:34 +00:00
Ubbe
b7646f3e58 docs(frontend): contributing guidelines (#11210)
## Changes 🏗️

Document how to contribute on the Front-end so it is easier for
non-regular contributors.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Contribution guidelines make sense and look good considering the
AutoGPT stack

### For configuration changes:

None
2025-10-21 08:26:51 +00:00
Ubbe
0befaf0a47 feat(frontend): update tooltip and alert styles (#11212)
## Changes 🏗️

Matching updated changes in AutoGPT design system:

<img width="283" height="156" alt="Screenshot 2025-10-20 at 23 55 15"
src="https://github.com/user-attachments/assets/3a2e0ee7-cd53-4552-b72b-42f4631f1503"
/>
<img width="427" height="92" alt="Screenshot 2025-10-20 at 23 55 25"
src="https://github.com/user-attachments/assets/95344765-2155-4861-abdd-f5ec1497ace2"
/>
<img width="472" height="85" alt="Screenshot 2025-10-20 at 23 55 30"
src="https://github.com/user-attachments/assets/31084b40-0eea-4feb-a627-c5014790c40d"
/>
<img width="370" height="87" alt="Screenshot 2025-10-20 at 23 55 35"
src="https://github.com/user-attachments/assets/a81dba12-a792-4d41-b269-0bc32fc81271"
/>


## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Check the stories for Tooltip and Alerts, they look good


#### For configuration changes:
None
2025-10-21 08:14:28 +00:00
Reinier van der Leer
93f58dec5e Merge branch 'master' into dev 2025-10-21 08:49:12 +02:00
Reinier van der Leer
3da595f599 fix(backend): Only try to initialize LaunchDarkly once (#11222)
We currently try to re-init the LaunchDarkly client every time a feature flag is checked.
This adds 5 seconds of latency to every flag check when LD is down, as it is now.
Since flag checks are performed on every block execution, this currently cripples the platform's executors.

- Follow-up to #11221

### Changes 🏗️

- Only try to init LaunchDarkly once
- Improve surrounding log statements in the `feature_flag` module
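
A rough sketch of the init-once pattern, assuming the Python LaunchDarkly server SDK (dict-style user shown, as in older SDK versions; newer ones use `Context`):

```python
import os

import ldclient
from ldclient.config import Config

_init_attempted = False  # module-level guard: init at most once

def _get_client():
    global _init_attempted
    if not _init_attempted:
        _init_attempted = True
        try:
            ldclient.set_config(Config(sdk_key=os.environ["LAUNCHDARKLY_SDK_KEY"]))
        except Exception:
            pass  # init failed; flag checks will use defaults
    try:
        client = ldclient.get()
        return client if client.is_initialized() else None
    except Exception:
        return None

def is_flag_enabled(flag_key: str, user_id: str, default: bool = False) -> bool:
    client = _get_client()
    if client is None:
        return default  # no repeated init (and no added latency) when LD is down
    return client.variation(flag_key, {"key": user_id}, default)
```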

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - This is a critical hotfix; we'll see its effect once deployed
2025-10-21 08:46:07 +02:00
Reinier van der Leer
e5e60921a3 fix(backend): Handle LaunchDarkly init failure (#11221)
LaunchDarkly is currently down and it's keeping our executor pods from
spinning up.

### Changes 🏗️

- Wrap `LaunchDarklyIntegration` init in a try/except

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - We'll see if it works once it deploys
2025-10-21 07:53:40 +02:00
Copilot
90af8f8e1a feat(backend): Add language fallback for YouTube transcription block (#11057)
## Problem

The YouTube transcription block would fail when attempting to transcribe
videos that only had transcripts available in non-English languages.
Even when usable transcripts existed in other languages, the block would
raise a `NoTranscriptFound` error because it only requested English
transcripts.

**Example video that would fail:**
https://www.youtube.com/watch?v=3AMl5d2NKpQ (only has Hungarian
transcripts)

**Error message:**
```
Could not retrieve a transcript for the video https://www.youtube.com/watch?v=3AMl5d2NKpQ! 
No transcripts were found for any of the requested language codes: ('en',)

For this video (3AMl5d2NKpQ) transcripts are available in the following languages:
(GENERATED) - hu ("Hungarian (auto-generated)")
```

## Solution

Implemented intelligent language fallback in the
`TranscribeYoutubeVideoBlock.get_transcript()` method:

1. **First**, tries to fetch English transcript (maintains backward
compatibility)
2. **If English unavailable**, lists all available transcripts and
selects the first one using this priority:
   - Manually created transcripts (any language)
   - Auto-generated transcripts (any language)
3. **Only fails** if no transcripts exist at all

**Example behavior:**
```python
# Before: Video with only Hungarian transcript
get_transcript("3AMl5d2NKpQ")  #  Raises NoTranscriptFound

# After: Video with only Hungarian transcript  
get_transcript("3AMl5d2NKpQ")  #  Returns Hungarian transcript
```
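
A condensed sketch of this priority order, assuming the pre-1.0 `youtube-transcript-api` interface:

```python
from youtube_transcript_api import NoTranscriptFound, YouTubeTranscriptApi

def get_transcript(video_id: str):
    transcripts = YouTubeTranscriptApi.list_transcripts(video_id)
    try:
        # 1. Prefer English when available (backward compatible)
        return transcripts.find_transcript(["en"]).fetch()
    except NoTranscriptFound:
        pass
    # 2./3. Fall back to manually created transcripts, then auto-generated
    # ones: is_generated=False sorts first.
    for transcript in sorted(transcripts, key=lambda t: t.is_generated):
        return transcript.fetch()
    raise ValueError(f"No transcripts exist for video {video_id}")
```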

## Changes

- **Modified** `backend/blocks/youtube.py`: Added try-catch logic to
fallback to any available language when English is not found
- **Added** `test/blocks/test_youtube.py`: Comprehensive test suite
covering URL extraction, language fallback, transcript preferences, and
error handling (7 tests)
- **Updated** `docs/content/platform/blocks/youtube.md`: Documented the
language fallback behavior and transcript priority order

## Testing

- All 7 new unit tests pass
- Block integration test passes
- Full test suite: 621 passed, 0 failed (no regressions)
- Code formatting and linting pass

## Impact

This fix enables the YouTube transcription block to work with
international content while maintaining full backward compatibility:

- Videos in any language can now be transcribed
- English is still preferred when available
- No breaking changes to existing functionality
- Graceful degradation to available languages

Fixes #10637
Fixes https://linear.app/autogpt/issue/OPEN-2626

> [!WARNING]
>
> <details>
> <summary>Firewall rules blocked me from connecting to one or more
addresses (expand for details)</summary>
>
> #### I tried to connect to the following addresses, but was blocked by
firewall rules:
>
> - `www.youtube.com`
> - Triggering command:
`/home/REDACTED/.cache/pypoetry/virtualenvs/autogpt-platform-backend-Ajv4iu2i-py3.11/bin/python3`
(dns block)
>
> If you need me to access, download, or install something from one of
these locations, you can either:
>
> - Configure [Actions setup
steps](https://gh.io/copilot/actions-setup-steps) to set up my
environment, which run before the firewall is enabled
> - Add the appropriate URLs or hosts to the custom allowlist in this
repository's [Copilot coding agent
settings](https://github.com/Significant-Gravitas/AutoGPT/settings/copilot/coding_agent)
(admins only)
>
> </details>




<details>

<summary>Original prompt</summary>

> Issue Title: if theres only one lanague available for transcribe
youtube return that langage not an error
> Issue Description: `Could not retrieve a transcript for the video
https://www.youtube.com/watch?v=3AMl5d2NKpQ! This is most likely caused
by: No transcripts were found for any of the requested language codes:
('en',) For this video (3AMl5d2NKpQ) transcripts are available in the
following languages: (MANUALLY CREATED) None (GENERATED) - hu
("Hungarian (auto-generated)") (TRANSLATION LANGUAGES) None If you are
sure that the described cause is not responsible for this error and that
a transcript should be retrievable, please create an issue at
https://github.com/jdepoix/youtube-transcript-api/issues. Please add
which version of youtube_transcript_api you are using and provide the
information needed to replicate the error. Also make sure that there are
no open issues which already describe your problem!` you can use this
video to test:
[https://www.youtube.com/watch?v=3AMl5d2NKpQ\`](https://www.youtube.com/watch?v=3AMl5d2NKpQ%60)
> Fixes
https://linear.app/autogpt/issue/OPEN-2626/if-theres-only-one-lanague-available-for-transcribe-youtube-return
> 
> 
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
> 
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
> 
> Comment by User :
> This comment thread is synced to a corresponding [GitHub
issue](https://github.com/Significant-Gravitas/AutoGPT/issues/10637).
All replies are displayed in both locations.
> 
> 


</details>



---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-21 02:31:33 +00:00
Nicholas Tindle
eba67e0a4b fix(platform/blocks): update linear oauth to use refresh tokens (#10998)
<!-- Clearly explain the need for these changes: -->

### Need 💡

This PR addresses Linear issue SECRT-1665, which mandates an update to
Linear's OAuth2 implementation. Linear is transitioning from long-lived
access tokens to short-lived access tokens with refresh tokens, with a
deadline of April 1, 2026. This change is crucial to ensure continued
integration with Linear and to support their new token management
system, including a migration path for existing long-lived tokens.

### Changes 🏗️

-   **`autogpt_platform/backend/backend/blocks/linear/_oauth.py`**:
- Implemented full support for refresh tokens, including HTTP Basic
Authentication for token refresh requests.
- Added `migrate_old_token()` method to exchange old long-lived access
tokens for new short-lived tokens with refresh tokens using Linear's
`/oauth/migrate_old_token` endpoint.
- Enhanced `get_access_token()` to automatically detect and attempt
migration for old tokens, and to refresh short-lived tokens when they
expire.
    -   Improved error handling and token expiration management.
- Updated `_request_tokens` to handle both authorization code and
refresh token flows, supporting Linear's recommended authentication
methods.
-   **`autogpt_platform/backend/backend/blocks/linear/_config.py`**:
- Updated `TEST_CREDENTIALS_OAUTH` mock data to include realistic
`access_token_expires_at` and `refresh_token` for testing the new token
lifecycle.
-   **`LINEAR_OAUTH_IMPLEMENTATION.md`**:
- Added documentation detailing the new Linear OAuth refresh token
implementation, including technical details, migration strategy, and
testing notes.
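
For reference, a refresh request along these lines (endpoint per Linear's standard OAuth docs; the actual `_request_tokens` implementation differs):

```python
import requests

TOKEN_URL = "https://api.linear.app/oauth/token"  # assumed standard endpoint

def refresh_access_token(client_id: str, client_secret: str, refresh_token: str) -> dict:
    # HTTP Basic auth (client_id:client_secret), as Linear recommends
    # for token refresh requests.
    response = requests.post(
        TOKEN_URL,
        auth=(client_id, client_secret),
        data={"grant_type": "refresh_token", "refresh_token": refresh_token},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # expected: access_token, expires_in, refresh_token
```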

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified OAuth URL generation and parameter encoding.
- [x] Confirmed HTTP Basic Authentication header creation for refresh
requests.
  - [x] Tested token expiration logic with a 5-minute buffer.
  - [x] Validated migration detection for old vs. new token types.
  - [x] Checked code syntax and import compatibility.

#### For configuration changes:

- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

---
Linear Issue: [SECRT-1665](https://linear.app/autogpt/issue/SECRT-1665)

[Open in Cursor](https://cursor.com/background-agent?bcId=bc-95f4c668-f7fa-4057-87e5-622ac81c0783) · [Open in Web](https://cursor.com/agents?id=bc-95f4c668-f7fa-4057-87e5-622ac81c0783)

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
2025-10-20 20:44:58 +00:00
Nicholas Tindle
47bb89caeb fix(backend): Disable LaunchDarkly integration in metrics.py (#11217) 2025-10-20 14:07:21 -05:00
Ubbe
271a520afa feat(frontend): setup DataFast analytics (#11182)
## Changes 🏗️

Following https://datafa.st/docs/nextjs-app-router

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] We will see once we make a production deployment and get data into
the platform

### For configuration changes:

None
2025-10-20 16:18:04 +04:00
Swifty
3988057032 Merge branch 'swiftyos/secrt-1712-remove-error-handling-form-store-routes' into dev 2025-10-18 12:28:25 +02:00
Swifty
a6c6e48f00 Merge branch 'swiftyos/open-2791-featplatform-add-easy-test-data-creation' into dev 2025-10-18 12:28:17 +02:00
Swifty
e72ce2f9e7 Merge branch 'swiftyos/secrt-1709-store-provider-names-and-env-vars-in-db' into dev 2025-10-18 12:27:58 +02:00
Swifty
bd7a79a920 Merge branch 'swiftyos/secrt-1706-improve-store-search' into dev 2025-10-18 12:27:31 +02:00
Nicholas Tindle
3f546ae845 fix(frontend): improve waitlist error display for users not on allowlist (#11198)
fix issue with identifying errors :(
### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] We have to test in dev due to the waitlist integration, so we are
merging; we will revert if it fails

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-10-18 05:14:05 +00:00
Nicholas Tindle
097a19141d fix(frontend): improve waitlist error display for users not on allowlist (#11196)
## Summary

This PR improves the user experience for users who are not on the
waitlist during sign-up. When a user attempts to sign up or log in with
an email that's not on the allowlist, they now see a clear, helpful
modal with a direct call-to-action to join the waitlist.

Fixes
[OPEN-2794](https://linear.app/autogpt/issue/OPEN-2794/display-waitlist-error-for-users-not-on-waitlist-during-sign-up)

## Changes

- Updated `EmailNotAllowedModal` with improved messaging and a "Join
Waitlist" button
- 🔧 Fixed OAuth provider signup/login to properly display the waitlist
modal
- 📝 Enhanced auth-code-error page to detect and display
waitlist-specific errors
- 💬 Added helpful guidance about checking email address and Discord
support link
- 🎯 Consistent waitlist error handling across all auth flows (regular
signup, OAuth, error pages)

## Test Plan

Tested locally by:
1. Attempting signup with non-allowlisted email - modal appears 
2. Attempting Google SSO with non-allowlisted account - modal appears 
3. Modal shows "Join Waitlist" button that opens
https://agpt.co/waitlist 
4. Help text about checking email and Discord support is visible 

## Screenshots

The new waitlist modal includes:
- Clear "Join the Waitlist" title
- Explanation that platform is in closed beta
- "Join Waitlist" button (opens in new tab)
- Help text about checking email address
- Discord support link for users who need help

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-10-18 03:37:31 +00:00
Swifty
c958c95d6b fix incorrect type import 2025-10-17 20:36:49 +02:00
Swifty
3e50cbd2cb fix import 2025-10-17 19:19:17 +02:00
Swifty
1b69f1644d revert frontend type change 2025-10-17 17:26:08 +02:00
Swifty
d9035a233c Merge branch 'swiftyos/secrt-1709-store-provider-names-and-env-vars-in-db' of github.com:Significant-Gravitas/AutoGPT into swiftyos/secrt-1709-store-provider-names-and-env-vars-in-db 2025-10-17 17:20:27 +02:00
Swifty
972cbfc3de fix tests 2025-10-17 17:20:05 +02:00
Swifty
8f861b1bb2 removed error handling from routes 2025-10-17 17:08:17 +02:00
Swifty
fa2731bb8b Merge branch 'dev' into swiftyos/secrt-1709-store-provider-names-and-env-vars-in-db 2025-10-17 17:06:09 +02:00
Swifty
2dc0c97a52 Add block registry and updated 2025-10-17 16:49:04 +02:00
Zamil Majdy
0bb2b87c32 fix(backend): resolve UserBalance migration issues and credit spending bug (#11192)
## Summary
Fix critical UserBalance migration and spending issues affecting users
with credits from transaction history but no UserBalance records.

## Root Issues Fixed

### Issue 1: UserBalance Migration Complexity
- **Problem**: Complex data migration with timestamp logic issues and
potential race conditions
- **Solution**: Simplified to idempotent table creation only,
application handles auto-population

### Issue 2: Credit Spending Bug  
- **Problem**: Users with $10.0 from transaction history couldn't spend
$0.16
- **Root Cause**: `_add_transaction` and `_enable_transaction` only
checked UserBalance table, returning 0 balance for users without records
- **Solution**: Enhanced both methods with transaction history fallback
logic

### Issue 3: Exception Handling Inconsistency
- **Problem**: Raw SQL unique violations raised different exception
types than Prisma ORM
- **Solution**: Convert raw SQL unique violations to
`UniqueViolationError` at source

## Changes Made

### Migration Cleanup
- **Idempotent operations**: Use `CREATE TABLE IF NOT EXISTS`, `CREATE
INDEX IF NOT EXISTS`
- **Inline foreign key**: Define constraint within `CREATE TABLE`
instead of separate `ALTER TABLE`
- **Removed data migration**: Application creates UserBalance records
on-demand
- **Safe to re-run**: No errors if table/index/constraint already exists

### Credit Logic Fixes
- **Enhanced `_add_transaction`**: Added transaction history fallback in
`user_balance_lock` CTE
- **Enhanced `_enable_transaction`**: Added same fallback logic for
payment fulfillment
- **Exception normalization**: Convert raw SQL unique violations to
`UniqueViolationError`
- **Simplified `onboarding_reward`**: Use standardized
`UniqueViolationError` catching

### SQL Fallback Pattern
```sql
COALESCE(
    (SELECT balance FROM UserBalance WHERE userId = ? FOR UPDATE),
    -- Fallback: compute from transaction history if UserBalance doesn't exist
    (SELECT COALESCE(ct.runningBalance, 0) 
     FROM CreditTransaction ct 
     WHERE ct.userId = ? AND ct.isActive = true AND ct.runningBalance IS NOT NULL 
     ORDER BY ct.createdAt DESC LIMIT 1),
    0
) as balance
```

## Impact

### Before
- Users with transaction history but no UserBalance couldn't spend credits
- Migration had complex timestamp logic with potential bugs
- Raw SQL and Prisma exceptions handled differently
- Error: "Insufficient balance of $10.0, where this will cost $0.16"

### After  
- Seamless spending for all users regardless of UserBalance record existence
- Simple, idempotent migration that's safe to re-run
- Consistent exception handling across all credit operations
- Automatic UserBalance record creation during first transaction
- Backward compatible: existing users unaffected

## Business Value
- **Eliminates user frustration**: Users can spend their credits
immediately
- **Smooth migration path**: From old User.balance to new UserBalance
table
- **Better reliability**: Atomic operations with proper error handling
- **Maintainable code**: Consistent patterns across credit operations

## Test Plan
- [ ] Manual testing with users who have transaction history but no
UserBalance records
- [ ] Verify migration can be run multiple times safely
- [ ] Test spending credits works for all user scenarios
- [ ] Verify payment fulfillment (`_enable_transaction`) works correctly
- [ ] Add comprehensive test coverage for this scenario

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 19:46:13 +07:00
Swifty
a1d9b45238 updated openapi spec 2025-10-17 14:01:37 +02:00
Swifty
29895c290f store providers in db 2025-10-17 13:34:35 +02:00
Zamil Majdy
73c0b6899a fix(backend): Remove advisory locks for atomic credit operations (#11143)
## Problem
High QPS failures on `spend_credits` operations due to lock contention
from `pg_advisory_xact_lock` causing serialization and seconds of wait
time.

## Solution 
Replace PostgreSQL advisory locks with atomic database operations using
CTEs (Common Table Expressions).

### Key Changes
- **Add persistent balance column** to User table for O(1) balance
lookups
- **Atomic CTE-based operations** for all credit transactions using
UPDATE...RETURNING pattern
- **Comprehensive concurrency tests** with 7 test scenarios including
stress testing
- **Remove all advisory lock usage** from the credit system

### Implementation Details
1. **Migration**: Adds balance column with backfill from transaction
history
2. **Atomic Operations**: All credit operations now use single atomic
CTEs that update balance and create transaction in one query
3. **Race Condition Prevention**: WHERE clauses in UPDATE statements
ensure balance never goes negative
4. **BetaUserCredit Compatibility**: Preserved monthly refill logic with
updated `_add_transaction` signature
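
An illustrative reduction of the atomic pattern (table/column names assumed; the real queries are larger CTEs that also insert the transaction row):

```python
from prisma import Prisma

prisma = Prisma()  # assumed process-wide client

async def spend_credits(user_id: str, amount: int) -> int:
    # Deduct and read the new balance in one atomic statement; the WHERE
    # guard means the balance can never go negative, with no lock needed.
    rows = await prisma.query_raw(
        """
        UPDATE "User"
        SET balance = balance - $2
        WHERE id = $1 AND balance >= $2
        RETURNING balance
        """,
        user_id,
        amount,
    )
    if not rows:  # no row updated -> insufficient balance
        raise ValueError(f"Insufficient balance to spend {amount}")
    return rows[0]["balance"]
```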

### Performance Impact
- Eliminated lock contention bottlenecks
- O(1) balance lookups instead of O(n) transaction aggregation
- Atomic operations prevent race conditions without locks
- Supports high QPS without serialization delays

### Testing
- All existing tests pass
- New concurrency test suite (`credit_concurrency_test.py`) with:
  - Concurrent spends from same user
  - Insufficient balance handling
  - Mixed operations (spends, top-ups, balance checks)
  - Race condition prevention
  - Integer overflow protection
  - Stress testing with 100 concurrent operations

### Breaking Changes
None - all existing APIs maintain compatibility

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Enhanced top‑up flows with top‑up types, clearer credit→dollar
formatting, and idempotent onboarding rewards.

* **Bug Fixes**
* Fixed race conditions for concurrent spends/top‑ups, added
integer‑overflow and underflow protection, stronger input validation,
and improved refund/dispute handling.

* **Refactor**
* Persisted per‑user balance with atomic updates for reliable balances;
admin history now prefetches balances.

* **Tests**
* Added extensive concurrency, refund, ceiling/underflow and migration
test suites.

* **Chores**
* Database migration to add persisted user balance; APIKey status
extended (SUSPENDED).
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-17 17:05:05 +07:00
Zamil Majdy
4c853a54d7 Merge commit 'e4bc728d40332e7c2b1edec5f1b200f1917950e2' into HEAD 2025-10-17 16:43:23 +07:00
Zamil Majdy
dfdd632161 fix(backend/util): handle nested Pydantic models in SafeJson (#11188)
## Summary

Fixes a critical serialization bug introduced in PR #11187 where
`SafeJson` failed to serialize dictionaries containing Pydantic models,
causing 500 Internal Server Errors in the executor service.

## Problem

The error manifested as:
```
CRITICAL: Operation Approaching Failure Threshold: Service communication: '_call_method_async'
Current attempt: 50/50
Error: HTTPServerError: HTTP 500: Server error '500 Internal Server Error' 
for url 'http://autogpt-database-manager.prod-agpt.svc.cluster.local:8005/create_graph_execution'
```

Root cause in `create_graph_execution`
(backend/data/execution.py:656-657):
```python
"credentialInputs": SafeJson(credential_inputs) if credential_inputs else Json({})
```

Where `credential_inputs: Mapping[str, CredentialsMetaInput]` is a dict
containing Pydantic models.

After PR #11187's refactor, `_sanitize_value()` only converted top-level
BaseModel instances to dicts, but didn't handle BaseModel instances
nested inside dicts/lists/tuples. This caused Prisma's JSON serializer
to fail with:
```
TypeError: Type <class 'backend.data.model.CredentialsMetaInput'> not serializable
```

## Solution

Added BaseModel handling to `_sanitize_value()` to recursively convert
Pydantic models to dicts before sanitizing:

```python
elif isinstance(value, BaseModel):
    # Convert Pydantic models to dict and recursively sanitize
    return _sanitize_value(value.model_dump(exclude_none=True))
```

This ensures all nested Pydantic models are properly serialized
regardless of nesting depth.

## Changes

- **backend/util/json.py**: Added BaseModel check to `_sanitize_value()`
function
- **backend/util/test_json.py**: Added 6 comprehensive tests covering:
  - Dict containing Pydantic models
  - Deeply nested Pydantic models  
  - Lists of Pydantic models in dicts
  - The exact CredentialsMetaInput scenario
  - Complex mixed structures
  - Models with control characters

## Testing

- All new tests pass
- Verified the fix resolves the production 500 error
- Code formatted with `poetry run format`

## Related

- Fixes issues introduced in PR #11187
- Related to executor service 500 errors in production

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Bentlybro <Github@bentlybro.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 09:27:09 +00:00
Swifty
1ed224d481 simplify test and add reset-db make command 2025-10-17 11:12:00 +02:00
Swifty
3b5d919399 fix formatting 2025-10-17 10:56:45 +02:00
Swifty
3c16de22ef add test data creation to makefile and test it 2025-10-17 10:51:58 +02:00
Zamil Majdy
e4bc728d40 Revert "Revert "fix(backend/util): rewrite SafeJson to prevent Invalid \escape errors (#11187)""
This reverts commit 8258338caf.
2025-10-17 15:25:30 +07:00
Swifty
2c6d85d15e feat(platform): Shared cache (#11150)
### Problem
When running multiple backend pods in production, requests can be routed
to different pods causing inconsistent cache states. Additionally, the
current cache implementation in `autogpt_libs` doesn't support shared
caching across processes, leading to data inconsistency and redundant
cache misses.

### Changes 🏗️

- **Moved cache implementation from autogpt_libs to backend**
(`/backend/backend/util/cache.py`)
  - Removed `/autogpt_libs/autogpt_libs/utils/cache.py`
  - Centralized cache utilities within the backend module
  - Updated all import statements across the codebase

- **Implemented Redis-based shared caching**
- Added `shared_cache` parameter to `@cached` decorator for
cross-process caching
  - Implemented Redis connection pooling for efficient cache operations
  - Added support for cache key pattern matching and bulk deletion
  - Added TTL refresh on cache access with `refresh_ttl_on_get` option

- **Enhanced cache functionality**
  - Added thundering herd protection with double-checked locking
  - Implemented thread-local caching with `@thread_cached` decorator
- Added cache management methods: `cache_clear()`, `cache_info()`,
`cache_delete()`
  - Added support for both sync and async functions

- **Updated store caching** (`/backend/server/v2/store/cache.py`)
  - Enabled shared caching for all store-related cache functions
  - Set appropriate TTL values (5-15 minutes) for different cache types
  - Added `clear_all_caches()` function for cache invalidation

- **Added Redis configuration**
  - Added Redis connection settings to backend settings
  - Configured dedicated connection pool for cache operations
  - Set up binary mode for pickle serialization
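
The thundering-herd protection mentioned above boils down to double-checked locking; a minimal in-memory sketch (the real `@cached` decorator also handles Redis-backed shared caching, TTLs, and async functions):

```python
import threading
from functools import wraps

def cached(fn):
    cache: dict = {}
    lock = threading.Lock()

    @wraps(fn)
    def wrapper(*args):  # assumes hashable positional args
        if args in cache:          # first check: lock-free fast path
            return cache[args]
        with lock:
            if args in cache:      # second check: another thread may have
                return cache[args]  # computed the value while we waited
            cache[args] = fn(*args)
            return cache[args]

    return wrapper
```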

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify Redis connection and cache operations work correctly
  - [x] Test shared cache across multiple backend instances
  - [x] Verify cache invalidation with `clear_all_caches()`
- [x] Run cache tests: `poetry run pytest
backend/backend/util/cache_test.py`
  - [x] Test thundering herd protection under concurrent load
  - [x] Verify TTL refresh functionality with `refresh_ttl_on_get=True`
  - [x] Test thread-local caching for request-scoped data
  - [x] Ensure no performance regression vs in-memory cache

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes (Redis already configured)
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
- Redis cache configuration uses existing Redis service settings
(REDIS_HOST, REDIS_PORT, REDIS_PASSWORD)
  - No new environment variables required
2025-10-17 07:56:01 +00:00
Bentlybro
8258338caf Revert "fix(backend/util): rewrite SafeJson to prevent Invalid \escape errors (#11187)"
This reverts commit e62a56e8ba.
2025-10-17 08:31:23 +01:00
Zamil Majdy
374f35874c feat(platform): Add LaunchDarkly flag for platform payment system (#11181)
## Summary

Implement selective rollout of payment functionality using LaunchDarkly
feature flags to enable gradual deployment to pilot users.

- Add `ENABLE_PLATFORM_PAYMENT` flag to control credit system behavior
- Update `get_user_credit_model` to use user-specific flag evaluation  
- Replace hardcoded `NEXT_PUBLIC_SHOW_BILLING_PAGE` with LaunchDarkly
flag
- Enable payment UI components only for flagged users
- Maintain backward compatibility with existing beta credit system
- Default to beta monthly credits when flag is disabled
- Fix tests to work with new async credit model function

## Key Changes

### Backend
- **Credit Model Selection**: The `get_user_credit_model()` function now
takes a `user_id` parameter and uses LaunchDarkly to determine which
credit model to return:
- Flag enabled → `UserCredit` (payment system enabled, no monthly
refills)
- Flag disabled → `BetaUserCredit` (current behavior with monthly
refills)
  
- **Flag Integration**: Added `ENABLE_PLATFORM_PAYMENT` flag and
integrated LaunchDarkly evaluation throughout the credit system

- **API Updates**: All credit-related endpoints now use the
user-specific credit model instead of a global instance
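
A hedged sketch of that selection (class and flag names come from this PR; the flag helper is a stand-in):

```python
class UserCredit: ...      # payment system enabled, no monthly refills
class BetaUserCredit: ...  # current beta behavior with monthly refills

async def is_flag_enabled(flag_key: str, user_id: str) -> bool:
    return False  # stand-in for the LaunchDarkly evaluation

async def get_user_credit_model(user_id: str):
    # Evaluated per user, so the payment system can roll out gradually.
    if await is_flag_enabled("ENABLE_PLATFORM_PAYMENT", user_id):
        return UserCredit()
    return BetaUserCredit()
```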

### Frontend
- **Dynamic UI**: Payment-related components (billing page, wallet
refill) now show/hide based on the LaunchDarkly flag
- **Removed Environment Variable**: Replaced
`NEXT_PUBLIC_SHOW_BILLING_PAGE` with runtime flag evaluation

### Testing
- **Test Fixes**: Updated all tests that referenced the removed global
`_user_credit_model` to use proper mocking of the new async function

## Deployment Strategy

This implementation enables a controlled rollout:
1. Deploy with flag disabled (default) - no behavior change for existing
users
2. Enable flag for pilot/beta users via LaunchDarkly dashboard
3. Monitor usage and feedback from pilot users
4. Gradually expand to more users
5. Eventually enable for all users once validated

## Test Plan

- [x] Unit tests pass for credit system components
- [x] Payment UI components show/hide correctly based on flag
- [x] Default behavior (flag disabled) maintains current functionality
- [x] Flag enabled users get payment system without monthly refills
- [x] Admin credit operations work correctly
- [x] Backward compatibility maintained

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 06:11:39 +00:00
Zamil Majdy
e62a56e8ba fix(backend/util): rewrite SafeJson to prevent Invalid \escape errors (#11187)
## Summary

Fixes the `Invalid \escape` error occurring in
`/upsert_execution_output` endpoint by completely rewriting the SafeJson
implementation.

## Problem

- Error: `POST /upsert_execution_output failed: Invalid \escape: line 1
column 36404 (char 36403)`
- Caused by data containing literal backslash-u sequences (e.g.,
`\u0000` as text, not actual null characters)
- Previous implementation tried to remove problematic escape sequences
from JSON strings
- This created invalid JSON when it removed `\\u0000` and left invalid
sequences like `\w`

## Solution

Completely rewrote SafeJson to work on Python data structures instead of
JSON strings:

1. **Direct data sanitization**: Recursively walks through dicts, lists,
and tuples to remove control characters directly from strings
2. **No JSON string manipulation**: Avoids all escape sequence parsing
issues
3. **More efficient**: Eliminates the serialize → sanitize → deserialize
cycle
4. **Preserves valid content**: Backslashes, paths, and literal text are
correctly preserved

## Changes

- Removed `POSTGRES_JSON_ESCAPES` regex (no longer needed)
- Added `_sanitize_value()` helper function for recursive sanitization
- Simplified `SafeJson()` to convert Pydantic models and sanitize data
structures
- Added `import json  # noqa: F401` for backwards compatibility

## Testing

- Verified the fix resolves the `Invalid \escape` error
- All existing SafeJson unit tests pass
- Problematic data with literal escape sequences no longer causes errors
- Code formatted with `poetry run format`

## Technical Details

**Before (JSON string approach):**
```python
# Serialize to JSON string
json_string = dumps(data)
# Remove escape sequences from string (BREAKS!)
sanitized = regex.sub("", json_string)
# Parse back (FAILS with Invalid \escape)
return Json(json.loads(sanitized))
```

**After (data structure approach):**
```python
# Convert Pydantic to dict
data = model.model_dump() if isinstance(data, BaseModel) else data
# Recursively sanitize strings in data structure
sanitized = _sanitize_value(data)
# Return as Json (no parsing needed)
return Json(sanitized)
```
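
A plausible shape for `_sanitize_value` as described (the actual function may differ in detail):

```python
import re

# PostgreSQL-incompatible control characters; \t, \n, \r are kept.
_CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")

def _sanitize_value(value):
    if isinstance(value, str):
        return _CONTROL_CHARS.sub("", value)
    if isinstance(value, dict):
        return {_sanitize_value(k): _sanitize_value(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [_sanitize_value(v) for v in value]
    return value  # numbers, bools, None pass through unchanged
```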

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 05:56:08 +00:00
Abhimanyu Yadav
f3f9a60157 feat(frontend): add extra info in custom node in new builder (#11172)
Currently, we don’t add category and cost information to custom nodes in
the new builder. This PR adds that information, so nodes render
correctly and costs are displayed accurately based on the selected
discriminator value.

<img width="441" height="781" alt="Screenshot 2025-10-15 at 2 43 33 PM"
src="https://github.com/user-attachments/assets/8199cfa7-4353-4de2-8c15-b68aa86e458c"
/>


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All information is displayed correctly.
- [x] I’ve tried changing the discriminator value and we’re getting the
correct cost for the selected value.
2025-10-17 04:35:22 +00:00
Swifty
3ed1c93ec0 Merge branch 'dev' into swiftyos/secrt-1706-improve-store-search 2025-10-16 15:10:01 +02:00
Swifty
773f545cfd update existing rows when migration is ran 2025-10-16 13:38:01 +02:00
Swifty
84ad4a9f95 updated migration and query 2025-10-16 13:06:47 +02:00
Swifty
8610118ddc ai sucks - fixing 2025-10-16 12:14:26 +02:00
Bently
9469b9e2eb feat(platform/backend): Add Claude Haiku 4.5 model support (#11179)
### Changes 🏗️

- **Added Claude Haiku 4.5 model support** (`claude-haiku-4-5-20251001`)
- Added model to `LlmModel` enum in
`autogpt_platform/backend/backend/blocks/llm.py`
- Configured model metadata with 200k context window and 64k max output
tokens
- Set pricing to 4 credits per million tokens in
`backend/data/block_cost_config.py`
  
- **Classic Forge Integration**
- Added `CLAUDE4_5_HAIKU_v1` to Anthropic provider in
`classic/forge/forge/llm/providers/anthropic.py`
- Configured with $1/1M prompt tokens and $5/1M completion tokens
pricing
  - Enabled function call API support

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  
  **Test Plan:**
- [x] Verify Claude Haiku 4.5 model appears in the LLM block model
selection dropdown
- [x] Test basic text generation using Claude Haiku 4.5 in an agent
workflow

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Configuration changes</summary>

  - No environment variable changes required
  - No docker-compose changes needed
- Model configuration is handled through existing Anthropic API
integration
</details>




https://github.com/user-attachments/assets/bbc42c47-0e7c-4772-852e-55aa91f4d253

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Bently <Bentlybro@users.noreply.github.com>
2025-10-16 10:11:38 +00:00
Swifty
ebb4ebb025 include partial types in second place
Swifty
cb532e1c4d update docker file to include partial types 2025-10-16 12:08:04 +02:00
Zamil Majdy
b7ae2c2fd2 fix(backend): move DatabaseError to backend.util.exceptions for better layer separation (#11177)
## Summary

Move DatabaseError from store-specific exceptions to generic backend
exceptions for proper layer separation, while also fixing store
exception inheritance to ensure proper HTTP status codes.

## Problem

1. **Poor Layer Separation**: DatabaseError was defined in
store-specific exceptions but represents infrastructure concerns that
affect the entire backend
2. **Incorrect HTTP Status Codes**: Store exceptions inherited from
Exception instead of ValueError, causing 500 responses for client errors
3. **Reusability Issues**: Other backend modules couldn't use
DatabaseError for DB operations
4. **Blanket Catch Issues**: Store-specific catches were affecting
generic database operations

## Solution

### Move DatabaseError to Generic Location
- Move from backend.server.v2.store.exceptions to
backend.util.exceptions
- Update all 23 references in backend/server/v2/store/db.py to use new
location
- Remove from StoreError inheritance hierarchy

### Fix Complete Store Exception Hierarchy
- MediaUploadError: Changed from Exception to ValueError inheritance
(client errors → 400)
- StoreError: Changed from Exception to ValueError inheritance (business
logic errors → 400)
- Store NotFound exceptions: Changed to inherit from NotFoundError (→
404)
- DatabaseError: Now properly inherits from Exception (infrastructure
errors → 500)

## Benefits

### Proper Layer Separation
- Database errors are infrastructure concerns, not store-specific
business logic
- Store exceptions focus on business validation and client errors  
- Clean separation between infrastructure and business logic layers

### Correct HTTP Status Codes
- DatabaseError: 500 (server infrastructure errors)
- Store NotFound errors: 404 (via existing NotFoundError handler)
- Store validation errors: 400 (via existing ValueError handler)
- Media upload errors: 400 (client validation errors)

### Architectural Improvements
- DatabaseError now reusable across entire backend
- Eliminates blanket catch issues affecting generic DB operations
- All store exceptions use global exception handlers properly
- Future store exceptions automatically get proper status codes

## Files Changed

- **backend/util/exceptions.py**: Add DatabaseError class
- **backend/server/v2/store/exceptions.py**: Remove DatabaseError, fix
inheritance hierarchy
- **backend/server/v2/store/db.py**: Update all DatabaseError references
to new location

## Result

- **No more stack trace spam**: Expected business logic errors handled properly
- **Proper HTTP semantics**: 500 for infrastructure, 400/404 for client errors
- **Better architecture**: Clean layer separation and reusable components
- **Fixes original issue**: AgentNotFoundError now returns 404 instead of 500

This addresses the logging issue mentioned by @zamilmajdy while also
implementing the architectural improvements suggested by @Pwuts.
2025-10-16 09:51:58 +00:00
Swifty
794aee25ab add full text search 2025-10-16 11:49:36 +02:00
Abhimanyu Yadav
8b995c2394 feat(frontend): add saving ability in new builder (#11148)
This PR introduces saving functionality to the new builder interface,
allowing users to save and update agent flows. The implementation
includes both UI components and backend integration for persistent
storage of agent configurations.



https://github.com/user-attachments/assets/95ee46de-2373-4484-9f34-5f09aa071c5e


### Key Features Added:

#### 1. **Save Control Component** (`NewSaveControl`)
- Added a new save control popover in the control panel with form inputs
for agent name, description, and version display
- Integrated with the new control panel as a primary action button with
a floppy disk icon
- Supports keyboard shortcuts (Ctrl+S / Cmd+S) for quick saving

#### 2. **Graph Persistence Logic**
- Implemented `useNewSaveControl` hook to handle:
  - Creating new graphs via `usePostV1CreateNewGraph`
  - Updating existing graphs via `usePutV1UpdateGraphVersion`
- Intelligent comparison to prevent unnecessary saves when no changes
are made
  - URL parameter management for flowID and flowVersion tracking

#### 3. **Loading State Management**
- Added `GraphLoadingBox` component to display a loading indicator while
graphs are being fetched
- Enhanced `useFlow` hook with loading state tracking
(`isFlowContentLoading`)
- Improved UX with clear visual feedback during graph operations

#### 4. **Component Reorganization**
- Refactored components from `NewBlockMenu` to `NewControlPanel`
directory structure for better organization:
- Moved all block menu related components under
`NewControlPanel/NewBlockMenu/`
- Separated save control into its own module
(`NewControlPanel/NewSaveControl/`)
  - Improved modularity and separation of concerns

#### 5. **State Management Enhancements**
- Added `controlPanelStore` for managing control panel states (e.g.,
save popover visibility)
- Enhanced `nodeStore` with `getBackendNodes()` method for retrieving
nodes in backend format
- Added `getBackendLinks()` to `edgeStore` for consistent link
formatting

### Technical Improvements:

- **Graph Comparison Logic**: Implemented `graphsEquivalent()` helper to
deeply compare saved and current graph states, preventing redundant
saves
- **Form Validation**: Used Zod schema validation for save form inputs
with proper constraints
- **Error Handling**: Comprehensive error handling with user-friendly
toast notifications
- **Query Invalidation**: Proper cache invalidation after successful
saves to ensure data consistency

### UI/UX Enhancements:

- Clean, modern save dialog with clear labeling and placeholder text
- Real-time version display showing the current graph version
- Disabled state for save button during operations to prevent double
submissions
- Toast notifications for success and error states
- Higher z-index for GraphLoadingBox to ensure visibility over other
elements

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Saving is working perfectly. All nodes, links, their positions,
and hardcoded data are saved correctly.
  - [x] If there are no changes, the user cannot save the graph.
2025-10-16 08:06:18 +00:00
Zamil Majdy
12b1067017 fix(backend/store): improve store exception hierarchy for proper HTTP status codes (#11176)
## Summary

Fix store exception hierarchy to prevent ERROR level stack trace spam
for expected business logic errors and ensure proper HTTP status codes.

## Problem

The original error from production logs showed that AgentNotFoundError
for non-existent agents like autogpt/domain-drop-catcher was:
- Returning 500 status codes instead of 404 
- Generating ERROR level stack traces in logs for expected not found
scenarios
- Bypassing global exception handlers due to improper inheritance

## Root Cause

Store exceptions inherited from Exception instead of ValueError, causing
them to bypass the global ValueError handler (400) and fall through to
the generic Exception handler (500) with full stack traces.

## Solution

Create proper exception hierarchy for ALL store-related errors by
making:
- MediaUploadError inherit from ValueError instead of Exception
- StoreError inherit from ValueError instead of Exception  
- Store NotFound exceptions inherit from NotFoundError (which inherits
from ValueError)

## Changes Made

1. MediaUploadError: Changed from Exception to ValueError inheritance
2. StoreError: Changed from Exception to ValueError inheritance  
3. Store NotFound exceptions: Changed to inherit from NotFoundError
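
In code terms, the new hierarchy looks like this (a sketch; only the inheritance relationships are taken from this PR):

```python
class NotFoundError(ValueError):
    """Assumed existing base that the global handler maps to 404."""

class StoreError(ValueError):
    """Store business logic errors -> handled as 400."""

class MediaUploadError(ValueError):
    """Client upload validation errors -> 400."""

class AgentNotFoundError(NotFoundError):
    """Missing store agent -> 404 via the NotFoundError handler."""
```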

## Benefits

- Correct HTTP status codes: Not found errors return 404, validation
errors return 400
- No more 500 stack trace spam for expected business logic errors
- Clean consistent error handling using existing global handlers
- Future-proof: Any new store exceptions automatically get proper status
codes

## Result

- AgentNotFoundError for autogpt/domain-drop-catcher now returns 404
instead of 500
- InvalidFileTypeError, VirusDetectedError, etc. now return 400 instead
of 500
- No ERROR level stack traces for expected client errors
- Proper HTTP semantics throughout the store API
2025-10-16 04:36:49 +00:00
Zamil Majdy
ba53cb78dc fix(backend/util): comprehensive SafeJson sanitization to prevent PostgreSQL null character errors (#11174)
## Summary
Fix critical SafeJson function to properly sanitize JSON-encoded Unicode
escape sequences that were causing PostgreSQL 22P05 errors when null
characters in web scraped content were stored as "\u0000" in the
database.

## Root Cause Analysis
The existing SafeJson function in backend/util/json.py:
1. Only removed raw control characters (\x00-\x08, etc.) using
POSTGRES_CONTROL_CHARS regex
2. Failed to handle JSON-encoded Unicode escape sequences (\u0000,
\u0001, etc.)
3. When web scraping returned content with null bytes, these were
JSON-encoded as "\u0000" strings
4. PostgreSQL rejected these Unicode escape sequences, causing 22P05
errors

## Changes Made

### Enhanced SafeJson Function (backend/util/json.py:147-153)
- **Add POSTGRES_JSON_ESCAPES regex**: Comprehensive pattern targeting
all PostgreSQL-incompatible Unicode and single-char JSON escape
sequences
- **Unicode escapes**: \u0000-\u0008, \u000B-\u000C, \u000E-\u001F,
\u007F (preserves \u0009=tab, \u000A=newline, \u000D=carriage return)
- **Single-char escapes**: \b (backspace), \f (form feed) with negative
lookbehind/lookahead to preserve file paths like "C:\\file.txt"
- **Two-pass sanitization**: Remove JSON escape sequences first, then
raw characters as fallback

### Comprehensive Test Coverage (backend/util/test_json.py:219-414)
Added 11 new test methods covering:
- **Control character sanitization**: Verify dangerous characters (\x00,
\x07, \x0C, etc.) are removed while preserving safe whitespace (\t, \n,
\r)
- **Web scraping content**: Simulate SearchTheWebBlock scenarios with
null bytes and control characters
- **Code preservation**: Ensure legitimate file paths, JSON strings,
regex patterns, and programming code are preserved
- **Unicode escape handling**: Test both \u0000-style and \b/\f-style
escape sequences
- **Edge case protection**: Prevent over-matching of legitimate
sequences like "mybfile.txt" or "\\u0040"
- **Mixed content scenarios**: Verify only problematic sequences are
removed while preserving legitimate content

## Validation Results
- All 24 SafeJson tests pass, including 11 new comprehensive sanitization tests
- Control characters properly removed: \x00, \x01, \x08, \x0C, \x1F, \x7F
- Safe characters preserved: \t (tab), \n (newline), \r (carriage return)
- File paths preserved: "C:\\Users\\file.txt", "\\\\server\\share"
- Programming code preserved: regex patterns, JSON strings, SQL escapes
- Unicode escapes sanitized: \u0000 → removed, \u0048 ("H") → preserved if valid
- No false positives: legitimate sequences not accidentally removed
- `poetry run format` succeeds without errors

## Impact
- **Prevents PostgreSQL 22P05 errors**: No more null character database
rejections from web scraping
- **Maintains data integrity**: Legitimate content preserved while
dangerous characters removed
- **Comprehensive protection**: Handles both raw bytes and JSON-encoded
escape sequences
- **Web scraping reliability**: SearchTheWebBlock and similar blocks now
store content safely
- **Backward compatibility**: Existing SafeJson behavior unchanged for
legitimate content

## Test Plan
- [x] All existing SafeJson tests pass (24/24)
- [x] New comprehensive sanitization tests pass (11/11)
- [x] Control character removal verified
- [x] Legitimate content preservation verified
- [x] Web scraping scenarios tested
- [x] Code formatting and type checking passes

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-15 21:25:30 +00:00
Reinier van der Leer
f9778cc87e fix(blocks): Unhide "Add to Dictionary" block's dictionary input (#11175)
The `dictionary` input on the Add to Dictionary block is hidden, even
though it is the main input.

### Changes 🏗️

- Mark `dictionary` explicitly as not advanced (so not hidden by
default)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Trivial change, no test needed
2025-10-15 15:04:56 +00:00
Nicholas Tindle
b230b1b5cf feat(backend): Add Sentry user and tag tracking to node execution (#11170)
Integrates Sentry SDK to set user and contextual tags during node
execution for improved error tracking and user count analytics. Ensures
Sentry context is properly set and restored, and exceptions are captured
with relevant context before scope restoration.
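
As a rough sketch of this pattern (the tag names and `node.execute()` call are illustrative assumptions, not the actual implementation):

```python
import sentry_sdk

def run_node_with_sentry(node, user_id: str, graph_id: str):
    # push_scope gives an isolated scope that is restored automatically
    # when the block exits, so other errors aren't contaminated.
    with sentry_sdk.push_scope() as scope:
        scope.set_user({"id": user_id})
        scope.set_tag("graph_id", graph_id)
        scope.set_tag("node_id", node.id)
        try:
            return node.execute()
        except Exception as exc:
            # Capture while the user/tag context is still on the scope.
            sentry_sdk.capture_exception(exc)
            raise
```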

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Adds sentry tracking to block failures
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test to make sure the userid and block details show up in Sentry
  - [x] make sure other errors aren't contaminated 


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- New Features
- Added conditional support for feature flags when configured, enabling
targeted rollouts and experiments without impacting unconfigured
environments.

- Chores
- Enhanced error monitoring with richer contextual data during node
execution to improve stability and diagnostics.
- Updated metrics initialization to dynamically include feature flag
integrations when available, without altering behavior for unconfigured
setups.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-10-15 14:33:08 +00:00
Reinier van der Leer
1925e77733 feat(backend): Include default input values in graph export (#11173)
Since #10323, one-time graph inputs are no longer stored on the input
nodes (#9139), so we can reasonably assume that the default value set by
the graph creator will be safe to export.

### Changes 🏗️

- Don't strip the default value from input nodes in
`NodeModel.stripped_for_export(..)`, except for inputs marked as
`secret`
- Update and expand tests for graph export secrets stripping mechanism
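
A minimal sketch of the new rule, with assumed input shapes (the real logic lives in `NodeModel.stripped_for_export(..)`):

```python
def stripped_for_export(input_default: dict, input_schema: dict) -> dict:
    """Keep creator-set defaults in the export, dropping only secret inputs."""
    return {
        name: value
        for name, value in input_default.items()
        if not input_schema.get(name, {}).get("secret", False)
    }
```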

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Expanded tests pass
- Relatively simple change with good test coverage, no manual test
needed
2025-10-15 14:04:44 +00:00
Copilot
9bc9b53b99 fix(backend): Add channel ID support to SendDiscordMessageBlock for consistency with other Discord blocks (#11055)
## Problem

The `SendDiscordMessageBlock` only accepted channel names, while other
Discord blocks like `SendDiscordFileBlock` and `SendDiscordEmbedBlock`
accept both channel IDs and channel names. This inconsistency made it
difficult to use channel IDs with the message sending block, which is
often more reliable and direct than name-based lookup.

## Solution

Updated `SendDiscordMessageBlock` to accept both channel IDs and channel
names through the `channel_name` field, matching the implementation
pattern used in other Discord blocks.

### Changes Made

1. **Enhanced channel resolution logic** to try parsing the input as a
channel ID first, then fall back to name-based search:
   ```python
   # Try to parse as channel ID first
   try:
       channel_id = int(channel_name)
       channel = client.get_channel(channel_id)
   except ValueError:
       # Not an ID, treat as channel name
       # ... search guilds for matching channel name
   ```

2. **Updated field descriptions** to clarify the dual functionality:
- `channel_name`: Now describes that it accepts "Channel ID or channel
name"
   - `server_name`: Clarified as "only needed if using channel name"

3. **Added type checking** to ensure the resolved channel can send
messages before attempting to send

4. **Updated documentation** to reflect the new capability

## Backward Compatibility

**Fully backward compatible**: The field name remains `channel_name`
(not renamed), and all existing workflows using channel names will
continue to work exactly as before.

**New capability**: Users can now also provide channel IDs (e.g.,
`"123456789012345678"`) for more direct channel targeting.

## Testing

- All existing tests pass, including `SendDiscordMessageBlock` and all
other Discord block tests
- Implementation verified to match the pattern used in
`SendDiscordFileBlock` and `SendDiscordEmbedBlock`
- Code passes all linting, formatting, and type checking

Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10909

<!-- START COPILOT CODING AGENT SUFFIX -->



<details>

<summary>Original prompt</summary>

> Issue Title: SendDiscordMessage needs to take a channel id as an
option under channelname the same as the other discord blocks
> Issue Description: with how we can process the other discord blocks we
should do the same here with the identifiers being allowed to be a
channel name or id. we can't rename the field though or that will break
backwards compatibility
> Fixes
https://linear.app/autogpt/issue/OPEN-2701/senddiscordmessage-needs-to-take-a-channel-id-as-an-option-under
> 
> 
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
> 
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
> 
> Comment by User 055a3053-5ab6-449a-bcfa-990768594185:
> the ones with boxes around them need confirmed for lables but yeah its
related but not dupe
> 
> Comment by User 264d7bf4-db2a-46fa-a880-7d67b58679e6:
> this might be a duplicate since there is a related ticket but not sure
> 
> Comment by User :
> This comment thread is synced to a corresponding [GitHub
issue](https://github.com/Significant-Gravitas/AutoGPT/issues/10909).
All replies are displayed in both locations.
> 
> 


</details>



<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* New Features
* Send Discord Message block now accepts a channel ID in addition to
channel name.
  * Server name is only required when using a channel name.
* Improved channel detection and validation with clearer errors if the
channel isn’t found.

* Documentation
* Updated block documentation to reflect support for channel ID or name
and clarify when server name is needed.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Bently <Github@bentlybro.com>
2025-10-15 13:04:53 +00:00
Toran Bruce Richards
adfa75eca8 feat(blocks): Add references output pin to Fact Checker block (#11166)
Closes #11163

## Summary
Expanded the Fact Checker block to yield its references list from the
Jina AI API response.

## Changes 🏗️
- Added `Reference` TypedDict to define the structure of reference
objects
- Added `references` field to the Output schema
- Modified the `run` method to extract and yield references from the API
response
- Added fallback to empty list if references are not present
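
A sketch of what the `Reference` shape could look like; the field names are assumptions inferred from the summary below, not the exact Jina AI response schema:

```python
from typing import TypedDict

class Reference(TypedDict):
    url: str            # source URL for the citation
    keyQuote: str       # the key quote taken from that source
    isSupportive: bool  # whether the source supports the claim

def extract_references(api_response: dict) -> list[Reference]:
    # Fall back to an empty list when the API omits references.
    return api_response.get("references", [])
```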

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that the Fact Checker block schema includes the new
references field
- [x] Confirmed that references are properly extracted from the API
response when present
- [x] Tested the fallback behavior when references are not in the API
response
- [x] Ensured backward compatibility - existing functionality remains
unchanged
- [x] Verified the Reference TypedDict matches the expected API response
structure

Generated with [Claude Code](https://claude.ai/code)

## Summary by CodeRabbit

* **New Features**
* Fact-checking results now include a references list to support
verification.
* Each reference provides a URL, a key quote, and an indicator showing
whether it supports the claim.
* References are presented alongside factuality, result, and reasoning
when available; otherwise, an empty list is returned.
* Enhances transparency and traceability of fact-check outcomes without
altering existing result fields.

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
2025-10-15 10:19:43 +00:00
seer-by-sentry[bot]
0f19d01483 fix(frontend): Improve error handling for invalid agent files (#11165)
### Changes 🏗️

<img width="672" height="761" alt="Screenshot 2025-10-14 at 16 12 50"
src="https://github.com/user-attachments/assets/9e664ade-10fe-4c09-af10-b26d10dca360"
/>


Fixes
[BUILDER-3YG](https://sentry.io/organizations/significant-gravitas/issues/6942679655/).
The issue was that: User uploaded an incompatible external agent persona
file lacking required flow graph keys (`nodes`, `links`).

- Improves error handling when an invalid agent file is uploaded.
- Provides a more user-friendly error message indicating the file must
be a valid agent.json file exported from the AutoGPT platform.
- Clears the invalid file from the form and resets the agent object to
null.

This fix was generated by Seer in Sentry, triggered by Toran Bruce
Richards. 👁️ Run ID: 1943626

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6942679655/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test that uploading an invalid agent file (e.g., missing `nodes`
or `links`) triggers the improved error handling and displays the
user-friendly error message.
- [x] Verify that the invalid file is cleared from the form after the
error, and the agent object is reset to null.

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
2025-10-15 09:55:00 +00:00
Abhimanyu Yadav
112c39f6a6 fix(frontend): fix auto select credential mechanism in new builder (#11171)
We’re currently facing two problems with credentials:

1. When the discriminator input value changes, the credential field in the
form data should be cleaned up completely.
2. When a different discriminator is selected and that provider already has
credentials, the latest one should be auto-selected.

This PR addresses both issues.

### Changes 🏗️
- Updated CredentialField to utilize a new setCredential function for
managing selected credentials.
- Implemented logic to auto-select the latest credential when none is
selected and clear the credential if the provider changes.
- Improved SelectCredential component with a defaultValue prop and
adjusted styling for better UI consistency.
- Removed unnecessary console logs from helper functions to clean up the
code.

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Credential selection works perfectly with both the discriminator
and normal addition.
  - [x] Auto-select credential is also working.
2025-10-15 08:39:05 +00:00
Toran Bruce Richards
22946f4617 feat(blocks): add dedicated Perplexity block (#11164)
Fixes #11162

## Summary

Implements a new Perplexity block that allows users to query
Perplexity's sonar models via OpenRouter with support for extracting URL
citations and annotations.

## Changes

- Add new block for Perplexity sonar models (sonar, sonar-pro,
sonar-deep-research)
- Support model selection for all three Perplexity models
- Implement annotations output pin for URL citations and source
references
- Integrate with OpenRouter API for accessing Perplexity models
- Follow existing block patterns from AI text generator block

## Test Plan

- Block successfully instantiates
- Block is properly loaded by the dynamic loading system
- Output fields include response and annotations as required

Generated with [Claude Code](https://claude.ai/code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- New Features
- Added a Perplexity integration block to query Sonar models via
OpenRouter.
- Supports multiple model options, optional system prompt, and
adjustable max tokens.
- Returns concise responses with citation-style annotations extracted
from the model output.
  - Provides clear error messages when requests fail.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
2025-10-15 08:34:37 +00:00
Ubbe
938834ac83 dx(frontend): enable Next.js sourcemaps for Sentry (#11161)
## Changes 🏗️

Next.js Sourcemaps aren't working on production, followed:

- https://docs.sentry.io/platforms/javascript/guides/nextjs/sourcemaps/
- https://docs.sentry.io/organization/integrations/deployment/vercel/

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] We will see once deployed ...

### For configuration changes:

None
2025-10-15 12:47:14 +04:00
Zamil Majdy
934cb3a9c7 feat(backend): Make execution limit per user per graph and reduce to 25 (#11169)
## Summary
- Changed max_concurrent_graph_executions_per_user from 50 to 25
concurrent executions
- Updated the limit to be per user per graph instead of globally per
user
- Users can now run different graphs concurrently without being limited
by executions of other graphs
- Enhanced database query to filter by both user_id and graph_id

## Changes Made
- **Settings**: Reduced default limit from 50 to 25 and updated
description to clarify per-graph scope
- **Database Layer**: Modified `get_graph_executions_count` to accept
optional `graph_id` parameter
- **Executor Manager**: Updated rate limiting logic to check
per-user-per-graph instead of per-user globally
- **Logging**: Enhanced warning messages to include graph_id context
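
A hedged sketch of the pre-flight check this implies; the exact signature of `get_graph_executions_count` is an assumption based on the description above:

```python
RUNNING_STATUSES = ["QUEUED", "RUNNING"]
PER_GRAPH_LIMIT = 25  # default from settings

def within_execution_limit(db_client, user_id: str, graph_id: str) -> bool:
    # Count only this user's active executions *of this graph*; executions
    # of other graphs no longer count against the limit.
    count = db_client.get_graph_executions_count(
        user_id=user_id,
        graph_id=graph_id,
        statuses=RUNNING_STATUSES,
    )
    return count < PER_GRAPH_LIMIT
```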

## Test plan
- [ ] Verify that users can run up to 25 concurrent executions of the
same graph
- [ ] Verify that users can run different graphs concurrently without
interference
- [ ] Test rate limiting behavior when limit is exceeded for a specific
graph
- [ ] Confirm logging shows correct graph_id context in rate limit
messages

## Impact
This change improves the user experience by allowing concurrent
execution of different graphs while still preventing resource exhaustion
from running too many instances of the same graph.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-15 00:02:55 +00:00
seer-by-sentry[bot]
7b8499ec69 feat(backend): Prevent duplicate slugs for store submissions (#11155)
<!-- Clearly explain the need for these changes: -->
This PR prevents users from creating multiple store submissions with the
same slug, which could lead to confusion and potential conflicts in the
marketplace. Each user's submissions should have unique slugs to ensure
proper identification and navigation.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- **Backend**: Added validation to check for existing slugs before
creating new store submissions in `backend/server/v2/store/db.py`
- **Backend**: Introduced new `SlugAlreadyInUseError` exception in
`backend/server/v2/store/exceptions.py` for clearer error handling
- **Backend**: Updated store routes to return HTTP 409 Conflict when
slug is already in use in `backend/server/v2/store/routes.py`
- **Backend**: Cleaned up test file in
`backend/server/v2/store/db_test.py`
- **Frontend**: Enhanced error handling in the publish agent modal to
display specific error messages to users in
`frontend/src/components/contextual/PublishAgentModal/components/AgentInfoStep/useAgentInfoStep.ts`
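
As a rough illustration of the backend check (the `db` helper names are assumptions; the real logic lives in `backend/server/v2/store/db.py`, and the route layer maps the error to an HTTP 409 Conflict):

```python
class SlugAlreadyInUseError(Exception):
    """Raised when the user already has a store submission with this slug."""

async def create_store_submission(db, user_id: str, slug: str, **data):
    # Reject the submission early if this user already owns the slug;
    # the same slug remains allowed for other users.
    existing = await db.get_store_submission_by_slug(user_id=user_id, slug=slug)
    if existing is not None:
        raise SlugAlreadyInUseError(f"Slug '{slug}' is already in use")
    return await db.insert_store_submission(user_id=user_id, slug=slug, **data)
```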

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Add a store submission with a specific slug
- [x] Attempt to add another store submission with the same slug for the
same user - Verify a 409 conflict error is returned with appropriate
error message
- [x] Add a store submission with the same slug, but for a different
user - Verify the submission is successful
- [x] Verify frontend displays the specific error message when slug
conflict occurs
  - [x] Existing tests pass without modification

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-14 11:14:00 +00:00
Abhimanyu Yadav
63076a67e1 fix(frontend): fix client side error handling in custom mutator (#11160)
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11159

Currently, we’re not throwing errors for client-side requests in the
custom mutator, so client-side request errors are silently ignored: when an
error occurs, it is returned as a normal response object. That’s why the
onError callback on React Query mutations and hasError on queries aren’t
working. This PR fixes that by throwing the client-side error.

### Changes 🏗️
- We’re throwing errors for both server-side and client-side requests.  
- Why server-side? So the client cache isn’t hydrated with the error.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All end-to-end functionality is working properly.
- [x] I’ve manually checked all the pages and they are all functioning
correctly.
2025-10-14 08:41:57 +00:00
Abhimanyu Yadav
41260a7b4a fix(frontend): fix publish agent behavior when user is logged out (#11159)
When a user clicks the “Become a Creator” button on the marketplace
page, we send an unauthorised request to the server to get a list of
agents. In this PR, I’ve fixed this by checking if the user is logged
in. If they’re not, I’ll show them a UI to log in or create an account.
 
<img width="967" height="605" alt="Screenshot 2025-10-14 at 12 04 52 PM"
src="https://github.com/user-attachments/assets/95079d9c-e6ef-4d75-9422-ce4fb138e584"
/>

### Changes
- Modify the publish agent test to detect the correct text even when the
user is logged out.
- Use Supabase helpers to check if the user is logged in. If not,
display the appropriate UI.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] The login UI is correctly displayed when the user is logged out.
- [x] The login UI is also correctly displayed when the user is logged
in.
  - [x] The login and signup buttons are working perfectly.
2025-10-14 08:41:49 +00:00
Ubbe
5f2d4643f8 feat(frontend): dynamic search terms (#11156)
## Changes 🏗️

<img width="800" height="664" alt="Screenshot 2025-10-14 at 14 09 54"
src="https://github.com/user-attachments/assets/73f6277a-6bef-40f9-b208-31aba0cfc69f"
/>

<img width="600" height="773" alt="Screenshot 2025-10-14 at 14 10 05"
src="https://github.com/user-attachments/assets/c88cb22f-1597-4216-9688-09c19030df89"
/>

Allows managing on the fly which search terms appear on the Marketplace
page via the Launch Darkly dashboard. There is a new flag configured there:
`marketplace-search-terms`:
- **enabled** → `["Foo", "Bar"]` → these terms appear
- **disabled** → `["Marketing", "SEO", "Content Creation",
"Automation", "Fun"]` → the default ones show

### Small fix

Fix the following browser console warning about `onLoadingComplete`
being deprecated...
<img width="600" height="231" alt="Screenshot 2025-10-14 at 13 55 45"
src="https://github.com/user-attachments/assets/1b26e228-0902-4554-9f8c-4839f8d4ed83"
/>


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Ran the flag locally and verified it shows the terms set on Launch
Darkly

### For configuration changes:

Launch Darkly new flag needs to be configured on all environments.
2025-10-14 06:43:56 +01:00
Krzysztof Czerwinski
9c8652b273 feat(backend): Whitelist Onboarding Agents (#11149)
Some agents aren't suitable for onboarding. This adds a per-store-agent
setting to allow them for onboarding. If no agents are allowed, it falls
back to the former search.

### Changes 🏗️

- Add `useForOnboarding` to `StoreListing` model and `StoreAgent` view
(with migration)
- Remove filtering of agents with empty input schema or credentials 

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Only allowed agents are displayed
- [x] Fallback to the old system in case there aren't enough allowed
agents
2025-10-13 15:05:22 +00:00
Swifty
58ef687a54 fix(platform): Disable logging store terms (#11147)
There is concern that the write load on the database may derail the
performance optimisations.
This hotfix comments out the code that adds the search terms to the db,
so we can discuss how best to do this in a way that won't bring down the
db.

### Changes 🏗️

- commented out the code to log store terms to the db

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] check search still works in dev
2025-10-13 13:17:04 +00:00
Ubbe
c7dcbc64ec fix(frontend): ask for credentials in onboarding agent run (#11146)
## Changes 🏗️

<img width="800" height="852" alt="Screenshot_2025-10-13_at_19 20 47"
src="https://github.com/user-attachments/assets/2fc150b9-1053-4e25-9018-24bcc2d93b43"
/>

<img width="800" height="669" alt="Screenshot 2025-10-13 at 19 23 41"
src="https://github.com/user-attachments/assets/9078b04e-0f65-42f3-ac4a-d2f3daa91215"
/>

- Onboarding “Run” step now renders required credentials (e.g., Google
OAuth) and includes them in execution.
- Run button remains disabled until required inputs and credentials are
provided.
- Logic extracted and strongly typed; removed any usage.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan ( _once merged
in dev..._ )
  - [ ] Select an onboarding agent that requires Google OAuth:
  - [ ] Credentials selector appears.
  - [ ] After selecting/signing in, “Run agent” enables.
  - [ ] Run succeeds and navigates to the next step.

### For configuration changes:

None
2025-10-13 12:51:45 +00:00
Ubbe
99ac206272 fix(frontend): handle websocket disconnect issue (#11144)
## Changes 🏗️

I found that if I logged out while an agent was running, WebSockets would
sometimes keep connections open but fail to connect (given there is no
token anymore) and cause strange behavior down the line on the login
screen.

Two root causes surfaced after inspecting the browser logs 🧐
- WebSocket connections were attempted with an empty token right after
logout, yielding `wss://.../ws?token=` and repeated `1006` connection-refused
loops.
- During logout, sockets in the `CONNECTING` state weren’t being closed, so
the browser kept trying to finish the handshake, and connections were
reattempted shortly after failing.

Fixed by:
- Guard `connectWebSocket()` to no-op if a logout/disconnect intent is
set, and to skip connecting when no token is available.
- Treat `CONNECTING` sockets as closeable in `disconnectWebSocket()` and
clear `wsConnecting` to avoid stale pending Promises
- Left existing heartbeat/reconnect logic intact, but it now won’t run
when we’re logging out or when we can’t get a token.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login and run an agent that takes long to run
  - [x] Logout
  - [x] Check the browser console and you don't see any socket errors
  - [x] The login screen behaves ok   

### For configuration changes:

Noop
2025-10-13 12:10:16 +00:00
Abhimanyu Yadav
f67d78df3e feat(frontend): Implement discriminator logic in the new builder’s credential system. (#11124)
- Depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11107
and https://github.com/Significant-Gravitas/AutoGPT/pull/11122

In this PR, I’ve added support for discrimination. Now, users can choose
a credential type based on other input values.


https://github.com/user-attachments/assets/6cedc59b-ec84-4ae2-bb06-59d891916847

### Changes 🏗️
- Updated CredentialsField to utilize credentialProvider from schema.
- Refactored helper functions to filter credentials based on the
selected provider.
- Modified APIKeyCredentialsModal and PasswordCredentialsModal to accept
provider as a prop.
- Improved FieldTemplate to dynamically display the correct credential
provider.
- Added getCredentialProviderFromSchema function to manage
multi-provider scenarios.

### Checklist 📋

#### For code changes:
- [x] Credential input is correctly updating based on other input
values.
- [x] Credential can be added correctly.
2025-10-13 12:08:10 +00:00
Swifty
e32c509ccc feat(backend): Simplify caching to just store routes (#11140)
### Problem
Limits caching to just the main marketplace routes

### Changes 🏗️

- **Simplified store cache implementation** in
`backend/server/v2/store/cache.py`
  - Streamlined caching logic for better maintainability
  - Reduced complexity while maintaining performance
  
- **Added cache invalidation on store updates**
  - Implemented cache clearing when new agents are added to the store
- Added invalidation logic in admin store routes
(`admin_store_routes.py`)
  - Ensures all pods reflect the latest store state after modifications

- **Updated store database operations** in
`backend/server/v2/store/db.py`
  - Modified to work with the new cache structure
  
- **Added cache deletion tests** (`test_cache_delete.py`)
  - Validates cache invalidation works correctly
  - Ensures cache consistency across operations

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify store listings are cached correctly
  - [x] Upload a new agent to the store and confirm cache is invalidated
2025-10-13 07:25:59 +00:00
seer-by-sentry[bot]
20acd8b51d fix(backend): Improve Postmark error handling and logging for notification delivery (#11052)
<!-- Clearly explain the need for these changes: -->
Fixes
[AUTOGPT-SERVER-5K6](https://sentry.io/organizations/significant-gravitas/issues/6887660207/).
The issue was that: Batch sending fails due to malformed data (422) and
inactive recipients (406); the 406 error is misclassified as a size
limit failure.

- Implements more robust error handling for Postmark API failures during
notification sending.
- Specifically handles inactive recipients (HTTP 406), malformed data
(HTTP 422), and oversized notifications.
- Adds detailed logging for each error case, including the notification
index and error message.
- Skips individual notifications that fail due to these errors,
preventing the entire batch from failing.
- Improves error handling for ValueErrors during send_templated calls,
specifically addressing oversized notifications.


This fix was generated by Seer in Sentry, triggered by Nicholas Tindle.
👁️ Run ID: 1675950

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6887660207/?seerDrawer=true)

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- Implements more robust error handling for Postmark API failures during
notification sending.
- Specifically handles inactive recipients (HTTP 406), malformed data
(HTTP 422), and oversized notifications.
- Adds detailed logging for each error case, including the notification
index and error message.
- Skips individual notifications that fail due to these errors,
preventing the entire batch from failing.
- Improves error handling for ValueErrors during send_templated calls,
specifically addressing oversized notifications.
- Also disables this in prod to prevent scaling issues until we work out
some of the more critical issues
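
A self-contained sketch of the per-notification handling described above; `PostmarkAPIError` is a stand-in for the real Postmark client exception type:

```python
import logging

logger = logging.getLogger(__name__)

class PostmarkAPIError(Exception):
    """Placeholder for the Postmark client's error type (assumed shape)."""
    def __init__(self, status_code: int, message: str):
        super().__init__(message)
        self.status_code = status_code

def send_batch(client, notifications) -> None:
    # Skip per-recipient failures instead of aborting the whole batch.
    for index, notification in enumerate(notifications):
        try:
            client.send_templated(notification)
        except ValueError as e:
            # e.g. oversized notification payloads
            logger.error("Notification %d skipped (oversized): %s", index, e)
        except PostmarkAPIError as e:
            if e.status_code == 406:  # inactive recipient
                logger.warning("Notification %d skipped: inactive recipient", index)
            elif e.status_code == 422:  # malformed data
                logger.warning("Notification %d skipped: malformed data", index)
            else:
                raise
```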

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test sending notifications with invalid email addresses to ensure
406 errors are handled correctly.
- [x] Test sending notifications with malformed data to ensure 422
errors are handled correctly.
- [x] Test sending oversized notifications to ensure they are skipped
and logged correctly.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- New Features
  - None

- Bug Fixes
- Individual email failures no longer abort a batch; processing
continues after per-recipient errors.
- Specific handling for inactive recipients and malformed messages to
prevent repeated delivery attempts.

- Chores
  - Improved error logging and diagnostics for email delivery scenarios.

- Tests
- Added tests covering email-sending error cases, user-deactivation on
inactive addresses, and batch-continuation behavior.

- Documentation
  - None
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-13 07:16:48 +00:00
Nicholas Tindle
a49c957467 Revert "fix(frontend/builder): Sync frontend node IDs with backend after save" (#11142)
Reverts Significant-Gravitas/AutoGPT#11075
2025-10-13 07:16:02 +00:00
Abhimanyu Yadav
cf6e724e99 feat(platform): load graph on new builder (#11141)
In this PR, I’ve added functionality to fetch a graph based on the
flowID and flowVersion provided in the URL. Once the graph is fetched,
we add the nodes and links using the graph data in a new builder.

<img width="1512" height="982" alt="Screenshot 2025-10-11 at 10 26
07 AM"
src="https://github.com/user-attachments/assets/2f66eb52-77b2-424c-86db-559ea201b44d"
/>


### Changes
- Added `get_specific_blocks` route in `routes.py`.
- Created `get_block_by_id` function in `db.py`.
- Add a new hook `useFlow.ts` to load the graph and populate it in the
flow editor.
- Updated frontend components to reflect changes in block handling.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Able to load the graph correctly.
  - [x] Able to populate it on the builder.
2025-10-11 15:28:37 +00:00
Reinier van der Leer
b67555391d fix(frontend/builder): Sync frontend node IDs with backend after save (#11075)
- Resolves #10980

Fixes unnecessary graph re-saving when no changes were made after
initial save. The issue occurred because frontend node IDs weren't
synced with backend IDs after save operations.

### Changes 🏗️

- Update actual node.id to match backend node ID after save
- Update edge references with new node IDs
- Properly sync visual editor state with backend

### Test Plan 📋

- [x] TypeScript compilation passes  
- [x] Pre-commit hooks pass
- [x] Manual test: Save graph, verify no re-save needed on subsequent
save/run
2025-10-11 01:12:19 +00:00
Zamil Majdy
05a72f4185 feat(backend): implement user rate limiting for concurrent graph executions (#11128)
## Summary
Add configurable rate limiting to prevent users from exceeding the
maximum number of concurrent graph executions, defaulting to 50 per
user.

## Changes Made

### Configuration (`backend/util/settings.py`)
- Add `max_concurrent_graph_executions_per_user` setting (default: 50,
range: 1-1000)
- Configurable via environment variables or settings file

### Database Query Function (`backend/data/execution.py`) 
- Add `get_graph_executions_count()` function for efficient count
queries
- Supports filtering by user_id, statuses, and time ranges
- Used to check current RUNNING/QUEUED executions per user

### Database Manager Integration (`backend/executor/database.py`)
- Expose `get_graph_executions_count` through DatabaseManager RPC
interface
- Follows existing patterns for database operations
- Enables proper service-to-service communication

### Rate Limiting Logic (`backend/executor/manager.py`)
- Inline rate limit check in `_handle_run_message()` before cluster lock
- Use existing `db_client` pattern for consistency
- Reject and requeue executions when limit exceeded
- Graceful error handling - proceed if rate limit check fails
- Enhanced logging with user_id and current/max execution counts

## Technical Implementation
- **Database approach**: Query actual execution statuses for accuracy
- **RPC pattern**: Use DatabaseManager client following existing
codebase patterns
- **Fail-safe design**: Proceed with execution if rate limit check fails
- **Requeue on limit**: Rejected executions are requeued for later
processing
- **Early rejection**: Check rate limit before expensive cluster lock
operations

## Rate Limiting Flow
1. Parse incoming graph execution request
2. Query database via RPC for user's current RUNNING/QUEUED execution
count
3. Compare against configured limit (default: 50)
4. If limit exceeded: reject and requeue message
5. If within limit: proceed with normal execution flow

## Configuration Example
```env
MAX_CONCURRENT_GRAPH_EXECUTIONS_PER_USER=25  # Reduce to 25 for stricter limits
```

## Test plan
- [x] Basic functionality tested - settings load correctly, database
function works
- [x] ExecutionManager imports and initializes without errors
- [x] Database manager exposes the new function through RPC
- [x] Code follows existing patterns and conventions
- [ ] Integration testing with actual rate limiting scenarios
- [ ] Performance testing to ensure minimal impact on execution pipeline

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-11 08:02:34 +07:00
Swifty
36f634c417 fix(backend): Update store agent view to return only latest version (#11065)
This PR fixes duplicate agent listings on the marketplace home page by
updating the StoreAgent view to return only the latest approved version
of each agent.

### Changes 🏗️

- Updated `StoreAgent` database view to filter for only the latest
approved version per listing
- Added CTE (Common Table Expression) `latest_versions` to efficiently
determine the maximum version for each store listing
- Modified the join logic to only include the latest version instead of
all approved versions
- Updated `versions` array field to contain only the single latest
version

**Technical details:**
- The view now uses a `latest_versions` CTE that groups by
`storeListingId` and finds `MAX(version)` for approved submissions
- Join condition ensures only the latest version is included:
`slv.version = lv.latest_version`
- This prevents duplicate entries for agents with multiple approved
versions

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified marketplace home page shows no duplicate agents
- [x] Confirmed only latest version is displayed for agents with
multiple approved versions
  - [x] Checked that agent details page still functions correctly
  - [x] Validated that run counts and ratings are still accurate

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-10-10 09:31:36 +00:00
Swifty
18e169aa51 feat(platform): Log Marketplace Search Terms (#11092)
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <Pwuts@users.noreply.github.com>
2025-10-10 11:33:28 +02:00
Swifty
c5b90f7b09 feat(platform): Simplify running of core docker services (#11113)
Co-authored-by: vercel[bot] <35613825+vercel[bot]@users.noreply.github.com>
2025-10-10 11:32:46 +02:00
Ubbe
a446c1acc9 fix(frontend): improve navbar on mobile (#11137)
## Changes 🏗️

Make the navigation bar look nice across screen sizes 📱 

<img width="1229" height="388" alt="Screenshot 2025-10-10 at 17 53 48"
src="https://github.com/user-attachments/assets/037a9957-9c0b-4e2c-9ef5-af198fdce923"
/>

<img width="700" height="392" alt="Screenshot 2025-10-10 at 17 53 42"
src="https://github.com/user-attachments/assets/bf9a0f83-a528-4613-83e7-6e204078b905"
/>

<img width="500" height="377" alt="Screenshot 2025-10-10 at 17 52 24"
src="https://github.com/user-attachments/assets/2209d4f3-a41a-4700-894b-5e6e7c15fefb"
/>

<img width="300" height="381" alt="Screenshot 2025-10-10 at 17 52 16"
src="https://github.com/user-attachments/assets/1c87d545-784e-47b5-b23c-6f37cfae489b"
/>


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login to the platform and resize the window down
- [x] The navbar looks good across screen sizes and everything is
aligned and accessible

### For configuration changes:

None
2025-10-10 09:10:24 +00:00
Ubbe
59d242f69c fix(frontend): remove agent activity flag (#11136)
## Changes 🏗️

The Agent Activity Dropdown is now stable: it has been 100% exposed to
users in production for a few weeks, so there's no need to keep it behind
a flag anymore.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login to AutoGPT
- [x] The bell on the navbar is always present even if the flag on
Launch Darkly is turned off

### For configuration changes:

None
2025-10-10 09:08:42 +00:00
Abhimanyu Yadav
a2cd5d9c1f feat(frontend): add support for user password credentials in new FlowEditor (#11122)
- depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11107

In this PR, I’ve added a way to add a username and password as
credentials on new builder.


https://github.com/user-attachments/assets/b896ea62-6a6d-487c-99a3-727cef4ad9a5

### Changes 🏗️
- Introduced PasswordCredentialsModal to handle user password
credentials.
- Updated useCredentialField to support user password type.
- Refactored APIKeyCredentialsModal to remove unnecessary onSuccess
prop.
- Enhanced the CredentialsField component to conditionally render the
new password modal based on supported credential types.

### Checklist 📋

#### For code changes:
- [x] Ability to add username and password correctly.
- [x] The username and password are visible in the credentials list
after adding it.
2025-10-10 07:15:21 +00:00
Abhimanyu Yadav
df5b348676 feat(frontend): add search functionality in new block menu (#11121)
- Depends on https://github.com/Significant-Gravitas/AutoGPT/pull/11120

In this PR, I’ve added a search functionality to the new block menu with
pagination.



https://github.com/user-attachments/assets/4c199997-4b5a-43c7-83b6-66abb1feb915



### Changes 🏗️
- Add a frontend for the search list with pagination functionality.
- Updated the search route to use GET method.
- Removed the SearchRequest model and replaced it with individual query
parameters.

### Checklist 📋

#### For code changes:
- [x] The search functionality is working perfectly.
- [x] If the search query doesn’t exist, it correctly displays a “No
Result” UI.
2025-10-09 12:28:12 +00:00
Bently
4856bd1f3a fix(backend): prevent sub-agent execution visibility across users (#11132)
Fixes an issue where sub-agent executions triggered by one user were
visible in the original agent author's execution library.

## Solution

Fixed the user_id attribution in
`autogpt_platform/backend/backend/executor/manager.py` by ensuring that
sub-agent executions always use the actual executor's user_id rather
than the agent author's user_id stored in node defaults.

### Changes
- Added user_id override in `execute_node()` function when preparing
AgentExecutorBlock input (line 194)
- Ensures sub-agent executions are correctly attributed to the user
running them, not the agent author
- Maintains proper privacy isolation between users in marketplace agent
scenarios
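
A minimal sketch of the override, assuming an illustrative input shape:

```python
def prepare_agent_executor_input(node_default_input: dict,
                                 executor_user_id: str) -> dict:
    data = {**node_default_input}
    # Node defaults may carry the agent author's user_id; always overwrite
    # it with the ID of the user actually running the graph.
    data["user_id"] = executor_user_id
    return data
```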

### Security Impact
- **Before**: When User B downloaded and ran a marketplace agent
containing sub-agents owned by User A, the sub-agent executions appeared
in User A's library
- **After**: Sub-agent executions now only appear in the library of the
user who actually ran them
- Prevents unauthorized access to execution data and user privacy
violation

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Test plan: -->
  - [x] Create an agent with sub-agents as User A
  - [x] Publish agent to marketplace
  - [x] Run the agent as User B
- [x] Verify User A cannot see User B's sub-agent executions in their
library
  - [x] Verify User B can see their own sub-agent executions
  - [x] Verify primary agent executions remain correctly filtered
2025-10-09 11:17:26 +00:00
Abhimanyu Yadav
2e1d3dd185 refactor(frontend): replace context api in new block menu with zustand store (#11120)
Currently, we use the Context API for the block menu provider, and we need
to access some of its state outside the BlockMenuProvider wrapper. For
instance, for the tutorial we would need to move this wrapper higher up in
the tree, perhaps to the top of the builder tree, which would cause
unnecessary re-renders. Therefore, we should create a block menu zustand
store so that we can easily access it in other parts of the builder.

### Changes 🏗️
- Deleted `block-menu-provider.tsx` file.
- Updated BlockMenu, BlockMenuContent, BlockMenuDefaultContent, and
other components to utilize blockMenuStore instead of
BlockMenuStateProvider.
- Adjusted imports and context usage accordingly.

### Checklist 📋
- [x] Changes have been clearly listed.
- [x] Code has been tested and verified.
- [x] I’ve checked every part of the block menu where we used the
context API and it’s working perfectly.
- [x] Ability to use block menu state in other parts of the builder.
2025-10-09 11:04:42 +00:00
Abhimanyu Yadav
ff72343035 feat(frontend): add UI for sticky notes on new builder (#11123)
Currently, the new builder doesn’t support sticky notes. We’re rendering
them as normal nodes with an input, which is why I’ve added a UI for
this.

<img width="1512" height="982" alt="Screenshot 2025-10-08 at 4 12 58 PM"
src="https://github.com/user-attachments/assets/be716e45-71c6-4cc4-81ba-97313426222f"
/>

To add sticky notes, go to the search menu of the block menu and search
for “Note block”. Then, add them from there.

### Changes 🏗️
- Updated CustomNodeData to include uiType.
- Conditional rendering in CustomNode based on uiType.
- Added a custom sticky note UI component called `StickyNoteBlock.tsx`.
- Adjusted FormCreator and FieldTemplate to pass and utilize uiType.
- Enhanced TextInputWidget to render differently based on uiType.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Able to attach sticky notes to the builder.
- [x] Able to accurately capture data while writing on sticky notes, and
the data is also persistent
2025-10-09 06:48:19 +00:00
Abhimanyu Yadav
7982c34450 feat(frontend): add oauth2 credential support in new builder (#11107)
In this PR, I have added support for OAuth2 in the new builder.


https://github.com/user-attachments/assets/89472ebb-8ec2-467a-9824-79a80a71af8a

### Changes 🏗️
- Updated the FlowEditor to support OAuth2 credential selection.
- Improved the UI for API key and OAuth2 modals, enhancing user
experience.
- Refactored credential field components for better modularity and
maintainability.
- Updated OpenAPI documentation to reflect changes in OAuth flow
endpoints.

### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Able to create OAuth credentials
  - [x] OAuth2 is correctly selected using the Credential Selector.
2025-10-09 06:47:15 +00:00
Zamil Majdy
59c27fe248 feat(backend): implement comprehensive rate-limited Discord alerting system (#11106)
## Summary
Implement comprehensive Discord alerting system with intelligent rate
limiting to prevent spam and provide proper visibility into system
failures across retry mechanisms and execution errors.

## Key Features

### 🚨 Rate-Limited Discord Alerting Infrastructure
- **Reusable rate-limited alerts**: `send_rate_limited_discord_alert()`
function for any Discord alerts
- **5-minute rate limiting**: Prevents spam for identical error
signatures (function+error+context)
- **Thread-safe**: Proper locking for concurrent alert attempts
- **Configurable channels**: Support custom Discord channels or default
to PLATFORM
- **Graceful failure handling**: Alert failures don't break main
application flow
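
A condensed, self-contained sketch of how such a helper could work; the delivery stub and exact signatures here are assumptions, not the shipped code:

```python
import logging
import threading
import time

logger = logging.getLogger(__name__)

ALERT_RATE_LIMIT_SECONDS = 300  # one alert per unique signature per 5 minutes
_alert_rate_limiter: dict[str, float] = {}
_alert_lock = threading.Lock()

def _should_send_alert(signature: str) -> bool:
    # Thread-safe timestamp check so concurrent callers can't double-send.
    now = time.monotonic()
    with _alert_lock:
        last = _alert_rate_limiter.get(signature)
        if last is not None and now - last < ALERT_RATE_LIMIT_SECONDS:
            return False
        _alert_rate_limiter[signature] = now
        return True

def send_rate_limited_discord_alert(
    message: str, context: str, func_name: str, exception: Exception
) -> None:
    signature = (
        f"{context}:{func_name}:{type(exception).__name__}:{str(exception)[:100]}"
    )
    if not _should_send_alert(signature):
        logger.warning("Suppressed duplicate Discord alert: %s", message)
        return
    try:
        _post_to_discord(message)  # stand-in for the real delivery channel
    except Exception:
        # Alert failures must never break the main application flow.
        logger.exception("Failed to deliver Discord alert")

def _post_to_discord(message: str) -> None:
    logger.info("[discord] %s", message)
```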

### 🔄 Enhanced Retry Alert System
- **Unified threshold alerting**: Both general retries and
infrastructure retries alert at EXCESSIVE_RETRY_THRESHOLD (50 attempts)
- **Critical retry alerts**: Early warning when operations approach
failure threshold
- **Infrastructure monitoring**: Dedicated alerts for database, Redis,
RabbitMQ connection issues
- **Rate limited**: All retry alerts use rate limiting to prevent
overwhelming Discord channels

### 📊 Unknown Execution Error Alerts
- **Automatic error detection**: Alert for unexpected graph execution
failures
- **Rich context**: Include user ID, graph ID, execution ID, error type
and message
- **Filtered alerts**: Skip known errors (InsufficientBalanceError,
ModerationError)
- **Proper error tracking**: Ensure execution_stats.error is set for all
error types

## Technical Implementation

### Rate Limiting Strategy
```python
# Create unique signatures based on function+error+context
error_signature = f"{context}:{func_name}:{type(exception).__name__}:{str(exception)[:100]}"
```
- **5-minute windows**: ALERT_RATE_LIMIT_SECONDS = 300 prevents
duplicate alerts
- **Memory efficient**: Only store last alert timestamp per unique error
signature
- **Context awareness**: Same error in different contexts can send
separate alerts

### Alerting Hierarchy
1. **50 attempts**: Critical alert warning about approaching failure
(EXCESSIVE_RETRY_THRESHOLD)
2. **100 attempts**: Final infrastructure failure (conn_retry max_retry)
3. **Unknown execution errors**: Immediate rate-limited alerts for
unexpected failures

## Files Modified

### Core Implementation
- `backend/executor/manager.py`: Unknown execution error alerts with
rate limiting
- `backend/util/retry.py`: Comprehensive rate-limited alerting
infrastructure
- `backend/util/retry_test.py`: Full test coverage for rate limiting
functionality (14 tests)

### Code Quality Improvements
- **Inlined alert messages**: Eliminated unnecessary temporary variables
- **Simplified logic**: Removed excessive comments and redundant alerts
- **Consistent patterns**: All alert functions follow same clean code
style
- **DRY principle**: Reusable rate-limited alert system for future
monitoring needs

## Benefits

### 🛡️ Prevents Alert Spam
- **Rate limiting**: No more overwhelming Discord channels with
duplicate alerts
- **Intelligent deduplication**: Same errors rate limited while
different errors get through
- **Thread safety**: Concurrent operations handled correctly

### 🔍 Better System Visibility  
- **Unknown errors**: Issues that need investigation are properly
surfaced
- **Infrastructure monitoring**: Early warning for
database/Redis/RabbitMQ issues
- **Rich context**: All necessary debugging information included in
alerts

### 🧹 Maintainable Codebase
- **Reusable infrastructure**: `send_rate_limited_discord_alert()` for
future monitoring
- **Clean, consistent code**: Inlined messages, simplified logic, proper
abstractions
- **Comprehensive testing**: Rate limiting edge cases and real-world
scenarios covered

## Validation Results
- All 14 retry tests pass, including comprehensive rate limiting coverage
- Manager execution tests pass, validating integration with the execution flow
- Thread safety validated with concurrent alert attempt tests
- Real-world scenarios tested, including the specific spend_credits spam issue that motivated this work
- Code formatting, linting, and type checking all pass

## Before/After Comparison

### Before
- No rate limiting → Discord spam for repeated errors
- Unknown execution errors not monitored → Issues went unnoticed  
- Inconsistent alerting thresholds → Confusing monitoring
- Verbose code with temporary variables → Harder to maintain

### After
- Rate-limited intelligent alerting prevents spam
- Unknown execution errors properly monitored with context
- Unified 50-attempt threshold for consistent monitoring
- Clean, maintainable code with reusable infrastructure

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-09 08:22:15 +07:00
Zamil Majdy
c7575dc579 fix(backend): implement rate limiting for critical retry alerts to prevent spam (#11127)
## Summary
Fix the critical issue where retry failure alerts were being spammed
when service communication failed repeatedly.

## Problem
The service communication retry mechanism was sending a critical Discord
alert every time it hit the 50-attempt threshold, with no rate limiting.
This caused alert spam when the same operation (like spend_credits) kept
failing repeatedly with the same error.

## Solution

### Rate Limiting Implementation
- Add ALERT_RATE_LIMIT_SECONDS = 300 (5 minutes) between identical
alerts
- Create _should_send_alert() function with thread-safe rate limiting
using _alert_rate_limiter dict
- Generate unique signatures based on
context:func_name:exception_type:exception_message
- Only send alert if sufficient time has passed since last identical
alert

### Enhanced Logging  
- Rate-limited alerts log as warnings instead of being silently dropped
- Add full exception tracebacks for final failures and every 10th retry
attempt
- Improve alert message clarity and add note about rate limiting
- Better structured logging with exception types and details

### Error Context Preservation
- Maintain all original retry behavior and exception handling
- Preserve critical alerting for genuinely new issues  
- Clean up alert message (removed accidental paste from error logs)

## Technical Details
- Thread-safe implementation using threading.Lock() for rate limiter
access
- Signature includes first 100 chars of exception message for
granularity
- Memory efficient - only stores last alert timestamp per unique error
type
- No breaking changes to existing retry functionality

## Impact
- **Eliminates alert spam**: Same failing operation only alerts once per
5 minutes
- **Preserves critical alerts**: New/different failures still trigger
immediate alerts
- **Better debugging**: Enhanced logging provides full exception context
- **Maintains reliability**: All retry logic works exactly as before

## Testing
- Rate limiting tested with multiple scenarios
- Import compatibility verified
- No regressions in retry functionality
- Alert generation works for new vs repeated errors

## Type of Change
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update

## How Has This Been Tested?
- Manual testing of rate limiting functionality with different error
scenarios
- Import verification to ensure no regressions
- Code formatting and linting compliance

## Checklist
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation (N/A -
internal utility)
- [x] My changes generate no new warnings
- [x] Any dependent changes have been merged and published in downstream
modules (N/A)
2025-10-09 05:53:10 +07:00
Ubbe
73603a8ce5 fix(frontend): onboarding re-directs (#11126)
## Changes 🏗️

We weren't awaiting the onboarding-enabled check, and we were also
re-directing to a wrong onboarding URL.

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create a new user
  - [x] Re-directs well to onboarding
  - [x] Complete up to Step 5 and logout
  - [x] Login again
  - [x] You are on Step 5  

#### For configuration changes:

None
2025-10-08 15:18:25 +00:00
Ubbe
e562ca37aa fix(frontend): login redirects + onboarding (#11125)
## Changes 🏗️

### Fix re-direct bugs

Sometimes the app will re-direct to a strange URL after login:
```
http://localhost:3000/marketplace,%20/marketplace
```
It looks like a race-condition because the re-direct to `/marketplace`
was done on a [server
action](https://nextjs.org/docs/14/app/building-your-application/data-fetching/server-actions-and-mutations)
rather than in the browser.

**Fixed by**

Moving the login / signup server actions to Next.js API endpoints. In
this way the login/signup pages just call an API endpoint and handle its
response without having to hassle with serverless 💆🏽

### Wallet layout flash

<img width="800" height="744" alt="Screenshot 2025-10-08 at 22 52 03"
src="https://github.com/user-attachments/assets/7cb85fd5-7dc4-4870-b4e1-173cc8148e51"
/>

The wallet popover would sometimes flash after login, because it was
re-rendering once onboarding and credits data loaded.

**Fixed by**

Only rendering once we have onboarding and credits data; without it, the
popover is useless and causes flashes.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login / Signup to the app with email and Google
  - [x] Works fine
  - [x] Onboarding popover does not flash
  - [x] Onboarding and marketplace re-directs work   

### For configuration changes:

None
2025-10-08 18:35:45 +04:00
Nicholas Tindle
f906fd9298 fix(backend): Allow Project.content to be optional for linear search projects (#11118)
Changed the type of the 'content' field in the Project model to accept
None, making it optional instead of required. Linear doesn't always
return data here if it's not set by the user.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
- Makes the content optional
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Manually test it works with our data


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Improved handling of projects with no content by making content
optional.
- Prevents validation errors during project creation, import, or sync
when content is missing.
- Enhances compatibility with integrations that may omit content fields.
- No impact on existing projects with content; behavior remains
unchanged.
  - No user action required.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-10-07 20:04:37 +00:00
seer-by-sentry[bot]
9e79add436 fix(backend): Change progress type to float in Linear Project (#11117)
### Changes 🏗️

- Changed the type of the `progress` field in the `LinearTask` model
from `int` to `float` (sketched below) to fix
[BUILDER-3V5](https://sentry.io/organizations/significant-gravitas/issues/6929150079/).
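
In Pydantic terms the fix is a one-line type change, roughly (field name
from the PR; surrounding fields omitted):

```python
from pydantic import BaseModel


class LinearTask(BaseModel):
    # `progress: int` rejected fractional values such as 42.5 returned by
    # the Linear API; `float` accepts both 42 and 42.5.
    progress: float
```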

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Root cause analysis confirms fix -- testing will need to occur in
dev environment before release to prod but this should merge now


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- New Features
- Progress indicators now support decimal values, allowing more precise
tracking (e.g., 42.5% instead of 42%). This enables finer-grained
updates in the interface and any integrations consuming progress data.
- Users may notice smoother progress changes during long-running tasks,
with improved accuracy in percentage displays across relevant views and
APIs.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-07 17:59:26 +00:00
Nicholas Tindle
de6f4fca23 Merge branch 'master' into dev 2025-10-07 11:13:38 -05:00
Nicholas Tindle
fb4b8ed9fc feat: track users with sentry on client side (not backend yet) (#11077)
<!-- Clearly explain the need for these changes: -->
We need to be able to measure user impact by user count, which means we
need to track users.
### Changes 🏗️
- Attaches user id to frontend actions (which hopefully propagate to the
backend in some places)
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test login -> shows on sentry
  - [x] Test logout -> no longer shows on sentry
2025-10-07 15:35:57 +00:00
Zamil Majdy
f3900127d7 feat(backend): instrument prometheus for internal services (#11114)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

Instrument Prometheus for internal services
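
For context, instrumenting a service with `prometheus_client` typically
looks like the sketch below (illustrative only; the metric names and port
are invented, not the PR's actual code):

```python
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("internal_requests_total", "Requests handled", ["service"])
LATENCY = Histogram("internal_request_seconds", "Request latency", ["service"])

start_http_server(9090)  # expose /metrics on :9090 for Prometheus to scrape

with LATENCY.labels(service="scheduler").time():
    REQUESTS.labels(service="scheduler").inc()
```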

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Existing tests
2025-10-07 21:34:38 +07:00
Abhimanyu Yadav
7c47f54e25 feat(frontend): add an API key modal for adding credentials in new builder. (#11105)
In this PR, I’ve added an API Key modal to the new builder so users can
add API key credentials.


https://github.com/user-attachments/assets/68da226c-3787-4950-abb0-7a715910355e

### Changes
- Updated the credential field to support API key.
- Added a modal for creating new API keys and improved the selection UI
for credentials.
- Refactored components for better modularity and maintainability.
- Enhanced styling and user experience in the FlowEditor components.
- Updated OpenAPI documentation for better clarity on credential
operations.

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Able to create API key perfectly.
  - [x] Can select the correct credentials.
2025-10-07 11:19:17 +00:00
Lluis Agusti
927042d93e fix(frontend): more turnstile experiments (2) 2025-10-07 00:40:49 +09:00
Lluis Agusti
4244979a45 fix(frontend): more turnstile experiments 2025-10-07 00:22:20 +09:00
Lluis Agusti
aa27365e7f fix(frontend): fix captcha reset 2025-10-06 23:57:42 +09:00
Nicholas Tindle
b86aa8b14e feat(frontend): launchdarkly tracking on frontend browser (#11076)
<!-- Clearly explain the need for these changes: -->
We struggle to identify which issues come from feature flags and which
come from normal use. This adds that split on the frontend.

### Changes 🏗️
Include sentry in the LD initialization
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test that launch darkly flags get attached to the frontend
(browser only)
2025-10-06 13:48:13 +00:00
Lluis Agusti
e7ab2626f5 fix(frontend): remove captcha ref reset 2025-10-06 22:34:08 +09:00
Ubbe
ff58ce174b fix(frontend): possible login issues related to turnstile (#11094)
## Changes 🏗️

We are seeing login and authentication issues in production and staging.
Locally though, the app behaves fine. We also had issues related to the
CAPTCHA in the past.

Our CAPTCHA code is less than ideal, with a heavy `useEffect` that
loads the Turnstile script into the DOM. I have the impression it is
loading the script multiple times (due to the effect dependency arrays
not being set up well), or something similar, causing the associated
login issues.

Created a new Turnstile component using
[`react-turnstile`](https://docs.page/marsidev/react-turnstile) that is
way simpler and should hopefully be more stable.

I also fixed an issue with the Credits popover layout rendering cropped
on the window.

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login/logout on the app multiple times with Turnstile ON,
everything is stable
  - [x] Credits popover appears in the right place

### For configuration changes:

None
2025-10-06 12:59:27 +00:00
Abhimanyu Yadav
2d8ab6b7c0 feat(frontend): add selecting UI for custom node in new builder (#11091)
React Flow has built-in functionality to select multiple nodes by using
`cmd` + click. You can also select using rectangle selection by holding
the shift key. However, we need custom styling for a node once it's
selected.

<img width="845" height="510" alt="Screenshot 2025-10-06 at 12 41 16 PM"
src="https://github.com/user-attachments/assets/c91f22e3-2211-46b6-b3d3-fbc89148e99a"
/>

### Tests
- [x] Selection UI is visible after selecting a node, using cmd + click,
and after rectangle selection.
2025-10-06 12:53:59 +00:00
Abhimanyu Yadav
a7306970b8 refactor(frontend): simplify marketplace search page and update data fetching (#11061)
This PR refactors the marketplace search page to improve code
maintainability, readability, and follows modern React patterns by
extracting complex logic into a custom hook and creating dedicated
components.

### 🔄 Changes

#### **Architecture Improvements**
- **Component Extraction**: Replaced the monolithic `SearchResults`
component with a cleaner `MainSearchResultPage` component that focuses
solely on presentation
- **Custom Hook Pattern**: Extracted all business logic and state
management into `useMainSearchResultPage` hook for better separation of
concerns
- **Loading State Component**: Added dedicated
`MainSearchResultPageLoading` component for consistent loading UI

#### **Code Simplification**
- **Reduced search page to 19 lines** (from 175 lines) by removing
inline logic and state management
- **Centralized data fetching** using auto-generated API endpoints
(`useGetV2ListStoreAgents`, `useGetV2ListStoreCreators`)
- **Improved error handling** with dedicated error states and loading
states

#### **Feature Updates**
- **Sort Options**: Commented out "Most Recent" and "Highest Rated" sort
options due to backend limitations (no date/rating data available)
- **Client-side Sorting**: Implemented client-side sorting for "runs"
and "rating" as a temporary solution
- **Search Filters**: Maintained filter functionality for
agents/creators with improved state management

### 📊 Impact

- **Better Developer Experience**: Code is now more modular and easier
to understand
- **Improved Maintainability**: Business logic separated from
presentation layer
- **Future-Ready**: Structure prepared for backend improvements when
date/rating data becomes available
- **Type Safety**: Leveraging TypeScript with auto-generated API types

### 🧪 Testing Checklist

- [x] Search functionality works correctly with various search terms
- [x] Filter chips correctly toggle between "All", "Agents", and
"Creators"
- [x] Sort dropdown displays only "Most Runs" option
- [x] Client-side sorting correctly sorts agents and creators by runs
- [x] Loading state displays while fetching data
- [x] Error state displays when API calls fail
- [x] "No results found" message appears for empty searches
- [x] Search bar in results page is functional
- [x] Results display correctly with proper layout and styling
2025-10-06 12:53:45 +00:00
Abhimanyu Yadav
c42f94ce2a feat(frontend): add new credential field for new builder (#11066)
In this PR, I’ve added a feature to select a credential from a list and
also provided a UI to create a new credential if desired.

<img width="443" height="157" alt="Screenshot 2025-10-06 at 9 28 07 AM"
src="https://github.com/user-attachments/assets/d9e72a14-255d-45b6-aa61-b55c2465dd7e"
/>

#### Frontend Changes:
- **Refactored credential field** from a single component to a modular
architecture:
  - Created `CredentialField/` directory with separated concerns
- Added `SelectCredential.tsx` component for credential selection UI
with provider details display
- Implemented `useCredentialField.ts` custom hook for credential data
fetching with 10-minute caching
- Added `helpers.ts` with credential filtering and provider name
formatting utilities
  - Added loading states with skeleton UI while fetching credentials

- **Enhanced UI/UX features**:
- Dropdown selector showing credentials with provider, title, username,
and host details
  - Visual key icon for each credential option
  - Placeholder "Add API Key" button (implementation pending)
  - Loading skeleton UI for better perceived performance
  - Smart filtering of credentials based on provider requirements

- **Template improvements**:
- Updated `FieldTemplate.tsx` to properly handle credential field
display
- Special handling for credential field labels showing provider-specific
names
  - Removed input handle for credential fields in the node editor

#### Backend Changes:
- **API Documentation improvements**:
- Added OpenAPI summaries to `/credentials` endpoint ("List
Credentials")
- Added summary to `/{provider}/credentials/{cred_id}` endpoint ("Get
Specific Credential By ID")

### Test Plan 📋

   - [x] Navigate to the flow builder
   - [x] Add a block that requires credentials (e.g., API block)
- [x] Verify the credential dropdown loads and displays available
credentials
- [x] Check that only credentials matching the provider requirements are
shown
2025-10-06 12:52:45 +00:00
Zamil Majdy
4e1557e498 fix(backend): Add dynamic input pin support for Smart Decision Maker Block (#11082)
## Summary

- Centralize dynamic field delimiters and helpers in
backend/data/dynamic_fields.py (see the sketch below).
- Refactor SmartDecisionMaker: build function signatures with
dynamic-field mapping and re-map tool outputs back to original dynamic
names.
- Deterministic retry loop with retry-only feedback to avoid polluting
final conversation history.
- Update executor/utils.py and data/graph.py to use centralized
utilities.
- Update and extend tests: dynamic-field E2E flow, mapping verification,
output yielding, and retry validation; switch mocked llm_call to
AsyncMock; align tool-name expectations.
- Add a single-tool fallback in schema lookup to support mocked
scenarios.
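
A rough sketch of what centralized dynamic-field helpers look like (the
delimiter value here is invented for illustration; the real constants
live in `backend/data/dynamic_fields.py`):

```python
DYNAMIC_DELIMITER = "_#_"  # assumed value, for illustration only


def make_dynamic_field(base: str, key: str) -> str:
    """Build a dynamic pin name, e.g. 'values_#_temperature'."""
    return f"{base}{DYNAMIC_DELIMITER}{key}"


def split_dynamic_field(name: str) -> tuple[str, str | None]:
    """Split a pin name back into (base, key); key is None for static pins."""
    base, sep, key = name.partition(DYNAMIC_DELIMITER)
    return base, (key if sep else None)
```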

## Validation

- Full backend test suite: 1125 passed, 88 skipped, 53 warnings (local).
- Backend lint/format pass.

## Scope

- Minimal and localized to SmartDecisionMaker and dynamic-field
utilities; unrelated pyright warnings remain unchanged.

## Risks/Mitigations

- Behavior is backward-compatible; dynamic-field constants are
centralized and reused.
- Output re-mapping only affects SmartDecisionMaker tool outputs and
matches existing link naming conventions.

## Checklist

- [x] Formatted and linted
- [x] All updated tests pass locally
- [x] No secrets introduced

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-04 14:23:13 +00:00
seer-by-sentry[bot]
7f8cf36ceb feat(frontend): Add description to Upload Agent dialog (#11053)
### Changes 🏗️

- Added a description to the Upload Agent dialog to provide more context
for users. Fixes
[BUILDER-3N1](https://sentry.io/organizations/significant-gravitas/issues/6915512912/).
The issue: `DialogContent` in `LibraryUploadAgentDialog` lacked an
accessible description, violating WAI-ARIA standards.

<img width="2066" height="1740" alt="image"
src="https://github.com/user-attachments/assets/c876fb33-4375-4a66-a6a2-6b13c00ef8d3"
/>


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test it works
  - [x] Get design approval

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-03 16:38:49 +00:00
Ubbe
0978566089 fix(frontend): performance and layout issues (#11036)
## Changes 🏗️

### Performance (Onboarding) 🐎 

- Moved non-UI logic into `providers/onboarding/helpers.ts` to reduce
provider complexity.
- Memoized provider value and narrowed state updates to cut unnecessary
re-renders.
- Deferred non-critical effects until after mount to lower initial JS
work.
 
**Result:** faster initial render and smoother onboarding flows under
load.

### Layout and overflow fixes 📐 

- Replaced `w-screen` with `w-full` in platform/admin/profile layouts
and marketplace wrappers to avoid 100vw scrollbar overflow.
- Adjusted mobile navbar position (`right-0` instead of `-right-4`) to
prevent off-viewport elements.

**Result:** removed horizontal scrolling on Marketplace, Library, and
Settings pages; Build remains unaffected.

### New Generic Error pages

- Standardized global error handling in `app/global-error.tsx` for
consistent display and user feedback.
- Added platform-scoped error page(s) under `app/(platform)/error` for
route-level failures with a consistent layout.
- Improved retry affordances using existing `ErrorCard`.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify onboarding flows render faster and re-render less (DevTools
flamegraph)
- [x] Confirm no horizontal scrolling on Marketplace, Library, Settings
at common widths
  - [x] Validate mobile navbar stays within viewport
- [x] Trigger errors to confirm global and platform error pages render
consistently

### For configuration changes:

None
2025-10-03 22:41:01 +09:00
Zamil Majdy
8b4eb6f87c fix(backend): resolve SmartDecisionMaker ChatCompletionMessage error and enhance tool call token counting (#11059)
## Summary
Fix two critical production issues affecting SmartDecisionMaker
functionality and prompt compression accuracy.

### 🔧 Changes Made

#### Issue 1: SmartDecisionMaker ChatCompletionMessage Error
**Problem**: PR #11015 introduced code that appended
`response.raw_response` (ChatCompletionMessage object) directly to
conversation history, causing `'ChatCompletionMessage' object has no
attribute 'get'` errors.

**Root Cause**: ChatCompletionMessage objects don't have `.get()` method
but conversation history processing expects dictionary objects with
`.get()` capability.

**Solution**: Created `_convert_raw_response_to_dict()` helper function
for type-safe conversion (a sketch follows the list below):
- **Helper function**: Safely converts raw_response to dictionary
format for conversation history
- **Type safety**: Handles OpenAI (ChatCompletionMessage), Anthropic
(Message), and Ollama (string) responses
- **Preserves context**: Maintains conversation flow for multi-turn
tool calling scenarios
- **DRY principle**: Single helper used in both validation error path
(line 624) and success path (line 681)
- **No breaking changes**: Tool call continuity preserved for complex
workflows
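
A simplified sketch of the helper (it assumes pydantic-style response
objects expose `model_dump()`; the real implementation handles more
provider-specific shapes):

```python
def _convert_raw_response_to_dict(raw) -> dict:
    """Coerce a provider response into the dict shape history processing expects."""
    if isinstance(raw, str):
        # Ollama returns plain text
        return {"role": "assistant", "content": raw}
    if hasattr(raw, "model_dump"):
        # OpenAI ChatCompletionMessage and Anthropic Message are pydantic models
        return raw.model_dump()
    return dict(raw)  # already dict-like
```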

#### Issue 2: Tool Call Token Counting in Prompt Compression
**Problem**: `_msg_tokens()` function only counted tokens in 'content'
field, severely undercounting tool calls which store data in different
fields (tool_calls, function.arguments, etc.).

**Root Cause**: Tool calls have no 'content' to calculate length of,
causing massive token undercounting during prompt compression that could
lead to context overflow.

**Solution**: Enhanced `_msg_tokens()` to handle both OpenAI and
Anthropic tool call formats (a sketch follows the list below):
- **OpenAI format**: Count tokens in `tool_calls[].id`, `type`,
`function.name`, `function.arguments`
- **Anthropic format**: Count tokens in `content[].tool_use` (`id`,
`name`, `input`) and `content[].tool_result`
- **Backward compatibility**: Regular string content counted exactly
as before
- **Comprehensive testing**: Added 11 unit tests in `prompt_test.py`
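
A simplified sketch of the OpenAI half of the enhanced counter (the
encoding name is an assumption; the real function also walks Anthropic
`tool_use`/`tool_result` content blocks):

```python
import json

import tiktoken

_enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding


def _msg_tokens(msg: dict) -> int:
    """Count tokens in string content plus OpenAI-style tool_calls fields."""
    total = 0
    content = msg.get("content")
    if isinstance(content, str):
        total += len(_enc.encode(content))
    for call in msg.get("tool_calls") or []:
        fn = call.get("function", {})
        for part in (call.get("id"), call.get("type"),
                     fn.get("name"), fn.get("arguments")):
            if part:
                text = part if isinstance(part, str) else json.dumps(part)
                total += len(_enc.encode(text))
    return total
```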

### 📊 Validation Results
- **SmartDecisionMaker errors resolved**: No more
ChatCompletionMessage.get() failures
- **Token counting accuracy**: OpenAI tool calls 9+ tokens vs previous
3-4 wrapper-only tokens
- **Token counting accuracy**: Anthropic tool calls 13+ tokens vs
previous 3-4 wrapper-only tokens
- **Backward compatibility**: Regular messages maintain exact same
token count
- **Type safety**: 0 type errors in both modified files
- **Test coverage**: All 11 new unit tests pass + existing
SmartDecisionMaker tests pass
- **Multi-turn conversations**: Tool call workflows continue working
correctly

### 🎯 Impact
- **Resolves Sentry issue OPEN-2750**: ChatCompletionMessage errors
eliminated
- **Prevents context overflow**: Accurate token counting during prompt
compression for long tool call conversations
- **Production stability**: SmartDecisionMaker retry mechanism works
correctly with proper conversation flow
- **Resource efficiency**: Better memory management through accurate
token accounting
- **Zero breaking changes**: Full backward compatibility maintained

### 🧪 Test Plan
- [x] Verified SmartDecisionMaker no longer crashes with
ChatCompletionMessage errors
- [x] Validated tool call token counting accuracy with comprehensive
unit tests (11 tests all pass)
- [x] Confirmed backward compatibility for regular message token
counting
- [x] Tested both OpenAI and Anthropic tool call formats
- [x] Verified type safety with pyright checks
- [x] Ensured conversation history flows correctly with helper function
- [x] Confirmed multi-turn tool calling scenarios work with preserved
context

### 📝 Files Modified
- `backend/blocks/smart_decision_maker.py` - Added
`_convert_raw_response_to_dict()` helper for safe conversion
- `backend/util/prompt.py` - Enhanced tool call token counting for
accurate prompt compression
- `backend/util/prompt_test.py` - Comprehensive unit tests for token
counting (11 tests)

### Ready for Review
Both fixes are critical for production stability and have been
thoroughly tested with zero breaking changes. The helper function
approach ensures type safety while preserving essential conversation
context for complex tool calling workflows.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-03 00:25:21 +00:00
Reinier van der Leer
4b7d17b9d2 refactor(blocks/code): Clean up & rename code execution blocks (#11019)
The code execution blocks' implementations are heavily duplicated and
their names aren't very clear.
E.g. the "InstantiationBlock" just shows up as "Instantiation" in the
block list.

I would've done this in #11017 but kept the refactoring separate for
easier reviewing.

### Changes 🏗️

- Rename "Code Execution" block to "Execute Code"
- Rename "Instantiation" block to "Instantiate Code Sandbox"
- Rename "Step Execution" block to "Execute Code Step"
- Deduplicate implementation of the three code execution blocks
- Add `dispose_sandbox` toggle to "Execute Code" and "Execute Code Step"
blocks
- Note: it's default `True` on the Execute Code block, default `False`
on the Execute Code Step block
- Update block and input descriptions to clarify behavior
- Fix all linting issues

<details>
<summary>Screenshots</summary>

![the three blocks as they look
now](https://github.com/user-attachments/assets/8e4274f7-e006-440c-b2b8-980df546186d)
![updated block names and descriptions in the block
list](https://github.com/user-attachments/assets/866c3d9e-13ea-4fc0-87de-a5257bafb6d4)
![the new dispose_sandbox toggle on the Execute Code
block](https://github.com/user-attachments/assets/56815dbb-f313-4308-81dd-50d949d9eafb)
![the new dispose_sandbox toggle on the Execute Code Step
block](https://github.com/user-attachments/assets/469c140c-4cd2-4210-97b2-f27fc91778de)

</details>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test all code execution blocks manually
2025-10-02 22:50:49 +00:00
dependabot[bot]
0fc6a44389 chore(backend/deps-dev): Bump the development-dependencies group across 1 directory with 4 updates (#10946)
Bumps the development-dependencies group with 4 updates in the
/autogpt_platform/backend directory:
[faker](https://github.com/joke2k/faker),
[pyright](https://github.com/RobertCraigie/pyright-python),
[pytest-mock](https://github.com/pytest-dev/pytest-mock) and
[ruff](https://github.com/astral-sh/ruff).

Updates `faker` from 37.6.0 to 37.8.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/releases">faker's
releases</a>.</em></p>
<blockquote>
<h2>Release v37.8.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.8.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.7.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.7.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/blob/master/CHANGELOG.md">faker's
changelog</a>.</em></p>
<blockquote>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.7.0...v37.8.0">v37.8.0
- 2025-09-15</a></h3>
<ul>
<li>Add Automotive providers for <code>ja_JP</code> locale. Thanks <a
href="https://github.com/ItoRino424"><code>@​ItoRino424</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.6.0...v37.7.0">v37.7.0
- 2025-09-15</a></h3>
<ul>
<li>Add Nigerian name locales (<code>yo_NG</code>, <code>ha_NG</code>,
<code>ig_NG</code>, <code>en_NG</code>). Thanks <a
href="https://github.com/ifeoluwaoladeji"><code>@​ifeoluwaoladeji</code></a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4bde8f57ad"><code>4bde8f5</code></a>
Bump version: 37.7.0 → 37.8.0</li>
<li><a
href="f542f364cb"><code>f542f36</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="e28d7cb909"><code>e28d7cb</code></a>
fix test</li>
<li><a
href="e4305b0e29"><code>e4305b0</code></a>
fix padding</li>
<li><a
href="a359441a81"><code>a359441</code></a>
💄 format code</li>
<li><a
href="0e3f0bdf81"><code>0e3f0bd</code></a>
Add Automotive providers for <code>ja_JP</code> locale (<a
href="https://redirect.github.com/joke2k/faker/issues/2251">#2251</a>)</li>
<li><a
href="d4fa69dfc7"><code>d4fa69d</code></a>
Bump version: 37.6.0 → 37.7.0</li>
<li><a
href="f636f06a37"><code>f636f06</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="9a482dd25b"><code>9a482dd</code></a>
💄 Format code</li>
<li><a
href="2493b2d51a"><code>2493b2d</code></a>
fix: fix minor grammar typo (<a
href="https://redirect.github.com/joke2k/faker/issues/2259">#2259</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/joke2k/faker/compare/v37.6.0...v37.8.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `pyright` from 1.1.404 to 1.1.405
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e211ec8df8"><code>e211ec8</code></a>
Pyright NPM Package update to 1.1.405 (<a
href="https://redirect.github.com/RobertCraigie/pyright-python/issues/353">#353</a>)</li>
<li>See full diff in <a
href="https://github.com/RobertCraigie/pyright-python/compare/v1.1.404...v1.1.405">compare
view</a></li>
</ul>
</details>
<br />

Updates `pytest-mock` from 3.14.1 to 3.15.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pytest-dev/pytest-mock/releases">pytest-mock's
releases</a>.</em></p>
<blockquote>
<h2>v3.15.1</h2>
<p><em>2025-09-16</em></p>
<ul>
<li><a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/529">#529</a>:
Fixed <code>itertools._tee object has no attribute error</code> -- now
<code>duplicate_iterators=True</code> must be passed to
<code>mocker.spy</code> to duplicate iterators.</li>
</ul>
<h2>v3.15.0</h2>
<p><em>2025-09-04</em></p>
<ul>
<li>Python 3.8 (EOL) is no longer supported.</li>
<li><a
href="https://redirect.github.com/pytest-dev/pytest-mock/pull/524">#524</a>:
Added <code>spy_return_iter</code> to <code>mocker.spy</code>, which
contains a duplicate of the return value of the spied method if it is an
<code>Iterator</code>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pytest-dev/pytest-mock/blob/main/CHANGELOG.rst">pytest-mock's
changelog</a>.</em></p>
<blockquote>
<h2>3.15.1</h2>
<p><em>2025-09-16</em></p>
<ul>
<li><code>[#529](https://github.com/pytest-dev/pytest-mock/issues/529)
&lt;https://github.com/pytest-dev/pytest-mock/issues/529&gt;</code>_:
Fixed <code>itertools._tee object has no attribute error</code> -- now
<code>duplicate_iterators=True</code> must be passed to
<code>mocker.spy</code> to duplicate iterators.</li>
</ul>
<h2>3.15.0</h2>
<p><em>2025-09-04</em></p>
<ul>
<li>Python 3.8 (EOL) is no longer supported.</li>
<li><code>[#524](https://github.com/pytest-dev/pytest-mock/issues/524)
&lt;https://github.com/pytest-dev/pytest-mock/pull/524&gt;</code>_:
Added <code>spy_return_iter</code> to <code>mocker.spy</code>, which
contains a duplicate of the return value of the spied method if it is an
<code>Iterator</code>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e1b5c62a38"><code>e1b5c62</code></a>
Release 3.15.1</li>
<li><a
href="184eb190d6"><code>184eb19</code></a>
Set <code>spy_return_iter</code> only when explicitly requested (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/537">#537</a>)</li>
<li><a
href="4fa0088a0a"><code>4fa0088</code></a>
[pre-commit.ci] pre-commit autoupdate (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/536">#536</a>)</li>
<li><a
href="f5aff33ce7"><code>f5aff33</code></a>
Fix test failure with pytest 8+ and verbose mode (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/535">#535</a>)</li>
<li><a
href="adc41873c9"><code>adc4187</code></a>
Bump actions/setup-python from 5 to 6 in the github-actions group (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/533">#533</a>)</li>
<li><a
href="95ad570060"><code>95ad570</code></a>
[pre-commit.ci] pre-commit autoupdate (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/532">#532</a>)</li>
<li><a
href="e696bf02c1"><code>e696bf0</code></a>
Fix standalone mock support (<a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/531">#531</a>)</li>
<li><a
href="5b29b03ce9"><code>5b29b03</code></a>
Fix gen-release-notes script</li>
<li><a
href="7d22ef4e56"><code>7d22ef4</code></a>
Merge pull request <a
href="https://redirect.github.com/pytest-dev/pytest-mock/issues/528">#528</a>
from pytest-dev/release-3.15.0</li>
<li><a
href="90b29f89e2"><code>90b29f8</code></a>
Update CHANGELOG for 3.15.0</li>
<li>Additional commits viewable in <a
href="https://github.com/pytest-dev/pytest-mock/compare/v3.14.1...v3.15.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `ruff` from 0.12.11 to 0.13.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.13.0</h2>
<h2>Release Notes</h2>
<p>Check out the <a href="https://astral.sh/blog/ruff-v0.13.0">blog
post</a> for a migration guide and overview of the changes!</p>
<h3>Breaking changes</h3>
<ul>
<li>
<p><strong>Several rules can now add <code>from __future__ import
annotations</code> automatically</strong></p>
<p><code>TC001</code>, <code>TC002</code>, <code>TC003</code>,
<code>RUF013</code>, and <code>UP037</code> now add <code>from
__future__ import annotations</code> as part of their fixes when the
<code>lint.future-annotations</code> setting is enabled. This allows the
rules to move more imports into <code>TYPE_CHECKING</code> blocks
(<code>TC001</code>, <code>TC002</code>, and <code>TC003</code>), use
PEP 604 union syntax on Python versions before 3.10
(<code>RUF013</code>), and unquote more annotations
(<code>UP037</code>).</p>
</li>
<li>
<p><strong>Full module paths are now used to verify first-party
modules</strong></p>
<p>Ruff now checks that the full path to a module exists on disk before
categorizing it as a first-party import. This change makes first-party
import detection more accurate, helping to avoid false positives on
local directories with the same name as a third-party dependency, for
example. See the <a
href="https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc">FAQ
section</a> on import categorization for more details.</p>
</li>
<li>
<p><strong>Deprecated rules must now be selected by exact rule
code</strong></p>
<p>Ruff will no longer activate deprecated rules selected by their group
name or prefix. As noted below, the two remaining deprecated rules were
also removed in this release, so this won't affect any current rules,
but it will still affect any deprecations in the future.</p>
</li>
<li>
<p><strong>The deprecated macOS configuration directory fallback has
been removed</strong></p>
<p>Ruff will no longer look for a user-level configuration file at
<code>~/Library/Application Support/ruff/ruff.toml</code> on macOS. This
feature was deprecated in v0.5 in favor of using the <a
href="https://specifications.freedesktop.org/basedir-spec/latest/">XDG
specification</a> (usually resolving to
<code>~/.config/ruff/ruff.toml</code>), like on Linux. The fallback and
accompanying deprecation warning have now been removed.</p>
</li>
</ul>
<h3>Removed Rules</h3>
<p>The following rules have been removed:</p>
<ul>
<li><a
href="https://docs.astral.sh/ruff/rules/pandas-df-variable-name"><code>pandas-df-variable-name</code></a>
(<code>PD901</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/non-pep604-isinstance"><code>non-pep604-isinstance</code></a>
(<code>UP038</code>)</li>
</ul>
<h3>Stabilization</h3>
<p>The following rules have been stabilized and are no longer in
preview:</p>
<ul>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow-dag-no-schedule-argument"><code>airflow-dag-no-schedule-argument</code></a>
(<code>AIR002</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow3-removal"><code>airflow3-removal</code></a>
(<code>AIR301</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow3-moved-to-provider"><code>airflow3-moved-to-provider</code></a>
(<code>AIR302</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow3-suggested-update"><code>airflow3-suggested-update</code></a>
(<code>AIR311</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/airflow3-suggested-to-move-to-provider"><code>airflow3-suggested-to-move-to-provider</code></a>
(<code>AIR312</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/long-sleep-not-forever"><code>long-sleep-not-forever</code></a>
(<code>ASYNC116</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/f-string-number-format"><code>f-string-number-format</code></a>
(<code>FURB116</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/os-symlink"><code>os-symlink</code></a>
(<code>PTH211</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/generic-not-last-base-class"><code>generic-not-last-base-class</code></a>
(<code>PYI059</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/redundant-none-literal"><code>redundant-none-literal</code></a>
(<code>PYI061</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/pytest-raises-ambiguous-pattern"><code>pytest-raises-ambiguous-pattern</code></a>
(<code>RUF043</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/unused-unpacked-variable"><code>unused-unpacked-variable</code></a>
(<code>RUF059</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/useless-class-metaclass-type"><code>useless-class-metaclass-type</code></a>
(<code>UP050</code>)</li>
</ul>
<p>The following behaviors have been stabilized:</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.13.0</h2>
<p>Check out the <a href="https://astral.sh/blog/ruff-v0.13.0">blog
post</a> for a migration
guide and overview of the changes!</p>
<h3>Breaking changes</h3>
<ul>
<li>
<p><strong>Several rules can now add <code>from __future__ import
annotations</code> automatically</strong></p>
<p><code>TC001</code>, <code>TC002</code>, <code>TC003</code>,
<code>RUF013</code>, and <code>UP037</code> now add <code>from
__future__ import annotations</code> as part of their fixes when the
<code>lint.future-annotations</code> setting is enabled. This allows the
rules to move
more imports into <code>TYPE_CHECKING</code> blocks (<code>TC001</code>,
<code>TC002</code>, and <code>TC003</code>),
use PEP 604 union syntax on Python versions before 3.10
(<code>RUF013</code>), and
unquote more annotations (<code>UP037</code>).</p>
</li>
<li>
<p><strong>Full module paths are now used to verify first-party
modules</strong></p>
<p>Ruff now checks that the full path to a module exists on disk before
categorizing it as a first-party import. This change makes first-party
import detection more accurate, helping to avoid false positives on
local
directories with the same name as a third-party dependency, for example.
See
the <a
href="https://docs.astral.sh/ruff/faq/#how-does-ruff-determine-which-of-my-imports-are-first-party-third-party-etc">FAQ
section</a> on import categorization for more details.</p>
</li>
<li>
<p><strong>Deprecated rules must now be selected by exact rule
code</strong></p>
<p>Ruff will no longer activate deprecated rules selected by their group
name
or prefix. As noted below, the two remaining deprecated rules were also
removed in this release, so this won't affect any current rules, but it
will
still affect any deprecations in the future.</p>
</li>
<li>
<p><strong>The deprecated macOS configuration directory fallback has
been removed</strong></p>
<p>Ruff will no longer look for a user-level configuration file at
<code>~/Library/Application Support/ruff/ruff.toml</code> on macOS. This
feature was
deprecated in v0.5 in favor of using the <a
href="https://specifications.freedesktop.org/basedir-spec/latest/">XDG
specification</a>
(usually resolving to <code>~/.config/ruff/ruff.toml</code>), like on
Linux. The
fallback and accompanying deprecation warning have now been removed.</p>
</li>
</ul>
<h3>Removed Rules</h3>
<p>The following rules have been removed:</p>
<ul>
<li><a
href="https://docs.astral.sh/ruff/rules/pandas-df-variable-name"><code>pandas-df-variable-name</code></a>
(<code>PD901</code>)</li>
<li><a
href="https://docs.astral.sh/ruff/rules/non-pep604-isinstance"><code>non-pep604-isinstance</code></a>
(<code>UP038</code>)</li>
</ul>
<h3>Stabilization</h3>
<p>The following rules have been stabilized and are no longer in
preview:</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a1fdd66f10"><code>a1fdd66</code></a>
Bump 0.13.0 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20336">#20336</a>)</li>
<li><a
href="8770b95509"><code>8770b95</code></a>
[ty] introduce <code>DivergentType</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20312">#20312</a>)</li>
<li><a
href="65982a1e14"><code>65982a1</code></a>
[ty] Use 'unknown' specialization for upper bound on Self (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20325">#20325</a>)</li>
<li><a
href="57d1f7132d"><code>57d1f71</code></a>
[ty] Simplify unions of enum literals and subtypes thereof (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20324">#20324</a>)</li>
<li><a
href="7a75702237"><code>7a75702</code></a>
Ignore deprecated rules unless selected by exact code (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20167">#20167</a>)</li>
<li><a
href="9ca632c84f"><code>9ca632c</code></a>
Stabilize adding future import via config option (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20277">#20277</a>)</li>
<li><a
href="64fe7d30a3"><code>64fe7d3</code></a>
[<code>flake8-errmsg</code>] Stabilize extending
<code>raw-string-in-exception</code> (<code>EM101</code>) to ...</li>
<li><a
href="beeeb8d5c5"><code>beeeb8d</code></a>
Stabilize the remaining Airflow rules (<a
href="https://redirect.github.com/astral-sh/ruff/issues/20250">#20250</a>)</li>
<li><a
href="b6fca52855"><code>b6fca52</code></a>
[<code>flake8-bugbear</code>] Stabilize support for non-context-manager
calls in `assert...</li>
<li><a
href="ac7f882c78"><code>ac7f882</code></a>
[<code>flake8-commas</code>] Stabilize support for trailing comma checks
in type paramet...</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.12.11...0.13.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-10-02 20:57:18 +00:00
dependabot[bot]
f5ee579ab2 chore(backend/deps): Bump firecrawl-py from 2.16.3 to 4.3.1 in /autogpt_platform/backend (#10809)
Bumps [firecrawl-py](https://github.com/firecrawl/firecrawl) from 2.16.3
to 4.3.1.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/firecrawl/firecrawl/commits">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=firecrawl-py&package-manager=pip&previous-version=2.16.3&new-version=4.3.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrade firecrawl-py to v4.3.6 and refactor firecrawl blocks to new v4
API, formats handling, method names, and response fields.
> 
> - **Dependencies**
> - Bump `firecrawl-py` from `2.16.3` to `4.3.6` (adds `httpx`, updates
`pydantic>=2`).
> - **Firecrawl API migration**
>   - Centralize `ScrapeFormat` in `backend/blocks/firecrawl/_api.py`.
> - Add `_format_utils.convert_to_format_options` to map `ScrapeFormat`
(incl. `screenshot@fullPage`) to v4 `FormatOption`/`ScreenshotFormat`.
> - Switch to v4 types (`firecrawl.v2.types.ScrapeOptions`); adopt
snake_case fields (`only_main_content`, `max_age`, `wait_for`).
> - Rename methods: `crawl_url` → `crawl`, `scrape_url` → `scrape`,
`map_url` → `map`.
> - Normalize response attributes: `rawHtml` → `raw_html`,
`changeTracking` → `change_tracking`.
> - **Blocks**
> - `crawl.py`, `scrape.py`, `search.py`: use new formats conversion and
updated options/fields; adjust iteration over results (`search`: iterate
`web` when present).
> - `map.py`: return both `links` and detailed `results`
(url/title/description) and update output schema accordingly.
> - **Project files**
> - Update `pyproject.toml` and `poetry.lock` for new dependency
versions.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
d872f2e82b. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-02 20:14:18 +00:00
Zamil Majdy
57a06f7088 fix(blocks, security): Fixes for various DoS vulnerabilities (#10798)
This PR addresses multiple critical and medium security vulnerabilities
that could lead to Denial of Service (DoS) attacks. All fixes implement
defense-in-depth strategies with comprehensive testing.

### Changes 🏗️

#### **Critical Security Fixes:**

1. **GHSA-m2wr-7m3r-p52c - ReDoS in CodeExtractionBlock** 
- Fixed catastrophic backtracking in regex patterns `\s+[\s\S]*?` and
`\s+(.*?)`
   - Replaced with safer patterns: `[ \t]*\n([\s\S]*?)`
   - Files: `backend/blocks/code_extraction_block.py`

2. **GHSA-955p-gpfx-r66j - AITextSummarizerBlock Memory Amplification**
   - Added 1MB text size limit and 100 chunk maximum
   - Prevents 10K input → 50G memory amplification attacks
   - Files: `backend/blocks/llm.py`

3. **GHSA-5cqw-g779-9f9x - RSS Feed XML Bomb DoS**
   - Added 10MB feed size limit and 30s timeout
   - Prevents deep XML parsing memory exhaustion
   - Files: `backend/blocks/rss.py`

4. **GHSA-7g34-7fvq-xxq6 - File Storage Disk Exhaustion**
   - Added 100MB per file and 1GB per execution directory limits
   - Prevents disk space exhaustion from file uploads
   - Files: `backend/util/file.py`

5. **GHSA-pppq-xx2w-7jpq - ExtractTextInformationBlock ReDoS**
   - Added 1MB text limit, 1000 match limit, and 5s timeout protection
(see the sketch after this list)
   - Prevents lookahead pattern memory exhaustion
   - Files: `backend/blocks/text.py`

6. **GHSA-vw3v-whvp-33v5 - Docker Logging Disk Exhaustion**
- Added log rotation limits at Docker (10MB × 3 files) and application
levels
   - Prevents unbounded log growth causing disk exhaustion
- Files: `docker-compose.platform.yml`,
`autogpt_libs/autogpt_libs/logging/config.py`

#### **Additional Security Improvements:**

7. **StepThroughItemsBlock DoS Prevention**
   - Added 10,000 item limit and 1MB input size limit
   - Prevents large iteration DoS attacks
   - Files: `backend/blocks/iteration.py`

8. **XMLParserBlock XML Bomb Prevention**
   - Added 10MB XML input size limit
   - Files: `backend/blocks/xml_parser.py`
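
Several of the items above share the same guard pattern: cap input size
up front, bound the number of results, and abort pathological regex
matching with a timeout. A sketch with example limits (the third-party
`regex` package provides the timeout; exact production values differ):

```python
import regex  # third-party package; stdlib `re` has no timeout support

MAX_TEXT_SIZE = 1 * 1024 * 1024  # 1 MB input cap
MAX_MATCHES = 1000               # bound the result count


def safe_findall(pattern: str, text: str) -> list[str]:
    if len(text) > MAX_TEXT_SIZE:
        raise ValueError(f"Input exceeds {MAX_TEXT_SIZE} bytes")
    # The timeout aborts catastrophic backtracking instead of hanging the worker.
    return regex.findall(pattern, text, timeout=5)[:MAX_MATCHES]
```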

#### **Code Quality:**
- Fixed Python 3.10 typing compatibility issues
- Added comprehensive security test suite
- All code formatted and linted

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created comprehensive security test suite covering all
vulnerabilities
  - [x] Verified ReDoS patterns are fixed and don't cause timeouts
  - [x] Confirmed memory limits prevent amplification attacks
  - [x] Tested file size limits prevent disk exhaustion
  - [x] Validated log rotation prevents unbounded growth
  - [x] Ensured backward compatibility for normal usage

#### For configuration changes:
- [x] `docker-compose.yml` is updated with logging limits
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

### Test Plan 🧪

**Security Tests:**
1. **ReDoS Protection**: Tested with malicious regex inputs (large
spaces) - completes without hanging
2. **Memory Limits**: Verified 2MB text input gets truncated to 1MB,
chunk limits enforced
3. **File Size Limits**: Confirmed 200MB files rejected, directory size
limits enforced
4. **Iteration Limits**: Tested 20K item arrays rejected, large JSON
strings rejected
5. **Timeout Protection**: Dangerous regex patterns timeout after 5s
instead of hanging

**Compatibility Tests:**
- Normal functionality preserved for all blocks
- Existing tests pass with new security limits
- Performance impact minimal for typical usage

### Security Impact 🛡️

**Before:** Multiple attack vectors could cause:
- CPU exhaustion (ReDoS attacks)
- Memory exhaustion (amplification attacks)  
- Disk exhaustion (file/log bombs)
- Service unavailability

**After:** All attack vectors mitigated with:
- Input validation and size limits
- Timeout protections
- Resource quotas
- Defense-in-depth approach

All fixes maintain backward compatibility while preventing DoS attacks.

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds robust DoS protections across blocks (regex, memory, iteration,
XML/RSS, file I/O) and enables app/Docker log rotation with
comprehensive tests.
> 
> - **Security hardening**:
> - Replace unsafe regex in `backend/blocks/code_extraction_block.py` to
prevent ReDoS; add safer extraction/removal patterns.
> - Constrain LLM summarizer chunking in `backend/blocks/llm.py` (1MB
cap, chunk/overlap validation, chunk count limit).
> - Limit RSS fetching in `backend/blocks/rss.py` (scheme validation,
10MB cap, timeout, bounded read) and return empty on failure.
>   - Impose XML size limit (10MB) in `backend/blocks/xml_parser.py`.
> - Add file upload/download limits in `backend/util/file.py`
(100MB/file, 1GB dir quota) and enforce scanning before write.
> - Enable rotating file logs in `autogpt_libs/logging/config.py` (size
+ backups) and Docker json-file log rotation in
`docker-compose.platform.yml`.
> - **Iteration block**:
> - Add item count/string size limits; fix yielded key for dicts; cap
iterations in `backend/blocks/iteration.py`.
> - **Tests**:
> - New `backend/blocks/test/test_security_fixes.py` covering ReDoS,
timeouts, memory/size and iteration limits, XML/file constraints.
> - **Misc**:
> - Typing fallback for `NotRequired` in `activity_status_generator.py`.
>   - Dependency updates in `backend/poetry.lock`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
500e1578b1. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Zamil Majdy <majdyz@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <Pwuts@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-10-02 12:55:55 +00:00
Zamil Majdy
258bf0b1a5 fix(backend): improve activity status generation accuracy and handle missing blocks gracefully (#11039)
## Summary
Fix critical issues where activity status generator incorrectly reported
failed executions as successful, and enhance AI evaluation logic to be
more accurate about actual task accomplishment.

## Changes Made

### 1. Missing Block Handling (`backend/data/graph.py`)
- **Replace ValueError with graceful degradation**: When blocks are
deleted/missing, return an `_UnknownBlock` placeholder instead of
crashing (sketched below)
- **Comprehensive interface implementation**: `_UnknownBlock` implements
all expected Block methods to prevent type errors
- **Warning logging**: Log missing blocks for debugging without breaking
execution flow
- **Removed unnecessary caching**: Direct constructor calls instead of
cached wrapper functions
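
A sketch of the graceful-degradation shape (registry lookup and
placeholder are simplified; the real `_UnknownBlock` implements the full
Block interface):

```python
import logging

logger = logging.getLogger(__name__)
BLOCK_REGISTRY: dict[str, type] = {}  # stand-in for the real block lookup


class _UnknownBlock:
    """Placeholder for blocks that were deleted after executions referenced them."""

    name = "Unknown Block"

    def __init__(self, block_id: str):
        self.id = block_id

    async def run(self, *args, **kwargs):
        raise RuntimeError(f"Block {self.id} no longer exists")


def get_block(block_id: str):
    block_cls = BLOCK_REGISTRY.get(block_id)
    if block_cls is None:
        # Log and degrade instead of raising, so analysis of old executions
        # keeps working.
        logger.warning("Missing block %s; returning placeholder", block_id)
        return _UnknownBlock(block_id)
    return block_cls()
```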

### 2. Enhanced Activity Status AI Evaluation
(`backend/executor/activity_status_generator.py`)

#### Intention-Based Success Evaluation
- **Graph description analysis**: AI now reads graph description FIRST
to understand intended purpose
- **Purpose-driven evaluation**: Success is measured against what the
graph was designed to accomplish
- **Critical output analysis**: Enhanced detection of missing outputs
from key blocks (Output, Post, Create, Send, Publish, Generate)
- **Sub-agent failure detection**: Better identification when
AgentExecutorBlock produces no outputs

#### Improved Prompting
- **Intent-specific examples**: 'blog writing' → check for blog content,
'email automation' → check for sent emails
- **Primary evaluation criteria**: 'Did this execution accomplish what
the graph was designed to do?'
- **Enhanced checklist**: 7-point analysis including graph description
matching
- **Technical vs. goal completion**: Distinguish between workflow steps
completing vs. actual user goals achieved

#### Removed Database Error Handling
- **Eliminated try-catch blocks**: No longer needed around
`get_graph_metadata` and `get_graph` calls
- **Direct database calls**: Simplified error handling after fixing
missing block root cause
- **Cleaner code flow**: More predictable execution path without
redundant error handling

## Problem Solved
- **False success reports**: AI previously marked executions as
'successful' when critical output blocks produced no results
- **Missing block crashes**: System would fail when trying to analyze
executions with deleted/missing blocks
- **Intent-blind evaluation**: AI evaluated technical completion instead
of actual goal achievement
- **Database service errors**: 500 errors when missing blocks caused
graph loading failures

## Business Impact
- **More accurate user feedback**: Users get honest assessment of
whether their automations actually worked
- **Better task completion detection**: Clear distinction between
'workflow completed' vs. 'goal achieved'
- **Improved reliability**: System handles edge cases gracefully without
crashing
- **Enhanced user trust**: Truthful reporting builds confidence in the
platform

## Testing
- Tested with problematic executions that previously showed false
successes
- Confirmed missing block handling works without warnings
- Verified enhanced prompt correctly identifies failures
- Database calls work without try-catch protection

## Example Before/After

**Before (False Success):**
```
Graph: "Automated SEO Blog Writer"
Status: " I successfully completed your blog writing task!"
Reality: No blog content was actually created (critical output blocks had no outputs)
```

**After (Accurate Failure Detection):**
```
Graph: "Automated SEO Blog Writer"  
Status: " The task failed because the blog post creation step didn't produce any output."
Reality: Correctly identifies that the intended blog writing goal was not achieved
```

## Files Modified
- `backend/data/graph.py`: Missing block graceful handling with complete
interface
- `backend/executor/activity_status_generator.py`: Enhanced AI
evaluation with intention-based analysis

## Type of Change
- [x] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality) 
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update

## Checklist
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] New and existing unit tests pass locally with my changes
- [x] Any dependent changes have been merged and published in downstream
modules

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-02 12:28:57 +00:00
Ubbe
4a1cb6d64b fix(frontend): performance and layout issues (#11036)
## Changes 🏗️

### Performance (Onboarding) 🐎 

- Moved non-UI logic into `providers/onboarding/helpers.ts` to reduce
provider complexity.
- Memoized provider value and narrowed state updates to cut unnecessary
re-renders.
- Deferred non-critical effects until after mount to lower initial JS
work.
 
**Result:** faster initial render and smoother onboarding flows under
load.

### Layout and overflow fixes 📐 

- Replaced `w-screen` with `w-full` in platform/admin/profile layouts
and marketplace wrappers to avoid 100vw scrollbar overflow.
- Adjusted mobile navbar position (`right-0` instead of `-right-4`) to
prevent off-viewport elements.

**Result:** removed horizontal scrolling on Marketplace, Library, and
Settings pages; Build remains unaffected.

### New Generic Error pages

- Standardized global error handling in `app/global-error.tsx` for
consistent display and user feedback.
- Added platform-scoped error page(s) under `app/(platform)/error` for
route-level failures with a consistent layout.
- Improved retry affordances using existing `ErrorCard`.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify onboarding flows render faster and re-render less (DevTools
flamegraph)
- [x] Confirm no horizontal scrolling on Marketplace, Library, Settings
at common widths
  - [x] Validate mobile navbar stays within viewport
- [x] Trigger errors to confirm global and platform error pages render
consistently

### For configuration changes:

None
2025-10-02 10:21:31 +00:00
Copilot
7c9db7419a fix(frontend): Display run cost correctly - convert cents to dollars in run detail components (#10997)
Fixed costs being displayed as raw cent values instead of properly
formatted dollar amounts in the frontend monitoring and agent run detail
pages.

## Problem
The platform was showing costs incorrectly in two key areas:
- **Monitoring page**: Total cost displayed as raw cents with incorrect
"seconds" unit (e.g., "Total cost: 150 seconds")
- **Agent run details**: Individual run costs displayed as raw cents
(e.g., "Cost: $150" for what should be $1.50)

## Solution
Updated the affected components to properly convert cents to dollars
with consistent formatting:

**FlowRunsStatus.tsx** - Fixed total cost calculation and display:
```tsx
// Before
{filteredFlowRuns.reduce((total, run) => total + (run.stats?.cost ?? 0), 0)} seconds

// After  
${(filteredFlowRuns.reduce((total, run) => total + (run.stats?.cost ?? 0), 0) / 100).toFixed(2)}
```

**RunDetailHeader.tsx** - Fixed individual run cost display:
```tsx
// Before
Cost: ${run.stats.cost}

// After
Cost: ${(run.stats.cost / 100).toFixed(2)}
```

## Validation
- Backend correctly stores costs in cents (verified in models and
database schemas)
- Email notification templates already handle the conversion properly
using `(credits_used|float)/100`
- Other components use the existing `formatCredits()` utility which
correctly converts cents to dollars
- No security vulnerabilities introduced (CodeQL verification passed)
- All linting and formatting checks pass

The fix ensures users now see accurate dollar amounts (e.g., $1.50
instead of $150 or "150 seconds") across the platform's cost reporting
interfaces.

![Cost Display Fix
Demo](https://github.com/user-attachments/assets/13c75a1d-7c78-4c11-9293-3dcf4c443097)

> [!WARNING]
>
> <details>
> <summary>Firewall rules blocked me from connecting to one or more
addresses (expand for details)</summary>
>
> #### I tried to connect to the following addresses, but was blocked by
firewall rules:
>
> - `checkpoint.prisma.io`
> - Triggering command: `/usr/bin/node
/root/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/child
{&#34;product&#34;:&#34;prisma&#34;,&#34;version&#34;:&#34;5.17.0&#34;,&#34;cli_install_type&#34;:&#34;local&#34;,&#34;information&#34;:&#34;&#34;,&#34;local_timestamp&#34;:&#34;2025-09-25T21:41:17Z&#34;,&#34;project_hash&#34;:&#34;a5170f80&#34;,&#34;cli_path&#34;:&#34;/root/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/index.js&#34;,&#34;cli_path_hash&#34;:&#34;40bbdaf9&#34;,&#34;endpoint&#34;:&#34;REDACTED&#34;,&#34;disable&#34;:false,&#34;arch&#34;:&#34;x64&#34;,&#34;os&#34;:&#34;linux&#34;,&#34;node_version&#34;:&#34;v20.19.5&#34;,&#34;ci&#34;:false,&#34;ci_name&#34;:&#34;&#34;,&#34;command&#34;:&#34;generate&#34;,&#34;schema_providers&#34;:[&#34;postgresql&#34;],&#34;schema_preview_features&#34;:[],&#34;schema_generators_providers&#34;:[&#34;prisma-client-py&#34;],&#34;cache_file&#34;:&#34;/root/.cache/checkpoint-nodejs/prisma-40bbdaf9&#34;,&#34;cache_duration&#34;:43200000,&#34;remind_duration&#34;:172800000,&#34;force&#34;:false,&#34;timeout&#34;:5000,&#34;unref&#34;:true,&#34;child_path&#34;:&#34;/root/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/child&#34;,&#34;client_event_id&#34;:&#34;&#34;,&#34;previous_client_event_id&#34;:&#34;&#34;,&#34;check_if_update_available&#34;:false}`
(dns block)
> - Triggering command: `/usr/bin/node
/root/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/child
{&#34;product&#34;:&#34;prisma&#34;,&#34;version&#34;:&#34;5.17.0&#34;,&#34;cli_install_type&#34;:&#34;local&#34;,&#34;information&#34;:&#34;&#34;,&#34;local_timestamp&#34;:&#34;2025-09-25T21:41:19Z&#34;,&#34;project_hash&#34;:&#34;a5170f80&#34;,&#34;cli_path&#34;:&#34;/root/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/index.js&#34;,&#34;cli_path_hash&#34;:&#34;40bbdaf9&#34;,&#34;endpoint&#34;:&#34;REDACTED&#34;,&#34;disable&#34;:false,&#34;arch&#34;:&#34;x64&#34;,&#34;os&#34;:&#34;linux&#34;,&#34;node_version&#34;:&#34;v20.19.5&#34;,&#34;ci&#34;:false,&#34;ci_name&#34;:&#34;&#34;,&#34;command&#34;:&#34;migrate
deploy&#34;,&#34;schema_providers&#34;:[&#34;postgresql&#34;],&#34;schema_preview_features&#34;:[],&#34;schema_generators_providers&#34;:[&#34;prisma-client-py&#34;],&#34;cache_file&#34;:&#34;/root/.cache/checkpoint-nodejs/prisma-40bbdaf9&#34;,&#34;cache_duration&#34;:43200000,&#34;remind_duration&#34;:172800000,&#34;force&#34;:false,&#34;timeout&#34;:5000,&#34;unref&#34;:true,&#34;child_path&#34;:&#34;/root/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/child&#34;,&#34;client_event_id&#34;:&#34;&#34;,&#34;previous_client_event_id&#34;:&#34;&#34;,&#34;check_if_update_available&#34;:false}`
(dns block)
> - Triggering command: `/opt/hostedtoolcache/node/21.7.3/x64/bin/node
/home/REDACTED/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/child
{&#34;product&#34;:&#34;prisma&#34;,&#34;version&#34;:&#34;5.17.0&#34;,&#34;cli_install_type&#34;:&#34;local&#34;,&#34;information&#34;:&#34;&#34;,&#34;local_timestamp&#34;:&#34;2025-09-25T21:44:58Z&#34;,&#34;project_hash&#34;:&#34;c6190a20&#34;,&#34;cli_path&#34;:&#34;/home/REDACTED/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/index.js&#34;,&#34;cli_path_hash&#34;:&#34;8d85b642&#34;,&#34;endpoint&#34;:&#34;REDACTED&#34;,&#34;disable&#34;:false,&#34;arch&#34;:&#34;x64&#34;,&#34;os&#34;:&#34;linux&#34;,&#34;node_version&#34;:&#34;v21.7.3&#34;,&#34;ci&#34;:true,&#34;ci_name&#34;:&#34;GitHub
Actions&#34;,&#34;command&#34;:&#34;generate&#34;,&#34;schema_providers&#34;:[&#34;postgresql&#34;],&#34;schema_preview_features&#34;:[],&#34;schema_generators_providers&#34;:[&#34;prisma-client-py&#34;],&#34;cache_file&#34;:&#34;/home/REDACTED/.cache/checkpoint-nodejs/prisma-8d85b642&#34;,&#34;cache_duration&#34;:43200000,&#34;remind_duration&#34;:172800000,&#34;force&#34;:false,&#34;timeout&#34;:5000,&#34;unref&#34;:true,&#34;child_path&#34;:&#34;/home/REDACTED/.cache/prisma-python/binaries/5.17.0/393aa359c9ad4a4bb28630fb5613f9c281cde053/node_modules/prisma/build/child&#34;,&#34;client_event_id&#34;:&#34;&#34;,&#34;previous_client_event_id&#34;:&#34;&#34;,&#34;check_if_update_available&#34;:false}`
(dns block)
> - `fonts.googleapis.com`
> - Triggering command: `node
/home/REDACTED/work/AutoGPT/AutoGPT/autogpt_platform/frontend/node_modules/.bin/../next/dist/bin/next
build` (dns block)
> -
`https://api.github.com/repos/Significant-Gravitas/Significant-Gravitas%2FAutoGPT/languages`
> - Triggering command:
`/home/REDACTED/work/_temp/ghcca-node/node/bin/node --enable-source-maps
/home/REDACTED/work/_temp/copilot-developer-action-main/dist/index.js`
(http block)
> - `o1.ingest.sentry.io`
> - Triggering command: `node
/home/REDACTED/work/AutoGPT/AutoGPT/autogpt_platform/frontend/node_modules/.bin/../next/dist/bin/next
build` (dns block)
>
> If you need me to access, download, or install something from one of
these locations, you can either:
>
> - Configure [Actions setup
steps](https://gh.io/copilot/actions-setup-steps) to set up my
environment, which run before the firewall is enabled
> - Add the appropriate URLs or hosts to the custom allowlist in this
repository's [Copilot coding agent
settings](https://github.com/Significant-Gravitas/AutoGPT/settings/copilot/coding_agent)
(admins only)
>
> </details>

<!-- START COPILOT CODING AGENT SUFFIX -->



<details>

<summary>Original prompt</summary>

> 
> ----
> 
> *This section details on the original issue you should resolve*
> 
> <issue_title>Costs are being shown as dollars rather than cents based
on the new runs page</issue_title>
> <issue_description></issue_description>
> 
> ## Comments on the Issue (you are @copilot in this section)
> 
> <comments>
> </comments>
> 


</details>
Fixes Significant-Gravitas/AutoGPT#10886

<!-- START COPILOT CODING AGENT TIPS -->
---

💡 You can make Copilot smarter by setting up custom instructions,
customizing its development environment and configuring Model Context
Protocol (MCP) servers. Learn more [Copilot coding agent
tips](https://gh.io/copilot-coding-agent-tips) in the docs.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-02 10:08:03 +00:00
Krzysztof Czerwinski
18bbd8e572 fix(frontend): Fix confetti (#11031)
### Changes 🏗️

- Fix not being able to complete `MARKETPLACE_RUN_AGENT` task
- Fix confetti shooting on every refresh
- Fix confetti shooting from top-left corner

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Bugs eradicated
2025-10-02 03:19:25 +00:00
Zamil Majdy
047f011520 fix(platform): resolve authentication performance bottlenecks and improve reliability (#11028)
## Summary
Fix critical authentication performance bottlenecks that caused infinite
loading during login, and fix malformed redirect URL handling.

## Root Cause Analysis
- **OnboardingProvider** was running expensive `isOnboardingEnabled()`
database queries on every route for all users
- **Timezone detection** was calling backend APIs during authentication
flow instead of only during onboarding
- **Malformed redirect URLs** like `/marketplace,%20/marketplace`
causing authentication callback failures
- **Arbitrary setTimeout** creating race conditions instead of proper
authentication state management

## Changes Made

### 1. Backend: Cache Expensive Onboarding Queries
(`backend/data/onboarding.py`)
- Add `@cached(maxsize=1, ttl_seconds=300)` decorator to
`onboarding_enabled()`
- Cache expensive database queries for 5 minutes to prevent repeated
execution during auth
- Optimize query with `take=MIN_AGENT_COUNT + 1` to stop counting early
- Fix typo: "Onboading" → "Onboarding"
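
A minimal sketch of the caching change; the `@cached(maxsize=1, ttl_seconds=300)` decorator, `onboarding_enabled()`, and the `take=MIN_AGENT_COUNT + 1` optimization come from this PR, while the helper names, import paths, and threshold value below are assumptions:

```python
# Sketch only: helper names, import paths, and the threshold are assumptions.
from backend.util.cache import cached  # assumed home of the @cached decorator

MIN_AGENT_COUNT = 10  # placeholder threshold, not the real value

async def count_store_agents(take: int) -> int:
    """Hypothetical stand-in for the real store-listing count query."""
    raise NotImplementedError

@cached(maxsize=1, ttl_seconds=300)  # serve the cached answer for 5 minutes
async def onboarding_enabled() -> bool:
    # take=MIN_AGENT_COUNT + 1 lets the query stop counting early: we only
    # need to know whether the threshold is met, not the exact total.
    found = await count_store_agents(take=MIN_AGENT_COUNT + 1)
    return found >= MIN_AGENT_COUNT
```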

### 2. Frontend: Optimize OnboardingProvider
(`providers/onboarding/onboarding-provider.tsx`)
- **Route-based optimization**: Only call `isOnboardingEnabled()` when
user is actually on `/onboarding/*` routes
- **Preserve functionality**: Still fetch `getUserOnboarding()` for step
completion tracking on all routes
- **Smart redirects**: Only handle onboarding completion redirects when
on onboarding routes
- **Performance improvement**: Eliminates expensive database calls for
95% of page loads

### 3. Frontend: Fix Timezone Detection Race Conditions
(`hooks/useOnboardingTimezoneDetection.ts`)
- **Remove setTimeout hack**: Replace arbitrary 1000ms timeout with
proper authentication state checks
- **Add route filtering**: Only run timezone detection on
`/onboarding/*` routes using `pathname.startsWith()`
- **Proper auth dependencies**: Use `useSupabase()` hook to wait for
`user` and `!isUserLoading`
- **Fire-and-forget updates**: Change from `mutateAsync()` to `mutate()`
to prevent blocking UI

### 4. Frontend: Apply Fire-and-Forget Pattern
(`hooks/useTimezoneDetection.ts`)
- Change timezone auto-detection from `mutateAsync()` to `mutate()`
- Prevents blocking user interactions during background timezone updates
- API still executes successfully, user doesn't wait for response

### 5. Frontend: Enhanced URL Validation (`auth/callback/route.ts`)
- **Add malformed URL detection**: Check for commas and spaces in
redirect URLs
- **Constants**: Use `DEFAULT_REDIRECT_PATH = "/marketplace"` instead of
hardcoded strings
- **Better error handling**: Try-catch with fallback to safe default
path
- **Path depth limits**: Reject suspiciously deep URLs (>5 segments)
- **Race condition mitigation**: Default to `/marketplace` for corrupted
URLs with warning logs

## Technical Implementation

### Performance Optimizations
- **Database caching**: 5-minute cache prevents repeated expensive
onboarding queries
- **Route-aware logic**: Heavy operations only run where needed
(`/onboarding/*` routes)
- **Non-blocking updates**: Timezone updates don't block authentication
flow
- **Proper state management**: Wait for actual authentication instead of
arbitrary delays

### Authentication Flow Improvements
- **Eliminate race conditions**: No more setTimeout guessing - wait for
proper auth state
- **Faster auth**: Remove blocking timezone API calls during login flow
- **Better UX**: Handle malformed URLs gracefully instead of failing

## Files Changed
- `backend/data/onboarding.py` - Add caching to expensive queries
- `providers/onboarding/onboarding-provider.tsx` - Route-based
optimization
- `hooks/useOnboardingTimezoneDetection.ts` - Proper auth state + route
filtering + fire-and-forget
- `hooks/useTimezoneDetection.ts` - Fire-and-forget pattern
- `auth/callback/route.ts` - Enhanced URL validation

## Impact
- **Eliminates infinite loading** during authentication flow
- **Improves auth response times** from 5-11+ seconds to sub-second
- **Prevents malformed redirect URLs** that confused users
- **Reduces database load** through intelligent caching  
- **Maintains all existing functionality** with better performance
- **Eliminates race conditions** from arbitrary timeouts

## Validation
- All pre-commit hooks pass (format, lint, typecheck)
- No breaking changes to existing functionality
- Backward compatible with all onboarding flows
- Enhanced error logging and graceful fallbacks

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-02 01:26:49 +00:00
Reinier van der Leer
d11917eb10 feat(blocks): Improve data output of code execution block (#11017)
- Resolves #11016

### Changes 🏗️

- Add more extensive outputs to Code Execution Block
- Rename "Response" output to "Main Text Output"

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Object outputs can be accessed now
2025-10-01 10:38:04 +00:00
Copilot
4663066e65 feat(blocks): Implement AI Condition Block for natural language condition evaluation (#10996)
This PR implements the AI Condition Block as requested in issue
AUTOMAT-60. The new block enables users to define conditional logic
using natural language descriptions instead of traditional comparison
operators, while maintaining the same yes/no data pass-through
functionality as the existing ConditionBlock.

## Overview

The AI Condition Block uses Large Language Models to evaluate conditions
written in plain English, such as:
- "the input is the body of an email"
- "the input is a City in the USA"
- "the input is an error or a refusal"

## Key Features

**Natural Language Processing**: Users can express complex conditions in
everyday English rather than programming logic, making agent workflows
more intuitive and accessible.

**Consistent Interface**: Maintains the same input/output schema as the
standard ConditionBlock:
- Boolean `result` output indicating condition evaluation
- `yes_output` and `no_output` for conditional data flow
- Optional custom values for yes/no cases

**Robust Error Handling**: Defaults to `false` on AI evaluation failures
to ensure safe operation and prevent workflow interruption.

**Performance Optimized**: Uses minimal token limits (10 tokens) for
true/false responses to reduce latency and API costs.

## Implementation Details

The block is implemented as `AIConditionBlock` in
`backend/blocks/ai_condition.py` and inherits from `AIBlockBase`
following established platform patterns. It includes:

- Proper LLM integration with credential management
- Token usage tracking and statistics
- Comprehensive test mocking for reliable CI/CD
- Full documentation with examples and use cases
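
A condensed sketch of the evaluation logic; the use of `llm_call`, the 10-token limit, `force_json_output=False`, the fallback token matching, and the default-to-false behavior are all described in this PR, while the prompt wording, signatures, and import path are assumptions:

```python
# Sketch only: the llm_call signature, import path, and prompt wording
# are assumptions, not the merged code.
from backend.blocks.llm import llm_call  # assumed import path

async def evaluate_condition(condition: str, input_value: str, credentials) -> bool:
    prompt = (
        "Decide whether the condition holds for the input.\n"
        f"Condition: {condition}\nInput: {input_value}\n"
        "Answer with exactly 'true' or 'false'."
    )
    try:
        response = await llm_call(
            credentials=credentials,
            prompt=prompt,
            max_tokens=10,            # minimal limit for a true/false answer
            force_json_output=False,  # parameter named in the test plan above
        )
        text = response.strip().lower()
        if text in ("true", "false"):  # strict parse first
            return text == "true"
        # Fallback token matching for slightly chatty responses
        return "true" in text and "false" not in text
    except Exception:
        return False  # default to False so workflows fail safe
```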

## Use Cases

This block enables more sophisticated conditional logic for:
- **Content Classification**: Automatically categorize text, emails, or
documents
- **Data Validation**: Validate inputs using natural language rules
- **Smart Routing**: Route data based on AI-evaluated conditions
- **Error Detection**: Identify and handle error messages or problematic
inputs
- **Quality Control**: Check content against flexible quality standards

## Testing

The implementation includes comprehensive testing that integrates with
the existing platform test suite. All tests pass, including:
- Unit tests with proper LLM response mocking
- Code quality checks (linting, formatting, type checking)
- Security analysis via CodeQL
- Integration testing to ensure proper block discovery and loading

The block is automatically discovered by the platform's block loading
system and is immediately available for use in agent workflows.

## PR Checklist

- [x] **Have you listed your changes in the description?**
  - Added new `AIConditionBlock` in `backend/blocks/ai_condition.py`
- Added comprehensive documentation in
`docs/content/platform/blocks/ai_condition.md`
  - Implemented natural language condition evaluation using LLMs

- [x] **Have you included a test plan?**
  - Unit tests with mocked LLM responses
  - Integration tests for block discovery and loading
  - Error handling validation
  - Token usage tracking verification

- [x] **Have you tested your changes according to the test plan?**
  - All existing tests pass
  - Linting and formatting checks pass
  - Type checking passes
  - Security analysis via CodeQL passes
- Renamed the `json_format` parameter to `force_json_output` per recent API
changes

> [!WARNING]
>
> <details>
> <summary>Firewall rules blocked me from connecting to one or more
addresses (expand for details)</summary>
>
> #### I tried to connect to the following addresses, but was blocked by
firewall rules:
>
> - `api.openai.com`
> - Triggering command:
`/home/REDACTED/.cache/pypoetry/virtualenvs/autogpt-platform-backend-Ajv4iu2i-py3.11/bin/python
/home/REDACTED/.cache/pypoetry/virtualenvs/autogpt-platform-backend-Ajv4iu2i-py3.11/bin/pytest
backend/blocks/test/test_block.py::test_available_blocks -k
AIConditionBlock -v` (dns block)
> -
`https://api.github.com/repos/Significant-Gravitas/Significant-Gravitas%2FAutoGPT/languages`
> - Triggering command:
`/home/REDACTED/work/_temp/ghcca-node/node/bin/node --enable-source-maps
/home/REDACTED/work/_temp/copilot-developer-action-main/dist/index.js`
(http block)
>
> If you need me to access, download, or install something from one of
these locations, you can either:
>
> - Configure [Actions setup
steps](https://gh.io/copilot/actions-setup-steps) to set up my
environment, which run before the firewall is enabled
> - Add the appropriate URLs or hosts to the custom allowlist in this
repository's [Copilot coding agent
settings](https://github.com/Significant-Gravitas/AutoGPT/settings/copilot/coding_agent)
(admins only)
>
> </details>

<!-- START COPILOT CODING AGENT SUFFIX -->



<details>

<summary>Original prompt</summary>

> Issue Title: AI Condition Block
> Issue Description: A version of the condition/if block that uses an AI
powered condition.
>
> It should have the same yes/no data pass throughs, as well as
outputting a result Boolean.
>
> The condition is plaintext English, provided by the user, and could be
anything.
>
> e.g
> If `[the input] is the body of an email`
> If `[the input] is a City in the USA`
> If `[the input] is an error or a refusal`
> Fixes https://linear.app/autogpt/issue/AUTOMAT-60/ai-condition-block
>
>
> Comment by User 4bcbb358-1758-43e4-abef-a0a42b63442f:
> 📋 I need a **repo** label on this issue to determine which GitHub
repository to work in.
>
> Please add a repo label to this issue with the format
`owner/repository-name` (e.g., `github/copilot`), then I'll
automatically start working on it!
>
> Comment by User :
> This thread is for an agent session with githubcopilotcodingagent.
>
>


</details>


<!-- START COPILOT CODING AGENT TIPS -->
---

Let Copilot coding agent [set things up for
you](https://github.com/Significant-Gravitas/AutoGPT/issues/new?title=+Set+up+Copilot+instructions&body=Configure%20instructions%20for%20this%20repository%20as%20documented%20in%20%5BBest%20practices%20for%20Copilot%20coding%20agent%20in%20your%20repository%5D%28https://gh.io/copilot-coding-agent-tips%29%2E%0A%0A%3COnboard%20this%20repo%3E&assignees=copilot)
— coding agent works faster and does higher quality work when set up for
your repo.

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces `AIConditionBlock` that uses an LLM to evaluate
natural-language conditions and outputs boolean result with yes/no
pass-through, plus accompanying documentation.
> 
> - **Backend**:
>   - **New block**: `backend/blocks/ai_condition.py`
> - Evaluates natural-language conditions via `llm_call` using
selectable `LlmModel` and credentials.
> - Parses strict true/false responses (with fallback token matching),
yields `result`, `yes_output`/`no_output`, and `error` on
ambiguity/failure.
> - Tracks token usage via `NodeExecutionStats`; includes test
inputs/mocks and `force_json_output=False`.
> - **Docs**:
> - Adds `docs/content/platform/blocks/ai_condition.md` with usage,
inputs/outputs, examples, and considerations.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
06e9586bd3. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-10-01 05:02:57 +00:00
Krzysztof Czerwinski
48a0faa611 feat(frontend): Restore onboarding steps (#11027)
Wallet update removed `BUILDER_OPEN` and `BUILDER_RUN_AGENT`.

### Changes 🏗️

- Restore completion codepaths for `BUILDER_OPEN` and
`BUILDER_RUN_AGENT` for analytical purposes

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tasks are completed silently
2025-10-01 04:53:51 +00:00
Nicholas Tindle
70d00b4104 fix(ci): Delete pr_reviewer section in .pr_agent.toml (#11024)
Remove pr_reviewer section from configuration

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
removes the `pr_reviewer` status section that is no longer part of the config
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] validated by global config
2025-10-01 03:01:24 +00:00
Nicholas Tindle
aad0434cb2 feat(frontend): Enhance Sentry integration and TallyPopup telemetry (#10862)
Added Sentry captureConsoleIntegration and extraErrorDataIntegration to
client, edge, and server configs. Improved replay integration with
unmasking support. Updated TallyPopup to collect and expose Sentry
replay data, user agent, and page URL for enhanced telemetry and
debugging. Improved event handling and error logging for Tally events.
Marked CustomNode title for Sentry unmasking.
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Reconfigure sentry
Pass the id with sentry replay to tally alongside prefilling email, and
passing non user identifying attributes like platform url, full url, and
is authenticated.
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test the results show up in sentry
  - [x] Test the url works in tally
2025-10-01 03:00:20 +00:00
Krzysztof Czerwinski
f33ec1f2ec feat(platform): New retention-focused tasks and wallet update (#10977)
### Changes 🏗️

- Rename wallet and update design
- Update tasks and add Hidden Tasks section
- Update onboarding backend code and related db migration
- Add progress bar for some tasks

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All tasks can be finished
  - [x] Finished tasks add correct amount of credits

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-10-01 01:29:30 +00:00
dependabot[bot]
e68b873bcf chore(frontend/deps): Bump @faker-js/faker from 9.9.0 to 10.0.0 in /autogpt_platform/frontend (#10806)
Bumps [@faker-js/faker](https://github.com/faker-js/faker) from 9.9.0 to
10.0.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/faker-js/faker/releases"><code>@​faker-js/faker</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v10.0.0</h2>
<h2>New &amp; Noteworthy</h2>
<ul>
<li>esm only (for cjs support look into migration guide, we got you
covered 😉)</li>
<li>remove v9 deprecations</li>
<li>change default error strategy to 'fail' in word module</li>
<li>remove invalid credit card issuer patterns</li>
<li>see our <a
href="https://v10.fakerjs.dev/guide/upgrading.html">migration
guide</a></li>
</ul>
<h2>What's Changed</h2>
<ul>
<li>ci: use node 24 by <a
href="https://github.com/Shinigami92"><code>@​Shinigami92</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3543">faker-js/faker#3543</a></li>
<li>infra: stop using node 18 by <a
href="https://github.com/Shinigami92"><code>@​Shinigami92</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3536">faker-js/faker#3536</a></li>
<li>infra: use import.meta.dirname by <a
href="https://github.com/Shinigami92"><code>@​Shinigami92</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3542">faker-js/faker#3542</a></li>
<li>chore(deps): update devdependencies (major) by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3512">faker-js/faker#3512</a></li>
<li>chore(deps): update eslint by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3555">faker-js/faker#3555</a></li>
<li>chore(deps): update dependency <code>@​vitest/eslint-plugin</code>
to v1.3.4 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3554">faker-js/faker#3554</a></li>
<li>chore(deps): update devdependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3556">faker-js/faker#3556</a></li>
<li>chore(deps): lock file maintenance by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3557">faker-js/faker#3557</a></li>
<li>feat!: esm only by <a
href="https://github.com/Shinigami92"><code>@​Shinigami92</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3540">faker-js/faker#3540</a></li>
<li>refactor!: remove deprecations by <a
href="https://github.com/Shinigami92"><code>@​Shinigami92</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3553">faker-js/faker#3553</a></li>
<li>docs: migration guide for v10 by <a
href="https://github.com/matthewmayer"><code>@​matthewmayer</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3559">faker-js/faker#3559</a></li>
<li>infra: more precise engines field by <a
href="https://github.com/matthewmayer"><code>@​matthewmayer</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3561">faker-js/faker#3561</a></li>
<li>refactor(word)!: change default error strategy to 'fail' by <a
href="https://github.com/xDivisionByZerox"><code>@​xDivisionByZerox</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3560">faker-js/faker#3560</a></li>
<li>chore(release): 10.0.0-beta.0 by <a
href="https://github.com/fakerjs-bot"><code>@​fakerjs-bot</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3565">faker-js/faker#3565</a></li>
<li>docs: Minor improvements to migration guide by <a
href="https://github.com/matthewmayer"><code>@​matthewmayer</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3569">faker-js/faker#3569</a></li>
<li>chore(deps): update pnpm to v10.13.1 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3570">faker-js/faker#3570</a></li>
<li>chore(deps): update devdependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3571">faker-js/faker#3571</a></li>
<li>chore(deps): update eslint by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3572">faker-js/faker#3572</a></li>
<li>chore(deps): lock file maintenance by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3562">faker-js/faker#3562</a></li>
<li>chore(deps): update dependency typescript to v5.9.2 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3576">faker-js/faker#3576</a></li>
<li>chore(deps): update pnpm to v10.14.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3579">faker-js/faker#3579</a></li>
<li>chore(deps): update
mcr.microsoft.com/devcontainers/typescript-node:22 docker digest to
2baa40a by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3575">faker-js/faker#3575</a></li>
<li>chore(deps): update devdependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3577">faker-js/faker#3577</a></li>
<li>chore(deps): update eslint (major) by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3580">faker-js/faker#3580</a></li>
<li>chore(deps): update eslint by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3578">faker-js/faker#3578</a></li>
<li>feat(locale): extended list of colors in Polish by <a
href="https://github.com/pkuczynski"><code>@​pkuczynski</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3586">faker-js/faker#3586</a></li>
<li>refactor(locale): remove invalid credit card issuer patterns by <a
href="https://github.com/xDivisionByZerox"><code>@​xDivisionByZerox</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3568">faker-js/faker#3568</a></li>
<li>docs: update migration guide with findings from playground update by
<a
href="https://github.com/xDivisionByZerox"><code>@​xDivisionByZerox</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3587">faker-js/faker#3587</a></li>
<li>chore: fix typo in test by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3591">faker-js/faker#3591</a></li>
<li>chore(deps): update all non-major dependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3596">faker-js/faker#3596</a></li>
<li>chore(deps): update amannn/action-semantic-pull-request action to v6
by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3598">faker-js/faker#3598</a></li>
<li>chore(deps): update devdependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3599">faker-js/faker#3599</a></li>
<li>chore(deps): update actions/checkout action to v5 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3597">faker-js/faker#3597</a></li>
<li>chore(deps): update dependency cypress to v15 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3603">faker-js/faker#3603</a></li>
<li>chore(deps): update dependency vitepress to v1.6.4 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3601">faker-js/faker#3601</a></li>
<li>chore(deps): pin dependency node to 24.6.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3600">faker-js/faker#3600</a></li>
<li>chore(deps): update dependency typescript-eslint to v8.40.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3602">faker-js/faker#3602</a></li>
<li>chore(deps): update dependency eslint-plugin-jsdoc to v54 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3604">faker-js/faker#3604</a></li>
<li>chore(deps): lock file maintenance by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/faker-js/faker/pull/3584">faker-js/faker#3584</a></li>
<li>chore(release): 10.0.0 by <a
href="https://github.com/fakerjs-bot"><code>@​fakerjs-bot</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3605">faker-js/faker#3605</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/faker-js/faker/blob/next/CHANGELOG.md"><code>@​faker-js/faker</code>'s
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/faker-js/faker/compare/v10.0.0-beta.0...v10.0.0">10.0.0</a>
(2025-08-21)</h2>
<h3>New Locales</h3>
<ul>
<li><strong>locale:</strong> extended list of colors in Polish (<a
href="https://redirect.github.com/faker-js/faker/issues/3586">#3586</a>)
(<a
href="9940d54f75">9940d54</a>)</li>
</ul>
<h3>Features</h3>
<ul>
<li><strong>locales:</strong> add animal vocabulary(bear, bird, cat,
rabbit, pet_name) in Korean (<a
href="https://redirect.github.com/faker-js/faker/issues/3535">#3535</a>)
(<a
href="0d2143c75d">0d2143c</a>)</li>
</ul>
<h3>Changed Locales</h3>
<ul>
<li><strong>locale:</strong> remove invalid credit card issuer patterns
(<a
href="https://redirect.github.com/faker-js/faker/issues/3568">#3568</a>)
(<a
href="9783d95a8e">9783d95</a>)</li>
</ul>
<h2><a
href="https://github.com/faker-js/faker/compare/v9.9.0...v10.0.0-beta.0">10.0.0-beta.0</a>
(2025-07-09)</h2>
<h3>⚠ BREAKING CHANGES</h3>
<ul>
<li>
<p><strong>word:</strong> change default error strategy to 'fail' (<a
href="https://redirect.github.com/faker-js/faker/issues/3560">#3560</a>)</p>
</li>
<li>
<p>remove deprecations (<a
href="https://redirect.github.com/faker-js/faker/issues/3553">#3553</a>)</p>
</li>
<li>
<p>esm only (<a
href="https://redirect.github.com/faker-js/faker/issues/3540">#3540</a>)</p>
</li>
<li>
<p>remove deprecations (<a
href="https://redirect.github.com/faker-js/faker/issues/3553">#3553</a>)
(<a
href="623d2741a4">623d274</a>)</p>
</li>
<li>
<p><strong>word:</strong> change default error strategy to 'fail' (<a
href="https://redirect.github.com/faker-js/faker/issues/3560">#3560</a>)
(<a
href="93416f71cf">93416f7</a>)</p>
</li>
</ul>
<h3>Features</h3>
<ul>
<li>esm only (<a
href="https://redirect.github.com/faker-js/faker/issues/3540">#3540</a>)
(<a
href="160960b797">160960b</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="51943aecb9"><code>51943ae</code></a>
chore(release): 10.0.0 (<a
href="https://redirect.github.com/faker-js/faker/issues/3605">#3605</a>)</li>
<li><a
href="96d7517b9b"><code>96d7517</code></a>
chore(deps): lock file maintenance (<a
href="https://redirect.github.com/faker-js/faker/issues/3584">#3584</a>)</li>
<li><a
href="2eb6fa0a7a"><code>2eb6fa0</code></a>
chore(deps): update dependency eslint-plugin-jsdoc to v54 (<a
href="https://redirect.github.com/faker-js/faker/issues/3604">#3604</a>)</li>
<li><a
href="1fcfe4830d"><code>1fcfe48</code></a>
chore(deps): pin dependency node to 24.6.0 (<a
href="https://redirect.github.com/faker-js/faker/issues/3600">#3600</a>)</li>
<li><a
href="2bd4807fa2"><code>2bd4807</code></a>
chore(deps): update dependency typescript-eslint to v8.40.0 (<a
href="https://redirect.github.com/faker-js/faker/issues/3602">#3602</a>)</li>
<li><a
href="09a88eb100"><code>09a88eb</code></a>
chore(deps): update dependency vitepress to v1.6.4 (<a
href="https://redirect.github.com/faker-js/faker/issues/3601">#3601</a>)</li>
<li><a
href="5418574bf7"><code>5418574</code></a>
chore(deps): update dependency cypress to v15 (<a
href="https://redirect.github.com/faker-js/faker/issues/3603">#3603</a>)</li>
<li><a
href="9e4f463ecf"><code>9e4f463</code></a>
chore(deps): update actions/checkout action to v5 (<a
href="https://redirect.github.com/faker-js/faker/issues/3597">#3597</a>)</li>
<li><a
href="287ecdaa39"><code>287ecda</code></a>
chore(deps): update devdependencies (<a
href="https://redirect.github.com/faker-js/faker/issues/3599">#3599</a>)</li>
<li><a
href="2b1495956f"><code>2b14959</code></a>
chore(deps): update amannn/action-semantic-pull-request action to v6 (<a
href="https://redirect.github.com/faker-js/faker/issues/3598">#3598</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/faker-js/faker/compare/v9.9.0...v10.0.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=@faker-js/faker&package-manager=npm_and_yarn&previous-version=9.9.0&new-version=10.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrades `@faker-js/faker` to v10 and updates test utilities to
dynamically import Faker and make password generation async.
> 
> - **Frontend dependencies**:
>   - Bump `@faker-js/faker` from `9.9.0` to `10.0.0`.
> - **Tests**:
> - Replace static imports with dynamic `import("@faker-js/faker")` in
`src/tests/utils/{auth.ts,signup.ts}`.
> - Change `generateTestPassword` to `async` returning `Promise<string>`
to use ESM Faker.
> - Adjust test user creation to use dynamically generated
`email`/`password` via Faker.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
334f4a264d. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-09-30 21:12:18 +00:00
Nicholas Tindle
4530e97e59 feat(platform/blocks): Add table input UI and builder block (#10829)
<!-- Clearly explain the need for these changes: -->


https://github.com/user-attachments/assets/909a6ecf-5731-424c-8dee-fe25db907365


### Need 💡

This PR introduces a new "Table Input" block and corresponding UI
component, allowing users to easily input structured, tabular data
directly within the agent builder. This addresses the need for a
user-friendly way to define custom column headers and populate rows of
data, which is then output as a list of dictionaries.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

* **New `TableInputBlock` (backend):** A new block
(`backend/backend/blocks/table_input.py`) has been added. It defines an
`Input` schema with `headers` (a list of strings for column names) and
`value` (a list of dictionaries representing table rows). The block
outputs the `value` data in the specified dictionary format (see the
schema sketch after this list).
* **New `NodeTableInput` Component (frontend):** A new React component
(`frontend/src/components/node-table-input.tsx`) was created to render
an editable table UI, supporting dynamic row addition/removal and cell
editing.
*   **Frontend Integration:**
* `NodeGenericInputField` and `NodeObjectInputTree` were updated to pass
`parentContext` down the component hierarchy.
* `NodeArrayInput` was modified to conditionally render the new
`NodeTableInput` component. It now detects when an array field
(`selfKey` is "value") is part of a parent context that defines
`headers`, indicating it should be rendered as a table.
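
A schema-level sketch of the backend block; the field names follow the description above (note that the example output further down serializes the headers field as `column_headers`), and the import paths, base classes, and output field are assumptions:

```python
# Sketch only: import paths, base classes, and field details are assumptions
# drawn from the description above, not the merged file.
from typing import Any

from backend.data.block import Block, BlockSchema  # assumed import path
from backend.data.model import SchemaField         # assumed import path

class TableInputBlock(Block):
    class Input(BlockSchema):
        headers: list[str] = SchemaField(
            description="Column names, e.g. ['Name', 'Email']",
        )
        value: list[dict[str, Any]] = SchemaField(
            description="Table rows; each row maps a header to a cell value",
        )

    class Output(BlockSchema):
        result: list[dict[str, Any]] = SchemaField(
            description="The row data passed through as a list of dicts",
        )
```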

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Add a "Table Input" block to the builder.
  - [x] Define custom headers (e.g., "Name", "Email").
  - [x] Add several rows of data using the table UI.
- [x] Verify that adding, editing, and removing rows works as expected.
- [x] Connect the output of the "Table Input" block to another block
(e.g., a "Print" block) and confirm the output format is a list of
dictionaries with the defined headers as keys.
  - [x] Test with an empty table (no rows).
  - [x] Test with no headers defined (should default).
- [x] Test that an empty row returns empty data (is this a good
behavior?)


Example output of the block:
```
{
  "advanced": false,
  "column_headers": [
    "Col 1",
    "Col 2",
    "Col 3"
  ],
  "name": "table_input",
  "value": [
    {
      "Col 1": "row 1",
      "Col 2": "row 1",
      "Col 3": "row 1"
    },
    {
      "Col 1": "val 1",
      "Col 2": "val 2",
      "Col 3": "val 3"
    }
  ]
}
```


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2025-09-30 19:41:03 +00:00
Bently
477c261488 feat(blocks): Add claude-sonnet-4.5 (#11023)
## Summary
Adds the claude-sonnet-4.5 model to the platform and sets its price to 9

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] test the new claude-sonnet-4.5 model on the platform to make sure
it works
2025-09-30 19:30:58 +00:00
dependabot[bot]
8ac2228e1e chore(frontend/deps): Upgrade @sentry/nextjs from 9.42.0 to 10.8.0 (#10802)
Bumps [@sentry/nextjs](https://github.com/getsentry/sentry-javascript)
from 9.42.0 to 10.8.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/releases"><code>@​sentry/nextjs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>10.8.0</h2>
<h3>Important Changes</h3>
<ul>
<li>
<p><strong>feat(sveltekit): Add Compatibility for builtin SvelteKit
Tracing (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17423">#17423</a>)</strong></p>
<p>This release makes the <code>@sentry/sveltekit</code> SDK compatible
with SvelteKit's native <a
href="https://svelte.dev/docs/kit/observability">observability
support</a> introduced in SvelteKit version <code>2.31.0</code>.
If you enable both, instrumentation and tracing, the SDK will now
initialize early enough to set up additional instrumentation like
database queries and it will pick up spans emitted from SvelteKit.</p>
<p>We will follow up with docs how to set up the SDK soon.
For now, If you're on SvelteKit version <code>2.31.0</code> or newer,
you can easily opt into the new feature:</p>
<ol>
<li>
<p>Enable <a
href="https://svelte.dev/docs/kit/observability">experimental tracing
and instrumentation support</a> in <code>svelte.config.js</code>:</p>
</li>
<li>
<p>Move your <code>Sentry.init()</code> call from
<code>src/hooks.server.(js|ts)</code> to the new
<code>instrumentation.server.(js|ts)</code> file:</p>
<pre lang="ts"><code>// instrumentation.server.ts
import * as Sentry from '@sentry/sveltekit';
<p>Sentry.init({<br />
dsn: '...',<br />
// rest of your config<br />
});<br />
</code></pre></p>
<p>The rest of your Sentry config in <code>hooks.server.ts</code>
(<code>sentryHandle</code> and <code>handleErrorWithSentry</code>)
should stay the same.</p>
</li>
</ol>
<p>If you prefer to stay on the hooks-file based config for now, the SDK
will continue to work as previously.</p>
<p>Thanks to the Svelte team and <a
href="https://github.com/elliott-with-the-longest-name-on-github"><code>@​elliott-with-the-longest-name-on-github</code></a>
for implementing observability support and for reviewing our PR!</p>
</li>
</ul>
<h3>Other Changes</h3>
<ul>
<li>fix(react): Avoid multiple name updates on navigation spans (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17438">#17438</a>)</li>
</ul>
<!-- raw HTML omitted -->
<ul>
<li>test(profiling): Add tests for current state of profiling (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17470">#17470</a>)</li>
</ul>
<!-- raw HTML omitted -->
<h2>Bundle size 📦</h2>
<table>
<thead>
<tr>
<th>Path</th>
<th>Size</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>@​sentry/browser</code></td>
<td>23.59 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> - with treeshaking flags</td>
<td>22.2 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing)</td>
<td>38.94 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay)</td>
<td>76.4 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay) - with
treeshaking flags</td>
<td>66.43 KB</td>
</tr>
</tbody>
</table>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/blob/develop/CHANGELOG.md"><code>@​sentry/nextjs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>10.8.0</h2>
<h3>Important Changes</h3>
<ul>
<li>
<p><strong>feat(sveltekit): Add Compatibility for builtin SvelteKit
Tracing (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17423">#17423</a>)</strong></p>
<p>This release makes the <code>@sentry/sveltekit</code> SDK compatible
with SvelteKit's native <a
href="https://svelte.dev/docs/kit/observability">observability
support</a> introduced in SvelteKit version <code>2.31.0</code>.
If you enable both, instrumentation and tracing, the SDK will now
initialize early enough to set up additional instrumentation like
database queries and it will pick up spans emitted from SvelteKit.</p>
<p>We will follow up with docs how to set up the SDK soon.
For now, If you're on SvelteKit version <code>2.31.0</code> or newer,
you can easily opt into the new feature:</p>
<ol>
<li>
<p>Enable <a
href="https://svelte.dev/docs/kit/observability">experimental tracing
and instrumentation support</a> in <code>svelte.config.js</code>:</p>
</li>
<li>
<p>Move your <code>Sentry.init()</code> call from
<code>src/hooks.server.(js|ts)</code> to the new
<code>instrumentation.server.(js|ts)</code> file:</p>
<pre lang="ts"><code>// instrumentation.server.ts
import * as Sentry from '@sentry/sveltekit';
<p>Sentry.init({<br />
dsn: '...',<br />
// rest of your config<br />
});<br />
</code></pre></p>
<p>The rest of your Sentry config in <code>hooks.server.ts</code>
(<code>sentryHandle</code> and <code>handleErrorWithSentry</code>)
should stay the same.</p>
</li>
</ol>
<p>If you prefer to stay on the hooks-file based config for now, the SDK
will continue to work as previously.</p>
<p>Thanks to the Svelte team and <a
href="https://github.com/elliott-with-the-longest-name-on-github"><code>@​elliott-with-the-longest-name-on-github</code></a>
for implementing observability support and for reviewing our PR!</p>
</li>
</ul>
<h3>Other Changes</h3>
<ul>
<li>fix(react): Avoid multiple name updates on navigation spans (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17438">#17438</a>)</li>
</ul>
<!-- raw HTML omitted -->
<ul>
<li>test(profiling): Add tests for current state of profiling (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17470">#17470</a>)</li>
</ul>
<!-- raw HTML omitted -->
<h2>10.7.0</h2>
<h3>Important Changes</h3>
<ul>
<li><strong>feat(cloudflare): Add
<code>instrumentPrototypeMethods</code> option to instrument RPC methods
for DurableObjects (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/17424">#17424</a>)</strong></li>
</ul>
<p>By default, <code>Sentry.instrumentDurableObjectWithSentry</code>
will not wrap any RPC methods on the prototype. To enable wrapping for
RPC methods, set <code>instrumentPrototypeMethods</code> to
<code>true</code> or, if performance is a concern, a list of only the
methods you want to instrument:</p>
<pre lang="js"><code>&lt;/tr&gt;&lt;/table&gt; 
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="bd8458e659"><code>bd8458e</code></a>
release: 10.8.0</li>
<li><a
href="dbdddc896f"><code>dbdddc8</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17481">#17481</a>
from getsentry/prepare-release/10.8.0</li>
<li><a
href="f5d4bd616e"><code>f5d4bd6</code></a>
meta(changelog): Update changelog for 10.8.0</li>
<li><a
href="dfdc3b0ab9"><code>dfdc3b0</code></a>
test(profiling): Add tests for current state of profiling (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17470">#17470</a>)</li>
<li><a
href="895b38590c"><code>895b385</code></a>
fix(react): Avoid multiple name updates on navigation spans (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17438">#17438</a>)</li>
<li><a
href="e6e20d847c"><code>e6e20d8</code></a>
feat(sveltekit): Add Compatibility for builtin SvelteKit Tracing (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17423">#17423</a>)</li>
<li><a
href="7e24422327"><code>7e24422</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17472">#17472</a>
from getsentry/master</li>
<li><a
href="27e97b0cec"><code>27e97b0</code></a>
Merge branch 'release/10.7.0'</li>
<li><a
href="b7e4816824"><code>b7e4816</code></a>
release: 10.7.0</li>
<li><a
href="0bc8417d50"><code>0bc8417</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/17471">#17471</a>
from getsentry/prepare-release/10.7.0</li>
<li>Additional commits viewable in <a
href="https://github.com/getsentry/sentry-javascript/compare/9.42.0...10.8.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=@sentry/nextjs&package-manager=npm_and_yarn&previous-version=9.42.0&new-version=10.8.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrades `@sentry/nextjs` to 10.15.0, updating numerous related
`@sentry/*`, OpenTelemetry (v2), and build/dev dependencies via the
lockfile.
> 
> - **Dependencies (frontend)**:
>   - Upgrade `@sentry/nextjs` from `9.42.0` to `10.15.0`.
>   - Cascading updates in `pnpm-lock.yaml`:
> - `@sentry/*` packages (`browser`, `core`, `node`, `opentelemetry`,
`react`, `vercel-edge`, `webpack-plugin`, `bundler-plugin-core`, `cli`,
etc.).
> - OpenTelemetry stack to newer major versions
(`@opentelemetry/core`/`resources`/`sdk-trace-base` 2.x; multiple
`instrumentation-*` packages).
> - Build tooling: `rollup` 4.52.x and platform binaries;
`@rollup/plugin-*`.
> - Misc dev typings and utilities (e.g., `@types/mysql`, `@types/pg`,
`debug`, `@prisma/instrumentation`).
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
5b4b37e551. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-09-30 16:42:05 +00:00
Zamil Majdy
91dd9364bb fix(backend): implement retry mechanism for SmartDecisionMaker tool call validation (#11015)
<!-- Clearly explain the need for these changes: -->

This PR fixes a critical production issue where SmartDecisionMakerBlock
was silently accepting tool calls with typo'd parameter names (e.g.,
'maximum_keyword_difficulty' instead of 'max_keyword_difficulty'),
causing downstream blocks to receive null values and executions to fail.

The solution implements comprehensive parameter validation with
automatic retry when the LLM provides malformed tool calls, giving the
LLM specific feedback to correct the errors.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

**Core Validation & Retry Logic
(`backend/blocks/smart_decision_maker.py`)**
- Add tool call parameter validation against function schema
- Implement retry mechanism using existing `create_retry_decorator` from
`backend.util.retry`
- Validate provided parameters against expected schema properties and
required fields
- Generate specific error messages for unknown parameters (typos) and
missing required parameters
- Add error feedback to conversation history for LLM learning on retry
attempts
- Use `input_data.retry` field to configure number of retry attempts
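
A minimal sketch of the validate-then-retry loop described above (the function names, `generate_tool_call`, and the feedback message format are illustrative assumptions, not the actual SmartDecisionMaker implementation):

```python
from typing import Any


def validate_tool_call_args(tool_def: dict[str, Any], provided: dict[str, Any]) -> None:
    """Raise ValueError when a tool call's arguments don't match the schema."""
    params = tool_def["function"]["parameters"]
    expected = set(params.get("properties", {}))
    required = set(params.get("required", []))

    unknown = set(provided) - expected   # typo'd parameter names
    missing = required - set(provided)   # omitted required parameters
    if unknown or missing:
        raise ValueError(
            f"Unknown args {sorted(unknown)}; missing required args {sorted(missing)}"
        )


def call_with_retry(llm, conversation: list[dict], tool_def: dict, retries: int = 3):
    for _ in range(retries):
        tool_call = llm.generate_tool_call(conversation)  # hypothetical LLM client
        try:
            validate_tool_call_args(tool_def, tool_call["arguments"])
            return tool_call
        except ValueError as e:
            # Feed the specific validation error back so the LLM can self-correct
            conversation.append({"role": "user", "content": f"Tool call rejected: {e}"})
    raise ValueError(f"Tool call still invalid after {retries} attempts")
```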

**Comprehensive Test Coverage
(`backend/blocks/test/test_smart_decision_maker.py`)**
- Add `test_smart_decision_maker_parameter_validation` with 4
comprehensive test scenarios:
1. Tool call with typo'd parameter (should retry and eventually fail
with clear error)
2. Tool call missing required parameter (should fail immediately with
clear error)
  3. Valid tool call with optional parameter missing (should succeed)
  4. Valid tool call with all parameters provided (should succeed)
- Verify retry mechanism works correctly and respects retry count
- Mock LLM responses for controlled testing of validation logic

**Load Tests Documentation Update (`load-tests/README.md`)**
- Update documentation to reflect current orchestrator-based
architecture
- Remove references to deprecated `run-tests.js` and
`comprehensive-orchestrator.js`
- Streamline documentation to focus on working
`orchestrator/orchestrator.js`
- Update NPM scripts and command examples for current workflow
- Clean up outdated file references to match actual infrastructure

**Production Impact**
- **Prevents silent failures**: Tool call parameter typos now cause
retries instead of null downstream values
- **Maintains compatibility**: No breaking changes to existing
SmartDecisionMaker functionality
- **Improves reliability**: LLM receives feedback to correct parameter
errors
- **Configurable retries**: Uses existing `retry` field for user control
- **Accurate documentation**: Load-tests docs now match actual working
infrastructure

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Run existing SmartDecisionMaker tests to ensure no regressions:
`poetry run pytest backend/blocks/test/test_smart_decision_maker.py
-xvs` ✅ All 4 tests passed
- [x] Run new parameter validation test specifically: `poetry run pytest
backend/blocks/test/test_smart_decision_maker.py::test_smart_decision_maker_parameter_validation
-xvs` ✅ Passed with retry behavior confirmed
- [x] Verify retry mechanism works by checking log output for retry
attempts ✅ Confirmed in test logs
- [x] Test tool call validation with different scenarios (typos, missing
params, valid calls) ✅ All scenarios covered and working
- [x] Run code formatting and linting: `poetry run format` ✅ All
formatters passed
- [x] Verify no breaking changes to existing SmartDecisionMaker
functionality ✅ All existing tests pass
- [x] Verify load-tests documentation accuracy ✅ README now matches
actual orchestrator infrastructure

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

**Note**: No configuration changes were needed as this uses existing
retry infrastructure and block schema validation.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-30 16:18:05 +00:00
Zamil Majdy
f314fbf14f fix(backend): resolve two critical long-running agent execution failures (#11011)
## Summary

Fix two production issues causing agent execution failures that occurred
this morning:

1. **AsyncRedisLock Release Error** (ExecutionID:
08b2c251-ee27-45de-b88d-1792823ca3ee)
   - Error: "Cannot release a lock that's no longer owned" 
- Root cause: Race condition where lock expires during long database
operations
   - Location: backend/executor/manager.py synchronized context manager

2. **Tool Call Parameter Validation** (ExecutionID:
766fd9a0-5f22-4a77-96e8-14c9d02f3292)
- Issue: LLM used typo'd parameter 'maximum_keyword_difficulty' instead
of 'max_keyword_difficulty'
- SmartDecisionMakerBlock silently accepted typo, setting correct
parameter to null
- Result: Downstream blocks received null values causing execution
failures

## Changes Made

### AsyncRedisLock Error Handling
- Add try-catch blocks around AsyncRedisLock.release() calls in
ExecutionManager and OAuth refresh
- Prevent crashes when locks expire between ownership check and release
- Log warnings instead of crashing execution

### Tool Call Parameter Validation  
- **Reject unknown parameters**: Raise ValueError for typo'd parameter
names with detailed error messages
- **Allow optional parameters**: Only validate missing REQUIRED
parameters
- **Safe parameter access**: Use .get() to handle optional parameters
with defaults
- **Clean code**: Extract parameters object once to eliminate
duplication

## Technical Implementation

**Lock Release Protection:**
```python
if await lock.locked() and await lock.owned():
    try:
        await lock.release()
    except Exception as e:
        logger.warning(f"Failed to release lock for key {key}: {e}")
```

**Parameter Validation Logic:**
```python
# Get parameters schema from tool definition
if tool_def and "function" in tool_def and "parameters" in tool_def["function"]:
    parameters = tool_def["function"]["parameters"]
    expected_args = set(parameters.get("properties", {}))  # valid parameter names
    required_params = set(parameters.get("required", []))

    # Detect parameter typos and missing required params
    provided_args = set(tool_call_args)  # argument names the LLM actually sent
    unexpected_args = provided_args - expected_args
    missing_required_args = required_params - provided_args

    if unexpected_args or missing_required_args:
        error_msg = (
            f"Unknown parameters: {sorted(unexpected_args)}; "
            f"missing required parameters: {sorted(missing_required_args)}"
        )
        raise ValueError(error_msg)  # Detailed error explaining the problem
```

## Testing

- [x] All existing tests pass
- [x] Lock error handling prevents execution crashes  
- [x] Tool validation catches typos while allowing optional parameters
- [x] Maintains backward compatibility with existing workflows

## Impact

- ✅ No more "Cannot release a lock" crashes during long database operations
- ✅ Tool calls with typo'd parameters are rejected with clear error messages
- ✅ Optional parameters work correctly with default values
- ✅ Production stability improved with graceful error handling

## Files Modified

- `backend/executor/manager.py` - AsyncRedisLock error handling in
synchronized context
- `backend/integrations/creds_manager.py` - OAuth refresh lock error
handling
- `backend/blocks/smart_decision_maker.py` - Tool call parameter
validation with typo detection

Fixes two critical production failures that were causing 2/5 agent runs
to fail this morning.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-29 15:34:20 +00:00
Zamil Majdy
a97ff641c3 feat(backend): optimize FastAPI endpoints performance and alert system (#11000)
## Summary

Comprehensive performance optimization fixing event loop binding issues
and addressing all PR feedback.

### Original Performance Issues Fixed

**Event Loop Binding Problems:**
- JWT authentication dependencies were synchronous, causing thread pool
bottlenecks under high concurrency
- FastAPI's default thread pool (40 threads) was insufficient for
high-load scenarios
- Backend services lacked proper event loop configuration

**Security & Performance Improvements:**
- Security middleware converted from BaseHTTPMiddleware to pure ASGI for
better performance
- Added blocks endpoint to cacheable paths for improved response times
- Cross-platform uvloop detection with Windows compatibility

### Key Changes Made

#### 1. JWT Authentication Async Conversion
- **Files**: `autogpt_libs/auth/dependencies.py`,
`autogpt_libs/auth/jwt_utils.py`
- **Change**: Convert all JWT functions to async (`requires_user`,
`requires_admin_user`, `get_user_id`, `get_jwt_payload`)
- **Impact**: Eliminates thread pool blocking, improves concurrency
handling
- **Tests**: All 25+ authentication tests updated to async patterns

#### 2. FastAPI Thread Pool Optimization  
- **File**: `backend/server/rest_api.py:82-93`
- **Change**: Configure thread pool size via
`config.fastapi_thread_pool_size`
- **Default**: Increased from 40 to a higher configurable limit for sync operations
- **Impact**: Better handling of remaining sync dependencies
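
Assuming the PR uses the standard Starlette/AnyIO mechanism (the exact wiring isn't shown here), raising the sync thread pool size looks roughly like this:

```python
from contextlib import asynccontextmanager

import anyio.to_thread
from fastapi import FastAPI


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Starlette runs sync endpoints/dependencies through AnyIO's default
    # thread limiter (40 tokens by default); raise it at startup.
    limiter = anyio.to_thread.current_default_thread_limiter()
    limiter.total_tokens = 200  # e.g. config.fastapi_thread_pool_size
    yield


app = FastAPI(lifespan=lifespan)
```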

#### 3. Performance-Optimized Security Middleware
- **File**: `backend/server/middleware/security.py`
- **Change**: Pure ASGI implementation replacing BaseHTTPMiddleware
- **Headers**: HTTP spec compliant capitalization
(X-Content-Type-Options, X-Frame-Options, etc.)
- **Caching**: Added `/api/blocks` and `/api/v1/blocks` to cacheable
paths
- **Impact**: Reduced middleware overhead, improved header compliance
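
For illustration, a bare-bones ASGI middleware of this shape (a sketch only; the header values and `CACHEABLE_PATHS` contents are simplified from the PR description):

```python
CACHEABLE_PATHS = {"/api/blocks", "/api/v1/blocks"}  # plus static assets, health checks


class SecurityHeadersMiddleware:
    """Pure ASGI middleware: no per-request Request/Response wrapping."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            return await self.app(scope, receive, send)

        async def send_with_headers(message):
            if message["type"] == "http.response.start":
                headers = message.setdefault("headers", [])
                headers.append((b"X-Content-Type-Options", b"nosniff"))
                headers.append((b"X-Frame-Options", b"DENY"))
                if scope["path"] not in CACHEABLE_PATHS:
                    headers.append((b"Cache-Control", b"no-store"))
            await send(message)

        await self.app(scope, receive, send_with_headers)
```

Skipping the `Request`/`Response` object construction that `BaseHTTPMiddleware` performs is where the overhead reduction comes from.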

#### 4. Cross-Platform Event Loop Configuration
- **File**: `backend/server/rest_api.py:311-312`
- **Change**: Platform-aware uvloop detection: `'uvloop' if
platform.system() != 'Windows' else 'auto'`
- **Impact**: Windows compatibility while maintaining Unix performance
benefits
- **Verified**: 'auto' is valid uvicorn default parameter

#### 5. Enhanced Caching Infrastructure
- **File**: `autogpt_libs/utils/cache.py:118-132`
- **Change**: Per-event-loop asyncio.Lock instances prevent cross-loop
deadlocks
- **Impact**: Thread-safe caching across multiple event loops
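
The per-event-loop lock idea, sketched (a simplified version; the real cache decorator in `autogpt_libs/utils/cache.py` has more machinery):

```python
import asyncio

# An asyncio.Lock created on loop A deadlocks if awaited from loop B,
# so keep one lock per running event loop instead of a single global one.
_locks: dict[asyncio.AbstractEventLoop, asyncio.Lock] = {}


def _get_loop_lock() -> asyncio.Lock:
    loop = asyncio.get_running_loop()
    if loop not in _locks:
        _locks[loop] = asyncio.Lock()  # real code may evict dead loops via weakrefs
    return _locks[loop]


async def cached_fetch(key, fetch, cache: dict):
    async with _get_loop_lock():  # safe: the lock belongs to the current loop
        if key not in cache:
            cache[key] = await fetch(key)
        return cache[key]
```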

#### 6. Database Query Limits & Performance
- **Files**: Multiple data layer files
- **Change**: Added configurable limits to prevent unbounded queries
- **Constants**: `MAX_GRAPH_VERSIONS_FETCH=50`,
`MAX_USER_API_KEYS_FETCH=500`, etc.
- **Impact**: Consistent performance regardless of data volume
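
In Prisma Client Python terms the pattern is a bounded `take` (model and field names below are illustrative, not the exact queries from the PR):

```python
from prisma.models import AgentGraph  # illustrative model

MAX_GRAPH_VERSIONS_FETCH = 50  # constant named in this PR


async def list_graph_versions(graph_id: str):
    # `take` bounds the result set so query cost no longer grows with data volume
    return await AgentGraph.prisma().find_many(
        where={"id": graph_id},
        order={"version": "desc"},
        take=MAX_GRAPH_VERSIONS_FETCH,
    )
```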

#### 7. OpenAPI Documentation Improvements
- **File**: `backend/server/routers/v1.py:68-85`
- **Change**: Added proper response model and schema for blocks endpoint
- **Impact**: Better API documentation and type safety

#### 8. Error Handling & Retry Logic Fixes
- **File**: `backend/util/retry.py:63`
- **Change**: Accurate retry threshold comments referencing
EXCESSIVE_RETRY_THRESHOLD
- **Impact**: Clear documentation for debugging retry scenarios

### ntindle Feedback Addressed

- ✅ **HTTP Header Capitalization**: All headers now use proper HTTP spec capitalization
- ✅ **Windows uvloop Compatibility**: Clean platform detection with inline conditional
- ✅ **OpenAPI Response Model**: Blocks endpoint properly documented in schema
- ✅ **Retry Comment Accuracy**: References actual threshold constants instead of hardcoded numbers
- ✅ **Code Cleanliness**: Inline conditionals preferred over verbose if statements

### Performance Testing Results

**Before Optimization:**
- High latency under concurrent load
- Thread pool exhaustion at ~40 concurrent requests
- Event loop binding issues causing timeouts

**After Optimization:**
- Improved concurrency handling with async JWT pipeline
- Configurable thread pool scaling
- Cross-platform event loop optimization
- Reduced middleware overhead

### Backward Compatibility

- ✅ **All existing functionality preserved**
- ✅ **No breaking API changes**
- ✅ **Enhanced test coverage with async patterns**
- ✅ **Windows and Unix compatibility maintained**

### Files Modified

**Core Authentication & Performance:**
- `autogpt_libs/auth/dependencies.py` - Async JWT dependencies
- `autogpt_libs/auth/jwt_utils.py` - Async JWT utilities  
- `backend/server/rest_api.py` - Thread pool config + uvloop detection
- `backend/server/middleware/security.py` - ASGI security middleware

**Database & Limits:**
- `backend/data/includes.py` - Performance constants and configurable
includes
- `backend/data/api_key.py`, `backend/data/credit.py`,
`backend/data/graph.py`, `backend/data/integrations.py` - Query limits

**Caching & Infrastructure:**
- `autogpt_libs/utils/cache.py` - Per-event-loop lock safety
- `backend/server/routers/v1.py` - OpenAPI improvements
- `backend/util/retry.py` - Comment accuracy

**Testing:**
- `autogpt_libs/auth/dependencies_test.py` - 25+ async test conversions
- `autogpt_libs/auth/jwt_utils_test.py` - Async JWT test patterns

Ready for review and production deployment. 🚀

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-29 05:32:48 +00:00
Zamil Majdy
114f604d7b Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-09-27 18:43:26 +07:00
Zamil Majdy
3abea1ed96 fix(backend): prevent duplicate graph executions across multiple executor pods (#11008)
## Problem
Multiple executor pods could simultaneously execute the same graph,
leading to:
- Duplicate executions and wasted resources
- Inconsistent execution states and results
- Race conditions in graph execution management
- Inefficient resource utilization in cluster environments

## Solution
Implement distributed locking using ClusterLock to ensure only one
executor pod can process a specific graph execution at a time.

## Key Changes

### Core Fix: Distributed Execution Coordination
- **ClusterLock implementation**: Redis-based distributed locking
prevents duplicate executions
- **Atomic lock acquisition**: Only one executor can hold the lock for a
specific graph execution
- **Automatic lock expiry**: Prevents deadlocks if executor pods crash
or become unresponsive
- **Graceful degradation**: System continues operating even if Redis
becomes temporarily unavailable

### Technical Implementation
- Move ClusterLock to `backend/executor/` alongside ExecutionManager
(its primary consumer)
- Comprehensive integration tests (27 test scenarios) ensure reliability
under all conditions
- Redis client compatibility for different deployment configurations
- Rate-limited lock refresh to minimize Redis load

### Reliability Improvements
- **Context manager support**: Automatic lock cleanup prevents resource
leaks
- **Ownership verification**: Locks can only be refreshed/released by
the owner
- **Concurrency testing**: Thread-safe operations verified under high
contention
- **Error handling**: Robust failure scenarios including network
partitions
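
The acquire/refresh/release cycle, sketched with redis-py (illustrative only; the real ClusterLock in `backend/executor/` would use an atomic check-and-release, e.g. a Lua script, where this sketch does a non-atomic get-then-delete):

```python
import uuid

import redis


class ClusterLockSketch:
    def __init__(self, client: redis.Redis, key: str, ttl: int = 60):
        self.client, self.key, self.ttl = client, key, ttl
        self.token = str(uuid.uuid4())  # unique token proves ownership
        self.acquired = False

    def __enter__(self):
        # SET NX EX: atomic acquire with automatic expiry, so a crashed
        # pod can never hold the lock forever.
        self.acquired = bool(self.client.set(self.key, self.token, nx=True, ex=self.ttl))
        return self

    def refresh(self) -> bool:
        # Only the owner may extend the TTL.
        if self.client.get(self.key) == self.token.encode():
            return bool(self.client.expire(self.key, self.ttl))
        return False

    def __exit__(self, *exc):
        # Release only if still owned; the lock may have expired and been
        # re-acquired by another pod in the meantime.
        if self.acquired and self.client.get(self.key) == self.token.encode():
            self.client.delete(self.key)
```

An executor would wrap each graph execution in `with ClusterLockSketch(client, f"exec:{graph_exec_id}")` and skip the run when `acquired` is False.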

## Test Coverage
- ✅ Concurrent executor coordination (prevents duplicate executions)
- ✅ Lock expiry and refresh mechanisms (prevents deadlocks)
- ✅ Redis connection failures (graceful degradation)
- ✅ Thread safety under high load (production scenarios)
- ✅ Long-running executions with periodic refresh

## Impact
- **No more duplicate executions**: Eliminates wasted compute resources
and inconsistent results
- **Improved reliability**: Robust distributed coordination across
executor pods
- **Better resource utilization**: Only one pod processes each execution
- **Scalable architecture**: Supports multiple executor pods without
conflicts

## Validation
- ✅ All integration tests pass
- ✅ Existing ExecutionManager functionality preserved
- ✅ No breaking changes to APIs
- ✅ Production-ready distributed locking

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-27 11:42:40 +00:00
Abhimanyu Yadav
da6e1ad26d refactor(frontend): enhance builder UI for better performance (#10922)
### Changes 🏗️

This PR introduces a new high-performance builder interface for the
AutoGPT platform, implementing a React Flow-based visual editor with
optimized state management and rendering.

#### Key Changes:

1. **New Flow Editor Implementation**
   - Built on React Flow for efficient graph rendering and interaction
- Implements a node-based visual workflow builder with custom nodes and
edges
- Dynamic form generation using React JSON Schema Form (RJSF) for block
inputs
   - Intelligent connection handling with visual feedback

2. **State Management Optimization**  
   - Added Zustand for lightweight, performant state management
   - Separated node and edge stores for better data isolation
   - Reduced unnecessary re-renders through granular state updates

3. **Dual Builder View (Temporary)**
   - Added toggle between old and new builder implementations
   - Allows A/B testing and gradual migration
   - Feature flagged for controlled rollout

4. **Enhanced UI Components**
- Custom form widgets for various input types (date, time, file, etc.)
   - Array and object editors with improved UX
   - Connection handles with visual state indicators
   - Advanced mode toggle for complex configurations

5. **Architecture Improvements**
   - Modular component structure for better code organization
   - Comprehensive documentation for the new system
   - Type-safe implementation with TypeScript

#### Dependencies Added:
- `zustand` (v5.0.2) - State management
- `@rjsf/core` (v5.22.8) - JSON Schema Form core
- `@rjsf/utils` (v5.22.8) - RJSF utilities  
- `@rjsf/validator-ajv8` (v5.22.8) - Schema validation

### Performance Improvements 🚀

- **Reduced Re-renders**: Zustand's shallow comparison and selective
subscriptions minimize unnecessary component updates
- **Optimized Graph Rendering**: React Flow provides efficient
canvas-based rendering for large workflows
- **Lazy Loading**: Components are loaded on-demand reducing initial
bundle size
- **Memoized Computations**: Heavy calculations are cached to avoid
redundant processing

### Test Plan 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  
#### Test Checklist:
- [x] Create a new agent from scratch with at least 5 blocks
- [x] Connect blocks and verify connections render correctly
- [x] Switch between old and new builder views 
- [x] Test all form input types (text, number, boolean, array, object)
- [x] Verify data persistence when switching views
- [x] Test advanced mode toggle functionality
- [x] Performance test with 50+ blocks to verify smooth interaction

### Migration Strategy

The implementation includes a temporary toggle to switch between the old
and new builder. This allows for:
- Gradual user migration
- A/B testing to measure performance improvements
- Fallback option if issues are discovered
- Collecting user feedback before full rollout

### Documentation

Comprehensive documentation has been added:
- `/components/FlowEditor/docs/README.md` - Architecture overview and
store management
- `/components/FlowEditor/docs/FORM_CREATOR.md` - Detailed form system
documentation

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-26 10:42:05 +00:00
Swifty
634fffb967 fix(blocks): Handle NoneType in DataForSEO Blocks and Add missing Err (#11004)
This PR fixes critical issues in the DataForSEO blocks to improve error
handling and prevent runtime exceptions.

### Changes 🏗️

1. **Fixed NoneType error in DataForSEO Related Keywords Block**
(#10990)
- Added null check to ensure `items` is always a list before iteration
   - Prevents TypeError when API returns None for items field
   - Ensures robust handling of unexpected API responses

2. **Added error output pins to DataForSEO blocks** (#10981)
- Added `error` field to Output schema in both `related_keywords.py` and
`keyword_suggestions.py`
   - Wrapped entire `run` methods in try-except blocks
- Errors are now properly yielded to the error output pin, allowing
agents to handle failures gracefully
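
Both fixes combined look roughly like this inside a block's `run` method (the client helper and field names are hypothetical):

```python
async def run(self, input_data, **kwargs):
    try:
        response = await self.client.related_keywords(input_data.keyword)  # hypothetical helper
        # The API can return None for "items"; normalize to a list before iterating
        for item in response.get("items") or []:
            yield "keyword", item["keyword"]
    except Exception as e:
        # Yield to the error output pin so downstream agents can handle the failure
        yield "error", str(e)
```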

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that DataForSEO blocks handle None responses without
throwing TypeError
- [x] Confirmed error output pins capture and yield exceptions properly
- [x] Ensured backwards compatibility with existing block
implementations
  - [x] Tested both Related Keywords and Keyword Suggestions blocks

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

---

Fixes #10990
Fixes #10981

Generated with [Claude Code](https://claude.ai/code)


Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2025-09-26 11:23:14 +02:00
Toran Bruce Richards
f3ec426c82 fix(blocks): Handle NoneType in DataForSEO Blocks and Add missing Error pins (#10995)
This PR fixes critical issues in the DataForSEO blocks to improve error
handling and prevent runtime exceptions.

### Changes 🏗️

1. **Fixed NoneType error in DataForSEO Related Keywords Block**
(#10990)
- Added null check to ensure `items` is always a list before iteration
   - Prevents TypeError when API returns None for items field
   - Ensures robust handling of unexpected API responses

2. **Added error output pins to DataForSEO blocks** (#10981)
- Added `error` field to Output schema in both `related_keywords.py` and
`keyword_suggestions.py`
   - Wrapped entire `run` methods in try-except blocks
- Errors are now properly yielded to the error output pin, allowing
agents to handle failures gracefully

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that DataForSEO blocks handle None responses without
throwing TypeError
- [x] Confirmed error output pins capture and yield exceptions properly
- [x] Ensured backwards compatibility with existing block
implementations
  - [x] Tested both Related Keywords and Keyword Suggestions blocks

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

---

Fixes #10990
Fixes #10981

Generated with [Claude Code](https://claude.ai/code)
2025-09-25 20:22:10 +00:00
592 changed files with 28901 additions and 8578 deletions

View File

@@ -12,6 +12,7 @@ This file provides comprehensive onboarding information for GitHub Copilot codin
- **Infrastructure** - Docker configurations, CI/CD, and development tools
**Primary Languages & Frameworks:**
- **Backend**: Python 3.10-3.13, FastAPI, Prisma ORM, PostgreSQL, RabbitMQ
- **Frontend**: TypeScript, Next.js 15, React, Tailwind CSS, Radix UI
- **Development**: Docker, Poetry, pnpm, Playwright, Storybook
@@ -23,15 +24,17 @@ This file provides comprehensive onboarding information for GitHub Copilot codin
**Always run these commands in the correct directory and in this order:**
1. **Initial Setup** (required once):
```bash
# Clone and enter repository
git clone <repo> && cd AutoGPT
# Start all services (database, redis, rabbitmq, clamav)
cd autogpt_platform && docker compose --profile local up deps --build --detach
```
2. **Backend Setup** (always run before backend development):
```bash
cd autogpt_platform/backend
poetry install # Install dependencies
@@ -48,6 +51,7 @@ This file provides comprehensive onboarding information for GitHub Copilot codin
### Runtime Requirements
**Critical:** Always ensure Docker services are running before starting development:
```bash
cd autogpt_platform && docker compose --profile local up deps --build --detach
```
@@ -58,6 +62,7 @@ cd autogpt_platform && docker compose --profile local up deps --build --detach
### Development Commands
**Backend Development:**
```bash
cd autogpt_platform/backend
poetry run serve # Start development server (port 8000)
@@ -68,6 +73,7 @@ poetry run lint # Lint code (ruff) - run after format
```
**Frontend Development:**
```bash
cd autogpt_platform/frontend
pnpm dev # Start development server (port 3000) - use for active development
@@ -81,23 +87,27 @@ pnpm storybook # Start component development server
### Testing Strategy
**Backend Tests:**
- **Block Tests**: `poetry run pytest backend/blocks/test/test_block.py -xvs` (validates all blocks)
- **Specific Block**: `poetry run pytest 'backend/blocks/test/test_block.py::test_available_blocks[BlockName]' -xvs`
- **Snapshot Tests**: Use `--snapshot-update` when output changes, always review with `git diff`
**Frontend Tests:**
- **E2E Tests**: Always run `pnpm dev` before `pnpm test` (Playwright requires running instance)
- **Component Tests**: Use Storybook for isolated component development
### Critical Validation Steps
**Before committing changes:**
1. Run `poetry run format` (backend) and `pnpm format` (frontend)
2. Ensure all tests pass in modified areas
3. Verify Docker services are still running
4. Check that database migrations apply cleanly
**Common Issues & Workarounds:**
- **Prisma issues**: Run `poetry run prisma generate` after schema changes
- **Permission errors**: Ensure Docker has proper permissions
- **Port conflicts**: Check the `docker-compose.yml` file for the current list of exposed ports. You can list all mapped ports with:
@@ -108,6 +118,7 @@ pnpm storybook # Start component development server
### Core Architecture
**AutoGPT Platform** (`autogpt_platform/`):
- `backend/` - FastAPI server with async support
- `backend/backend/` - Core API logic
- `backend/blocks/` - Agent execution blocks
@@ -121,6 +132,7 @@ pnpm storybook # Start component development server
- `docker-compose.yml` - Development stack orchestration
**Key Configuration Files:**
- `pyproject.toml` - Python dependencies and tooling
- `package.json` - Node.js dependencies and scripts
- `schema.prisma` - Database schema and migrations
@@ -136,6 +148,7 @@ pnpm storybook # Start component development server
### Development Workflow
**GitHub Actions**: Multiple CI/CD workflows in `.github/workflows/`
- `platform-backend-ci.yml` - Backend testing and validation
- `platform-frontend-ci.yml` - Frontend testing and validation
- `platform-fullstack-ci.yml` - End-to-end integration tests
@@ -146,11 +159,13 @@ pnpm storybook # Start component development server
### Key Source Files
**Backend Entry Points:**
- `backend/backend/server/server.py` - FastAPI application setup
- `backend/backend/data/` - Database models and user management
- `backend/blocks/` - Agent execution blocks and logic
**Frontend Entry Points:**
- `frontend/src/app/layout.tsx` - Root application layout
- `frontend/src/app/page.tsx` - Home page
- `frontend/src/lib/supabase/` - Authentication and database client
@@ -160,6 +175,7 @@ pnpm storybook # Start component development server
### Agent Block System
Agents are built using a visual block-based system where each block performs a single action. Blocks are defined in `backend/blocks/` and must include:
- Block definition with input/output schemas
- Execution logic with proper error handling
- Tests validating functionality
@@ -167,6 +183,7 @@ Agents are built using a visual block-based system where each block performs a s
### Database & ORM
**Prisma ORM** with PostgreSQL backend including pgvector for embeddings:
- Schema in `schema.prisma`
- Migrations in `backend/migrations/`
- Always run `prisma migrate dev` and `prisma generate` after schema changes
@@ -174,13 +191,15 @@ Agents are built using a visual block-based system where each block performs a s
## Environment Configuration
### Configuration Files Priority Order
1. **Backend**: `/backend/.env.default` → `/backend/.env` (user overrides)
2. **Frontend**: `/frontend/.env.default` → `/frontend/.env` (user overrides)
3. **Platform**: `/.env.default` (Supabase/shared) → `/.env` (user overrides)
4. Docker Compose `environment:` sections override file-based config
5. Shell environment variables have highest precedence
### Docker Environment Setup
- All services use hardcoded defaults (no `${VARIABLE}` substitutions)
- The `env_file` directive loads variables INTO containers at runtime
- Backend/Frontend services use YAML anchors for consistent configuration
@@ -189,6 +208,7 @@ Agents are built using a visual block-based system where each block performs a s
## Advanced Development Patterns
### Adding New Blocks
1. Create file in `/backend/backend/blocks/`
2. Inherit from `Block` base class with input/output schemas
3. Implement `run` method with proper error handling
@@ -198,6 +218,7 @@ Agents are built using a visual block-based system where each block performs a s
7. Consider how inputs/outputs connect with other blocks in graph editor
### API Development
1. Update routes in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside route files
@@ -205,21 +226,76 @@ Agents are built using a visual block-based system where each block performs a s
5. Run `poetry run test` to verify changes
### Frontend Development
1. Components in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for component development
4. Test user-facing features with Playwright E2E tests
5. Update protected routes in middleware when needed
**📖 Complete Frontend Guide**: See `autogpt_platform/frontend/CONTRIBUTING.md` and `autogpt_platform/frontend/.cursorrules` for comprehensive patterns and conventions.
**Quick Reference:**
**Component Structure:**
- Separate render logic from data/behavior
- Structure: `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`
- Exception: Small components (3-4 lines of logic) can be inline
- Render-only components can be direct files without folders
**Data Fetching:**
- Use generated API hooks from `@/app/api/__generated__/endpoints/`
- Generated via Orval from backend OpenAPI spec
- Pattern: `use{Method}{Version}{OperationName}`
- Example: `useGetV2ListLibraryAgents`
- Regenerate with: `pnpm generate:api`
- **Never** use deprecated `BackendAPI` or `src/lib/autogpt-server-api/*`
**Code Conventions:**
- Use function declarations for components and handlers (not arrow functions)
- Only arrow functions for small inline lambdas (map, filter, etc.)
- Components: `PascalCase`, Hooks: `camelCase` with `use` prefix
- No barrel files or `index.ts` re-exports
- Minimal comments (code should be self-documenting)
**Styling:**
- Use Tailwind CSS utilities only
- Use design system components from `src/components/` (atoms, molecules, organisms)
- Never use `src/components/__legacy__/*`
- Only use Phosphor Icons (`@phosphor-icons/react`)
- Prefer design tokens over hardcoded values
**Error Handling:**
- Render errors: Use `<ErrorCard />` component
- Mutation errors: Display with toast notifications
- Manual exceptions: Use `Sentry.captureException()`
- Global error boundaries already configured
**Testing:**
- Add/update Storybook stories for UI components (`pnpm storybook`)
- Run Playwright E2E tests with `pnpm test`
- Verify in Chromatic after PR
**Architecture:**
- Default to client components ("use client")
- Server components only for SEO or extreme TTFB needs
- Use React Query for server state (via generated hooks)
- Co-locate UI state in components/hooks
### Security Guidelines
**Cache Protection Middleware** (`/backend/backend/server/middleware/security.py`):
- Default: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses allow list approach for cacheable paths (static assets, health checks, public pages)
- Prevents sensitive data caching in browsers/proxies
- Add new cacheable endpoints to `CACHEABLE_PATHS`
### CI/CD Alignment
The repository has comprehensive CI workflows that test:
- **Backend**: Python 3.11-3.13, services (Redis/RabbitMQ/ClamAV), Prisma migrations, Poetry lock validation
- **Frontend**: Node.js 21, pnpm, Playwright with Docker Compose stack, API schema validation
- **Integration**: Full-stack type checking and E2E testing
@@ -229,6 +305,7 @@ Match these patterns when developing locally - the copilot setup environment mir
## Collaboration with Other AI Assistants
This repository is actively developed with assistance from Claude (via CLAUDE.md files). When working on this codebase:
- Check for existing CLAUDE.md files that provide additional context
- Follow established patterns and conventions already in the codebase
- Maintain consistency with existing code style and architecture
@@ -237,8 +314,9 @@ This repository is actively developed with assistance from Claude (via CLAUDE.md
## Trust These Instructions
These instructions are comprehensive and tested. Only perform additional searches if:
1. Information here is incomplete for your specific task
2. You encounter errors not covered by the workarounds
3. You need to understand implementation details not covered above
For detailed platform development patterns, refer to `autogpt_platform/CLAUDE.md` and `AGENTS.md` in the repository root.

View File

@@ -1,6 +1,3 @@
[pr_reviewer]
num_code_suggestions=0
[pr_code_suggestions]
commitable_code_suggestions=false
num_code_suggestions=0

View File

@@ -63,6 +63,9 @@ poetry run pytest path/to/test.py --snapshot-update
# Install dependencies
cd frontend && pnpm i
# Generate API client from OpenAPI spec
pnpm generate:api
# Start development server
pnpm dev
@@ -75,12 +78,23 @@ pnpm storybook
# Build production
pnpm build
# Format and lint
pnpm format
# Type checking
pnpm types
```
We have a components library in autogpt_platform/frontend/src/components/atoms that should be used when adding new pages and components.
**📖 Complete Guide**: See `/frontend/CONTRIBUTING.md` and `/frontend/.cursorrules` for comprehensive frontend patterns.
**Key Frontend Conventions:**
- Separate render logic from data/behavior in components
- Use generated API hooks from `@/app/api/__generated__/endpoints/`
- Use function declarations (not arrow functions) for components/handlers
- Use design system components from `src/components/` (atoms, molecules, organisms)
- Only use Phosphor Icons
- Never use `src/components/__legacy__/*` or deprecated `BackendAPI`
## Architecture Overview
@@ -95,11 +109,16 @@ We have a components library in autogpt_platform/frontend/src/components/atoms t
### Frontend Architecture
- **Framework**: Next.js App Router with React Server Components
- **State Management**: React hooks + Supabase client for real-time updates
- **Framework**: Next.js 15 App Router (client-first approach)
- **Data Fetching**: Type-safe generated API hooks via Orval + React Query
- **State Management**: React Query for server state, co-located UI state in components/hooks
- **Component Structure**: Separate render logic (`.tsx`) from business logic (`use*.ts` hooks)
- **Workflow Builder**: Visual graph editor using @xyflow/react
- **UI Components**: Radix UI primitives with Tailwind CSS styling
- **UI Components**: shadcn/ui (Radix UI primitives) with Tailwind CSS styling
- **Icons**: Phosphor Icons only
- **Feature Flags**: LaunchDarkly integration
- **Error Handling**: ErrorCard for render errors, toast for mutations, Sentry for exceptions
- **Testing**: Playwright for E2E, Storybook for component development
### Key Concepts
@@ -153,6 +172,7 @@ Key models (defined in `/backend/schema.prisma`):
**Adding a new block:**
Follow the comprehensive [Block SDK Guide](../../../docs/content/platform/block-sdk-guide.md) which covers:
- Provider configuration with `ProviderBuilder`
- Block schema definition
- Authentication (API keys, OAuth, webhooks)
@@ -160,6 +180,7 @@ Follow the comprehensive [Block SDK Guide](../../../docs/content/platform/block-
- File organization
Quick steps:
1. Create new file in `/backend/backend/blocks/`
2. Configure provider using `ProviderBuilder` in `_config.py`
3. Inherit from `Block` base class
@@ -180,10 +201,20 @@ ex: do the inputs and outputs tie well together?
**Frontend feature development:**
1. Components go in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for new components
4. Test with Playwright if user-facing
See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference:
1. **Pages**: Create in `src/app/(platform)/feature-name/page.tsx`
- Add `usePageName.ts` hook for logic
- Put sub-components in local `components/` folder
2. **Components**: Structure as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`
- Use design system components from `src/components/` (atoms, molecules, organisms)
- Never use `src/components/__legacy__/*`
3. **Data fetching**: Use generated API hooks from `@/app/api/__generated__/endpoints/`
- Regenerate with `pnpm generate:api`
- Pattern: `use{Method}{Version}{OperationName}`
4. **Styling**: Tailwind CSS only, use design tokens, Phosphor Icons only
5. **Testing**: Add Storybook stories for new components, Playwright for E2E
6. **Code conventions**: Function declarations (not arrow functions) for components/handlers
### Security Implementation

57
autogpt_platform/Makefile Normal file
View File

@@ -0,0 +1,57 @@
.PHONY: start-core stop-core logs-core format lint migrate run-backend run-frontend

# Run just Supabase + Redis + RabbitMQ
start-core:
	docker compose up -d deps

# Stop core services
stop-core:
	docker compose stop deps

reset-db:
	rm -rf db/docker/volumes/db/data
	cd backend && poetry run prisma migrate deploy
	cd backend && poetry run prisma generate

# View logs for core services
logs-core:
	docker compose logs -f deps

# Run formatting and linting for backend and frontend
format:
	cd backend && poetry run format
	cd frontend && pnpm format
	cd frontend && pnpm lint

init-env:
	cp -n .env.default .env || true
	cd backend && cp -n .env.default .env || true
	cd frontend && cp -n .env.default .env || true

# Run migrations for backend
migrate:
	cd backend && poetry run prisma migrate deploy
	cd backend && poetry run prisma generate

run-backend:
	cd backend && poetry run app

run-frontend:
	cd frontend && pnpm dev

test-data:
	cd backend && poetry run python test/test_data_creator.py

help:
	@echo "Usage: make <target>"
	@echo "Targets:"
	@echo "  start-core   - Start just the core services (Supabase, Redis, RabbitMQ) in background"
	@echo "  stop-core    - Stop the core services"
	@echo "  reset-db     - Reset the database by deleting the volume"
	@echo "  logs-core    - Tail the logs for core services"
	@echo "  format       - Format & lint backend (Python) and frontend (TypeScript) code"
	@echo "  migrate      - Run backend database migrations"
	@echo "  run-backend  - Run the backend FastAPI server"
	@echo "  run-frontend - Run the frontend Next.js development server"
	@echo "  test-data    - Run the test data creator"

View File

@@ -38,6 +38,37 @@ To run the AutoGPT Platform, follow these steps:
4. After all the services are in ready state, open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
### Running Just Core services
You can now run the following to enable just the core services.
```
# For help
make help
# Run just Supabase + Redis + RabbitMQ
make start-core
# Stop core services
make stop-core
# View logs from core services
make logs-core
# Run formatting and linting for backend and frontend
make format
# Run migrations for backend database
make migrate
# Run backend server
make run-backend
# Run frontend development server
make run-frontend
```
### Docker Compose Commands
Here are some useful Docker Compose commands for managing your AutoGPT Platform:

View File

@@ -10,7 +10,7 @@ from .jwt_utils import get_jwt_payload, verify_user
from .models import User
def requires_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
async def requires_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
"""
FastAPI dependency that requires a valid authenticated user.
@@ -20,7 +20,9 @@ def requires_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User
return verify_user(jwt_payload, admin_only=False)
def requires_admin_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User:
async def requires_admin_user(
jwt_payload: dict = fastapi.Security(get_jwt_payload),
) -> User:
"""
FastAPI dependency that requires a valid admin user.
@@ -30,7 +32,7 @@ def requires_admin_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -
return verify_user(jwt_payload, admin_only=True)
def get_user_id(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> str:
async def get_user_id(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> str:
"""
FastAPI dependency that returns the ID of the authenticated user.

View File

@@ -45,7 +45,7 @@ class TestAuthDependencies:
"""Create a test client."""
return TestClient(app)
def test_requires_user_with_valid_jwt_payload(self, mocker: MockerFixture):
async def test_requires_user_with_valid_jwt_payload(self, mocker: MockerFixture):
"""Test requires_user with valid JWT payload."""
jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"}
@@ -53,12 +53,12 @@ class TestAuthDependencies:
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_user(jwt_payload)
user = await requires_user(jwt_payload)
assert isinstance(user, User)
assert user.user_id == "user-123"
assert user.role == "user"
def test_requires_user_with_admin_jwt_payload(self, mocker: MockerFixture):
async def test_requires_user_with_admin_jwt_payload(self, mocker: MockerFixture):
"""Test requires_user accepts admin users."""
jwt_payload = {
"sub": "admin-456",
@@ -69,28 +69,28 @@ class TestAuthDependencies:
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_user(jwt_payload)
user = await requires_user(jwt_payload)
assert user.user_id == "admin-456"
assert user.role == "admin"
def test_requires_user_missing_sub(self):
async def test_requires_user_missing_sub(self):
"""Test requires_user with missing user ID."""
jwt_payload = {"role": "user", "email": "user@example.com"}
with pytest.raises(HTTPException) as exc_info:
requires_user(jwt_payload)
await requires_user(jwt_payload)
assert exc_info.value.status_code == 401
assert "User ID not found" in exc_info.value.detail
def test_requires_user_empty_sub(self):
async def test_requires_user_empty_sub(self):
"""Test requires_user with empty user ID."""
jwt_payload = {"sub": "", "role": "user"}
with pytest.raises(HTTPException) as exc_info:
requires_user(jwt_payload)
await requires_user(jwt_payload)
assert exc_info.value.status_code == 401
def test_requires_admin_user_with_admin(self, mocker: MockerFixture):
async def test_requires_admin_user_with_admin(self, mocker: MockerFixture):
"""Test requires_admin_user with admin role."""
jwt_payload = {
"sub": "admin-789",
@@ -101,51 +101,51 @@ class TestAuthDependencies:
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user = requires_admin_user(jwt_payload)
user = await requires_admin_user(jwt_payload)
assert user.user_id == "admin-789"
assert user.role == "admin"
def test_requires_admin_user_with_regular_user(self):
async def test_requires_admin_user_with_regular_user(self):
"""Test requires_admin_user rejects regular users."""
jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"}
with pytest.raises(HTTPException) as exc_info:
requires_admin_user(jwt_payload)
await requires_admin_user(jwt_payload)
assert exc_info.value.status_code == 403
assert "Admin access required" in exc_info.value.detail
def test_requires_admin_user_missing_role(self):
async def test_requires_admin_user_missing_role(self):
"""Test requires_admin_user with missing role."""
jwt_payload = {"sub": "user-123", "email": "user@example.com"}
with pytest.raises(KeyError):
requires_admin_user(jwt_payload)
await requires_admin_user(jwt_payload)
def test_get_user_id_with_valid_payload(self, mocker: MockerFixture):
async def test_get_user_id_with_valid_payload(self, mocker: MockerFixture):
"""Test get_user_id extracts user ID correctly."""
jwt_payload = {"sub": "user-id-xyz", "role": "user"}
mocker.patch(
"autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload
)
user_id = get_user_id(jwt_payload)
user_id = await get_user_id(jwt_payload)
assert user_id == "user-id-xyz"
def test_get_user_id_missing_sub(self):
async def test_get_user_id_missing_sub(self):
"""Test get_user_id with missing user ID."""
jwt_payload = {"role": "user"}
with pytest.raises(HTTPException) as exc_info:
get_user_id(jwt_payload)
await get_user_id(jwt_payload)
assert exc_info.value.status_code == 401
assert "User ID not found" in exc_info.value.detail
def test_get_user_id_none_sub(self):
async def test_get_user_id_none_sub(self):
"""Test get_user_id with None user ID."""
jwt_payload = {"sub": None, "role": "user"}
with pytest.raises(HTTPException) as exc_info:
get_user_id(jwt_payload)
await get_user_id(jwt_payload)
assert exc_info.value.status_code == 401
@@ -170,7 +170,7 @@ class TestAuthDependenciesIntegration:
return _create_token
def test_endpoint_auth_enabled_no_token(self):
async def test_endpoint_auth_enabled_no_token(self):
"""Test endpoints require token when auth is enabled."""
app = FastAPI()
@@ -184,7 +184,7 @@ class TestAuthDependenciesIntegration:
response = client.get("/test")
assert response.status_code == 401
def test_endpoint_with_valid_token(self, create_token):
async def test_endpoint_with_valid_token(self, create_token):
"""Test endpoint with valid JWT token."""
app = FastAPI()
@@ -203,7 +203,7 @@ class TestAuthDependenciesIntegration:
assert response.status_code == 200
assert response.json()["user_id"] == "test-user"
def test_admin_endpoint_requires_admin_role(self, create_token):
async def test_admin_endpoint_requires_admin_role(self, create_token):
"""Test admin endpoint rejects non-admin users."""
app = FastAPI()
@@ -240,7 +240,7 @@ class TestAuthDependenciesIntegration:
class TestAuthDependenciesEdgeCases:
"""Edge case tests for authentication dependencies."""
def test_dependency_with_complex_payload(self):
async def test_dependency_with_complex_payload(self):
"""Test dependencies handle complex JWT payloads."""
complex_payload = {
"sub": "user-123",
@@ -256,14 +256,14 @@ class TestAuthDependenciesEdgeCases:
"exp": 9999999999,
}
user = requires_user(complex_payload)
user = await requires_user(complex_payload)
assert user.user_id == "user-123"
assert user.email == "test@example.com"
admin = requires_admin_user(complex_payload)
admin = await requires_admin_user(complex_payload)
assert admin.role == "admin"
def test_dependency_with_unicode_in_payload(self):
async def test_dependency_with_unicode_in_payload(self):
"""Test dependencies handle unicode in JWT payloads."""
unicode_payload = {
"sub": "user-😀-123",
@@ -272,11 +272,11 @@ class TestAuthDependenciesEdgeCases:
"name": "日本語",
}
user = requires_user(unicode_payload)
user = await requires_user(unicode_payload)
assert "😀" in user.user_id
assert user.email == "测试@example.com"
def test_dependency_with_null_values(self):
async def test_dependency_with_null_values(self):
"""Test dependencies handle null values in payload."""
null_payload = {
"sub": "user-123",
@@ -286,18 +286,18 @@ class TestAuthDependenciesEdgeCases:
"metadata": None,
}
user = requires_user(null_payload)
user = await requires_user(null_payload)
assert user.user_id == "user-123"
assert user.email is None
def test_concurrent_requests_isolation(self):
async def test_concurrent_requests_isolation(self):
"""Test that concurrent requests don't interfere with each other."""
payload1 = {"sub": "user-1", "role": "user"}
payload2 = {"sub": "user-2", "role": "admin"}
# Simulate concurrent processing
user1 = requires_user(payload1)
user2 = requires_admin_user(payload2)
user1 = await requires_user(payload1)
user2 = await requires_admin_user(payload2)
assert user1.user_id == "user-1"
assert user2.user_id == "user-2"
@@ -314,7 +314,7 @@ class TestAuthDependenciesEdgeCases:
({"sub": "user", "role": "user"}, "Admin access required", True),
],
)
def test_dependency_error_cases(
async def test_dependency_error_cases(
self, payload, expected_error: str, admin_only: bool
):
"""Test that errors propagate correctly through dependencies."""
@@ -325,7 +325,7 @@ class TestAuthDependenciesEdgeCases:
verify_user(payload, admin_only=admin_only)
assert expected_error in exc_info.value.detail
def test_dependency_valid_user(self):
async def test_dependency_valid_user(self):
"""Test valid user case for dependency."""
# Import verify_user to test it directly since dependencies use FastAPI Security
from autogpt_libs.auth.jwt_utils import verify_user

View File

@@ -16,7 +16,7 @@ bearer_jwt_auth = HTTPBearer(
)
def get_jwt_payload(
async def get_jwt_payload(
credentials: HTTPAuthorizationCredentials | None = Security(bearer_jwt_auth),
) -> dict[str, Any]:
"""

View File

@@ -116,32 +116,32 @@ def test_parse_jwt_token_missing_audience():
assert "Invalid token" in str(exc_info.value)
def test_get_jwt_payload_with_valid_token():
async def test_get_jwt_payload_with_valid_token():
"""Test extracting JWT payload with valid bearer token."""
token = create_token(TEST_USER_PAYLOAD)
credentials = HTTPAuthorizationCredentials(scheme="Bearer", credentials=token)
result = jwt_utils.get_jwt_payload(credentials)
result = await jwt_utils.get_jwt_payload(credentials)
assert result["sub"] == "test-user-id"
assert result["role"] == "user"
def test_get_jwt_payload_no_credentials():
async def test_get_jwt_payload_no_credentials():
"""Test JWT payload when no credentials provided."""
with pytest.raises(HTTPException) as exc_info:
jwt_utils.get_jwt_payload(None)
await jwt_utils.get_jwt_payload(None)
assert exc_info.value.status_code == 401
assert "Authorization header is missing" in exc_info.value.detail
def test_get_jwt_payload_invalid_token():
async def test_get_jwt_payload_invalid_token():
"""Test JWT payload extraction with invalid token."""
credentials = HTTPAuthorizationCredentials(
scheme="Bearer", credentials="invalid.token.here"
)
with pytest.raises(HTTPException) as exc_info:
jwt_utils.get_jwt_payload(credentials)
await jwt_utils.get_jwt_payload(credentials)
assert exc_info.value.status_code == 401
assert "Invalid token" in exc_info.value.detail

View File

@@ -4,6 +4,7 @@ import logging
import os
import socket
import sys
from logging.handlers import RotatingFileHandler
from pathlib import Path
from pydantic import Field, field_validator
@@ -93,42 +94,36 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
config = LoggingConfig()
log_handlers: list[logging.Handler] = []
structured_logging = config.enable_cloud_logging or force_cloud_logging
# Console output handlers
stdout = logging.StreamHandler(stream=sys.stdout)
stdout.setLevel(config.level)
stdout.addFilter(BelowLevelFilter(logging.WARNING))
if config.level == logging.DEBUG:
stdout.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT))
else:
stdout.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT))
if not structured_logging:
stdout = logging.StreamHandler(stream=sys.stdout)
stdout.setLevel(config.level)
stdout.addFilter(BelowLevelFilter(logging.WARNING))
if config.level == logging.DEBUG:
stdout.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT))
else:
stdout.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT))
stderr = logging.StreamHandler()
stderr.setLevel(logging.WARNING)
if config.level == logging.DEBUG:
stderr.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT))
else:
stderr.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT))
stderr = logging.StreamHandler()
stderr.setLevel(logging.WARNING)
if config.level == logging.DEBUG:
stderr.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT))
else:
stderr.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT))
log_handlers += [stdout, stderr]
log_handlers += [stdout, stderr]
# Cloud logging setup
if config.enable_cloud_logging or force_cloud_logging:
import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler
from google.cloud.logging_v2.handlers.transports import (
BackgroundThreadTransport,
)
else:
# Use Google Cloud Structured Log Handler. Log entries are printed to stdout
# in a JSON format which is automatically picked up by Google Cloud Logging.
from google.cloud.logging.handlers import StructuredLogHandler
client = google.cloud.logging.Client()
# Use BackgroundThreadTransport to prevent blocking the main thread
# and deadlocks when gRPC calls to Google Cloud Logging hang
cloud_handler = CloudLoggingHandler(
client,
name="autogpt_logs",
transport=BackgroundThreadTransport,
)
cloud_handler.setLevel(config.level)
log_handlers.append(cloud_handler)
structured_log_handler = StructuredLogHandler(stream=sys.stdout)
structured_log_handler.setLevel(config.level)
log_handlers.append(structured_log_handler)
# File logging setup
if config.enable_file_logging:
@@ -139,8 +134,13 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
print(f"Log directory: {config.log_dir}")
# Activity log handler (INFO and above)
activity_log_handler = logging.FileHandler(
config.log_dir / LOG_FILE, "a", "utf-8"
# Security fix: Use RotatingFileHandler with size limits to prevent disk exhaustion
activity_log_handler = RotatingFileHandler(
config.log_dir / LOG_FILE,
mode="a",
encoding="utf-8",
maxBytes=10 * 1024 * 1024, # 10MB per file
backupCount=3, # Keep 3 backup files (40MB total)
)
activity_log_handler.setLevel(config.level)
activity_log_handler.setFormatter(
@@ -150,8 +150,13 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
if config.level == logging.DEBUG:
# Debug log handler (all levels)
debug_log_handler = logging.FileHandler(
config.log_dir / DEBUG_LOG_FILE, "a", "utf-8"
# Security fix: Use RotatingFileHandler with size limits
debug_log_handler = RotatingFileHandler(
config.log_dir / DEBUG_LOG_FILE,
mode="a",
encoding="utf-8",
maxBytes=10 * 1024 * 1024, # 10MB per file
backupCount=3, # Keep 3 backup files (40MB total)
)
debug_log_handler.setLevel(logging.DEBUG)
debug_log_handler.setFormatter(
@@ -160,8 +165,13 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
log_handlers.append(debug_log_handler)
# Error log handler (ERROR and above)
error_log_handler = logging.FileHandler(
config.log_dir / ERROR_LOG_FILE, "a", "utf-8"
# Security fix: Use RotatingFileHandler with size limits
error_log_handler = RotatingFileHandler(
config.log_dir / ERROR_LOG_FILE,
mode="a",
encoding="utf-8",
maxBytes=10 * 1024 * 1024, # 10MB per file
backupCount=3, # Keep 3 backup files (40MB total)
)
error_log_handler.setLevel(logging.ERROR)
error_log_handler.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT, no_color=True))
@@ -169,7 +179,13 @@ def configure_logging(force_cloud_logging: bool = False) -> None:
# Configure the root logger
logging.basicConfig(
format=DEBUG_LOG_FORMAT if config.level == logging.DEBUG else SIMPLE_LOG_FORMAT,
format=(
"%(levelname)s %(message)s"
if structured_logging
else (
DEBUG_LOG_FORMAT if config.level == logging.DEBUG else SIMPLE_LOG_FORMAT
)
),
level=config.level,
handlers=log_handlers,
)
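For reference, a minimal sketch of the rotation behavior introduced above: with maxBytes at 10 MiB and backupCount=3, the handler rolls the active file over to .1/.2/.3 suffixes and discards the oldest, capping each log set at roughly 40 MB (one active file plus three backups). The file name below is illustrative; the real paths come from config.log_dir:

import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(
    "activity.log",             # illustrative; real path is config.log_dir / LOG_FILE
    mode="a",
    encoding="utf-8",
    maxBytes=10 * 1024 * 1024,  # roll over once the file exceeds ~10MB
    backupCount=3,              # keep activity.log.1 .. .3, discard older
)
logger = logging.getLogger("rotation_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

for i in range(200_000):
    logger.info("line %d", i)   # eventually triggers rollover through .1/.2/.3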

View File

@@ -1,328 +0,0 @@
import asyncio
import inspect
import logging
import threading
import time
from functools import wraps
from typing import (
Any,
Callable,
ParamSpec,
Protocol,
TypeVar,
cast,
runtime_checkable,
)
P = ParamSpec("P")
R = TypeVar("R")
R_co = TypeVar("R_co", covariant=True)
logger = logging.getLogger(__name__)
def _make_hashable_key(
args: tuple[Any, ...], kwargs: dict[str, Any]
) -> tuple[Any, ...]:
"""
Convert args and kwargs into a hashable cache key.
Handles unhashable types like dict, list, and set by converting them to
hashable tuple representations (sorting where ordering is undefined).
"""
def make_hashable(obj: Any) -> Any:
"""Recursively convert an object to a hashable representation."""
if isinstance(obj, dict):
# Sort dict items to ensure consistent ordering
return (
"__dict__",
tuple(sorted((k, make_hashable(v)) for k, v in obj.items())),
)
elif isinstance(obj, (list, tuple)):
return ("__list__", tuple(make_hashable(item) for item in obj))
elif isinstance(obj, set):
return ("__set__", tuple(sorted(make_hashable(item) for item in obj)))
elif hasattr(obj, "__dict__"):
# Handle objects with __dict__ attribute
return ("__obj__", obj.__class__.__name__, make_hashable(obj.__dict__))
else:
# For basic hashable types (str, int, bool, None, etc.)
try:
hash(obj)
return obj
except TypeError:
# Fallback: convert to string representation
return ("__str__", str(obj))
hashable_args = tuple(make_hashable(arg) for arg in args)
hashable_kwargs = tuple(sorted((k, make_hashable(v)) for k, v in kwargs.items()))
return (hashable_args, hashable_kwargs)
@runtime_checkable
class CachedFunction(Protocol[P, R_co]):
"""Protocol for cached functions with cache management methods."""
def cache_clear(self) -> None:
"""Clear all cached entries."""
return None
def cache_info(self) -> dict[str, int | None]:
"""Get cache statistics."""
return {}
def cache_delete(self, *args: P.args, **kwargs: P.kwargs) -> bool:
"""Delete a specific cache entry by its arguments. Returns True if entry existed."""
return False
def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R_co:
"""Call the cached function."""
return None # type: ignore
def cached(
*,
maxsize: int = 128,
ttl_seconds: int | None = None,
) -> Callable[[Callable], CachedFunction]:
"""
Thundering herd safe cache decorator for both sync and async functions.
Uses double-checked locking to prevent multiple threads/coroutines from
executing the expensive operation simultaneously during cache misses.
Args:
func: The function to cache (when used without parentheses)
maxsize: Maximum number of cached entries
ttl_seconds: Time to live in seconds. If None, entries never expire
Returns:
Decorated function or decorator
Example:
@cache() # Default: maxsize=128, no TTL
def expensive_sync_operation(param: str) -> dict:
return {"result": param}
@cache() # Works with async too
async def expensive_async_operation(param: str) -> dict:
return {"result": param}
@cache(maxsize=1000, ttl_seconds=300) # Custom maxsize and TTL
def another_operation(param: str) -> dict:
return {"result": param}
"""
def decorator(target_func):
# Cache storage and locks
cache_storage = {}
if inspect.iscoroutinefunction(target_func):
# Async function with asyncio.Lock
cache_lock = asyncio.Lock()
@wraps(target_func)
async def async_wrapper(*args: P.args, **kwargs: P.kwargs):
key = _make_hashable_key(args, kwargs)
current_time = time.time()
# Fast path: check cache without lock
if key in cache_storage:
if ttl_seconds is None:
logger.debug(f"Cache hit for {target_func.__name__}")
return cache_storage[key]
else:
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
logger.debug(f"Cache hit for {target_func.__name__}")
return result
# Slow path: acquire lock for cache miss/expiry
async with cache_lock:
# Double-check: another coroutine might have populated cache
if key in cache_storage:
if ttl_seconds is None:
return cache_storage[key]
else:
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
return result
# Cache miss - execute function
logger.debug(f"Cache miss for {target_func.__name__}")
result = await target_func(*args, **kwargs)
# Store result
if ttl_seconds is None:
cache_storage[key] = result
else:
cache_storage[key] = (result, current_time)
# Cleanup if needed
if len(cache_storage) > maxsize:
cutoff = maxsize // 2
oldest_keys = (
list(cache_storage.keys())[:-cutoff] if cutoff > 0 else []
)
for old_key in oldest_keys:
cache_storage.pop(old_key, None)
return result
wrapper = async_wrapper
else:
# Sync function with threading.Lock
cache_lock = threading.Lock()
@wraps(target_func)
def sync_wrapper(*args: P.args, **kwargs: P.kwargs):
key = _make_hashable_key(args, kwargs)
current_time = time.time()
# Fast path: check cache without lock
if key in cache_storage:
if ttl_seconds is None:
logger.debug(f"Cache hit for {target_func.__name__}")
return cache_storage[key]
else:
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
logger.debug(f"Cache hit for {target_func.__name__}")
return result
# Slow path: acquire lock for cache miss/expiry
with cache_lock:
# Double-check: another thread might have populated cache
if key in cache_storage:
if ttl_seconds is None:
return cache_storage[key]
else:
cached_data = cache_storage[key]
if isinstance(cached_data, tuple):
result, timestamp = cached_data
if current_time - timestamp < ttl_seconds:
return result
# Cache miss - execute function
logger.debug(f"Cache miss for {target_func.__name__}")
result = target_func(*args, **kwargs)
# Store result
if ttl_seconds is None:
cache_storage[key] = result
else:
cache_storage[key] = (result, current_time)
# Cleanup if needed
if len(cache_storage) > maxsize:
cutoff = maxsize // 2
oldest_keys = (
list(cache_storage.keys())[:-cutoff] if cutoff > 0 else []
)
for old_key in oldest_keys:
cache_storage.pop(old_key, None)
return result
wrapper = sync_wrapper
# Add cache management methods
def cache_clear() -> None:
cache_storage.clear()
def cache_info() -> dict[str, int | None]:
return {
"size": len(cache_storage),
"maxsize": maxsize,
"ttl_seconds": ttl_seconds,
}
def cache_delete(*args, **kwargs) -> bool:
"""Delete a specific cache entry. Returns True if entry existed."""
key = _make_hashable_key(args, kwargs)
if key in cache_storage:
del cache_storage[key]
return True
return False
setattr(wrapper, "cache_clear", cache_clear)
setattr(wrapper, "cache_info", cache_info)
setattr(wrapper, "cache_delete", cache_delete)
return cast(CachedFunction, wrapper)
return decorator
def thread_cached(func):
"""
Thread-local cache decorator for both sync and async functions.
Each thread gets its own cache, which is useful for request-scoped caching
in web applications where you want to cache within a single request but
not across requests.
Args:
func: The function to cache
Returns:
Decorated function with thread-local caching
Example:
@thread_cached
def expensive_operation(param: str) -> dict:
return {"result": param}
@thread_cached # Works with async too
async def expensive_async_operation(param: str) -> dict:
return {"result": param}
"""
thread_local = threading.local()
def _clear():
if hasattr(thread_local, "cache"):
del thread_local.cache
if inspect.iscoroutinefunction(func):
@wraps(func)
async def async_wrapper(*args, **kwargs):
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = _make_hashable_key(args, kwargs)
if key not in cache:
cache[key] = await func(*args, **kwargs)
return cache[key]
setattr(async_wrapper, "clear_cache", _clear)
return async_wrapper
else:
@wraps(func)
def sync_wrapper(*args, **kwargs):
cache = getattr(thread_local, "cache", None)
if cache is None:
cache = thread_local.cache = {}
key = _make_hashable_key(args, kwargs)
if key not in cache:
cache[key] = func(*args, **kwargs)
return cache[key]
setattr(sync_wrapper, "clear_cache", _clear)
return sync_wrapper
def clear_thread_cache(func: Callable) -> None:
"""Clear thread-local cache for a function."""
if clear := getattr(func, "clear_cache", None):
clear()
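A minimal usage sketch of the two decorators deleted here, assuming the module simply moved to backend.util.cache with the same API (the import change in the next diff points that way):

import asyncio
from backend.util.cache import cached, thread_cached  # assumed new home of this module

@cached(maxsize=256, ttl_seconds=60)
async def fetch_config(env: str) -> dict:
    # Expensive lookup; concurrent callers with the same args share one execution
    await asyncio.sleep(0.1)
    return {"env": env}

@thread_cached
def request_scoped_value(key: str) -> str:
    # Cached per thread: repeated calls on one thread reuse the value,
    # other threads compute their own
    return key.upper()

async def main():
    await fetch_config("prod")           # miss: executes and stores
    await fetch_config("prod")           # hit, as long as the 60s TTL hasn't elapsed
    print(fetch_config.cache_info())     # {'size': 1, 'maxsize': 256, 'ttl_seconds': 60}
    fetch_config.cache_delete("prod")    # evict a single entry by its arguments
    fetch_config.cache_clear()           # evict everything

asyncio.run(main())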

View File

@@ -47,6 +47,7 @@ RUN poetry install --no-ansi --no-root
# Generate Prisma client
COPY autogpt_platform/backend/schema.prisma ./
COPY autogpt_platform/backend/backend/data/partial_types.py ./backend/data/partial_types.py
RUN poetry run prisma generate
FROM debian:13-slim AS server_dependencies
@@ -92,6 +93,7 @@ FROM server_dependencies AS migrate
# Migration stage only needs schema and migrations - much lighter than full backend
COPY autogpt_platform/backend/schema.prisma /app/autogpt_platform/backend/
COPY autogpt_platform/backend/backend/data/partial_types.py /app/autogpt_platform/backend/backend/data/partial_types.py
COPY autogpt_platform/backend/migrations /app/autogpt_platform/backend/migrations
FROM server_dependencies AS server

View File

@@ -5,7 +5,7 @@ import re
from pathlib import Path
from typing import TYPE_CHECKING, TypeVar
from autogpt_libs.utils.cache import cached
from backend.util.cache import cached
logger = logging.getLogger(__name__)
@@ -16,7 +16,7 @@ if TYPE_CHECKING:
T = TypeVar("T")
@cached()
@cached(ttl_seconds=3600)
def load_all_blocks() -> dict[str, type["Block"]]:
from backend.data.block import Block
from backend.util.settings import Config

View File

@@ -7,6 +7,7 @@ from backend.data.block import (
BlockInput,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockType,
get_block,
)
@@ -19,7 +20,7 @@ _logger = logging.getLogger(__name__)
class AgentExecutorBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
user_id: str = SchemaField(description="User ID")
graph_id: str = SchemaField(description="Graph ID")
graph_version: int = SchemaField(description="Graph Version")
@@ -53,6 +54,7 @@ class AgentExecutorBlock(Block):
return validate_with_jsonschema(cls.get_input_schema(data), data)
class Output(BlockSchema):
# Use BlockSchema to avoid automatic error field that could clash with graph outputs
pass
def __init__(self):
@@ -65,7 +67,13 @@ class AgentExecutorBlock(Block):
categories={BlockCategory.AGENT},
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(
self,
input_data: Input,
*,
graph_exec_id: str,
**kwargs,
) -> BlockOutput:
from backend.executor import utils as execution_utils
@@ -75,6 +83,7 @@ class AgentExecutorBlock(Block):
user_id=input_data.user_id,
inputs=input_data.inputs,
nodes_input_masks=input_data.nodes_input_masks,
parent_graph_exec_id=graph_exec_id,
)
logger = execution_utils.LogMetadata(
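The signature change above relies on the executor injecting execution context into run() as keyword arguments: declaring graph_exec_id keyword-only binds it by name, while **kwargs absorbs the rest. A minimal sketch of the same pattern, with the Block base class and injection mechanism simplified to assumptions:

import asyncio
from typing import Any, AsyncGenerator

BlockOutput = AsyncGenerator[tuple[str, Any], None]

class DemoBlock:
    """Stand-in for a Block subclass; the real base class is not reproduced here."""

    async def run(
        self,
        input_data: dict,
        *,
        graph_exec_id: str,  # surfaced by name from the executor's kwargs
        **kwargs,            # absorbs the rest of the injected context
    ) -> BlockOutput:
        # Thread the parent execution id into whatever this block spawns
        yield "parent_graph_exec_id", graph_exec_id

async def demo():
    async for name, value in DemoBlock().run(
        {}, graph_exec_id="exec-123", user_id="u1"  # executor-style kwarg injection
    ):
        print(name, value)

asyncio.run(demo())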

View File

@@ -0,0 +1,219 @@
from typing import Any
from backend.blocks.llm import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
AIBlockBase,
AICredentials,
AICredentialsField,
LlmModel,
LLMResponse,
llm_call,
)
from backend.data.block import (
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import APIKeyCredentials, NodeExecutionStats, SchemaField
class AIConditionBlock(AIBlockBase):
"""
An AI-powered condition block that uses natural language to evaluate conditions.
This block allows users to define conditions in plain English (e.g., "the input is an email address",
"the input is a city in the USA") and uses AI to determine if the input satisfies the condition.
It provides the same yes/no data pass-through functionality as the standard ConditionBlock.
"""
class Input(BlockSchemaInput):
input_value: Any = SchemaField(
description="The input value to evaluate with the AI condition",
placeholder="Enter the value to be evaluated (text, number, or any data)",
)
condition: str = SchemaField(
description="A plaintext English description of the condition to evaluate",
placeholder="E.g., 'the input is the body of an email', 'the input is a City in the USA', 'the input is an error or a refusal'",
)
yes_value: Any = SchemaField(
description="(Optional) Value to output if the condition is true. If not provided, input_value will be used.",
placeholder="Leave empty to use input_value, or enter a specific value",
default=None,
)
no_value: Any = SchemaField(
description="(Optional) Value to output if the condition is false. If not provided, input_value will be used.",
placeholder="Leave empty to use input_value, or enter a specific value",
default=None,
)
model: LlmModel = SchemaField(
title="LLM Model",
default=LlmModel.GPT4O,
description="The language model to use for evaluating the condition.",
advanced=False,
)
credentials: AICredentials = AICredentialsField()
class Output(BlockSchemaOutput):
result: bool = SchemaField(
description="The result of the AI condition evaluation (True or False)"
)
yes_output: Any = SchemaField(
description="The output value if the condition is true"
)
no_output: Any = SchemaField(
description="The output value if the condition is false"
)
error: str = SchemaField(
description="Error message if the AI evaluation is uncertain or fails"
)
def __init__(self):
super().__init__(
id="553ec5b8-6c45-4299-8d75-b394d05f72ff",
input_schema=AIConditionBlock.Input,
output_schema=AIConditionBlock.Output,
description="Uses AI to evaluate natural language conditions and provide conditional outputs",
categories={BlockCategory.AI, BlockCategory.LOGIC},
test_input={
"input_value": "john@example.com",
"condition": "the input is an email address",
"yes_value": "Valid email",
"no_value": "Not an email",
"model": LlmModel.GPT4O,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("result", True),
("yes_output", "Valid email"),
],
test_mock={
"llm_call": lambda *args, **kwargs: LLMResponse(
raw_response="",
prompt=[],
response="true",
tool_calls=None,
prompt_tokens=50,
completion_tokens=10,
reasoning=None,
)
},
)
async def llm_call(
self,
credentials: APIKeyCredentials,
llm_model: LlmModel,
prompt: list,
max_tokens: int,
) -> LLMResponse:
"""Wrapper method for llm_call to enable mocking in tests."""
return await llm_call(
credentials=credentials,
llm_model=llm_model,
prompt=prompt,
force_json_output=False,
max_tokens=max_tokens,
)
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
"""
Evaluate the AI condition and return appropriate outputs.
"""
# Prepare the yes and no values, using input_value as default
yes_value = (
input_data.yes_value
if input_data.yes_value is not None
else input_data.input_value
)
no_value = (
input_data.no_value
if input_data.no_value is not None
else input_data.input_value
)
# Convert input_value to string for AI evaluation
input_str = str(input_data.input_value)
# Create the prompt for AI evaluation
prompt = [
{
"role": "system",
"content": (
"You are an AI assistant that evaluates conditions based on input data. "
"You must respond with only 'true' or 'false' (lowercase) to indicate whether "
"the given condition is met by the input value. Be accurate and consider the "
"context and meaning of both the input and the condition."
),
},
{
"role": "user",
"content": (
f"Input value: {input_str}\n"
f"Condition to evaluate: {input_data.condition}\n\n"
f"Does the input value satisfy the condition? Respond with only 'true' or 'false'."
),
},
]
# Call the LLM
try:
response = await self.llm_call(
credentials=credentials,
llm_model=input_data.model,
prompt=prompt,
max_tokens=10, # We only expect a true/false response
)
# Extract the boolean result from the response
response_text = response.response.strip().lower()
if response_text == "true":
result = True
elif response_text == "false":
result = False
else:
# If the response is not clear, try to interpret it using word boundaries
import re
# Use word boundaries to avoid false positives like 'untrue' or '10'
tokens = set(re.findall(r"\b(true|false|yes|no|1|0)\b", response_text))
if tokens == {"true"} or tokens == {"yes"} or tokens == {"1"}:
result = True
elif tokens == {"false"} or tokens == {"no"} or tokens == {"0"}:
result = False
else:
# Unclear or conflicting response - default to False and yield error
result = False
yield "error", f"Unclear AI response: '{response.response}'"
# Update internal stats
self.merge_stats(
NodeExecutionStats(
input_token_count=response.prompt_tokens,
output_token_count=response.completion_tokens,
)
)
self.prompt = response.prompt
except Exception as e:
# In case of any error, default to False to be safe
result = False
# Log the error but don't fail the block execution
import logging
logger = logging.getLogger(__name__)
logger.error(f"AI condition evaluation failed: {str(e)}")
yield "error", f"AI evaluation failed: {str(e)}"
# Yield results
yield "result", result
if result:
yield "yes_output", yes_value
else:
yield "no_output", no_value

View File

@@ -5,7 +5,13 @@ from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -42,7 +48,7 @@ TEST_CREDENTIALS_INPUT = {
class AIImageCustomizerBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REPLICATE], Literal["api_key"]
] = CredentialsField(
@@ -68,9 +74,8 @@ class AIImageCustomizerBlock(Block):
title="Output Format",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
image_url: MediaFileType = SchemaField(description="URL of the generated image")
error: str = SchemaField(description="Error message if generation failed")
def __init__(self):
super().__init__(

View File

@@ -5,7 +5,7 @@ from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockSchema
from backend.data.block import Block, BlockCategory, BlockSchemaInput, BlockSchemaOutput
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -101,7 +101,7 @@ class ImageGenModel(str, Enum):
class AIImageGeneratorBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REPLICATE], Literal["api_key"]
] = CredentialsField(
@@ -135,9 +135,8 @@ class AIImageGeneratorBlock(Block):
title="Image Style",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
image_url: str = SchemaField(description="URL of the generated image")
error: str = SchemaField(description="Error message if generation failed")
def __init__(self):
super().__init__(

View File

@@ -6,7 +6,13 @@ from typing import Literal
from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -54,7 +60,7 @@ class NormalizationStrategy(str, Enum):
class AIMusicGeneratorBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REPLICATE], Literal["api_key"]
] = CredentialsField(
@@ -107,9 +113,8 @@ class AIMusicGeneratorBlock(Block):
title="Normalization Strategy",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: str = SchemaField(description="URL of the generated audio file")
error: str = SchemaField(description="Error message if the model run failed")
def __init__(self):
super().__init__(

View File

@@ -6,7 +6,13 @@ from typing import Literal
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -148,7 +154,7 @@ logger = logging.getLogger(__name__)
class AIShortformVideoCreatorBlock(Block):
"""Creates a shortform texttovideo clip using stock or AI imagery."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REVID], Literal["api_key"]
] = CredentialsField(
@@ -187,9 +193,8 @@ class AIShortformVideoCreatorBlock(Block):
placeholder=VisualMediaType.STOCK_VIDEOS,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
video_url: str = SchemaField(description="The URL of the created video")
error: str = SchemaField(description="Error message if the request failed")
async def create_webhook(self) -> tuple[str, str]:
"""Create a new webhook URL for receiving notifications."""
@@ -336,7 +341,7 @@ class AIShortformVideoCreatorBlock(Block):
class AIAdMakerVideoCreatorBlock(Block):
"""Generates a 30second vertical AI advert using optional usersupplied imagery."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REVID], Literal["api_key"]
] = CredentialsField(
@@ -364,9 +369,8 @@ class AIAdMakerVideoCreatorBlock(Block):
description="Restrict visuals to supplied images only.", default=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
video_url: str = SchemaField(description="URL of the finished advert")
error: str = SchemaField(description="Error message on failure")
async def create_webhook(self) -> tuple[str, str]:
"""Create a new webhook URL for receiving notifications."""
@@ -524,7 +528,7 @@ class AIAdMakerVideoCreatorBlock(Block):
class AIScreenshotToVideoAdBlock(Block):
"""Creates an advert where the supplied screenshot is narrated by an AI avatar."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REVID], Literal["api_key"]
] = CredentialsField(description="Revid.ai API key")
@@ -542,9 +546,8 @@ class AIScreenshotToVideoAdBlock(Block):
default=AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
video_url: str = SchemaField(description="Rendered video URL")
error: str = SchemaField(description="Error, if encountered")
async def create_webhook(self) -> tuple[str, str]:
"""Create a new webhook URL for receiving notifications."""

View File

@@ -9,7 +9,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
@@ -23,7 +24,7 @@ class AirtableCreateBaseBlock(Block):
Creates a new base in an Airtable workspace, or returns the existing base if one with the same name exists.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -53,7 +54,7 @@ class AirtableCreateBaseBlock(Block):
],
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
base_id: str = SchemaField(description="The ID of the created or found base")
tables: list[dict] = SchemaField(description="Array of table objects")
table: dict = SchemaField(description="A single table object")
@@ -118,7 +119,7 @@ class AirtableListBasesBlock(Block):
Lists all bases in an Airtable workspace that the user has access to.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -129,7 +130,7 @@ class AirtableListBasesBlock(Block):
description="Pagination offset from previous request", default=""
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
bases: list[dict] = SchemaField(description="Array of base objects")
offset: Optional[str] = SchemaField(
description="Offset for next page (null if no more bases)", default=None

View File

@@ -9,7 +9,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
@@ -31,7 +32,7 @@ class AirtableListRecordsBlock(Block):
Lists records from an Airtable table with optional filtering, sorting, and pagination.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -65,7 +66,7 @@ class AirtableListRecordsBlock(Block):
default=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
records: list[dict] = SchemaField(description="Array of record objects")
offset: Optional[str] = SchemaField(
description="Offset for next page (null if no more records)", default=None
@@ -137,7 +138,7 @@ class AirtableGetRecordBlock(Block):
Retrieves a single record from an Airtable table by its ID.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -153,7 +154,7 @@ class AirtableGetRecordBlock(Block):
default=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
id: str = SchemaField(description="The record ID")
fields: dict = SchemaField(description="The record fields")
created_time: str = SchemaField(description="The record created time")
@@ -217,7 +218,7 @@ class AirtableCreateRecordsBlock(Block):
Creates one or more records in an Airtable table.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -239,7 +240,7 @@ class AirtableCreateRecordsBlock(Block):
default=None,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
records: list[dict] = SchemaField(description="Array of created record objects")
details: dict = SchemaField(description="Details of the created records")
@@ -290,7 +291,7 @@ class AirtableUpdateRecordsBlock(Block):
Updates one or more existing records in an Airtable table.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -306,7 +307,7 @@ class AirtableUpdateRecordsBlock(Block):
default=None,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
records: list[dict] = SchemaField(description="Array of updated record objects")
def __init__(self):
@@ -339,7 +340,7 @@ class AirtableDeleteRecordsBlock(Block):
Deletes one or more records from an Airtable table.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -351,7 +352,7 @@ class AirtableDeleteRecordsBlock(Block):
description="Array of upto 10 record IDs to delete"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
records: list[dict] = SchemaField(description="Array of deletion results")
def __init__(self):

View File

@@ -7,7 +7,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
@@ -23,13 +24,13 @@ class AirtableListSchemaBlock(Block):
fields, and views.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
base_id: str = SchemaField(description="The Airtable base ID")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
base_schema: dict = SchemaField(
description="Complete base schema with tables, fields, and views"
)
@@ -66,7 +67,7 @@ class AirtableCreateTableBlock(Block):
Creates a new table in an Airtable base with specified fields and views.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -77,7 +78,7 @@ class AirtableCreateTableBlock(Block):
default=[{"name": "Name", "type": "singleLineText"}],
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
table: dict = SchemaField(description="Created table object")
table_id: str = SchemaField(description="ID of the created table")
@@ -109,7 +110,7 @@ class AirtableUpdateTableBlock(Block):
Updates an existing table's properties such as name or description.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -125,7 +126,7 @@ class AirtableUpdateTableBlock(Block):
description="The date dependency of the table to update", default=None
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
table: dict = SchemaField(description="Updated table object")
def __init__(self):
@@ -157,7 +158,7 @@ class AirtableCreateFieldBlock(Block):
Adds a new field (column) to an existing Airtable table.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -176,7 +177,7 @@ class AirtableCreateFieldBlock(Block):
description="The options of the field to create", default=None
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
field: dict = SchemaField(description="Created field object")
field_id: str = SchemaField(description="ID of the created field")
@@ -209,7 +210,7 @@ class AirtableUpdateFieldBlock(Block):
Updates an existing field's properties in an Airtable table.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -225,7 +226,7 @@ class AirtableUpdateFieldBlock(Block):
advanced=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
field: dict = SchemaField(description="Updated field object")
def __init__(self):

View File

@@ -3,7 +3,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
BlockType,
BlockWebhookConfig,
CredentialsMetaInput,
@@ -32,7 +33,7 @@ class AirtableWebhookTriggerBlock(Block):
Thin wrapper just forwards the payloads one at a time to the next block.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = airtable.credentials_field(
description="Airtable API credentials"
)
@@ -43,7 +44,7 @@ class AirtableWebhookTriggerBlock(Block):
description="Airtable webhook event filter"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
payload: WebhookPayload = SchemaField(description="Airtable webhook payload")
def __init__(self):

View File

@@ -10,14 +10,20 @@ from backend.blocks.apollo.models import (
PrimaryPhone,
SearchOrganizationsRequest,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import CredentialsField, SchemaField
class SearchOrganizationsBlock(Block):
"""Search for organizations in Apollo"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
organization_num_employees_range: list[int] = SchemaField(
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
@@ -69,7 +75,7 @@ To find IDs, identify the values for organization_id when you call this endpoint
description="Apollo credentials",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
organizations: list[Organization] = SchemaField(
description="List of organizations found",
default_factory=list,

View File

@@ -14,14 +14,20 @@ from backend.blocks.apollo.models import (
SearchPeopleRequest,
SenorityLevels,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import CredentialsField, SchemaField
class SearchPeopleBlock(Block):
"""Search for people in Apollo"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
person_titles: list[str] = SchemaField(
description="""Job titles held by the people you want to find. For a person to be included in search results, they only need to match 1 of the job titles you add. Adding more job titles expands your search results.
@@ -109,7 +115,7 @@ class SearchPeopleBlock(Block):
description="Apollo credentials",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
people: list[Contact] = SchemaField(
description="List of people found",
default_factory=list,

View File

@@ -6,14 +6,20 @@ from backend.blocks.apollo._auth import (
ApolloCredentialsInput,
)
from backend.blocks.apollo.models import Contact, EnrichPersonRequest
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import CredentialsField, SchemaField
class GetPersonDetailBlock(Block):
"""Get detailed person data with Apollo API, including email reveal"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
person_id: str = SchemaField(
description="Apollo person ID to enrich (most accurate method)",
default="",
@@ -68,7 +74,7 @@ class GetPersonDetailBlock(Block):
description="Apollo credentials",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
contact: Contact = SchemaField(
description="Enriched contact information",
)

View File

@@ -3,7 +3,7 @@ from typing import Optional
from pydantic import BaseModel, Field
from backend.data.block import BlockSchema
from backend.data.block import BlockSchemaInput
from backend.data.model import SchemaField, UserIntegrations
from backend.integrations.ayrshare import AyrshareClient
from backend.util.clients import get_database_manager_async_client
@@ -17,7 +17,7 @@ async def get_profile_key(user_id: str):
return user_integrations.managed_credentials.ayrshare_profile_key
class BaseAyrshareInput(BlockSchema):
class BaseAyrshareInput(BlockSchemaInput):
"""Base input model for Ayrshare social media posts with common fields."""
post: str = SchemaField(

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -38,7 +38,7 @@ class PostToBlueskyBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -101,7 +101,7 @@ class PostToFacebookBlock(Block):
description="URL for custom link preview", default="", advanced=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -94,7 +94,7 @@ class PostToGMBBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -5,7 +5,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -94,7 +94,7 @@ class PostToInstagramBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -94,7 +94,7 @@ class PostToLinkedInBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -73,7 +73,7 @@ class PostToPinterestBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -19,7 +19,7 @@ class PostToRedditBlock(Block):
pass # Uses all base fields
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -43,7 +43,7 @@ class PostToSnapchatBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -38,7 +38,7 @@ class PostToTelegramBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -31,7 +31,7 @@ class PostToThreadsBlock(Block):
advanced=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -5,7 +5,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -98,7 +98,7 @@ class PostToTikTokBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -3,7 +3,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -97,7 +97,7 @@ class PostToXBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -6,7 +6,7 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaOutput,
BlockType,
SchemaField,
)
@@ -119,7 +119,7 @@ class PostToYouTubeBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
post_result: PostResponse = SchemaField(description="The result of the post")
post: PostIds = SchemaField(description="The result of the post")

View File

@@ -9,7 +9,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
@@ -23,7 +24,7 @@ class BaasBotJoinMeetingBlock(Block):
Deploy a bot immediately or at a scheduled start_time to join and record a meeting.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
@@ -57,7 +58,7 @@ class BaasBotJoinMeetingBlock(Block):
description="Custom metadata to attach to the bot", default={}
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
bot_id: str = SchemaField(description="UUID of the deployed bot")
join_response: dict = SchemaField(
description="Full response from join operation"
@@ -103,13 +104,13 @@ class BaasBotLeaveMeetingBlock(Block):
Force the bot to exit the call.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
bot_id: str = SchemaField(description="UUID of the bot to remove from meeting")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
left: bool = SchemaField(description="Whether the bot successfully left")
def __init__(self):
@@ -138,7 +139,7 @@ class BaasBotFetchMeetingDataBlock(Block):
Pull MP4 URL, transcript & metadata for a completed meeting.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
@@ -147,7 +148,7 @@ class BaasBotFetchMeetingDataBlock(Block):
description="Include transcript data in response", default=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
mp4_url: str = SchemaField(
description="URL to download the meeting recording (time-limited)"
)
@@ -185,13 +186,13 @@ class BaasBotDeleteRecordingBlock(Block):
Purge MP4 + transcript data for privacy or storage management.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = baas.credentials_field(
description="Meeting BaaS API credentials"
)
bot_id: str = SchemaField(description="UUID of the bot whose data to delete")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
deleted: bool = SchemaField(
description="Whether the data was successfully deleted"
)

View File

@@ -11,7 +11,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
@@ -27,7 +28,7 @@ TEST_CREDENTIALS = APIKeyCredentials(
)
class TextModification(BlockSchema):
class TextModification(BlockSchemaInput):
name: str = SchemaField(
description="The name of the layer to modify in the template"
)
@@ -60,7 +61,7 @@ class TextModification(BlockSchema):
class BannerbearTextOverlayBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = bannerbear.credentials_field(
description="API credentials for Bannerbear"
)
@@ -96,7 +97,7 @@ class BannerbearTextOverlayBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
success: bool = SchemaField(
description="Whether the image generation was successfully initiated"
)
@@ -105,7 +106,6 @@ class BannerbearTextOverlayBlock(Block):
)
uid: str = SchemaField(description="Unique identifier for the generated image")
status: str = SchemaField(description="Status of the image generation")
error: str = SchemaField(description="Error message if the operation failed")
def __init__(self):
super().__init__(

View File

@@ -1,14 +1,21 @@
import enum
from typing import Any
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema, BlockType
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
BlockType,
)
from backend.data.model import SchemaField
from backend.util.file import store_media_file
from backend.util.type import MediaFileType, convert
class FileStoreBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
file_in: MediaFileType = SchemaField(
description="The file to store in the temporary directory, it can be a URL, data URI, or local path."
)
@@ -19,7 +26,7 @@ class FileStoreBlock(Block):
title="Produce Base64 Output",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
file_out: MediaFileType = SchemaField(
description="The relative path to the stored file in the temporary directory."
)
@@ -57,7 +64,7 @@ class StoreValueBlock(Block):
The block output will be static, the output can be consumed multiple times.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
input: Any = SchemaField(
description="Trigger the block to produce the output. "
"The value is only used when `data` is None."
@@ -68,7 +75,7 @@ class StoreValueBlock(Block):
default=None,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
output: Any = SchemaField(description="The stored data retained in the block.")
def __init__(self):
@@ -94,10 +101,10 @@ class StoreValueBlock(Block):
class PrintToConsoleBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
text: Any = SchemaField(description="The data to print to the console.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
output: Any = SchemaField(description="The data printed to the console.")
status: str = SchemaField(description="The status of the print operation.")
@@ -121,10 +128,10 @@ class PrintToConsoleBlock(Block):
class NoteBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
text: str = SchemaField(description="The text to display in the sticky note.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
output: str = SchemaField(description="The text to display in the sticky note.")
def __init__(self):
@@ -154,15 +161,14 @@ class TypeOptions(enum.Enum):
class UniversalTypeConverterBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
value: Any = SchemaField(
description="The value to convert to a universal type."
)
type: TypeOptions = SchemaField(description="The type to convert the value to.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
value: Any = SchemaField(description="The converted value.")
error: str = SchemaField(description="Error message if conversion failed.")
def __init__(self):
super().__init__(
@@ -195,10 +201,10 @@ class ReverseListOrderBlock(Block):
A block which takes in a list and returns it in the opposite order.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
input_list: list[Any] = SchemaField(description="The list to reverse")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
reversed_list: list[Any] = SchemaField(description="The list in reversed order")
def __init__(self):

View File

@@ -2,7 +2,13 @@ import os
import re
from typing import Type
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
@@ -15,12 +21,12 @@ class BlockInstallationBlock(Block):
for development purposes only.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
code: str = SchemaField(
description="Python code of the block to be installed",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
success: str = SchemaField(
description="Success message if the block is installed successfully",
)

View File

@@ -1,7 +1,13 @@
from enum import Enum
from typing import Any
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.type import convert
@@ -16,7 +22,7 @@ class ComparisonOperator(Enum):
class ConditionBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
value1: Any = SchemaField(
description="Enter the first value for comparison",
placeholder="For example: 10 or 'hello' or True",
@@ -40,7 +46,7 @@ class ConditionBlock(Block):
default=None,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: bool = SchemaField(
description="The result of the condition evaluation (True or False)"
)
@@ -111,7 +117,7 @@ class ConditionBlock(Block):
class IfInputMatchesBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
input: Any = SchemaField(
description="The input to match against",
placeholder="For example: 10 or 'hello' or True",
@@ -131,7 +137,7 @@ class IfInputMatchesBlock(Block):
default=None,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: bool = SchemaField(
description="The result of the condition evaluation (True or False)"
)

View File

@@ -1,10 +1,18 @@
from enum import Enum
from typing import Literal
from typing import Any, Literal, Optional
from e2b_code_interpreter import AsyncSandbox
from pydantic import SecretStr
from e2b_code_interpreter import Result as E2BExecutionResult
from e2b_code_interpreter.charts import Chart as E2BExecutionResultChart
from pydantic import BaseModel, Field, JsonValue, SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -36,14 +44,135 @@ class ProgrammingLanguage(Enum):
JAVA = "java"
class CodeExecutionBlock(Block):
class MainCodeExecutionResult(BaseModel):
"""
*Pydantic model mirroring `e2b_code_interpreter.Result`*
Represents the data to be displayed as a result of executing a cell in a Jupyter notebook.
The result is similar to the structure returned by ipython kernel: https://ipython.readthedocs.io/en/stable/development/execution.html#execution-semantics
The result can contain multiple types of data, such as text, images, plots, etc. Each type of data is represented
as a string. Display calls don't have to have a text representation; for the main result a text representation
is always present, while the other representations are optional.
""" # noqa
class Chart(BaseModel, E2BExecutionResultChart):
pass
text: Optional[str] = None
html: Optional[str] = None
markdown: Optional[str] = None
svg: Optional[str] = None
png: Optional[str] = None
jpeg: Optional[str] = None
pdf: Optional[str] = None
latex: Optional[str] = None
json_data: Optional[JsonValue] = Field(None, alias="json")
javascript: Optional[str] = None
data: Optional[dict] = None
chart: Optional[Chart] = None
extra: Optional[dict] = None
"""Extra data that can be included. Not part of the standard types."""
class CodeExecutionResult(MainCodeExecutionResult):
__doc__ = MainCodeExecutionResult.__doc__
is_main_result: bool = False
"""Whether this data is the main result of the cell. Data can be produced by display calls of which can be multiple in a cell.""" # noqa
class BaseE2BExecutorMixin:
"""Shared implementation methods for E2B executor blocks."""
async def execute_code(
self,
api_key: str,
code: str,
language: ProgrammingLanguage,
template_id: str = "",
setup_commands: Optional[list[str]] = None,
timeout: Optional[int] = None,
sandbox_id: Optional[str] = None,
dispose_sandbox: bool = False,
):
"""
Unified code execution method that handles all three use cases:
1. Create new sandbox and execute (ExecuteCodeBlock)
2. Create new sandbox, execute, and return sandbox_id (InstantiateCodeSandboxBlock)
3. Connect to existing sandbox and execute (ExecuteCodeStepBlock)
""" # noqa
sandbox = None
try:
if sandbox_id:
# Connect to existing sandbox (ExecuteCodeStepBlock case)
sandbox = await AsyncSandbox.connect(
sandbox_id=sandbox_id, api_key=api_key
)
else:
# Create new sandbox (ExecuteCodeBlock/InstantiateCodeSandboxBlock case)
sandbox = await AsyncSandbox.create(
api_key=api_key, template=template_id, timeout=timeout
)
if setup_commands:
for cmd in setup_commands:
await sandbox.commands.run(cmd)
# Execute the code
execution = await sandbox.run_code(
code,
language=language.value,
on_error=lambda e: sandbox.kill(), # Kill the sandbox on error
)
if execution.error:
raise Exception(execution.error)
results = execution.results
text_output = execution.text
stdout_logs = "".join(execution.logs.stdout)
stderr_logs = "".join(execution.logs.stderr)
return results, text_output, stdout_logs, stderr_logs, sandbox.sandbox_id
finally:
# Dispose of sandbox if requested to reduce usage costs
if dispose_sandbox and sandbox:
await sandbox.kill()
def process_execution_results(
self, results: list[E2BExecutionResult]
) -> tuple[dict[str, Any] | None, list[dict[str, Any]]]:
"""Process and filter execution results."""
# Filter out empty formats and convert to dicts
processed_results = [
{
f: value
for f in [*r.formats(), "extra", "is_main_result"]
if (value := getattr(r, f, None)) is not None
}
for r in results
]
if main_result := next(
(r for r in processed_results if r.get("is_main_result")), None
):
# Make main_result a copy we can modify & remove is_main_result
(main_result := {**main_result}).pop("is_main_result")
return main_result, processed_results
class ExecuteCodeBlock(Block, BaseE2BExecutorMixin):
# TODO : Add support to upload and download files
# Currently, You can customized the CPU and Memory, only by creating a pre customized sandbox template
class Input(BlockSchema):
# NOTE: Currently, you can only customize the CPU and Memory
# by creating a pre-customized sandbox template
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.E2B], Literal["api_key"]
] = CredentialsField(
description="Enter your api key for the E2B Sandbox. You can get it in here - https://e2b.dev/docs",
description=(
"Enter your API key for the E2B platform. "
"You can get it in here - https://e2b.dev/docs"
),
)
# TODO: Option to run command in background
@@ -76,6 +205,14 @@ class CodeExecutionBlock(Block):
description="Execution timeout in seconds", default=300
)
dispose_sandbox: bool = SchemaField(
description=(
"Whether to dispose of the sandbox immediately after execution. "
"If disabled, the sandbox will run until its timeout expires."
),
default=True,
)
template_id: str = SchemaField(
description=(
"You can use an E2B sandbox template by entering its ID here. "
@@ -86,21 +223,29 @@ class CodeExecutionBlock(Block):
advanced=True,
)
class Output(BlockSchema):
response: str = SchemaField(description="Response from code execution")
class Output(BlockSchemaOutput):
main_result: MainCodeExecutionResult = SchemaField(
title="Main Result", description="The main result from the code execution"
)
results: list[CodeExecutionResult] = SchemaField(
description="List of results from the code execution"
)
response: str = SchemaField(
title="Main Text Output",
description="Text output (if any) of the main execution result",
)
stdout_logs: str = SchemaField(
description="Standard output logs from execution"
)
stderr_logs: str = SchemaField(description="Standard error logs from execution")
error: str = SchemaField(description="Error message if execution failed")
def __init__(self):
super().__init__(
id="0b02b072-abe7-11ef-8372-fb5d162dd712",
description="Executes code in an isolated sandbox environment with internet access.",
description="Executes code in a sandbox environment with internet access.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=CodeExecutionBlock.Input,
output_schema=CodeExecutionBlock.Output,
input_schema=ExecuteCodeBlock.Input,
output_schema=ExecuteCodeBlock.Output,
test_credentials=TEST_CREDENTIALS,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
@@ -111,91 +256,59 @@ class CodeExecutionBlock(Block):
"template_id": "",
},
test_output=[
("results", []),
("response", "Hello World"),
("stdout_logs", "Hello World\n"),
],
test_mock={
"execute_code": lambda code, language, setup_commands, timeout, api_key, template_id: (
"Hello World",
"Hello World\n",
"",
"execute_code": lambda api_key, code, language, template_id, setup_commands, timeout, dispose_sandbox: ( # noqa
[], # results
"Hello World", # text_output
"Hello World\n", # stdout_logs
"", # stderr_logs
"sandbox_id", # sandbox_id
),
},
)
async def execute_code(
self,
code: str,
language: ProgrammingLanguage,
setup_commands: list[str],
timeout: int,
api_key: str,
template_id: str,
):
try:
sandbox = None
if template_id:
sandbox = await AsyncSandbox.create(
template=template_id, api_key=api_key, timeout=timeout
)
else:
sandbox = await AsyncSandbox.create(api_key=api_key, timeout=timeout)
if not sandbox:
raise Exception("Sandbox not created")
# Running setup commands
for cmd in setup_commands:
await sandbox.commands.run(cmd)
# Executing the code
execution = await sandbox.run_code(
code,
language=language.value,
on_error=lambda e: sandbox.kill(), # Kill the sandbox if there is an error
)
if execution.error:
raise Exception(execution.error)
response = execution.text
stdout_logs = "".join(execution.logs.stdout)
stderr_logs = "".join(execution.logs.stderr)
return response, stdout_logs, stderr_logs
except Exception as e:
raise e
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
response, stdout_logs, stderr_logs = await self.execute_code(
input_data.code,
input_data.language,
input_data.setup_commands,
input_data.timeout,
credentials.api_key.get_secret_value(),
input_data.template_id,
results, text_output, stdout, stderr, _ = await self.execute_code(
api_key=credentials.api_key.get_secret_value(),
code=input_data.code,
language=input_data.language,
template_id=input_data.template_id,
setup_commands=input_data.setup_commands,
timeout=input_data.timeout,
dispose_sandbox=input_data.dispose_sandbox,
)
if response:
yield "response", response
if stdout_logs:
yield "stdout_logs", stdout_logs
if stderr_logs:
yield "stderr_logs", stderr_logs
# Determine result object shape & filter out empty formats
main_result, results = self.process_execution_results(results)
if main_result:
yield "main_result", main_result
yield "results", results
if text_output:
yield "response", text_output
if stdout:
yield "stdout_logs", stdout
if stderr:
yield "stderr_logs", stderr
except Exception as e:
yield "error", str(e)
class InstantiationBlock(Block):
class Input(BlockSchema):
class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.E2B], Literal["api_key"]
] = CredentialsField(
description="Enter your api key for the E2B Sandbox. You can get it in here - https://e2b.dev/docs",
description=(
"Enter your API key for the E2B platform. "
"You can get it in here - https://e2b.dev/docs"
)
)
# TODO: Option to run command in background
@@ -238,22 +351,27 @@ class InstantiationBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
sandbox_id: str = SchemaField(description="ID of the sandbox instance")
response: str = SchemaField(description="Response from code execution")
response: str = SchemaField(
title="Text Result",
description="Text result (if any) of the setup code execution",
)
stdout_logs: str = SchemaField(
description="Standard output logs from execution"
)
stderr_logs: str = SchemaField(description="Standard error logs from execution")
error: str = SchemaField(description="Error message if execution failed")
def __init__(self):
super().__init__(
id="ff0861c9-1726-4aec-9e5b-bf53f3622112",
description="Instantiate an isolated sandbox environment with internet access where to execute code in.",
description=(
"Instantiate a sandbox environment with internet access "
"in which you can execute code with the Execute Code Step block."
),
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=InstantiationBlock.Input,
output_schema=InstantiationBlock.Output,
input_schema=InstantiateCodeSandboxBlock.Input,
output_schema=InstantiateCodeSandboxBlock.Output,
test_credentials=TEST_CREDENTIALS,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
@@ -269,11 +387,12 @@ class InstantiationBlock(Block):
("stdout_logs", "Hello World\n"),
],
test_mock={
"execute_code": lambda setup_code, language, setup_commands, timeout, api_key, template_id: (
"sandbox_id",
"Hello World",
"Hello World\n",
"",
"execute_code": lambda api_key, code, language, template_id, setup_commands, timeout: ( # noqa
[], # results
"Hello World", # text_output
"Hello World\n", # stdout_logs
"", # stderr_logs
"sandbox_id", # sandbox_id
),
},
)
@@ -282,78 +401,38 @@ class InstantiationBlock(Block):
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
sandbox_id, response, stdout_logs, stderr_logs = await self.execute_code(
input_data.setup_code,
input_data.language,
input_data.setup_commands,
input_data.timeout,
credentials.api_key.get_secret_value(),
input_data.template_id,
_, text_output, stdout, stderr, sandbox_id = await self.execute_code(
api_key=credentials.api_key.get_secret_value(),
code=input_data.setup_code,
language=input_data.language,
template_id=input_data.template_id,
setup_commands=input_data.setup_commands,
timeout=input_data.timeout,
)
if sandbox_id:
yield "sandbox_id", sandbox_id
else:
yield "error", "Sandbox ID not found"
if response:
yield "response", response
if stdout_logs:
yield "stdout_logs", stdout_logs
if stderr_logs:
yield "stderr_logs", stderr_logs
if text_output:
yield "response", text_output
if stdout:
yield "stdout_logs", stdout
if stderr:
yield "stderr_logs", stderr
except Exception as e:
yield "error", str(e)
async def execute_code(
self,
code: str,
language: ProgrammingLanguage,
setup_commands: list[str],
timeout: int,
api_key: str,
template_id: str,
):
try:
sandbox = None
if template_id:
sandbox = await AsyncSandbox.create(
template=template_id, api_key=api_key, timeout=timeout
)
else:
sandbox = await AsyncSandbox.create(api_key=api_key, timeout=timeout)
if not sandbox:
raise Exception("Sandbox not created")
# Running setup commands
for cmd in setup_commands:
await sandbox.commands.run(cmd)
# Executing the code
execution = await sandbox.run_code(
code,
language=language.value,
on_error=lambda e: sandbox.kill(), # Kill the sandbox if there is an error
)
if execution.error:
raise Exception(execution.error)
response = execution.text
stdout_logs = "".join(execution.logs.stdout)
stderr_logs = "".join(execution.logs.stderr)
return sandbox.sandbox_id, response, stdout_logs, stderr_logs
except Exception as e:
raise e
class StepExecutionBlock(Block):
class Input(BlockSchema):
class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.E2B], Literal["api_key"]
] = CredentialsField(
description="Enter your api key for the E2B Sandbox. You can get it in here - https://e2b.dev/docs",
description=(
"Enter your API key for the E2B platform. "
"You can get it in here - https://e2b.dev/docs"
),
)
sandbox_id: str = SchemaField(
@@ -374,21 +453,34 @@ class StepExecutionBlock(Block):
advanced=False,
)
class Output(BlockSchema):
response: str = SchemaField(description="Response from code execution")
dispose_sandbox: bool = SchemaField(
description="Whether to dispose of the sandbox after executing this code.",
default=False,
)
class Output(BlockSchemaOutput):
main_result: MainCodeExecutionResult = SchemaField(
title="Main Result", description="The main result from the code execution"
)
results: list[CodeExecutionResult] = SchemaField(
description="List of results from the code execution"
)
response: str = SchemaField(
title="Main Text Output",
description="Text output (if any) of the main execution result",
)
stdout_logs: str = SchemaField(
description="Standard output logs from execution"
)
stderr_logs: str = SchemaField(description="Standard error logs from execution")
error: str = SchemaField(description="Error message if execution failed")
def __init__(self):
super().__init__(
id="82b59b8e-ea10-4d57-9161-8b169b0adba6",
description="Execute code in a previously instantiated sandbox environment.",
description="Execute code in a previously instantiated sandbox.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=StepExecutionBlock.Input,
output_schema=StepExecutionBlock.Output,
input_schema=ExecuteCodeStepBlock.Input,
output_schema=ExecuteCodeStepBlock.Output,
test_credentials=TEST_CREDENTIALS,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
@@ -397,61 +489,43 @@ class StepExecutionBlock(Block):
"language": ProgrammingLanguage.PYTHON.value,
},
test_output=[
("results", []),
("response", "Hello World"),
("stdout_logs", "Hello World\n"),
],
test_mock={
"execute_step_code": lambda sandbox_id, step_code, language, api_key: (
"Hello World",
"Hello World\n",
"",
"execute_code": lambda api_key, code, language, sandbox_id, dispose_sandbox: ( # noqa
[], # results
"Hello World", # text_output
"Hello World\n", # stdout_logs
"", # stderr_logs
sandbox_id, # sandbox_id
),
},
)
async def execute_step_code(
self,
sandbox_id: str,
code: str,
language: ProgrammingLanguage,
api_key: str,
):
try:
sandbox = await AsyncSandbox.connect(sandbox_id=sandbox_id, api_key=api_key)
if not sandbox:
raise Exception("Sandbox not found")
# Executing the code
execution = await sandbox.run_code(code, language=language.value)
if execution.error:
raise Exception(execution.error)
response = execution.text
stdout_logs = "".join(execution.logs.stdout)
stderr_logs = "".join(execution.logs.stderr)
return response, stdout_logs, stderr_logs
except Exception as e:
raise e
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
response, stdout_logs, stderr_logs = await self.execute_step_code(
input_data.sandbox_id,
input_data.step_code,
input_data.language,
credentials.api_key.get_secret_value(),
results, text_output, stdout, stderr, _ = await self.execute_code(
api_key=credentials.api_key.get_secret_value(),
code=input_data.step_code,
language=input_data.language,
sandbox_id=input_data.sandbox_id,
dispose_sandbox=input_data.dispose_sandbox,
)
if response:
yield "response", response
if stdout_logs:
yield "stdout_logs", stdout_logs
if stderr_logs:
yield "stderr_logs", stderr_logs
# Determine result object shape & filter out empty formats
main_result, results = self.process_execution_results(results)
if main_result:
yield "main_result", main_result
yield "results", results
if text_output:
yield "response", text_output
if stdout:
yield "stdout_logs", stdout
if stderr:
yield "stderr_logs", stderr
except Exception as e:
yield "error", str(e)

View File

@@ -1,17 +1,23 @@
import re
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
class CodeExtractionBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
text: str = SchemaField(
description="Text containing code blocks to extract (e.g., AI response)",
placeholder="Enter text containing code blocks",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
html: str = SchemaField(description="Extracted HTML code")
css: str = SchemaField(description="Extracted CSS code")
javascript: str = SchemaField(description="Extracted JavaScript code")
@@ -90,7 +96,7 @@ class CodeExtractionBlock(Block):
for aliases in language_aliases.values()
for alias in aliases
)
+ r")\s+[\s\S]*?```"
+ r")[ \t]*\n[\s\S]*?```"
)
remaining_text = re.sub(pattern, "", input_data.text).strip()
@@ -103,7 +109,9 @@ class CodeExtractionBlock(Block):
# Escape special regex characters in the language string
language = re.escape(language)
# Extract all code blocks enclosed in ```language``` blocks
pattern = re.compile(rf"```{language}\s+(.*?)```", re.DOTALL | re.IGNORECASE)
pattern = re.compile(
rf"```{language}[ \t]*\n(.*?)\n```", re.DOTALL | re.IGNORECASE
)
matches = pattern.finditer(text)
# Combine all code blocks for this language with newlines between them
code_blocks = [match.group(1).strip() for match in matches]
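# A small sketch (not from the original file) of what the tightened pattern
# matches: the fence language must now be followed only by optional spaces or
# tabs and a newline, so a stray "```python" fragment inside a prose line no
# longer opens a code block.
def _example_extract_python(text: str) -> list[str]:
    import re

    pattern = re.compile(r"```python[ \t]*\n(.*?)\n```", re.DOTALL | re.IGNORECASE)
    return [m.group(1).strip() for m in pattern.finditer(text)]


# _example_extract_python("intro\n```python\nprint('hi')\n```\n") -> ["print('hi')"]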

View File

@@ -5,7 +5,8 @@ from backend.data.block import (
BlockCategory,
BlockManualWebhookConfig,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.integrations.providers import ProviderName
@@ -27,10 +28,10 @@ class TranscriptionDataModel(BaseModel):
class CompassAITriggerBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
payload: TranscriptionDataModel = SchemaField(hidden=True)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
transcription: str = SchemaField(
description="The contents of the compass transcription."
)
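# The swap above from BlockSchema to BlockSchemaInput/BlockSchemaOutput
# repeats across every block in this diff. Their definitions are not shown
# here; a minimal sketch consistent with how they are used (Output classes
# drop their per-block `error` fields, which the base class now supplies)
# might look like:
#
# class BlockSchemaInput(BlockSchema):
#     """Base schema for block inputs."""
#
# class BlockSchemaOutput(BlockSchema):
#     """Base schema for block outputs with a standardized error field."""
#     error: str = SchemaField(description="Error message if the block failed")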

View File

@@ -1,16 +1,22 @@
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
class WordCharacterCountBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
text: str = SchemaField(
description="Input text to count words and characters",
placeholder="Enter your text here",
advanced=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
word_count: int = SchemaField(description="Number of words in the input text")
character_count: int = SchemaField(
description="Number of characters in the input text"

View File

@@ -1,6 +1,12 @@
from typing import Any, List
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.json import loads
from backend.util.mock import MockObject
@@ -12,13 +18,13 @@ from backend.util.prompt import estimate_token_count_str
class CreateDictionaryBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
values: dict[str, Any] = SchemaField(
description="Key-value pairs to create the dictionary with",
placeholder="e.g., {'name': 'Alice', 'age': 25}",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
dictionary: dict[str, Any] = SchemaField(
description="The created dictionary containing the specified key-value pairs"
)
@@ -62,10 +68,11 @@ class CreateDictionaryBlock(Block):
class AddToDictionaryBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
dictionary: dict[Any, Any] = SchemaField(
default_factory=dict,
description="The dictionary to add the entry to. If not provided, a new dictionary will be created.",
advanced=False,
)
key: str = SchemaField(
default="",
@@ -85,11 +92,10 @@ class AddToDictionaryBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
updated_dictionary: dict = SchemaField(
description="The dictionary with the new entry added."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -140,11 +146,11 @@ class AddToDictionaryBlock(Block):
class FindInDictionaryBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
input: Any = SchemaField(description="Dictionary to lookup from")
key: str | int = SchemaField(description="Key to lookup in the dictionary")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
output: Any = SchemaField(description="Value found for the given key")
missing: Any = SchemaField(
description="Value of the input that missing the key"
@@ -200,7 +206,7 @@ class FindInDictionaryBlock(Block):
class RemoveFromDictionaryBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
dictionary: dict[Any, Any] = SchemaField(
description="The dictionary to modify."
)
@@ -209,12 +215,11 @@ class RemoveFromDictionaryBlock(Block):
default=False, description="Whether to return the removed value."
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
updated_dictionary: dict[Any, Any] = SchemaField(
description="The dictionary after removal."
)
removed_value: Any = SchemaField(description="The removed value if requested.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -250,19 +255,18 @@ class RemoveFromDictionaryBlock(Block):
class ReplaceDictionaryValueBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
dictionary: dict[Any, Any] = SchemaField(
description="The dictionary to modify."
)
key: str | int = SchemaField(description="Key to replace the value for.")
value: Any = SchemaField(description="The new value for the given key.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
updated_dictionary: dict[Any, Any] = SchemaField(
description="The dictionary after replacement."
)
old_value: Any = SchemaField(description="The value that was replaced.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -299,10 +303,10 @@ class ReplaceDictionaryValueBlock(Block):
class DictionaryIsEmptyBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
dictionary: dict[Any, Any] = SchemaField(description="The dictionary to check.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
is_empty: bool = SchemaField(description="True if the dictionary is empty.")
def __init__(self):
@@ -326,7 +330,7 @@ class DictionaryIsEmptyBlock(Block):
class CreateListBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
values: List[Any] = SchemaField(
description="A list of values to be combined into a new list.",
placeholder="e.g., ['Alice', 25, True]",
@@ -342,11 +346,10 @@ class CreateListBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
list: List[Any] = SchemaField(
description="The created list containing the specified values."
)
error: str = SchemaField(description="Error message if list creation failed.")
def __init__(self):
super().__init__(
@@ -403,7 +406,7 @@ class CreateListBlock(Block):
class AddToListBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
list: List[Any] = SchemaField(
default_factory=list,
advanced=False,
@@ -424,11 +427,10 @@ class AddToListBlock(Block):
description="The position to insert the new entry. If not provided, the entry will be appended to the end of the list.",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
updated_list: List[Any] = SchemaField(
description="The list with the new entry added."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -483,11 +485,11 @@ class AddToListBlock(Block):
class FindInListBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
list: List[Any] = SchemaField(description="The list to search in.")
value: Any = SchemaField(description="The value to search for.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
index: int = SchemaField(description="The index of the value in the list.")
found: bool = SchemaField(
description="Whether the value was found in the list."
@@ -525,15 +527,14 @@ class FindInListBlock(Block):
class GetListItemBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
list: List[Any] = SchemaField(description="The list to get the item from.")
index: int = SchemaField(
description="The 0-based index of the item (supports negative indices)."
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
item: Any = SchemaField(description="The item at the specified index.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -560,7 +561,7 @@ class GetListItemBlock(Block):
class RemoveFromListBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
list: List[Any] = SchemaField(description="The list to modify.")
value: Any = SchemaField(
default=None, description="Value to remove from the list."
@@ -573,10 +574,9 @@ class RemoveFromListBlock(Block):
default=False, description="Whether to return the removed item."
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
updated_list: List[Any] = SchemaField(description="The list after removal.")
removed_item: Any = SchemaField(description="The removed item if requested.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -617,17 +617,16 @@ class RemoveFromListBlock(Block):
class ReplaceListItemBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
list: List[Any] = SchemaField(description="The list to modify.")
index: int = SchemaField(
description="Index of the item to replace (supports negative indices)."
)
value: Any = SchemaField(description="The new value for the given index.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
updated_list: List[Any] = SchemaField(description="The list after replacement.")
old_item: Any = SchemaField(description="The item that was replaced.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
@@ -662,10 +661,10 @@ class ReplaceListItemBlock(Block):
class ListIsEmptyBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
list: List[Any] = SchemaField(description="The list to check.")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
is_empty: bool = SchemaField(description="True if the list is empty.")
def __init__(self):

View File

@@ -8,7 +8,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
UserPasswordCredentials,
@@ -18,7 +19,7 @@ from ._api import DataForSeoClient
from ._config import dataforseo
class KeywordSuggestion(BlockSchema):
class KeywordSuggestion(BlockSchemaInput):
"""Schema for a keyword suggestion result."""
keyword: str = SchemaField(description="The keyword suggestion")
@@ -45,7 +46,7 @@ class KeywordSuggestion(BlockSchema):
class DataForSeoKeywordSuggestionsBlock(Block):
"""Block for getting keyword suggestions from DataForSEO Labs."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = dataforseo.credentials_field(
description="DataForSEO credentials (username and password)"
)
@@ -77,7 +78,7 @@ class DataForSeoKeywordSuggestionsBlock(Block):
le=3000,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
suggestions: List[KeywordSuggestion] = SchemaField(
description="List of keyword suggestions with metrics"
)
@@ -161,54 +162,63 @@ class DataForSeoKeywordSuggestionsBlock(Block):
**kwargs,
) -> BlockOutput:
"""Execute the keyword suggestions query."""
client = DataForSeoClient(credentials)
try:
client = DataForSeoClient(credentials)
results = await self._fetch_keyword_suggestions(client, input_data)
results = await self._fetch_keyword_suggestions(client, input_data)
# Process and format the results
suggestions = []
if results and len(results) > 0:
# results is a list, get the first element
first_result = results[0] if isinstance(results, list) else results
items = (
first_result.get("items", []) if isinstance(first_result, dict) else []
)
for item in items:
# Create the KeywordSuggestion object
suggestion = KeywordSuggestion(
keyword=item.get("keyword", ""),
search_volume=item.get("keyword_info", {}).get("search_volume"),
competition=item.get("keyword_info", {}).get("competition"),
cpc=item.get("keyword_info", {}).get("cpc"),
keyword_difficulty=item.get("keyword_properties", {}).get(
"keyword_difficulty"
),
serp_info=(
item.get("serp_info") if input_data.include_serp_info else None
),
clickstream_data=(
item.get("clickstream_keyword_info")
if input_data.include_clickstream_data
else None
),
# Process and format the results
suggestions = []
if results and len(results) > 0:
# results is a list, get the first element
first_result = results[0] if isinstance(results, list) else results
items = (
first_result.get("items", [])
if isinstance(first_result, dict)
else []
)
yield "suggestion", suggestion
suggestions.append(suggestion)
if items is None:
items = []
for item in items:
# Create the KeywordSuggestion object
suggestion = KeywordSuggestion(
keyword=item.get("keyword", ""),
search_volume=item.get("keyword_info", {}).get("search_volume"),
competition=item.get("keyword_info", {}).get("competition"),
cpc=item.get("keyword_info", {}).get("cpc"),
keyword_difficulty=item.get("keyword_properties", {}).get(
"keyword_difficulty"
),
serp_info=(
item.get("serp_info")
if input_data.include_serp_info
else None
),
clickstream_data=(
item.get("clickstream_keyword_info")
if input_data.include_clickstream_data
else None
),
)
yield "suggestion", suggestion
suggestions.append(suggestion)
yield "suggestions", suggestions
yield "total_count", len(suggestions)
yield "seed_keyword", input_data.keyword
yield "suggestions", suggestions
yield "total_count", len(suggestions)
yield "seed_keyword", input_data.keyword
except Exception as e:
yield "error", f"Failed to fetch keyword suggestions: {str(e)}"
class KeywordSuggestionExtractorBlock(Block):
"""Extracts individual fields from a KeywordSuggestion object."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
suggestion: KeywordSuggestion = SchemaField(
description="The keyword suggestion object to extract fields from"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
keyword: str = SchemaField(description="The keyword suggestion")
search_volume: Optional[int] = SchemaField(
description="Monthly search volume", default=None

View File

@@ -8,7 +8,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
UserPasswordCredentials,
@@ -18,7 +19,7 @@ from ._api import DataForSeoClient
from ._config import dataforseo
class RelatedKeyword(BlockSchema):
class RelatedKeyword(BlockSchemaInput):
"""Schema for a related keyword result."""
keyword: str = SchemaField(description="The related keyword")
@@ -45,7 +46,7 @@ class RelatedKeyword(BlockSchema):
class DataForSeoRelatedKeywordsBlock(Block):
"""Block for getting related keywords from DataForSEO Labs."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = dataforseo.credentials_field(
description="DataForSEO credentials (username and password)"
)
@@ -85,7 +86,7 @@ class DataForSeoRelatedKeywordsBlock(Block):
le=4,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
related_keywords: List[RelatedKeyword] = SchemaField(
description="List of related keywords with metrics"
)
@@ -171,61 +172,71 @@ class DataForSeoRelatedKeywordsBlock(Block):
**kwargs,
) -> BlockOutput:
"""Execute the related keywords query."""
client = DataForSeoClient(credentials)
try:
client = DataForSeoClient(credentials)
results = await self._fetch_related_keywords(client, input_data)
results = await self._fetch_related_keywords(client, input_data)
# Process and format the results
related_keywords = []
if results and len(results) > 0:
# results is a list, get the first element
first_result = results[0] if isinstance(results, list) else results
items = (
first_result.get("items", []) if isinstance(first_result, dict) else []
)
for item in items:
# Extract keyword_data from the item
keyword_data = item.get("keyword_data", {})
# Create the RelatedKeyword object
keyword = RelatedKeyword(
keyword=keyword_data.get("keyword", ""),
search_volume=keyword_data.get("keyword_info", {}).get(
"search_volume"
),
competition=keyword_data.get("keyword_info", {}).get("competition"),
cpc=keyword_data.get("keyword_info", {}).get("cpc"),
keyword_difficulty=keyword_data.get("keyword_properties", {}).get(
"keyword_difficulty"
),
serp_info=(
keyword_data.get("serp_info")
if input_data.include_serp_info
else None
),
clickstream_data=(
keyword_data.get("clickstream_keyword_info")
if input_data.include_clickstream_data
else None
),
# Process and format the results
related_keywords = []
if results and len(results) > 0:
# results is a list, get the first element
first_result = results[0] if isinstance(results, list) else results
items = (
first_result.get("items", [])
if isinstance(first_result, dict)
else []
)
yield "related_keyword", keyword
related_keywords.append(keyword)
# Ensure items is never None
if items is None:
items = []
for item in items:
# Extract keyword_data from the item
keyword_data = item.get("keyword_data", {})
yield "related_keywords", related_keywords
yield "total_count", len(related_keywords)
yield "seed_keyword", input_data.keyword
# Create the RelatedKeyword object
keyword = RelatedKeyword(
keyword=keyword_data.get("keyword", ""),
search_volume=keyword_data.get("keyword_info", {}).get(
"search_volume"
),
competition=keyword_data.get("keyword_info", {}).get(
"competition"
),
cpc=keyword_data.get("keyword_info", {}).get("cpc"),
keyword_difficulty=keyword_data.get(
"keyword_properties", {}
).get("keyword_difficulty"),
serp_info=(
keyword_data.get("serp_info")
if input_data.include_serp_info
else None
),
clickstream_data=(
keyword_data.get("clickstream_keyword_info")
if input_data.include_clickstream_data
else None
),
)
yield "related_keyword", keyword
related_keywords.append(keyword)
yield "related_keywords", related_keywords
yield "total_count", len(related_keywords)
yield "seed_keyword", input_data.keyword
except Exception as e:
yield "error", f"Failed to fetch related keywords: {str(e)}"
class RelatedKeywordExtractorBlock(Block):
"""Extracts individual fields from a RelatedKeyword object."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
related_keyword: RelatedKeyword = SchemaField(
description="The related keyword object to extract fields from"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
keyword: str = SchemaField(description="The related keyword")
search_volume: Optional[int] = SchemaField(
description="Monthly search volume", default=None

View File

@@ -1,17 +1,23 @@
import codecs
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
class TextDecoderBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
text: str = SchemaField(
description="A string containing escaped characters to be decoded",
placeholder='Your entire text block with \\n and \\" escaped characters',
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
decoded_text: str = SchemaField(
description="The decoded text with escape sequences processed"
)

View File

@@ -4,13 +4,19 @@ import mimetypes
from pathlib import Path
from typing import Any
import aiohttp
import discord
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import APIKeyCredentials, SchemaField
from backend.util.file import store_media_file
from backend.util.request import Requests
from backend.util.type import MediaFileType
from ._auth import (
@@ -28,10 +34,10 @@ TEST_CREDENTIALS_INPUT = TEST_BOT_CREDENTIALS_INPUT
class ReadDiscordMessagesBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
class Output(BlockSchema):
class Output(BlockSchemaOutput):
message_content: str = SchemaField(
description="The content of the message received"
)
@@ -114,10 +120,9 @@ class ReadDiscordMessagesBlock(Block):
if message.attachments:
attachment = message.attachments[0] # Process the first attachment
if attachment.filename.endswith((".txt", ".py")):
async with aiohttp.ClientSession() as session:
async with session.get(attachment.url) as response:
file_content = response.text()
self.output_data += f"\n\nFile from user: {attachment.filename}\nContent: {file_content}"
response = await Requests().get(attachment.url)
file_content = response.text()
self.output_data += f"\n\nFile from user: {attachment.filename}\nContent: {file_content}"
await client.close()
@@ -165,21 +170,21 @@ class ReadDiscordMessagesBlock(Block):
class SendDiscordMessageBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
message_content: str = SchemaField(
description="The content of the message to send"
)
channel_name: str = SchemaField(
description="The name of the channel the message will be sent to"
description="Channel ID or channel name to send the message to"
)
server_name: str = SchemaField(
description="The name of the server where the channel is located",
advanced=True, # Optional field for server name
description="Server name (only needed if using channel name)",
advanced=True,
default="",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(
description="The status of the operation (e.g., 'Message sent', 'Error')"
)
@@ -231,25 +236,49 @@ class SendDiscordMessageBlock(Block):
@client.event
async def on_ready():
print(f"Logged in as {client.user}")
for guild in client.guilds:
if server_name and guild.name != server_name:
continue
for channel in guild.text_channels:
if channel.name == channel_name:
# Split message into chunks if it exceeds 2000 characters
chunks = self.chunk_message(message_content)
last_message = None
for chunk in chunks:
last_message = await channel.send(chunk)
result["status"] = "Message sent"
result["message_id"] = (
str(last_message.id) if last_message else ""
)
result["channel_id"] = str(channel.id)
await client.close()
return
channel = None
result["status"] = "Channel not found"
# Try to parse as channel ID first
try:
channel_id = int(channel_name)
channel = client.get_channel(channel_id)
except ValueError:
# Not a valid ID, will try name lookup
pass
# If not found by ID (or not an ID), try name lookup
if not channel:
for guild in client.guilds:
if server_name and guild.name != server_name:
continue
for ch in guild.text_channels:
if ch.name == channel_name:
channel = ch
break
if channel:
break
if not channel:
result["status"] = f"Channel not found: {channel_name}"
await client.close()
return
# Type check - ensure it's a text channel that can send messages
if not hasattr(channel, "send"):
result["status"] = (
f"Channel {channel_name} cannot receive messages (not a text channel)"
)
await client.close()
return
# Split message into chunks if it exceeds 2000 characters
chunks = self.chunk_message(message_content)
last_message = None
for chunk in chunks:
last_message = await channel.send(chunk) # type: ignore
result["status"] = "Message sent"
result["message_id"] = str(last_message.id) if last_message else ""
result["channel_id"] = str(channel.id)
await client.close()
await client.start(token)
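# A condensed sketch (hypothetical helper, not part of the file) of the
# resolution order implemented above: treat the identifier as a numeric
# channel ID first, then fall back to a name search across guilds, optionally
# filtered by server name.
def _example_resolve_channel(client, identifier: str, server_name: str = ""):
    try:
        if found := client.get_channel(int(identifier)):
            return found
    except ValueError:
        pass  # not numeric; fall through to the name lookup
    for guild in client.guilds:
        if server_name and guild.name != server_name:
            continue
        for ch in guild.text_channels:
            if ch.name == identifier:
                return ch
    return None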
@@ -287,7 +316,7 @@ class SendDiscordMessageBlock(Block):
class SendDiscordDMBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
user_id: str = SchemaField(
description="The Discord user ID to send the DM to (e.g., '123456789012345678')"
@@ -296,7 +325,7 @@ class SendDiscordDMBlock(Block):
description="The content of the direct message to send"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="The status of the operation")
message_id: str = SchemaField(description="The ID of the sent message")
@@ -376,7 +405,7 @@ class SendDiscordDMBlock(Block):
class SendDiscordEmbedBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
channel_identifier: str = SchemaField(
description="Channel ID or channel name to send the embed to"
@@ -413,7 +442,7 @@ class SendDiscordEmbedBlock(Block):
default=[],
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Operation status")
message_id: str = SchemaField(description="ID of the sent embed message")
@@ -563,7 +592,7 @@ class SendDiscordEmbedBlock(Block):
class SendDiscordFileBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
channel_identifier: str = SchemaField(
description="Channel ID or channel name to send the file to"
@@ -584,7 +613,7 @@ class SendDiscordFileBlock(Block):
description="Optional message to send with the file", default=""
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Operation status")
message_id: str = SchemaField(description="ID of the sent message")
@@ -675,16 +704,15 @@ class SendDiscordFileBlock(Block):
elif file.startswith(("http://", "https://")):
# URL - download the file
async with aiohttp.ClientSession() as session:
async with session.get(file) as response:
file_bytes = await response.read()
response = await Requests().get(file)
file_bytes = response.content
# Try to get filename from URL if not provided
if not filename:
from urllib.parse import urlparse
# Try to get filename from URL if not provided
if not filename:
from urllib.parse import urlparse
path = urlparse(file).path
detected_filename = Path(path).name or "download"
path = urlparse(file).path
detected_filename = Path(path).name or "download"
else:
# Local file path - read from stored media file
# This would be a path from a previous block's output
@@ -766,7 +794,7 @@ class SendDiscordFileBlock(Block):
class ReplyToDiscordMessageBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
channel_id: str = SchemaField(
description="The channel ID where the message to reply to is located"
@@ -777,7 +805,7 @@ class ReplyToDiscordMessageBlock(Block):
description="Whether to mention the original message author", default=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Operation status")
reply_id: str = SchemaField(description="ID of the reply message")
@@ -891,13 +919,13 @@ class ReplyToDiscordMessageBlock(Block):
class DiscordUserInfoBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
user_id: str = SchemaField(
description="The Discord user ID to get information about"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
user_id: str = SchemaField(
description="The user's ID (passed through for chaining)"
)
@@ -1008,7 +1036,7 @@ class DiscordUserInfoBlock(Block):
class DiscordChannelInfoBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
channel_identifier: str = SchemaField(
description="Channel name or channel ID to look up"
@@ -1019,7 +1047,7 @@ class DiscordChannelInfoBlock(Block):
default="",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
channel_id: str = SchemaField(description="The channel's ID")
channel_name: str = SchemaField(description="The channel's name")
server_id: str = SchemaField(description="The server's ID")

View File

@@ -2,7 +2,13 @@
Discord OAuth-based blocks.
"""
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import OAuth2Credentials, SchemaField
from ._api import DiscordOAuthUser, get_current_user
@@ -21,12 +27,12 @@ class DiscordGetCurrentUserBlock(Block):
This block requires Discord OAuth2 credentials (not bot tokens).
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: DiscordOAuthCredentialsInput = DiscordOAuthCredentialsField(
["identify"]
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
user_id: str = SchemaField(description="The authenticated user's Discord ID")
username: str = SchemaField(description="The user's username")
avatar_url: str = SchemaField(description="URL to the user's avatar image")

View File

@@ -5,7 +5,13 @@ from typing import Literal
from pydantic import BaseModel, ConfigDict, SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
CredentialsField,
CredentialsMetaInput,
@@ -51,7 +57,7 @@ class SMTPConfig(BaseModel):
class SendEmailBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
to_email: str = SchemaField(
description="Recipient email address", placeholder="recipient@example.com"
)
@@ -67,7 +73,7 @@ class SendEmailBlock(Block):
)
credentials: SMTPCredentialsInput = SMTPCredentialsField()
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Status of the email sending operation")
error: str = SchemaField(
description="Error message if the email sending failed"

View File

@@ -8,7 +8,13 @@ which provides access to LinkedIn profile data and related information.
import logging
from typing import Optional
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import APIKeyCredentials, CredentialsField, SchemaField
from backend.util.type import MediaFileType
@@ -29,7 +35,7 @@ logger = logging.getLogger(__name__)
class GetLinkedinProfileBlock(Block):
"""Block to fetch LinkedIn profile data using Enrichlayer API."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
"""Input schema for GetLinkedinProfileBlock."""
linkedin_url: str = SchemaField(
@@ -80,13 +86,12 @@ class GetLinkedinProfileBlock(Block):
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
"""Output schema for GetLinkedinProfileBlock."""
profile: PersonProfileResponse = SchemaField(
description="LinkedIn profile data"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize GetLinkedinProfileBlock."""
@@ -199,7 +204,7 @@ class GetLinkedinProfileBlock(Block):
class LinkedinPersonLookupBlock(Block):
"""Block to look up LinkedIn profiles by person's information using Enrichlayer API."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
"""Input schema for LinkedinPersonLookupBlock."""
first_name: str = SchemaField(
@@ -242,13 +247,12 @@ class LinkedinPersonLookupBlock(Block):
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
"""Output schema for LinkedinPersonLookupBlock."""
lookup_result: PersonLookupResponse = SchemaField(
description="LinkedIn profile lookup result"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize LinkedinPersonLookupBlock."""
@@ -346,7 +350,7 @@ class LinkedinPersonLookupBlock(Block):
class LinkedinRoleLookupBlock(Block):
"""Block to look up LinkedIn profiles by role in a company using Enrichlayer API."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
"""Input schema for LinkedinRoleLookupBlock."""
role: str = SchemaField(
@@ -366,13 +370,12 @@ class LinkedinRoleLookupBlock(Block):
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
"""Output schema for LinkedinRoleLookupBlock."""
role_lookup_result: RoleLookupResponse = SchemaField(
description="LinkedIn role lookup result"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize LinkedinRoleLookupBlock."""
@@ -449,7 +452,7 @@ class LinkedinRoleLookupBlock(Block):
class GetLinkedinProfilePictureBlock(Block):
"""Block to get LinkedIn profile pictures using Enrichlayer API."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
"""Input schema for GetLinkedinProfilePictureBlock."""
linkedin_profile_url: str = SchemaField(
@@ -460,13 +463,12 @@ class GetLinkedinProfilePictureBlock(Block):
description="Enrichlayer API credentials"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
"""Output schema for GetLinkedinProfilePictureBlock."""
profile_picture_url: MediaFileType = SchemaField(
description="LinkedIn profile picture URL"
)
error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
"""Initialize GetLinkedinProfilePictureBlock."""

View File

@@ -4,7 +4,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
@@ -49,7 +50,7 @@ class CostDollars(BaseModel):
class ExaAnswerBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
@@ -69,7 +70,7 @@ class ExaAnswerBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
answer: str = SchemaField(
description="The generated answer based on search results"
)

View File

@@ -3,7 +3,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
@@ -14,7 +15,7 @@ from .helpers import ContentSettings
class ExaContentsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
@@ -27,7 +28,7 @@ class ExaContentsBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
results: list = SchemaField(
description="List of document contents", default_factory=list
)

View File

@@ -5,7 +5,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
@@ -16,7 +17,7 @@ from .helpers import ContentSettings
class ExaSearchBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
@@ -63,7 +64,7 @@ class ExaSearchBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
results: list = SchemaField(
description="List of search results", default_factory=list
)

View File

@@ -6,7 +6,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
@@ -17,7 +18,7 @@ from .helpers import ContentSettings
class ExaFindSimilarBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
@@ -65,7 +66,7 @@ class ExaFindSimilarBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
results: list[Any] = SchemaField(
description="List of similar documents with title, URL, published date, author, and score",
default_factory=list,

View File

@@ -9,7 +9,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
BlockType,
BlockWebhookConfig,
CredentialsMetaInput,
@@ -84,7 +85,7 @@ class ExaWebsetWebhookBlock(Block):
including creation, updates, searches, and exports.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="Exa API credentials for webhook management"
)
@@ -104,7 +105,7 @@ class ExaWebsetWebhookBlock(Block):
description="Webhook payload data", default={}, hidden=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
event_type: str = SchemaField(description="Type of event that occurred")
event_id: str = SchemaField(description="Unique identifier for this event")
webset_id: str = SchemaField(description="ID of the affected webset")

View File

@@ -31,7 +31,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
Requests,
SchemaField,
@@ -104,7 +105,7 @@ class Webset(BaseModel):
class ExaCreateWebsetBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
@@ -219,7 +220,7 @@ class ExaCreateWebsetBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
webset: Webset = SchemaField(
description="The unique identifier for the created webset"
)
@@ -404,7 +405,7 @@ class ExaCreateWebsetBlock(Block):
class ExaUpdateWebsetBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
@@ -417,7 +418,7 @@ class ExaUpdateWebsetBlock(Block):
description="Key-value pairs to associate with this webset (set to null to clear)",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
webset_id: str = SchemaField(description="The unique identifier for the webset")
status: str = SchemaField(description="The status of the webset")
external_id: Optional[str] = SchemaField(
@@ -475,7 +476,7 @@ class ExaUpdateWebsetBlock(Block):
class ExaListWebsetsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
@@ -497,7 +498,7 @@ class ExaListWebsetsBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
websets: list[Webset] = SchemaField(
description="List of websets", default_factory=list
)
@@ -550,7 +551,7 @@ class ExaListWebsetsBlock(Block):
class ExaGetWebsetBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
@@ -559,7 +560,7 @@ class ExaGetWebsetBlock(Block):
placeholder="webset-id-or-external-id",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
webset_id: str = SchemaField(description="The unique identifier for the webset")
status: str = SchemaField(description="The status of the webset")
external_id: Optional[str] = SchemaField(
@@ -637,7 +638,7 @@ class ExaGetWebsetBlock(Block):
class ExaDeleteWebsetBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
@@ -646,7 +647,7 @@ class ExaDeleteWebsetBlock(Block):
placeholder="webset-id-or-external-id",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
webset_id: str = SchemaField(
description="The unique identifier for the deleted webset"
)
@@ -695,7 +696,7 @@ class ExaDeleteWebsetBlock(Block):
class ExaCancelWebsetBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = exa.credentials_field(
description="The Exa integration requires an API Key."
)
@@ -704,7 +705,7 @@ class ExaCancelWebsetBlock(Block):
placeholder="webset-id-or-external-id",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
webset_id: str = SchemaField(description="The unique identifier for the webset")
status: str = SchemaField(
description="The status of the webset after cancellation"

View File

@@ -10,7 +10,13 @@ from backend.blocks.fal._auth import (
FalCredentialsField,
FalCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.request import ClientResponseError, Requests
@@ -24,7 +30,7 @@ class FalModel(str, Enum):
class AIVideoGeneratorBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
prompt: str = SchemaField(
description="Description of the video to generate.",
placeholder="A dog running in a field.",
@@ -36,7 +42,7 @@ class AIVideoGeneratorBlock(Block):
)
credentials: FalCredentialsInput = FalCredentialsField()
class Output(BlockSchema):
class Output(BlockSchemaOutput):
video_url: str = SchemaField(description="The URL of the generated video.")
error: str = SchemaField(
description="Error message if video generation failed."

View File

@@ -0,0 +1,12 @@
from enum import Enum
class ScrapeFormat(Enum):
MARKDOWN = "markdown"
HTML = "html"
RAW_HTML = "rawHtml"
LINKS = "links"
SCREENSHOT = "screenshot"
SCREENSHOT_FULL_PAGE = "screenshot@fullPage"
JSON = "json"
CHANGE_TRACKING = "changeTracking"

View File

@@ -0,0 +1,28 @@
"""Utility functions for converting between our ScrapeFormat enum and firecrawl FormatOption types."""
from typing import List
from firecrawl.v2.types import FormatOption, ScreenshotFormat
from backend.blocks.firecrawl._api import ScrapeFormat
def convert_to_format_options(
formats: List[ScrapeFormat],
) -> List[FormatOption]:
"""Convert our ScrapeFormat enum values to firecrawl FormatOption types.
Handles special cases like screenshot@fullPage, which needs to be converted
to a ScreenshotFormat object.
"""
result: List[FormatOption] = []
for format_enum in formats:
if format_enum.value == "screenshot@fullPage":
# Special case: convert to ScreenshotFormat with full_page=True
result.append(ScreenshotFormat(type="screenshot", full_page=True))
else:
# Regular string literals
result.append(format_enum.value)
return result
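# Usage sketch (illustration only):
#
# convert_to_format_options(
#     [ScrapeFormat.MARKDOWN, ScrapeFormat.SCREENSHOT_FULL_PAGE]
# )
# -> ["markdown", ScreenshotFormat(type="screenshot", full_page=True)]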

View File

@@ -1,35 +1,26 @@
from enum import Enum
from typing import Any
from firecrawl import FirecrawlApp, ScrapeOptions
from firecrawl import FirecrawlApp
from firecrawl.v2.types import ScrapeOptions
from backend.blocks.firecrawl._api import ScrapeFormat
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
from ._config import firecrawl
class ScrapeFormat(Enum):
MARKDOWN = "markdown"
HTML = "html"
RAW_HTML = "rawHtml"
LINKS = "links"
SCREENSHOT = "screenshot"
SCREENSHOT_FULL_PAGE = "screenshot@fullPage"
JSON = "json"
CHANGE_TRACKING = "changeTracking"
from ._format_utils import convert_to_format_options
class FirecrawlCrawlBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = firecrawl.credentials_field()
url: str = SchemaField(description="The URL to crawl")
limit: int = SchemaField(description="The number of pages to crawl", default=10)
@@ -49,7 +40,7 @@ class FirecrawlCrawlBlock(Block):
description="The format of the crawl", default=[ScrapeFormat.MARKDOWN]
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
data: list[dict[str, Any]] = SchemaField(description="The result of the crawl")
markdown: str = SchemaField(description="The markdown of the crawl")
html: str = SchemaField(description="The html of the crawl")
@@ -65,6 +56,10 @@ class FirecrawlCrawlBlock(Block):
change_tracking: dict[str, Any] = SchemaField(
description="The change tracking of the crawl"
)
error: str = SchemaField(
description="Error message if the crawl failed",
default="",
)
def __init__(self):
super().__init__(
@@ -78,18 +73,17 @@ class FirecrawlCrawlBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
app = FirecrawlApp(api_key=credentials.api_key.get_secret_value())
# Sync call
crawl_result = app.crawl_url(
crawl_result = app.crawl(
input_data.url,
limit=input_data.limit,
scrape_options=ScrapeOptions(
formats=[format.value for format in input_data.formats],
onlyMainContent=input_data.only_main_content,
maxAge=input_data.max_age,
waitFor=input_data.wait_for,
formats=convert_to_format_options(input_data.formats),
only_main_content=input_data.only_main_content,
max_age=input_data.max_age,
wait_for=input_data.wait_for,
),
)
yield "data", crawl_result.data
@@ -101,7 +95,7 @@ class FirecrawlCrawlBlock(Block):
elif f == ScrapeFormat.HTML:
yield "html", data.html
elif f == ScrapeFormat.RAW_HTML:
yield "raw_html", data.rawHtml
yield "raw_html", data.raw_html
elif f == ScrapeFormat.LINKS:
yield "links", data.links
elif f == ScrapeFormat.SCREENSHOT:
@@ -109,6 +103,6 @@ class FirecrawlCrawlBlock(Block):
elif f == ScrapeFormat.SCREENSHOT_FULL_PAGE:
yield "screenshot_full_page", data.screenshot
elif f == ScrapeFormat.CHANGE_TRACKING:
yield "change_tracking", data.changeTracking
yield "change_tracking", data.change_tracking
elif f == ScrapeFormat.JSON:
yield "json", data.json

View File

@@ -9,7 +9,8 @@ from backend.sdk import (
BlockCost,
BlockCostType,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
cost,
@@ -20,8 +21,7 @@ from ._config import firecrawl
@cost(BlockCost(2, BlockCostType.RUN))
class FirecrawlExtractBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = firecrawl.credentials_field()
urls: list[str] = SchemaField(
description="The URLs to crawl - at least one is required. Wildcards are supported. (/*)"
@@ -38,8 +38,12 @@ class FirecrawlExtractBlock(Block):
default=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
data: dict[str, Any] = SchemaField(description="The result of the crawl")
error: str = SchemaField(
description="Error message if the extraction failed",
default="",
)
def __init__(self):
super().__init__(
@@ -53,7 +57,6 @@ class FirecrawlExtractBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
app = FirecrawlApp(api_key=credentials.api_key.get_secret_value())
extract_result = app.extract(

View File

@@ -1,3 +1,5 @@
from typing import Any
from firecrawl import FirecrawlApp
from backend.sdk import (
@@ -5,7 +7,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
@@ -14,14 +17,20 @@ from ._config import firecrawl
class FirecrawlMapWebsiteBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = firecrawl.credentials_field()
url: str = SchemaField(description="The website url to map")
class Output(BlockSchema):
links: list[str] = SchemaField(description="The links of the website")
class Output(BlockSchemaOutput):
links: list[str] = SchemaField(description="List of URLs found on the website")
results: list[dict[str, Any]] = SchemaField(
description="List of search results with url, title, and description"
)
error: str = SchemaField(
description="Error message if the map failed",
default="",
)
def __init__(self):
super().__init__(
@@ -35,12 +44,22 @@ class FirecrawlMapWebsiteBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
app = FirecrawlApp(api_key=credentials.api_key.get_secret_value())
# Sync call
map_result = app.map_url(
map_result = app.map(
url=input_data.url,
)
yield "links", map_result.links
# Convert SearchResult objects to dicts
results_data = [
{
"url": link.url,
"title": link.title,
"description": link.description,
}
for link in map_result.links
]
yield "links", [link.url for link in map_result.links]
yield "results", results_data

View File

@@ -1,35 +1,25 @@
from enum import Enum
from typing import Any
from firecrawl import FirecrawlApp
from backend.blocks.firecrawl._api import ScrapeFormat
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
from ._config import firecrawl
class ScrapeFormat(Enum):
MARKDOWN = "markdown"
HTML = "html"
RAW_HTML = "rawHtml"
LINKS = "links"
SCREENSHOT = "screenshot"
SCREENSHOT_FULL_PAGE = "screenshot@fullPage"
JSON = "json"
CHANGE_TRACKING = "changeTracking"
from ._format_utils import convert_to_format_options
class FirecrawlScrapeBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = firecrawl.credentials_field()
url: str = SchemaField(description="The URL to crawl")
limit: int = SchemaField(description="The number of pages to crawl", default=10)
@@ -49,7 +39,7 @@ class FirecrawlScrapeBlock(Block):
description="The format of the crawl", default=[ScrapeFormat.MARKDOWN]
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
data: dict[str, Any] = SchemaField(description="The result of the crawl")
markdown: str = SchemaField(description="The markdown of the crawl")
html: str = SchemaField(description="The html of the crawl")
@@ -65,6 +55,10 @@ class FirecrawlScrapeBlock(Block):
change_tracking: dict[str, Any] = SchemaField(
description="The change tracking of the crawl"
)
error: str = SchemaField(
description="Error message if the scrape failed",
default="",
)
def __init__(self):
super().__init__(
@@ -78,12 +72,11 @@ class FirecrawlScrapeBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
app = FirecrawlApp(api_key=credentials.api_key.get_secret_value())
scrape_result = app.scrape_url(
scrape_result = app.scrape(
input_data.url,
formats=[format.value for format in input_data.formats],
formats=convert_to_format_options(input_data.formats),
only_main_content=input_data.only_main_content,
max_age=input_data.max_age,
wait_for=input_data.wait_for,
@@ -96,7 +89,7 @@ class FirecrawlScrapeBlock(Block):
elif f == ScrapeFormat.HTML:
yield "html", scrape_result.html
elif f == ScrapeFormat.RAW_HTML:
yield "raw_html", scrape_result.rawHtml
yield "raw_html", scrape_result.raw_html
elif f == ScrapeFormat.LINKS:
yield "links", scrape_result.links
elif f == ScrapeFormat.SCREENSHOT:
@@ -104,6 +97,6 @@ class FirecrawlScrapeBlock(Block):
elif f == ScrapeFormat.SCREENSHOT_FULL_PAGE:
yield "screenshot_full_page", scrape_result.screenshot
elif f == ScrapeFormat.CHANGE_TRACKING:
yield "change_tracking", scrape_result.changeTracking
yield "change_tracking", scrape_result.change_tracking
elif f == ScrapeFormat.JSON:
yield "json", scrape_result.json

View File

@@ -1,35 +1,26 @@
from enum import Enum
from typing import Any
from firecrawl import FirecrawlApp, ScrapeOptions
from firecrawl import FirecrawlApp
from firecrawl.v2.types import ScrapeOptions
from backend.blocks.firecrawl._api import ScrapeFormat
from backend.sdk import (
APIKeyCredentials,
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
SchemaField,
)
from ._config import firecrawl
class ScrapeFormat(Enum):
MARKDOWN = "markdown"
HTML = "html"
RAW_HTML = "rawHtml"
LINKS = "links"
SCREENSHOT = "screenshot"
SCREENSHOT_FULL_PAGE = "screenshot@fullPage"
JSON = "json"
CHANGE_TRACKING = "changeTracking"
from ._format_utils import convert_to_format_options
class FirecrawlSearchBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = firecrawl.credentials_field()
query: str = SchemaField(description="The query to search for")
limit: int = SchemaField(description="The number of pages to crawl", default=10)
@@ -45,9 +36,13 @@ class FirecrawlSearchBlock(Block):
description="Returns the content of the search if specified", default=[]
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
data: dict[str, Any] = SchemaField(description="The result of the search")
site: dict[str, Any] = SchemaField(description="The site of the search")
error: str = SchemaField(
description="Error message if the search failed",
default="",
)
def __init__(self):
super().__init__(
@@ -61,7 +56,6 @@ class FirecrawlSearchBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
app = FirecrawlApp(api_key=credentials.api_key.get_secret_value())
# Sync call
@@ -69,11 +63,12 @@ class FirecrawlSearchBlock(Block):
input_data.query,
limit=input_data.limit,
scrape_options=ScrapeOptions(
formats=[format.value for format in input_data.formats],
maxAge=input_data.max_age,
waitFor=input_data.wait_for,
formats=convert_to_format_options(input_data.formats) or None,
max_age=input_data.max_age,
wait_for=input_data.wait_for,
),
)
yield "data", scrape_result
for site in scrape_result.data:
yield "site", site
if hasattr(scrape_result, "web") and scrape_result.web:
for site in scrape_result.web:
yield "site", site

View File

@@ -5,7 +5,13 @@ from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -57,7 +63,7 @@ class AspectRatio(str, Enum):
class AIImageEditorBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.REPLICATE], Literal["api_key"]
] = CredentialsField(
@@ -90,11 +96,10 @@ class AIImageEditorBlock(Block):
title="Model",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
output_image: MediaFileType = SchemaField(
description="URL of the transformed image"
)
error: str = SchemaField(description="Error message if generation failed")
def __init__(self):
super().__init__(

View File

@@ -3,7 +3,8 @@ from backend.sdk import (
BlockCategory,
BlockManualWebhookConfig,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
ProviderBuilder,
ProviderName,
SchemaField,
@@ -19,14 +20,14 @@ generic_webhook = (
class GenericWebhookTriggerBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
payload: dict = SchemaField(hidden=True, default_factory=dict)
constants: dict = SchemaField(
description="The constants to be set when the block is put on the graph",
default_factory=dict,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
payload: dict = SchemaField(
description="The complete webhook payload that was received from the generic webhook."
)

View File

@@ -3,7 +3,13 @@ from typing import Optional
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from ._api import get_api
@@ -39,7 +45,7 @@ class ChecksConclusion(Enum):
class GithubCreateCheckRunBlock(Block):
"""Block for creating a new check run on a GitHub repository."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo:status")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
@@ -76,7 +82,7 @@ class GithubCreateCheckRunBlock(Block):
default="",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class CheckRunResult(BaseModel):
id: int
html_url: str
@@ -211,7 +217,7 @@ class GithubCreateCheckRunBlock(Block):
class GithubUpdateCheckRunBlock(Block):
"""Block for updating an existing check run on a GitHub repository."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo:status")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
@@ -239,7 +245,7 @@ class GithubUpdateCheckRunBlock(Block):
default=None,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class CheckRunResult(BaseModel):
id: int
html_url: str
@@ -249,7 +255,6 @@ class GithubUpdateCheckRunBlock(Block):
check_run: CheckRunResult = SchemaField(
description="Details of the updated check run"
)
error: str = SchemaField(description="Error message if check run update failed")
def __init__(self):
super().__init__(

View File

@@ -5,7 +5,13 @@ from typing import Optional
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from ._api import get_api
@@ -37,7 +43,7 @@ class CheckRunConclusion(Enum):
class GithubGetCIResultsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description="GitHub repository",
@@ -60,7 +66,7 @@ class GithubGetCIResultsBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class CheckRunItem(TypedDict, total=False):
id: int
name: str
@@ -104,7 +110,6 @@ class GithubGetCIResultsBlock(Block):
total_checks: int = SchemaField(description="Total number of CI checks")
passed_checks: int = SchemaField(description="Number of passed checks")
failed_checks: int = SchemaField(description="Number of failed checks")
error: str = SchemaField(description="Error message if the operation failed")
def __init__(self):
super().__init__(

View File

@@ -3,7 +3,13 @@ from urllib.parse import urlparse
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from ._api import convert_comment_url_to_api_endpoint, get_api
@@ -24,7 +30,7 @@ def is_github_url(url: str) -> bool:
# --8<-- [start:GithubCommentBlockExample]
class GithubCommentBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue or pull request",
@@ -35,7 +41,7 @@ class GithubCommentBlock(Block):
placeholder="Enter your comment",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
id: int = SchemaField(description="ID of the created comment")
url: str = SchemaField(description="URL to the comment on GitHub")
error: str = SchemaField(
@@ -112,7 +118,7 @@ class GithubCommentBlock(Block):
class GithubUpdateCommentBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
comment_url: str = SchemaField(
description="URL of the GitHub comment",
@@ -135,7 +141,7 @@ class GithubUpdateCommentBlock(Block):
placeholder="Enter your comment",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
id: int = SchemaField(description="ID of the updated comment")
url: str = SchemaField(description="URL to the comment on GitHub")
error: str = SchemaField(
@@ -219,14 +225,14 @@ class GithubUpdateCommentBlock(Block):
class GithubListCommentsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue or pull request",
placeholder="https://github.com/owner/repo/issues/1",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class CommentItem(TypedDict):
id: int
body: str
@@ -239,7 +245,6 @@ class GithubListCommentsBlock(Block):
comments: list[CommentItem] = SchemaField(
description="List of comments with their ID, body, user, and URL"
)
error: str = SchemaField(description="Error message if listing comments failed")
def __init__(self):
super().__init__(
@@ -335,7 +340,7 @@ class GithubListCommentsBlock(Block):
class GithubMakeIssueBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
@@ -348,7 +353,7 @@ class GithubMakeIssueBlock(Block):
description="Body of the issue", placeholder="Enter the issue body"
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
number: int = SchemaField(description="Number of the created issue")
url: str = SchemaField(description="URL of the created issue")
error: str = SchemaField(
@@ -410,14 +415,14 @@ class GithubMakeIssueBlock(Block):
class GithubReadIssueBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue",
placeholder="https://github.com/owner/repo/issues/1",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
title: str = SchemaField(description="Title of the issue")
body: str = SchemaField(description="Body of the issue")
user: str = SchemaField(description="User who created the issue")
@@ -483,14 +488,14 @@ class GithubReadIssueBlock(Block):
class GithubListIssuesBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class IssueItem(TypedDict):
title: str
url: str
@@ -501,7 +506,6 @@ class GithubListIssuesBlock(Block):
issues: list[IssueItem] = SchemaField(
description="List of issues with their title and URL"
)
error: str = SchemaField(description="Error message if listing issues failed")
def __init__(self):
super().__init__(
@@ -573,7 +577,7 @@ class GithubListIssuesBlock(Block):
class GithubAddLabelBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue or pull request",
@@ -584,7 +588,7 @@ class GithubAddLabelBlock(Block):
placeholder="Enter the label",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Status of the label addition operation")
error: str = SchemaField(
description="Error message if the label addition failed"
@@ -633,7 +637,7 @@ class GithubAddLabelBlock(Block):
class GithubRemoveLabelBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue or pull request",
@@ -644,7 +648,7 @@ class GithubRemoveLabelBlock(Block):
placeholder="Enter the label",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Status of the label removal operation")
error: str = SchemaField(
description="Error message if the label removal failed"
@@ -694,7 +698,7 @@ class GithubRemoveLabelBlock(Block):
class GithubAssignIssueBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue",
@@ -705,7 +709,7 @@ class GithubAssignIssueBlock(Block):
placeholder="Enter the username",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(
description="Status of the issue assignment operation"
)
@@ -760,7 +764,7 @@ class GithubAssignIssueBlock(Block):
class GithubUnassignIssueBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
issue_url: str = SchemaField(
description="URL of the GitHub issue",
@@ -771,7 +775,7 @@ class GithubUnassignIssueBlock(Block):
placeholder="Enter the username",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(
description="Status of the issue unassignment operation"
)

View File

@@ -2,7 +2,13 @@ import re
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from ._api import get_api
@@ -16,14 +22,14 @@ from ._auth import (
class GithubListPullRequestsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class PRItem(TypedDict):
title: str
url: str
@@ -108,7 +114,7 @@ class GithubListPullRequestsBlock(Block):
class GithubMakePullRequestBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
@@ -135,7 +141,7 @@ class GithubMakePullRequestBlock(Block):
placeholder="Enter the base branch",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
number: int = SchemaField(description="Number of the created pull request")
url: str = SchemaField(description="URL of the created pull request")
error: str = SchemaField(
@@ -209,7 +215,7 @@ class GithubMakePullRequestBlock(Block):
class GithubReadPullRequestBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
pr_url: str = SchemaField(
description="URL of the GitHub pull request",
@@ -221,7 +227,7 @@ class GithubReadPullRequestBlock(Block):
advanced=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
title: str = SchemaField(description="Title of the pull request")
body: str = SchemaField(description="Body of the pull request")
author: str = SchemaField(description="User who created the pull request")
@@ -325,7 +331,7 @@ class GithubReadPullRequestBlock(Block):
class GithubAssignPRReviewerBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
pr_url: str = SchemaField(
description="URL of the GitHub pull request",
@@ -336,7 +342,7 @@ class GithubAssignPRReviewerBlock(Block):
placeholder="Enter the reviewer's username",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(
description="Status of the reviewer assignment operation"
)
@@ -392,7 +398,7 @@ class GithubAssignPRReviewerBlock(Block):
class GithubUnassignPRReviewerBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
pr_url: str = SchemaField(
description="URL of the GitHub pull request",
@@ -403,7 +409,7 @@ class GithubUnassignPRReviewerBlock(Block):
placeholder="Enter the reviewer's username",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(
description="Status of the reviewer unassignment operation"
)
@@ -459,14 +465,14 @@ class GithubUnassignPRReviewerBlock(Block):
class GithubListPRReviewersBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
pr_url: str = SchemaField(
description="URL of the GitHub pull request",
placeholder="https://github.com/owner/repo/pull/1",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class ReviewerItem(TypedDict):
username: str
url: str

View File

@@ -2,7 +2,13 @@ import base64
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from ._api import get_api
@@ -16,14 +22,14 @@ from ._auth import (
class GithubListTagsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class TagItem(TypedDict):
name: str
url: str
@@ -34,7 +40,6 @@ class GithubListTagsBlock(Block):
tags: list[TagItem] = SchemaField(
description="List of tags with their name and file tree browser URL"
)
error: str = SchemaField(description="Error message if listing tags failed")
def __init__(self):
super().__init__(
@@ -111,14 +116,14 @@ class GithubListTagsBlock(Block):
class GithubListBranchesBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class BranchItem(TypedDict):
name: str
url: str
@@ -130,7 +135,6 @@ class GithubListBranchesBlock(Block):
branches: list[BranchItem] = SchemaField(
description="List of branches with their name and file tree browser URL"
)
error: str = SchemaField(description="Error message if listing branches failed")
def __init__(self):
super().__init__(
@@ -207,7 +211,7 @@ class GithubListBranchesBlock(Block):
class GithubListDiscussionsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
@@ -217,7 +221,7 @@ class GithubListDiscussionsBlock(Block):
description="Number of discussions to fetch", default=5
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class DiscussionItem(TypedDict):
title: str
url: str
@@ -323,14 +327,14 @@ class GithubListDiscussionsBlock(Block):
class GithubListReleasesBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class ReleaseItem(TypedDict):
name: str
url: str
@@ -342,7 +346,6 @@ class GithubListReleasesBlock(Block):
releases: list[ReleaseItem] = SchemaField(
description="List of releases with their name and file tree browser URL"
)
error: str = SchemaField(description="Error message if listing releases failed")
def __init__(self):
super().__init__(
@@ -414,7 +417,7 @@ class GithubListReleasesBlock(Block):
class GithubReadFileBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
@@ -430,7 +433,7 @@ class GithubReadFileBlock(Block):
default="master",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
text_content: str = SchemaField(
description="Content of the file (decoded as UTF-8 text)"
)
@@ -438,7 +441,6 @@ class GithubReadFileBlock(Block):
description="Raw base64-encoded content of the file"
)
size: int = SchemaField(description="The size of the file (in bytes)")
error: str = SchemaField(description="Error message if the file reading failed")
def __init__(self):
super().__init__(
@@ -501,7 +503,7 @@ class GithubReadFileBlock(Block):
class GithubReadFolderBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
@@ -517,7 +519,7 @@ class GithubReadFolderBlock(Block):
default="master",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class DirEntry(TypedDict):
name: str
path: str
@@ -625,7 +627,7 @@ class GithubReadFolderBlock(Block):
class GithubMakeBranchBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
@@ -640,7 +642,7 @@ class GithubMakeBranchBlock(Block):
placeholder="source_branch_name",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Status of the branch creation operation")
error: str = SchemaField(
description="Error message if the branch creation failed"
@@ -705,7 +707,7 @@ class GithubMakeBranchBlock(Block):
class GithubDeleteBranchBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
@@ -716,7 +718,7 @@ class GithubDeleteBranchBlock(Block):
placeholder="branch_name",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Status of the branch deletion operation")
error: str = SchemaField(
description="Error message if the branch deletion failed"
@@ -766,7 +768,7 @@ class GithubDeleteBranchBlock(Block):
class GithubCreateFileBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
@@ -789,7 +791,7 @@ class GithubCreateFileBlock(Block):
default="Create new file",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
url: str = SchemaField(description="URL of the created file")
sha: str = SchemaField(description="SHA of the commit")
error: str = SchemaField(
@@ -868,7 +870,7 @@ class GithubCreateFileBlock(Block):
class GithubUpdateFileBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
@@ -891,10 +893,9 @@ class GithubUpdateFileBlock(Block):
default="Update file",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
url: str = SchemaField(description="URL of the updated file")
sha: str = SchemaField(description="SHA of the commit")
error: str = SchemaField(description="Error message if the file update failed")
def __init__(self):
super().__init__(
@@ -974,7 +975,7 @@ class GithubUpdateFileBlock(Block):
class GithubCreateRepositoryBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
name: str = SchemaField(
description="Name of the repository to create",
@@ -998,7 +999,7 @@ class GithubCreateRepositoryBlock(Block):
default="",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
url: str = SchemaField(description="URL of the created repository")
clone_url: str = SchemaField(description="Git clone URL of the repository")
error: str = SchemaField(
@@ -1077,14 +1078,14 @@ class GithubCreateRepositoryBlock(Block):
class GithubListStargazersBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo_url: str = SchemaField(
description="URL of the GitHub repository",
placeholder="https://github.com/owner/repo",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class StargazerItem(TypedDict):
username: str
url: str

View File

@@ -4,7 +4,13 @@ from typing import Any, List, Optional
from typing_extensions import TypedDict
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from ._api import get_api
@@ -26,7 +32,7 @@ class ReviewEvent(Enum):
class GithubCreatePRReviewBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
class ReviewComment(TypedDict, total=False):
path: str
position: Optional[int]
@@ -61,7 +67,7 @@ class GithubCreatePRReviewBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
review_id: int = SchemaField(description="ID of the created review")
state: str = SchemaField(
description="State of the review (e.g., PENDING, COMMENTED, APPROVED, CHANGES_REQUESTED)"
@@ -197,7 +203,7 @@ class GithubCreatePRReviewBlock(Block):
class GithubListPRReviewsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description="GitHub repository",
@@ -208,7 +214,7 @@ class GithubListPRReviewsBlock(Block):
placeholder="123",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class ReviewItem(TypedDict):
id: int
user: str
@@ -223,7 +229,6 @@ class GithubListPRReviewsBlock(Block):
reviews: list[ReviewItem] = SchemaField(
description="List of all reviews on the pull request"
)
error: str = SchemaField(description="Error message if listing reviews failed")
def __init__(self):
super().__init__(
@@ -317,7 +322,7 @@ class GithubListPRReviewsBlock(Block):
class GithubSubmitPendingReviewBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description="GitHub repository",
@@ -336,7 +341,7 @@ class GithubSubmitPendingReviewBlock(Block):
default=ReviewEvent.COMMENT,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
state: str = SchemaField(description="State of the submitted review")
html_url: str = SchemaField(description="URL of the submitted review")
error: str = SchemaField(
@@ -415,7 +420,7 @@ class GithubSubmitPendingReviewBlock(Block):
class GithubResolveReviewDiscussionBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description="GitHub repository",
@@ -434,9 +439,8 @@ class GithubResolveReviewDiscussionBlock(Block):
default=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
success: bool = SchemaField(description="Whether the operation was successful")
error: str = SchemaField(description="Error message if the operation failed")
def __init__(self):
super().__init__(
@@ -579,7 +583,7 @@ class GithubResolveReviewDiscussionBlock(Block):
class GithubGetPRReviewCommentsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description="GitHub repository",
@@ -596,7 +600,7 @@ class GithubGetPRReviewCommentsBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class CommentItem(TypedDict):
id: int
user: str
@@ -616,7 +620,6 @@ class GithubGetPRReviewCommentsBlock(Block):
comments: list[CommentItem] = SchemaField(
description="List of all review comments on the pull request"
)
error: str = SchemaField(description="Error message if getting comments failed")
def __init__(self):
super().__init__(
@@ -744,7 +747,7 @@ class GithubGetPRReviewCommentsBlock(Block):
class GithubCreateCommentObjectBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
path: str = SchemaField(
description="The file path to comment on",
placeholder="src/main.py",
@@ -781,7 +784,7 @@ class GithubCreateCommentObjectBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
comment_object: dict = SchemaField(
description="The comment object formatted for GitHub API"
)

View File

@@ -3,7 +3,13 @@ from typing import Optional
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from ._api import get_api
@@ -26,7 +32,7 @@ class StatusState(Enum):
class GithubCreateStatusBlock(Block):
"""Block for creating a commit status on a GitHub repository."""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubFineGrainedAPICredentialsInput = (
GithubFineGrainedAPICredentialsField("repo:status")
)
@@ -54,7 +60,7 @@ class GithubCreateStatusBlock(Block):
advanced=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
class StatusResult(BaseModel):
id: int
url: str
@@ -66,7 +72,6 @@ class GithubCreateStatusBlock(Block):
updated_at: str
status: StatusResult = SchemaField(description="Details of the created status")
error: str = SchemaField(description="Error message if status creation failed")
def __init__(self):
super().__init__(

View File

@@ -8,7 +8,8 @@ from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
BlockWebhookConfig,
)
from backend.data.model import SchemaField
@@ -26,7 +27,7 @@ logger = logging.getLogger(__name__)
# --8<-- [start:GithubTriggerExample]
class GitHubTriggerBase:
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GithubCredentialsInput = GithubCredentialsField("repo")
repo: str = SchemaField(
description=(
@@ -40,7 +41,7 @@ class GitHubTriggerBase:
payload: dict = SchemaField(hidden=True, default_factory=dict)
# --8<-- [end:example-payload-field]
class Output(BlockSchema):
class Output(BlockSchemaOutput):
payload: dict = SchemaField(
description="The complete webhook payload that was received from GitHub. "
"Includes information about the affected resource (e.g. pull request), "

View File

@@ -8,7 +8,13 @@ from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.settings import Settings
@@ -43,7 +49,7 @@ class CalendarEvent(BaseModel):
class GoogleCalendarReadEventsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/calendar.readonly"]
)
@@ -73,7 +79,7 @@ class GoogleCalendarReadEventsBlock(Block):
description="Include events you've declined", default=False
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
events: list[CalendarEvent] = SchemaField(
description="List of calendar events in the requested time range",
default_factory=list,
@@ -379,7 +385,7 @@ class RecurringEvent(BaseModel):
class GoogleCalendarCreateEventBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/calendar"]
)
@@ -433,12 +439,11 @@ class GoogleCalendarCreateEventBlock(Block):
default_factory=lambda: [ReminderPreset.TEN_MINUTES],
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
event_id: str = SchemaField(description="ID of the created event")
event_link: str = SchemaField(
description="Link to view the event in Google Calendar"
)
error: str = SchemaField(description="Error message if event creation failed")
def __init__(self):
super().__init__(

View File

@@ -14,7 +14,13 @@ from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from pydantic import BaseModel, Field
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.file import MediaFileType, get_exec_file_path, store_media_file
from backend.util.settings import Settings
@@ -320,7 +326,7 @@ class GmailBase(Block, ABC):
class GmailReadBlock(GmailBase):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/gmail.readonly"]
)
@@ -333,7 +339,7 @@ class GmailReadBlock(GmailBase):
default=10,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
email: Email = SchemaField(
description="Email data",
)
@@ -516,7 +522,7 @@ class GmailSendBlock(GmailBase):
- Attachment support for multiple files
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/gmail.send"]
)
@@ -540,7 +546,7 @@ class GmailSendBlock(GmailBase):
description="Files to attach", default_factory=list, advanced=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: GmailSendResult = SchemaField(
description="Send confirmation",
)
@@ -618,7 +624,7 @@ class GmailCreateDraftBlock(GmailBase):
- Attachment support for multiple files
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/gmail.modify"]
)
@@ -642,7 +648,7 @@ class GmailCreateDraftBlock(GmailBase):
description="Files to attach", default_factory=list, advanced=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: GmailDraftResult = SchemaField(
description="Draft creation result",
)
@@ -721,12 +727,12 @@ class GmailCreateDraftBlock(GmailBase):
class GmailListLabelsBlock(GmailBase):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/gmail.labels"]
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: list[dict] = SchemaField(
description="List of labels",
)
@@ -779,7 +785,7 @@ class GmailListLabelsBlock(GmailBase):
class GmailAddLabelBlock(GmailBase):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/gmail.modify"]
)
@@ -790,7 +796,7 @@ class GmailAddLabelBlock(GmailBase):
description="Label name to add",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: GmailLabelResult = SchemaField(
description="Label addition result",
)
@@ -865,7 +871,7 @@ class GmailAddLabelBlock(GmailBase):
class GmailRemoveLabelBlock(GmailBase):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/gmail.modify"]
)
@@ -876,7 +882,7 @@ class GmailRemoveLabelBlock(GmailBase):
description="Label name to remove",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: GmailLabelResult = SchemaField(
description="Label removal result",
)
@@ -941,17 +947,16 @@ class GmailRemoveLabelBlock(GmailBase):
class GmailGetThreadBlock(GmailBase):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/gmail.readonly"]
)
threadId: str = SchemaField(description="Gmail thread ID")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
thread: Thread = SchemaField(
description="Gmail thread with decoded message bodies"
)
error: str = SchemaField(description="Error message if any")
def __init__(self):
super().__init__(
@@ -1218,7 +1223,7 @@ class GmailReplyBlock(GmailBase):
- Full Unicode/emoji support with UTF-8 encoding
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
[
"https://www.googleapis.com/auth/gmail.send",
@@ -1246,14 +1251,13 @@ class GmailReplyBlock(GmailBase):
description="Files to attach", default_factory=list, advanced=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
messageId: str = SchemaField(description="Sent message ID")
threadId: str = SchemaField(description="Thread ID")
message: dict = SchemaField(description="Raw Gmail message object")
email: Email = SchemaField(
description="Parsed email object with decoded body and attachments"
)
error: str = SchemaField(description="Error message if any")
def __init__(self):
super().__init__(
@@ -1368,7 +1372,7 @@ class GmailDraftReplyBlock(GmailBase):
- Full Unicode/emoji support with UTF-8 encoding
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
[
"https://www.googleapis.com/auth/gmail.modify",
@@ -1396,12 +1400,11 @@ class GmailDraftReplyBlock(GmailBase):
description="Files to attach", default_factory=list, advanced=True
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
draftId: str = SchemaField(description="Created draft ID")
messageId: str = SchemaField(description="Draft message ID")
threadId: str = SchemaField(description="Thread ID")
status: str = SchemaField(description="Draft creation status")
error: str = SchemaField(description="Error message if any")
def __init__(self):
super().__init__(
@@ -1482,14 +1485,13 @@ class GmailDraftReplyBlock(GmailBase):
class GmailGetProfileBlock(GmailBase):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/gmail.readonly"]
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
profile: Profile = SchemaField(description="Gmail user profile information")
error: str = SchemaField(description="Error message if any")
def __init__(self):
super().__init__(
@@ -1555,7 +1557,7 @@ class GmailForwardBlock(GmailBase):
- Manual content type override option
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
[
"https://www.googleapis.com/auth/gmail.send",
@@ -1589,11 +1591,10 @@ class GmailForwardBlock(GmailBase):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
messageId: str = SchemaField(description="Forwarded message ID")
threadId: str = SchemaField(description="Thread ID")
status: str = SchemaField(description="Forward status")
error: str = SchemaField(description="Error message if any")
def __init__(self):
super().__init__(

View File

@@ -5,7 +5,13 @@ from typing import Any
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.settings import Settings
@@ -195,7 +201,7 @@ class BatchOperationType(str, Enum):
CLEAR = "clear"
class BatchOperation(BlockSchema):
class BatchOperation(BlockSchemaInput):
type: BatchOperationType = SchemaField(
description="The type of operation to perform"
)
@@ -206,7 +212,7 @@ class BatchOperation(BlockSchema):
class GoogleSheetsReadBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets.readonly"]
)
@@ -218,7 +224,7 @@ class GoogleSheetsReadBlock(Block):
description="The A1 notation of the range to read",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: list[list[str]] = SchemaField(
description="The data read from the spreadsheet",
)
@@ -274,7 +280,7 @@ class GoogleSheetsReadBlock(Block):
class GoogleSheetsWriteBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
@@ -289,7 +295,7 @@ class GoogleSheetsWriteBlock(Block):
description="The data to write to the spreadsheet",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: dict = SchemaField(
description="The result of the write operation",
)
@@ -363,7 +369,7 @@ class GoogleSheetsWriteBlock(Block):
class GoogleSheetsAppendBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
@@ -403,9 +409,8 @@ class GoogleSheetsAppendBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: dict = SchemaField(description="Append API response")
error: str = SchemaField(description="Error message, if any")
def __init__(self):
super().__init__(
@@ -503,7 +508,7 @@ class GoogleSheetsAppendBlock(Block):
class GoogleSheetsClearBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
@@ -515,7 +520,7 @@ class GoogleSheetsClearBlock(Block):
description="The A1 notation of the range to clear",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: dict = SchemaField(
description="The result of the clear operation",
)
@@ -571,7 +576,7 @@ class GoogleSheetsClearBlock(Block):
class GoogleSheetsMetadataBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets.readonly"]
)
@@ -580,7 +585,7 @@ class GoogleSheetsMetadataBlock(Block):
title="Spreadsheet ID or URL",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: dict = SchemaField(
description="The metadata of the spreadsheet including sheets info",
)
@@ -652,7 +657,7 @@ class GoogleSheetsMetadataBlock(Block):
class GoogleSheetsManageSheetBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
@@ -672,9 +677,8 @@ class GoogleSheetsManageSheetBlock(Block):
description="New sheet name for copy", default=""
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: dict = SchemaField(description="Operation result")
error: str = SchemaField(description="Error message, if any")
def __init__(self):
super().__init__(
@@ -760,7 +764,7 @@ class GoogleSheetsManageSheetBlock(Block):
class GoogleSheetsBatchOperationsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
@@ -772,7 +776,7 @@ class GoogleSheetsBatchOperationsBlock(Block):
description="List of operations to perform",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: dict = SchemaField(
description="The result of the batch operations",
)
@@ -877,7 +881,7 @@ class GoogleSheetsBatchOperationsBlock(Block):
class GoogleSheetsFindReplaceBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
@@ -904,7 +908,7 @@ class GoogleSheetsFindReplaceBlock(Block):
default=False,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: dict = SchemaField(
description="The result of the find/replace operation including number of replacements",
)
@@ -987,7 +991,7 @@ class GoogleSheetsFindReplaceBlock(Block):
class GoogleSheetsFindBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets.readonly"]
)
@@ -1020,7 +1024,7 @@ class GoogleSheetsFindBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: dict = SchemaField(
description="The result of the find operation including locations and count",
)
@@ -1255,7 +1259,7 @@ class GoogleSheetsFindBlock(Block):
class GoogleSheetsFormatBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
@@ -1270,9 +1274,8 @@ class GoogleSheetsFormatBlock(Block):
italic: bool = SchemaField(default=False)
font_size: int = SchemaField(default=10)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: dict = SchemaField(description="API response or success flag")
error: str = SchemaField(description="Error message, if any")
def __init__(self):
super().__init__(
@@ -1383,7 +1386,7 @@ class GoogleSheetsFormatBlock(Block):
class GoogleSheetsCreateSpreadsheetBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
@@ -1395,7 +1398,7 @@ class GoogleSheetsCreateSpreadsheetBlock(Block):
default=["Sheet1"],
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: dict = SchemaField(
description="The result containing spreadsheet ID and URL",
)

View File

@@ -3,7 +3,13 @@ from typing import Literal
import googlemaps
from pydantic import BaseModel, SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -37,7 +43,7 @@ class Place(BaseModel):
class GoogleMapsSearchBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.GOOGLE_MAPS], Literal["api_key"]
] = CredentialsField(description="Google Maps API Key")
@@ -58,9 +64,8 @@ class GoogleMapsSearchBlock(Block):
le=60,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
place: Place = SchemaField(description="Place found")
error: str = SchemaField(description="Error message if the search failed")
def __init__(self):
super().__init__(

View File

@@ -8,7 +8,13 @@ from typing import Literal
import aiofiles
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
CredentialsField,
CredentialsMetaInput,
@@ -62,7 +68,7 @@ class HttpMethod(Enum):
class SendWebRequestBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
url: str = SchemaField(
description="The URL to send the request to",
placeholder="https://api.example.com",
@@ -93,7 +99,7 @@ class SendWebRequestBlock(Block):
default_factory=list,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
response: object = SchemaField(description="The response from the server")
client_error: object = SchemaField(description="Errors on 4xx status codes")
server_error: object = SchemaField(description="Errors on 5xx status codes")

View File

@@ -3,13 +3,19 @@ from backend.blocks.hubspot._auth import (
HubSpotCredentialsField,
HubSpotCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.request import Requests
class HubSpotCompanyBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: HubSpotCredentialsInput = HubSpotCredentialsField()
operation: str = SchemaField(
description="Operation to perform (create, update, get)", default="get"
@@ -22,7 +28,7 @@ class HubSpotCompanyBlock(Block):
description="Company domain for get/update operations", default=""
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
company: dict = SchemaField(description="Company information")
status: str = SchemaField(description="Operation status")

View File

@@ -3,13 +3,19 @@ from backend.blocks.hubspot._auth import (
HubSpotCredentialsField,
HubSpotCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.request import Requests
class HubSpotContactBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: HubSpotCredentialsInput = HubSpotCredentialsField()
operation: str = SchemaField(
description="Operation to perform (create, update, get)", default="get"
@@ -22,7 +28,7 @@ class HubSpotContactBlock(Block):
description="Email address for get/update operations", default=""
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
contact: dict = SchemaField(description="Contact information")
status: str = SchemaField(description="Operation status")

View File

@@ -5,13 +5,19 @@ from backend.blocks.hubspot._auth import (
HubSpotCredentialsField,
HubSpotCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.request import Requests
class HubSpotEngagementBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: HubSpotCredentialsInput = HubSpotCredentialsField()
operation: str = SchemaField(
description="Operation to perform (send_email, track_engagement)",
@@ -29,7 +35,7 @@ class HubSpotEngagementBlock(Block):
default=30,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: dict = SchemaField(description="Operation result")
status: str = SchemaField(description="Operation status")

View File

@@ -4,7 +4,13 @@ from typing import Any, Dict, Literal, Optional
from pydantic import SecretStr
from requests.exceptions import RequestException
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
@@ -84,7 +90,7 @@ class UpscaleOption(str, Enum):
class IdeogramModelBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput[
Literal[ProviderName.IDEOGRAM], Literal["api_key"]
] = CredentialsField(
@@ -154,9 +160,8 @@ class IdeogramModelBlock(Block):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
result: str = SchemaField(description="Generated image URL")
error: str = SchemaField(description="Error message if the model run failed")
def __init__(self):
super().__init__(

View File

@@ -2,7 +2,14 @@ import copy
from datetime import date, time
from typing import Any, Optional
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema, BlockType
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockType,
)
from backend.data.model import SchemaField
from backend.util.file import store_media_file
from backend.util.mock import MockObject
@@ -22,7 +29,7 @@ class AgentInputBlock(Block):
It Outputs the value passed as input.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
name: str = SchemaField(description="The name of the input.")
value: Any = SchemaField(
description="The value to be passed as input.",
@@ -60,6 +67,7 @@ class AgentInputBlock(Block):
return schema
class Output(BlockSchema):
# Use BlockSchema to avoid automatic error field for interface definition
result: Any = SchemaField(description="The value passed as input.")
def __init__(self, **kwargs):
@@ -109,7 +117,7 @@ class AgentOutputBlock(Block):
If formatting fails or no `format` is provided, the raw `value` is output.
"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
value: Any = SchemaField(
description="The value to be recorded as output.",
default=None,
@@ -151,6 +159,7 @@ class AgentOutputBlock(Block):
return self.get_field_schema("value")
class Output(BlockSchema):
# Use BlockSchema to avoid automatic error field for interface definition
output: Any = SchemaField(description="The value recorded as output.")
name: Any = SchemaField(description="The name of the value recorded as output.")
@@ -554,6 +563,89 @@ class AgentToggleInputBlock(AgentInputBlock):
)
class AgentTableInputBlock(AgentInputBlock):
"""
This block allows users to input data in a table format.
Configure the table columns at build time, then users can input
rows of data at runtime. Each row is output as a dictionary
with column names as keys.
"""
class Input(AgentInputBlock.Input):
value: Optional[list[dict[str, Any]]] = SchemaField(
description="The table data as a list of dictionaries.",
default=None,
advanced=False,
title="Default Value",
)
column_headers: list[str] = SchemaField(
description="Column headers for the table.",
default_factory=lambda: ["Column 1", "Column 2", "Column 3"],
advanced=False,
title="Column Headers",
)
def generate_schema(self):
"""Generate schema for the value field with table format."""
schema = super().generate_schema()
schema["type"] = "array"
schema["format"] = "table"
schema["items"] = {
"type": "object",
"properties": {
header: {"type": "string"}
for header in (
self.column_headers or ["Column 1", "Column 2", "Column 3"]
)
},
}
if self.value is not None:
schema["default"] = self.value
return schema
class Output(AgentInputBlock.Output):
result: list[dict[str, Any]] = SchemaField(
description="The table data as a list of dictionaries with headers as keys."
)
def __init__(self):
super().__init__(
id="5603b273-f41e-4020-af7d-fbc9c6a8d928",
description="Block for table data input with customizable headers.",
disabled=not config.enable_agent_input_subtype_blocks,
input_schema=AgentTableInputBlock.Input,
output_schema=AgentTableInputBlock.Output,
test_input=[
{
"name": "test_table",
"column_headers": ["Name", "Age", "City"],
"value": [
{"Name": "John", "Age": "30", "City": "New York"},
{"Name": "Jane", "Age": "25", "City": "London"},
],
"description": "Example table input",
}
],
test_output=[
(
"result",
[
{"Name": "John", "Age": "30", "City": "New York"},
{"Name": "Jane", "Age": "25", "City": "London"},
],
)
],
)
async def run(self, input_data: Input, *args, **kwargs) -> BlockOutput:
"""
Yields the table data as a list of dictionaries.
"""
# Pass through the value, defaulting to empty list if None
yield "result", input_data.value if input_data.value is not None else []
IO_BLOCK_IDs = [
AgentInputBlock().id,
AgentOutputBlock().id,
@@ -565,4 +657,5 @@ IO_BLOCK_IDs = [
AgentFileInputBlock().id,
AgentDropdownInputBlock().id,
AgentToggleInputBlock().id,
AgentTableInputBlock().id,
]

View File
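For `AgentTableInputBlock`, `generate_schema()` above layers an array-of-objects shape onto the inherited schema. For the headers used in the test input, the keys it sets would look roughly like this (keys inherited from super().generate_schema() are omitted, since they are not visible in this diff):

expected_fragment = {
    "type": "array",
    "format": "table",  # hint for the builder UI to render a table editor
    "items": {
        "type": "object",
        "properties": {
            "Name": {"type": "string"},
            "Age": {"type": "string"},
            "City": {"type": "string"},
        },
    },
    # present only because the test input supplies a default value:
    "default": [
        {"Name": "John", "Age": "30", "City": "New York"},
        {"Name": "Jane", "Age": "25", "City": "London"},
    ],
}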

@@ -1,12 +1,18 @@
from typing import Any
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.json import json
from backend.util.json import loads
class StepThroughItemsBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
items: list = SchemaField(
advanced=False,
description="The list or dictionary of items to iterate over",
@@ -26,7 +32,7 @@ class StepThroughItemsBlock(Block):
default="",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
item: Any = SchemaField(description="The current item in the iteration")
key: Any = SchemaField(
description="The key or index of the current item in the iteration",
@@ -54,20 +60,43 @@ class StepThroughItemsBlock(Block):
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# Security fix: Add limits to prevent DoS from large iterations
MAX_ITEMS = 10000 # Maximum items to iterate
MAX_ITEM_SIZE = 1024 * 1024 # 1MB per item
for data in [input_data.items, input_data.items_object, input_data.items_str]:
if not data:
continue
# Limit string size before parsing
if isinstance(data, str):
items = json.loads(data)
if len(data) > MAX_ITEM_SIZE:
raise ValueError(
f"Input too large: {len(data)} bytes > {MAX_ITEM_SIZE} bytes"
)
items = loads(data)
else:
items = data
# Check total item count
if isinstance(items, (list, dict)):
if len(items) > MAX_ITEMS:
raise ValueError(f"Too many items: {len(items)} > {MAX_ITEMS}")
iteration_count = 0
if isinstance(items, dict):
# If items is a dictionary, iterate over its values
for item in items.values():
yield "item", item
yield "key", item
for key, value in items.items():
if iteration_count >= MAX_ITEMS:
break
yield "item", value
yield "key", key # Fixed: should yield key, not item
iteration_count += 1
else:
# If items is a list, iterate over the list
for index, item in enumerate(items):
if iteration_count >= MAX_ITEMS:
break
yield "item", item
yield "key", index
iteration_count += 1

View File
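The `StepThroughItemsBlock` changes above add two DoS guards: a 1MB cap on raw JSON input before parsing, and a 10,000-item cap on iteration. The same pattern as a standalone sketch (stdlib json stands in for `backend.util.json.loads` here):

import json
from typing import Any, Iterator

MAX_ITEMS = 10000            # maximum items to iterate
MAX_ITEM_SIZE = 1024 * 1024  # 1MB cap on a raw string before parsing

def bounded_items(data: Any) -> Iterator[tuple[Any, Any]]:
    # Reject oversized strings before paying the cost of parsing them.
    if isinstance(data, str):
        if len(data) > MAX_ITEM_SIZE:
            raise ValueError(f"Input too large: {len(data)} bytes > {MAX_ITEM_SIZE} bytes")
        data = json.loads(data)
    # Reject oversized collections before iterating.
    if isinstance(data, (list, dict)) and len(data) > MAX_ITEMS:
        raise ValueError(f"Too many items: {len(data)} > {MAX_ITEMS}")
    # Dicts yield (key, value) and lists yield (index, item),
    # matching the block's "key" and "item" pins.
    if isinstance(data, dict):
        yield from data.items()
    else:
        yield from enumerate(data)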

@@ -3,13 +3,19 @@ from backend.blocks.jina._auth import (
JinaCredentialsField,
JinaCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.request import Requests
class JinaChunkingBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
texts: list = SchemaField(description="List of texts to chunk")
credentials: JinaCredentialsInput = JinaCredentialsField()
@@ -20,7 +26,7 @@ class JinaChunkingBlock(Block):
description="Whether to return token information", default=False
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
chunks: list = SchemaField(description="List of chunked texts")
tokens: list = SchemaField(
description="List of token information for each chunk",

View File

@@ -3,13 +3,19 @@ from backend.blocks.jina._auth import (
JinaCredentialsField,
JinaCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.request import Requests
class JinaEmbeddingBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
texts: list = SchemaField(description="List of texts to embed")
credentials: JinaCredentialsInput = JinaCredentialsField()
model: str = SchemaField(
@@ -17,7 +23,7 @@ class JinaEmbeddingBlock(Block):
default="jina-embeddings-v2-base-en",
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
embeddings: list = SchemaField(description="List of embeddings")
def __init__(self):

View File

@@ -1,29 +1,47 @@
from typing import List
from urllib.parse import quote
from typing_extensions import TypedDict
from backend.blocks.jina._auth import (
JinaCredentials,
JinaCredentialsField,
JinaCredentialsInput,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.request import Requests
class Reference(TypedDict):
url: str
keyQuote: str
isSupportive: bool
class FactCheckerBlock(Block):
class Input(BlockSchema):
class Input(BlockSchemaInput):
statement: str = SchemaField(
description="The statement to check for factuality"
)
credentials: JinaCredentialsInput = JinaCredentialsField()
class Output(BlockSchema):
class Output(BlockSchemaOutput):
factuality: float = SchemaField(
description="The factuality score of the statement"
)
result: bool = SchemaField(description="The result of the factuality check")
reason: str = SchemaField(description="The reason for the factuality result")
error: str = SchemaField(description="Error message if the check fails")
references: List[Reference] = SchemaField(
description="List of references supporting or contradicting the statement",
default=[],
)
def __init__(self):
super().__init__(
@@ -53,5 +71,11 @@ class FactCheckerBlock(Block):
yield "factuality", data["factuality"]
yield "result", data["result"]
yield "reason", data["reason"]
# Yield references if present in the response
if "references" in data:
yield "references", data["references"]
else:
yield "references", []
else:
raise RuntimeError(f"Expected 'data' key not found in response: {data}")

View File
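The `FactCheckerBlock` now forwards Jina's `references` array (or an empty list) on a dedicated pin. A value matching the `Reference` TypedDict above would look like this (contents invented for illustration):

example_references = [
    {
        "url": "https://example.com/source",
        "keyQuote": "Quoted passage that supports or contradicts the statement.",
        "isSupportive": True,
    },
]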

@@ -8,20 +8,25 @@ from backend.blocks.jina._auth import (
JinaCredentialsInput,
)
from backend.blocks.search import GetRequest
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
class SearchTheWebBlock(Block, GetRequest):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: JinaCredentialsInput = JinaCredentialsField()
query: str = SchemaField(description="The search query to search the web for")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
results: str = SchemaField(
description="The search results including content from top 5 URLs"
)
error: str = SchemaField(description="Error message if the search fails")
def __init__(self):
super().__init__(
@@ -58,7 +63,7 @@ class SearchTheWebBlock(Block, GetRequest):
class ExtractWebsiteContentBlock(Block, GetRequest):
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: JinaCredentialsInput = JinaCredentialsField()
url: str = SchemaField(description="The URL to scrape the content from")
raw_content: bool = SchemaField(
@@ -68,7 +73,7 @@ class ExtractWebsiteContentBlock(Block, GetRequest):
advanced=True,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
content: str = SchemaField(description="The scraped content from the given URL")
error: str = SchemaField(
description="Error message if the content cannot be retrieved"

View File

@@ -62,10 +62,10 @@ TEST_CREDENTIALS_OAUTH = OAuth2Credentials(
title="Mock Linear API key",
username="mock-linear-username",
access_token=SecretStr("mock-linear-access-token"),
access_token_expires_at=None,
access_token_expires_at=1672531200, # Mock expiration time for short-lived token
refresh_token=SecretStr("mock-linear-refresh-token"),
refresh_token_expires_at=None,
scopes=["mock-linear-scopes"],
scopes=["read", "write"],
)
TEST_CREDENTIALS_API_KEY = APIKeyCredentials(

View File

@@ -2,7 +2,9 @@
Linear OAuth handler implementation.
"""
import base64
import json
import time
from typing import Optional
from urllib.parse import urlencode
@@ -38,8 +40,9 @@ class LinearOAuthHandler(BaseOAuthHandler):
self.client_secret = client_secret
self.redirect_uri = redirect_uri
self.auth_base_url = "https://linear.app/oauth/authorize"
self.token_url = "https://api.linear.app/oauth/token" # Correct token URL
self.token_url = "https://api.linear.app/oauth/token"
self.revoke_url = "https://api.linear.app/oauth/revoke"
self.migrate_url = "https://api.linear.app/oauth/migrate_old_token"
def get_login_url(
self, scopes: list[str], state: str, code_challenge: Optional[str]
@@ -82,19 +85,84 @@ class LinearOAuthHandler(BaseOAuthHandler):
return True # Linear doesn't return JSON on successful revoke
async def migrate_old_token(
self, credentials: OAuth2Credentials
) -> OAuth2Credentials:
"""
Migrate an old long-lived token to a new short-lived token with refresh token.
This uses Linear's /oauth/migrate_old_token endpoint to exchange current
long-lived tokens for short-lived tokens with refresh tokens without
requiring users to re-authorize.
"""
if not credentials.access_token:
raise ValueError("No access token to migrate")
request_body = {
"client_id": self.client_id,
"client_secret": self.client_secret,
}
headers = {
"Authorization": f"Bearer {credentials.access_token.get_secret_value()}",
"Content-Type": "application/x-www-form-urlencoded",
}
response = await Requests().post(
self.migrate_url, data=request_body, headers=headers
)
if not response.ok:
try:
error_data = response.json()
error_message = error_data.get("error", "Unknown error")
error_description = error_data.get("error_description", "")
if error_description:
error_message = f"{error_message}: {error_description}"
except json.JSONDecodeError:
error_message = response.text
raise LinearAPIException(
f"Failed to migrate Linear token ({response.status}): {error_message}",
response.status,
)
token_data = response.json()
# Extract token expiration
now = int(time.time())
expires_in = token_data.get("expires_in")
access_token_expires_at = None
if expires_in:
access_token_expires_at = now + expires_in
new_credentials = OAuth2Credentials(
provider=self.PROVIDER_NAME,
title=credentials.title,
username=credentials.username,
access_token=token_data["access_token"],
scopes=credentials.scopes, # Preserve original scopes
refresh_token=token_data.get("refresh_token"),
access_token_expires_at=access_token_expires_at,
refresh_token_expires_at=None,
)
new_credentials.id = credentials.id
return new_credentials
async def _refresh_tokens(
self, credentials: OAuth2Credentials
) -> OAuth2Credentials:
if not credentials.refresh_token:
raise ValueError(
"No refresh token available."
) # Linear uses non-expiring tokens
"No refresh token available. Token may need to be migrated to the new refresh token system."
)
return await self._request_tokens(
{
"refresh_token": credentials.refresh_token.get_secret_value(),
"grant_type": "refresh_token",
}
},
current_credentials=credentials,
)
async def _request_tokens(
@@ -102,16 +170,33 @@ class LinearOAuthHandler(BaseOAuthHandler):
params: dict[str, str],
current_credentials: Optional[OAuth2Credentials] = None,
) -> OAuth2Credentials:
# Determine if this is a refresh token request
is_refresh = params.get("grant_type") == "refresh_token"
# Build request body with appropriate grant_type
request_body = {
"client_id": self.client_id,
"client_secret": self.client_secret,
"grant_type": "authorization_code", # Ensure grant_type is correct
**params,
}
headers = {
"Content-Type": "application/x-www-form-urlencoded"
} # Correct header for token request
# Set default grant_type if not provided
if "grant_type" not in request_body:
request_body["grant_type"] = "authorization_code"
headers = {"Content-Type": "application/x-www-form-urlencoded"}
# For refresh token requests, support HTTP Basic Authentication as recommended
if is_refresh:
# Option 1: Use HTTP Basic Auth (preferred by Linear)
client_credentials = f"{self.client_id}:{self.client_secret}"
encoded_credentials = base64.b64encode(client_credentials.encode()).decode()
headers["Authorization"] = f"Basic {encoded_credentials}"
# Remove client credentials from body when using Basic Auth
request_body.pop("client_id", None)
request_body.pop("client_secret", None)
response = await Requests().post(
self.token_url, data=request_body, headers=headers
)
@@ -120,6 +205,9 @@ class LinearOAuthHandler(BaseOAuthHandler):
try:
error_data = response.json()
error_message = error_data.get("error", "Unknown error")
error_description = error_data.get("error_description", "")
if error_description:
error_message = f"{error_message}: {error_description}"
except json.JSONDecodeError:
error_message = response.text
raise LinearAPIException(
@@ -129,27 +217,84 @@ class LinearOAuthHandler(BaseOAuthHandler):
token_data = response.json()
# Note: Linear access tokens do not expire, so we set expires_at to None
# Extract token expiration if provided (for new refresh token implementation)
now = int(time.time())
expires_in = token_data.get("expires_in")
access_token_expires_at = None
if expires_in:
access_token_expires_at = now + expires_in
# Get username - preserve from current credentials if refreshing
username = None
if current_credentials and is_refresh:
username = current_credentials.username
elif "user" in token_data:
username = token_data["user"].get("name", "Unknown User")
else:
# Fetch username using the access token
username = await self._request_username(token_data["access_token"])
new_credentials = OAuth2Credentials(
provider=self.PROVIDER_NAME,
title=current_credentials.title if current_credentials else None,
username=token_data.get("user", {}).get(
"name", "Unknown User"
), # extract name or set appropriate
username=username or "Unknown User",
access_token=token_data["access_token"],
scopes=token_data["scope"].split(
","
), # Linear returns comma-separated scopes
refresh_token=token_data.get(
"refresh_token"
), # Linear uses non-expiring tokens so this might be null
access_token_expires_at=None,
refresh_token_expires_at=None,
scopes=(
token_data["scope"].split(",")
if "scope" in token_data
else (current_credentials.scopes if current_credentials else [])
),
refresh_token=token_data.get("refresh_token"),
access_token_expires_at=access_token_expires_at,
refresh_token_expires_at=None, # Linear doesn't provide refresh token expiration
)
if current_credentials:
new_credentials.id = current_credentials.id
return new_credentials
async def get_access_token(self, credentials: OAuth2Credentials) -> str:
"""
Returns a valid access token, handling migration and refresh as needed.
This overrides the base implementation to handle Linear's token migration
from old long-lived tokens to new short-lived tokens with refresh tokens.
"""
# If token has no expiration and no refresh token, it might be an old token
# that needs migration
if (
credentials.access_token_expires_at is None
and credentials.refresh_token is None
):
try:
# Attempt to migrate the old token
migrated_credentials = await self.migrate_old_token(credentials)
# Update the credentials store would need to be handled by the caller
# For now, use the migrated credentials for this request
credentials = migrated_credentials
except LinearAPIException:
# Migration failed, try to use the old token as-is
# This maintains backward compatibility
pass
# Use the standard refresh logic from the base class
if self.needs_refresh(credentials):
credentials = await self.refresh_tokens(credentials)
return credentials.access_token.get_secret_value()
def needs_migration(self, credentials: OAuth2Credentials) -> bool:
"""
Check if credentials represent an old long-lived token that needs migration.
Old tokens have no expiration time and no refresh token.
"""
return (
credentials.access_token_expires_at is None
and credentials.refresh_token is None
)
async def _request_username(self, access_token: str) -> Optional[str]:
# Use the LinearClient to fetch user details using GraphQL
from ._api import LinearClient

View File
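Three moving parts in the Linear handler above are easy to misread in diff form: detecting a legacy token, converting `expires_in` into an absolute timestamp, and switching to HTTP Basic auth on refresh. Each isolated as a small helper below; this is a standalone sketch of the logic only, not the handler's actual structure:

import base64
import time
from typing import Optional

def needs_migration(expires_at: Optional[int], refresh_token: Optional[str]) -> bool:
    # Old long-lived Linear tokens carry neither an expiry nor a refresh token.
    return expires_at is None and refresh_token is None

def absolute_expiry(token_data: dict) -> Optional[int]:
    # Linear returns a relative expires_in; persist it as an absolute timestamp.
    expires_in = token_data.get("expires_in")
    return int(time.time()) + expires_in if expires_in else None

def basic_auth_headers(client_id: str, client_secret: str) -> dict:
    # On refresh_token grants the client credentials move out of the form body
    # and into an HTTP Basic Authorization header.
    encoded = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {
        "Authorization": f"Basic {encoded}",
        "Content-Type": "application/x-www-form-urlencoded",
    }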

@@ -3,7 +3,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
OAuth2Credentials,
SchemaField,
@@ -22,7 +23,7 @@ from .models import CreateCommentResponse
class LinearCreateCommentBlock(Block):
"""Block for creating comments on Linear issues"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = linear.credentials_field(
description="Linear credentials with comment creation permissions",
required_scopes={LinearScope.COMMENTS_CREATE},
@@ -30,12 +31,11 @@ class LinearCreateCommentBlock(Block):
issue_id: str = SchemaField(description="ID of the issue to comment on")
comment: str = SchemaField(description="Comment text to add to the issue")
class Output(BlockSchema):
class Output(BlockSchemaOutput):
comment_id: str = SchemaField(description="ID of the created comment")
comment_body: str = SchemaField(
description="Text content of the created comment"
)
error: str = SchemaField(description="Error message if comment creation failed")
def __init__(self):
super().__init__(

View File

@@ -3,7 +3,8 @@ from backend.sdk import (
Block,
BlockCategory,
BlockOutput,
BlockSchema,
BlockSchemaInput,
BlockSchemaOutput,
CredentialsMetaInput,
OAuth2Credentials,
SchemaField,
@@ -22,7 +23,7 @@ from .models import CreateIssueResponse, Issue
class LinearCreateIssueBlock(Block):
"""Block for creating issues on Linear"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = linear.credentials_field(
description="Linear credentials with issue creation permissions",
required_scopes={LinearScope.ISSUES_CREATE},
@@ -43,10 +44,9 @@ class LinearCreateIssueBlock(Block):
default=None,
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
issue_id: str = SchemaField(description="ID of the created issue")
issue_title: str = SchemaField(description="Title of the created issue")
error: str = SchemaField(description="Error message if issue creation failed")
def __init__(self):
super().__init__(
@@ -129,14 +129,14 @@ class LinearCreateIssueBlock(Block):
class LinearSearchIssuesBlock(Block):
"""Block for searching issues on Linear"""
class Input(BlockSchema):
class Input(BlockSchemaInput):
term: str = SchemaField(description="Term to search for issues")
credentials: CredentialsMetaInput = linear.credentials_field(
description="Linear credentials with read permissions",
required_scopes={LinearScope.READ},
)
class Output(BlockSchema):
class Output(BlockSchemaOutput):
issues: list[Issue] = SchemaField(description="List of issues")
def __init__(self):

Some files were not shown because too many files have changed in this diff.