Compare commits

...

8 Commits

Author SHA1 Message Date
Zamil Majdy
701fce83ca fix(backend): add missing metadata attribute to mock nodes in SmartDecisionMaker tests (#11750)
This PR fixes failing SmartDecisionMaker tests by adding missing
`metadata` attribute to mock nodes.

### Changes 🏗️

Mock nodes in SmartDecisionMaker tests were missing the `metadata = {}`
attribute, which was introduced in commit 4a52b7eca for the
customized_name feature. This caused tests to fail with:

```
TypeError: expected string or bytes-like object, got 'Mock'
```

**Files fixed**:
- `backend/blocks/test/test_smart_decision_maker_dict.py`: Added
`metadata = {}` to mock nodes in all 3 tests
- `backend/blocks/test/test_smart_decision_maker_dynamic_fields.py`:
Added `metadata = {}` to mock nodes in all 8 tests

**Root cause**: The `_create_block_function_signature` method calls
`sink_node.metadata.get("customized_name")`, but mock nodes in tests
didn't have the metadata attribute initialized.
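
A minimal sketch of the fix, assuming the same `Mock`-based fixtures used in these tests (exact fixture setup varies per file):

```python
from unittest.mock import Mock

# Sketch of a mock node as used in the SmartDecisionMaker tests
mock_node = Mock()
mock_node.input_default = {}
mock_node.metadata = {}  # the missing attribute; without it,
# sink_node.metadata.get("customized_name") returns a Mock, and the
# downstream name cleanup fails with "expected string or bytes-like object"
```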

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run `poetry run pytest backend/blocks/test/test_smart_decision_maker_dict.py -xvs` - 3 passed
  - [x] Run `poetry run pytest backend/blocks/test/test_smart_decision_maker_dynamic_fields.py -xvs` - 8 passed
  - [x] All tests pass successfully

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

## Release Notes

* **Tests**
  * Updated test infrastructure to enhance mock object configuration for improved test reliability and consistency across test suites.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-11 17:00:36 -06:00
Zamil Majdy
78d89d0faf Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2026-01-11 13:09:23 -06:00
Zamil Majdy
f482eb668b hotfix(backend): resolve tool pin name mismatch in SmartDecisionMakerBlock (#11749)
## Root Cause

Execution a40bdb4a-964d-4684-94e8-b148eb6bcfc2 and all similar executions have been failing since Nov 12, 2025, when tool pin routing was refactored to use node IDs. The SmartDecisionMakerBlock was double-sanitizing field names when emitting tool call outputs:

```python
# Original field name from link: "Max Keyword Difficulty"
original_field_name = field_mapping.get(clean_arg_name)  # Retrieved correctly
sanitized_arg_name = self.cleanup(original_field_name)   # Sanitized AGAIN!
emit_key = f"tools_^_{node_id}_~_{sanitized_arg_name}"   # Emits "max_keyword_difficulty"
```

But the parser expected original names from graph links:
```python
# Parser expects: "Max Keyword Difficulty" (from link.sink_name)
# Emit provides: "max_keyword_difficulty" (sanitized)
# Result: Mismatch → Tool never executes
```

### Changes 🏗️

**1. Fixed Emit Logic** (`smart_decision_maker.py` line 1135)
- Removed double sanitization: `sanitized_arg_name = self.cleanup(original_field_name)`
- Now emits with original field names: `emit_key = f"tools_^_{node_id}_~_{original_field_name}"`

**2. Made Agent Nodes Consistent** (`smart_decision_maker.py` lines 497-530)
- Added `field_mapping` to agent function signatures (was missing)
- Agent signatures now sanitize property keys for the Anthropic API (like block signatures)
- Stores the field_mapping for use during emit (see the sketch below)

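A minimal sketch of the intended round-trip, assuming `cleanup` lowercases names and collapses non-alphanumeric runs to underscores (consistent with the examples in this PR):

```python
import re

def cleanup(name: str) -> str:
    # Assumed behavior, matching "Max Keyword Difficulty" -> "max_keyword_difficulty"
    return re.sub(r"[^a-z0-9_]+", "_", name.lower()).strip("_")

node_id = "767682f5-694f-4b2a-bf52-fbdcad6a4a4f"  # example sink node ID

# Built while creating the function signature: cleaned name -> link.sink_name
field_mapping = {cleanup("Max Keyword Difficulty"): "Max Keyword Difficulty"}

clean_arg_name = "max_keyword_difficulty"                # name the LLM produced
original_field_name = field_mapping[clean_arg_name]      # back to the original
emit_key = f"tools_^_{node_id}_~_{original_field_name}"  # now matches the parser
```
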
### Impact

**Fixes:**
- All graphs with multi-word field names (e.g., "Max Keyword Difficulty", "Minimum Volume")
- All graphs with special characters in field names (e.g., "API-Key")
- Both block nodes AND agent nodes now work consistently

**Unaffected:**
- Single-word lowercase field names (e.g., "keyword", "url") - these were already working

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified parse_execution_output handles exact match correctly
  - [x] Verified emit uses original field names
  - [x] Verified field_mapping works for both block and agent nodes
  - [x] Re-run execution a40bdb4a-964d-4684-94e8-b148eb6bcfc2 after deployment to verify fix

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
(no changes)
- [x] `docker-compose.yml` is updated or already compatible with my
changes (no changes)
- [x] No configuration changes in this PR

### Test Plan

1. **Unit test validation** (completed):
   - Field name cleanup: "Max Keyword Difficulty" → "max_keyword_difficulty"
   - Parse with exact match: Success
   - Parse with mismatch: Returns None

2. **Production validation** (to be done after deployment):
   - Re-run execution a40bdb4a-964d-4684-94e8-b148eb6bcfc2
   - Verify AgentExecutor (node 767682f5-694f-4b2a-bf52-fbdcad6a4a4f) executes successfully
   - Verify execution completes with high correctness score (not 0.20)
   - Monitor for any regressions in existing graphs

### Files Changed

- `backend/blocks/smart_decision_maker.py`: Remove double sanitization,
add agent field_mapping

### Related Issues

- Resolves execution failure a40bdb4a-964d-4684-94e8-b148eb6bcfc2
- Fixes bug introduced in commit 536e2a5ec (Nov 12, 2025)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
  * Improved field name mapping consistency in the SmartDecisionMaker block to ensure proper handling of field names throughout function signatures and tool execution workflows.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-12 02:08:12 +07:00
Nicholas Tindle
4a52b7eca0 fix(backend): use customized block names in smart decision maker
The SmartDecisionMakerBlock now respects the customized_name field from
node metadata when generating tool function signatures for the LLM.

Previously, the block always used the static block.name from the block
class definition, ignoring any custom names users set in the builder UI.

Changes:
- _create_block_function_signature: Check sink_node.metadata for
  customized_name before falling back to block.name
- _create_agent_function_signature: Check sink_node.metadata for
  customized_name before falling back to sink_graph_meta.name
- Added 4 unit tests for the customized_name feature
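
The fallback itself reduces to two lines, as the diff further down shows for the block-signature path:

```python
# Prefer the user's customized name; otherwise fall back to the block's name
custom_name = sink_node.metadata.get("customized_name")
tool_name = custom_name if custom_name else block.name
```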

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 16:51:39 -07:00
Zamil Majdy
97847f59f7 feat(backend): add human-in-the-loop review system for blocks requiring approval (#11732)
## Summary
Introduces a comprehensive Human-In-The-Loop (HITL) review system that
allows any block to require human approval before execution. This
extends the existing HITL infrastructure to support automatic review
requests for potentially dangerous operations.

## 🚀 Key Features

### **Automatic HITL for Any Block**
- **Simple opt-in**: Set `self.requires_human_review = True` in any
block constructor
- **Safe mode integration**: Only activates when
`execution_context.safe_mode = True`
- **Seamless workflow**: Blocks pause execution → Human reviews via
existing UI → Execution continues or stops

### **Unified Review Infrastructure**
- **Shared HITLReviewHelper**: Clean, reusable helper class for all
review operations
- **Single API**: `handle_review_decision()` method with structured
return type
- **Type-safe**: Proper typing with non-nullable
`ReviewDecision.review_result`

### **Smart Graph Detection** 
- **Updated `has_human_in_the_loop`**: Now detects both dedicated HITL
blocks and blocks with `requires_human_review = True`
- **Frontend awareness**: UI can properly indicate graphs requiring
human intervention

## 🏗️ Implementation

### **Block Usage**
```python
class MyBlock(Block):
    def __init__(self):
        super().__init__(...)
        self.requires_human_review = True  # Enable automatic HITL
        
    async def run(self, input_data, **kwargs):
        # If we reach here, either safe mode is off OR human approved
        # No additional HITL code needed - handled automatically by base class
        yield "result", "Operation completed"
```

### **Review Workflow**
1. **Block execution starts** → Base class checks
`requires_human_review` flag
2. **Safe mode enabled** → Creates review entry, pauses execution 
3. **Human reviews** → Uses existing review UI to approve/reject
4. **Execution resumes** → Continues if approved, raises error if
rejected
5. **Safe mode disabled** → Executes normally without review
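
In code, a caller consumes the single-API decision roughly like this (a sketch based on the helper introduced in this PR, invoked from within a block's execution context):

```python
decision = await HITLReviewHelper.handle_review_decision(
    input_data=input_data,
    user_id=user_id,
    node_exec_id=node_exec_id,
    graph_exec_id=graph_exec_id,
    graph_id=graph_id,
    graph_version=graph_version,
    execution_context=execution_context,
    block_name=self.name,
    editable=True,
)
if decision is None:
    return  # paused: node is now in REVIEW status, awaiting a human
if not decision.should_proceed:
    raise RuntimeError(f"Rejected by reviewer: {decision.message}")
reviewed_input = decision.review_result.data  # possibly reviewer-edited
```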

## 🔧 Technical Improvements

### **Code Quality Enhancements**
- **Better naming**: `risky_block` → `requires_human_review` (clearer
intent)
- **Type safety**: Non-nullable `ReviewDecision.review_result`
(eliminates Optional checks)
- **Exhaustive handling**: Proper error handling for unexpected review
statuses
- **Clean exception handling**: Removed redundant try-catch-log-reraise
patterns

### **Architecture Fixes**
- **Circular import resolution**: Fixed `ExecutionContext` import issues
breaking 444+ block tests
- **Early returns**: Cleaner control flow without nested conditionals
- **Defensive programming**: Handles edge cases with clear error
messages

## 📊 Changes Made

### **Core Files**
- **`Block.requires_human_review`**: New flag for marking blocks
requiring approval
- **`HITLReviewHelper`**: Shared helper class with clean, testable API
- **`HumanInTheLoopBlock`**: Refactored to use shared infrastructure
- **`Graph.has_human_in_the_loop`**: Updated to include review-requiring
blocks

### **Quality Improvements**
- **Type hints**: Proper typing throughout with runtime compatibility
- **Error handling**: Exhaustive status handling with descriptive errors
- **Code reduction**: -16 lines through removal of redundant exception
handling
- **Test compatibility**: All 444/445 block tests pass

## Testing & Validation

- **All tests pass**: 444/445 block tests passing
- **Type checking**: All pyright/mypy checks pass
- **Formatting**: All linting and formatting checks pass
- **Circular imports**: Resolved import issues that were breaking tests
- **Backward compatibility**: Existing HITL functionality unchanged

## 🎯 Use Cases

This enables automatic human oversight for blocks performing:
- **File operations**: Deletion, modification, system access
- **External API calls**: Payments, data modifications, destructive
operations
- **System commands**: Shell execution, configuration changes
- **Data processing**: Sensitive data handling, compliance-required
operations

## 🔄 Migration Path

- **Existing code**: No changes required - fully backward compatible
- **New blocks**: Simply set `self.requires_human_review = True` to enable automatic HITL
- **Safe mode**: Controls whether review requests are created (production vs development)

---

This creates a robust, type-safe foundation for human oversight in
automated workflows while maintaining the existing HITL user experience
and API compatibility.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
  * Human-in-the-loop review support so executions can pause for human review and resume based on decisions.

* **Improvements**
  * Blocks can opt into requiring human review and will use reviewed input when proceeding.
  * Unified review decision flow with clearer approved/rejected outcomes and messaging.
  * Graph detection expanded to recognize nodes that require human review.

* **Chores**
  * Test config adjusted to avoid pytest plugin conflicts.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-09 21:14:37 +00:00
Zamil Majdy
22ca8955c5 fix(backend): library agent creation and version update improvements (#11731)
## Summary
Fixes library agent creation and version update logic to properly handle
both user-created and marketplace agents.

## Changes
- **Remove useGraphIsActiveVersion filter** from `update_agent_version_in_library` to allow both manual and auto updates
- **Set useGraphIsActiveVersion correctly** (see the sketch below):
  - `False` for marketplace agents (require manual updates to avoid breaking workflows)
  - `True` for user-created agents (can safely auto-update since the user controls the source)
- Update function documentation to reflect new behavior
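
Sketched in code, the creation-time rule is simply (the marketplace path is taken from the diff below; the user-created path is inferred from the description above):

```python
# Marketplace install: pin the version, require manual updates
marketplace_agent_data = {
    "isCreatedByUser": False,
    "useGraphIsActiveVersion": False,
}

# User-created agent: safe to track the graph's active version automatically
user_agent_data = {
    "isCreatedByUser": True,
    "useGraphIsActiveVersion": True,
}
```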

## Problem Solved
- Marketplace agents can now be updated manually via API
- User-created agents maintain auto-update capability
- Resolves Sentry error AUTOGPT-SERVER-722 about "Expected a record, found none"
- Fixes store submission modal issues

## Test Plan
- [x] Verify marketplace agents are created with `useGraphIsActiveVersion: False`
- [x] Verify user agents are created with `useGraphIsActiveVersion: True`
- [x] Confirm `update_agent_version_in_library` works for both types
- [x] Test store submission flow works without modal issues

## Review Notes
This change ensures proper separation between user-controlled agents
(auto-update) and marketplace agents (manual update), while allowing the
API to service both use cases.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

## Release Notes

* **New Features**
  * Enhanced agent publishing workflow with improved version tracking and change detection for marketplace updates

* **Bug Fixes**
  * Improved error handling when updating agent versions in the library
  * Better detection of unpublished changes before publishing agents

* **Improvements**
  * Changes Summary field now supports longer descriptions (up to 500 characters) with multi-line editing capability

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-09 21:14:05 +00:00
Nicholas Tindle
43cbe2e011 feat!(blocks): Add Reddit OAuth2 integration and advanced Reddit blocks (#11623)
Replaces user/password Reddit credentials with OAuth2, adds
RedditOAuthHandler, and updates Reddit blocks to support OAuth2
authentication. Introduces new blocks for creating posts, fetching post
details, searching, editing posts, and retrieving subreddit info.
Updates test credentials and input handling to use OAuth2 tokens.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Rebuild the Reddit blocks to support OAuth2 rather than requiring users to provide their username and password. This is done by switching from script-based to web-based authentication on the Reddit side, facilitated by Reddit's approval of an OAuth app on the account `ntindle`.
<!-- Concisely describe all of the changes made in this pull request:
-->
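
For illustration, a block can build an authenticated client from stored OAuth2 credentials roughly like this (a hedged sketch; the placeholder values and the `credentials` object are assumptions based on this PR, while the `praw.Reddit` kwargs are praw's documented parameters):

```python
import praw

# praw accepts a refresh token directly and renews access tokens itself
reddit = praw.Reddit(
    client_id="<app-client-id>",          # placeholder
    client_secret="<app-client-secret>",  # placeholder
    refresh_token=credentials.refresh_token.get_secret_value(),
    user_agent="web:AutoGPT:v0.6.0 (by /u/autogpt)",
)
submission = reddit.submission(id="abc123")  # hypothetical post ID
```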

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Build a super agent
  - [x] Upload the super agent and a video of it working

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces full Reddit OAuth2 support and substantially expands Reddit
capabilities across the platform.
> 
> - Adds `RedditOAuthHandler` with token exchange, refresh, revoke;
registers handler in `integrations/oauth/__init__.py`
> - Refactors Reddit blocks to use `OAuth2Credentials` and `praw` via
refresh tokens; updates models (e.g., `post_id`, richer outputs) and
adds `strip_reddit_prefix`
> - New blocks: create/edit/delete posts, post/get/delete comments,
reply to comments, get post details, user posts (self/others), search,
inbox, subreddit info/rules/flairs, send messages
> - Updates default `settings.config.reddit_user_agent` and test
credentials; minor `.branchlet.json` addition
> - Docs: clarifies block error-handling with
`BlockInputError`/`BlockExecutionError` guidance
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
4f1f26c7e7. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

## Release Notes

* **New Features**
  * Added OAuth2-based authentication for Reddit integration, replacing legacy credential methods
  * Expanded Reddit capabilities with new blocks for creating posts, retrieving post details, managing comments, accessing inbox, and fetching user/subreddit information
  * Enhanced data models to support richer Reddit interactions and chainable workflows

* **Documentation**
  * Updated error handling guidance to distinguish between validation errors and runtime errors with improved exception patterns

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
2026-01-09 20:53:03 +00:00
Nicholas Tindle
a318832414 feat(docs): update dev from gitbook changes (#11740)
<!-- Clearly explain the need for these changes: -->
The `gitbook` branch has changes that need to be synced to `dev`.
### Changes 🏗️
Pull changes from gitbook into dev
<!-- Concisely describe all of the changes made in this pull request:
-->

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Migrates documentation to GitBook and removes the old MkDocs setup.
> 
> - Removes MkDocs configuration and infra: `docs/mkdocs.yml`,
`docs/netlify.toml`, `docs/overrides/main.html`,
`docs/requirements.txt`, and JS assets (`_javascript/mathjax.js`,
`_javascript/tablesort.js`)
> - Updates `docs/content/contribute/index.md` to describe GitBook
workflow (gitbook branch, editing, previews, and `SUMMARY.md`)
> - Adds GitBook navigation file `docs/platform/SUMMARY.md` and a new
platform overview page `docs/platform/what-is-autogpt-platform.md`
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
e7e118b5a8. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Documentation**
  * Updated contribution guide for the new documentation platform and workflow
  * Added new platform overview and navigation documentation

* **Chores**
  * Removed MkDocs configuration and related dependencies
  * Removed deprecated JavaScript integrations and deployment overrides

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 19:22:05 +00:00
81 changed files with 3305 additions and 482 deletions

.branchlet.json (Normal file, +37)
View File

@@ -0,0 +1,37 @@
{
"worktreeCopyPatterns": [
".env*",
".vscode/**",
".auth/**",
".claude/**",
"autogpt_platform/.env*",
"autogpt_platform/backend/.env*",
"autogpt_platform/frontend/.env*",
"autogpt_platform/frontend/.auth/**",
"autogpt_platform/db/docker/.env*"
],
"worktreeCopyIgnores": [
"**/node_modules/**",
"**/dist/**",
"**/.git/**",
"**/Thumbs.db",
"**/.DS_Store",
"**/.next/**",
"**/__pycache__/**",
"**/.ruff_cache/**",
"**/.pytest_cache/**",
"**/*.pyc",
"**/playwright-report/**",
"**/logs/**",
"**/site/**"
],
"worktreePathTemplate": "$BASE_PATH.worktree",
"postCreateCmd": [
"cd autogpt_platform/autogpt_libs && poetry install",
"cd autogpt_platform/backend && poetry install && poetry run prisma generate",
"cd autogpt_platform/frontend && pnpm install",
"cd docs && pip install -r requirements.txt"
],
"terminalCommand": "code .",
"deleteBranchWithWorktree": false
}

View File

@@ -489,7 +489,7 @@ async def update_agent_version_in_library(
agent_graph_version: int,
) -> library_model.LibraryAgent:
"""
Updates the agent version in the library if useGraphIsActiveVersion is True.
Updates the agent version in the library for any agent owned by the user.
Args:
user_id: Owner of the LibraryAgent.
@@ -498,20 +498,31 @@ async def update_agent_version_in_library(
Raises:
DatabaseError: If there's an error with the update.
NotFoundError: If no library agent is found for this user and agent.
"""
logger.debug(
f"Updating agent version in library for user #{user_id}, "
f"agent #{agent_graph_id} v{agent_graph_version}"
)
try:
library_agent = await prisma.models.LibraryAgent.prisma().find_first_or_raise(
async with transaction() as tx:
library_agent = await prisma.models.LibraryAgent.prisma(tx).find_first_or_raise(
where={
"userId": user_id,
"agentGraphId": agent_graph_id,
"useGraphIsActiveVersion": True,
},
)
lib = await prisma.models.LibraryAgent.prisma().update(
# Delete any conflicting LibraryAgent for the target version
await prisma.models.LibraryAgent.prisma(tx).delete_many(
where={
"userId": user_id,
"agentGraphId": agent_graph_id,
"agentGraphVersion": agent_graph_version,
"id": {"not": library_agent.id},
}
)
lib = await prisma.models.LibraryAgent.prisma(tx).update(
where={"id": library_agent.id},
data={
"AgentGraph": {
@@ -525,13 +536,13 @@ async def update_agent_version_in_library(
},
include={"AgentGraph": True},
)
if lib is None:
raise NotFoundError(f"Library agent {library_agent.id} not found")
return library_model.LibraryAgent.from_db(lib)
except prisma.errors.PrismaError as e:
logger.error(f"Database error updating agent version in library: {e}")
raise DatabaseError("Failed to update agent version in library") from e
if lib is None:
raise NotFoundError(
f"Failed to update library agent for {agent_graph_id} v{agent_graph_version}"
)
return library_model.LibraryAgent.from_db(lib)
async def update_library_agent(
@@ -825,6 +836,7 @@ async def add_store_agent_to_library(
}
},
"isCreatedByUser": False,
"useGraphIsActiveVersion": False,
"settings": SafeJson(
_initialize_graph_settings(graph_model).model_dump()
),

View File

@@ -0,0 +1,184 @@
"""
Shared helpers for Human-In-The-Loop (HITL) review functionality.
Used by both the dedicated HumanInTheLoopBlock and blocks that require human review.
"""
import logging
from typing import Any, Optional
from prisma.enums import ReviewStatus
from pydantic import BaseModel
from backend.data.execution import ExecutionContext, ExecutionStatus
from backend.data.human_review import ReviewResult
from backend.executor.manager import async_update_node_execution_status
from backend.util.clients import get_database_manager_async_client
logger = logging.getLogger(__name__)
class ReviewDecision(BaseModel):
"""Result of a review decision."""
should_proceed: bool
message: str
review_result: ReviewResult
class HITLReviewHelper:
"""Helper class for Human-In-The-Loop review operations."""
@staticmethod
async def get_or_create_human_review(**kwargs) -> Optional[ReviewResult]:
"""Create or retrieve a human review from the database."""
return await get_database_manager_async_client().get_or_create_human_review(
**kwargs
)
@staticmethod
async def update_node_execution_status(**kwargs) -> None:
"""Update the execution status of a node."""
await async_update_node_execution_status(
db_client=get_database_manager_async_client(), **kwargs
)
@staticmethod
async def update_review_processed_status(
node_exec_id: str, processed: bool
) -> None:
"""Update the processed status of a review."""
return await get_database_manager_async_client().update_review_processed_status(
node_exec_id, processed
)
@staticmethod
async def _handle_review_request(
input_data: Any,
user_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
graph_version: int,
execution_context: ExecutionContext,
block_name: str = "Block",
editable: bool = False,
) -> Optional[ReviewResult]:
"""
Handle a review request for a block that requires human review.
Args:
input_data: The input data to be reviewed
user_id: ID of the user requesting the review
node_exec_id: ID of the node execution
graph_exec_id: ID of the graph execution
graph_id: ID of the graph
graph_version: Version of the graph
execution_context: Current execution context
block_name: Name of the block requesting review
editable: Whether the reviewer can edit the data
Returns:
ReviewResult if review is complete, None if waiting for human input
Raises:
Exception: If review creation or status update fails
"""
# Skip review if safe mode is disabled - return auto-approved result
if not execution_context.safe_mode:
logger.info(
f"Block {block_name} skipping review for node {node_exec_id} - safe mode disabled"
)
return ReviewResult(
data=input_data,
status=ReviewStatus.APPROVED,
message="Auto-approved (safe mode disabled)",
processed=True,
node_exec_id=node_exec_id,
)
result = await HITLReviewHelper.get_or_create_human_review(
user_id=user_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
input_data=input_data,
message=f"Review required for {block_name} execution",
editable=editable,
)
if result is None:
logger.info(
f"Block {block_name} pausing execution for node {node_exec_id} - awaiting human review"
)
await HITLReviewHelper.update_node_execution_status(
exec_id=node_exec_id,
status=ExecutionStatus.REVIEW,
)
return None # Signal that execution should pause
# Mark review as processed if not already done
if not result.processed:
await HITLReviewHelper.update_review_processed_status(
node_exec_id=node_exec_id, processed=True
)
return result
@staticmethod
async def handle_review_decision(
input_data: Any,
user_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
graph_version: int,
execution_context: ExecutionContext,
block_name: str = "Block",
editable: bool = False,
) -> Optional[ReviewDecision]:
"""
Handle a review request and return the decision in a single call.
Args:
input_data: The input data to be reviewed
user_id: ID of the user requesting the review
node_exec_id: ID of the node execution
graph_exec_id: ID of the graph execution
graph_id: ID of the graph
graph_version: Version of the graph
execution_context: Current execution context
block_name: Name of the block requesting review
editable: Whether the reviewer can edit the data
Returns:
ReviewDecision if review is complete (approved/rejected),
None if execution should pause (awaiting review)
"""
review_result = await HITLReviewHelper._handle_review_request(
input_data=input_data,
user_id=user_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
execution_context=execution_context,
block_name=block_name,
editable=editable,
)
if review_result is None:
# Still awaiting review - return None to pause execution
return None
# Review is complete, determine outcome
should_proceed = review_result.status == ReviewStatus.APPROVED
message = review_result.message or (
"Execution approved by reviewer"
if should_proceed
else "Execution rejected by reviewer"
)
return ReviewDecision(
should_proceed=should_proceed, message=message, review_result=review_result
)

View File

@@ -3,6 +3,7 @@ from typing import Any
from prisma.enums import ReviewStatus
from backend.blocks.helpers.review import HITLReviewHelper
from backend.data.block import (
Block,
BlockCategory,
@@ -11,11 +12,9 @@ from backend.data.block import (
BlockSchemaOutput,
BlockType,
)
from backend.data.execution import ExecutionContext, ExecutionStatus
from backend.data.execution import ExecutionContext
from backend.data.human_review import ReviewResult
from backend.data.model import SchemaField
from backend.executor.manager import async_update_node_execution_status
from backend.util.clients import get_database_manager_async_client
logger = logging.getLogger(__name__)
@@ -72,32 +71,26 @@ class HumanInTheLoopBlock(Block):
("approved_data", {"name": "John Doe", "age": 30}),
],
test_mock={
"get_or_create_human_review": lambda *_args, **_kwargs: ReviewResult(
data={"name": "John Doe", "age": 30},
status=ReviewStatus.APPROVED,
message="",
processed=False,
node_exec_id="test-node-exec-id",
),
"update_node_execution_status": lambda *_args, **_kwargs: None,
"update_review_processed_status": lambda *_args, **_kwargs: None,
"handle_review_decision": lambda **kwargs: type(
"ReviewDecision",
(),
{
"should_proceed": True,
"message": "Test approval message",
"review_result": ReviewResult(
data={"name": "John Doe", "age": 30},
status=ReviewStatus.APPROVED,
message="",
processed=False,
node_exec_id="test-node-exec-id",
),
},
)(),
},
)
async def get_or_create_human_review(self, **kwargs):
return await get_database_manager_async_client().get_or_create_human_review(
**kwargs
)
async def update_node_execution_status(self, **kwargs):
return await async_update_node_execution_status(
db_client=get_database_manager_async_client(), **kwargs
)
async def update_review_processed_status(self, node_exec_id: str, processed: bool):
return await get_database_manager_async_client().update_review_processed_status(
node_exec_id, processed
)
async def handle_review_decision(self, **kwargs):
return await HITLReviewHelper.handle_review_decision(**kwargs)
async def run(
self,
@@ -109,7 +102,7 @@ class HumanInTheLoopBlock(Block):
graph_id: str,
graph_version: int,
execution_context: ExecutionContext,
**kwargs,
**_kwargs,
) -> BlockOutput:
if not execution_context.safe_mode:
logger.info(
@@ -119,48 +112,28 @@ class HumanInTheLoopBlock(Block):
yield "review_message", "Auto-approved (safe mode disabled)"
return
try:
result = await self.get_or_create_human_review(
user_id=user_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
input_data=input_data.data,
message=input_data.name,
editable=input_data.editable,
)
except Exception as e:
logger.error(f"Error in HITL block for node {node_exec_id}: {str(e)}")
raise
decision = await self.handle_review_decision(
input_data=input_data.data,
user_id=user_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
execution_context=execution_context,
block_name=self.name,
editable=input_data.editable,
)
if result is None:
logger.info(
f"HITL block pausing execution for node {node_exec_id} - awaiting human review"
)
try:
await self.update_node_execution_status(
exec_id=node_exec_id,
status=ExecutionStatus.REVIEW,
)
return
except Exception as e:
logger.error(
f"Failed to update node status for HITL block {node_exec_id}: {str(e)}"
)
raise
if decision is None:
return
if not result.processed:
await self.update_review_processed_status(
node_exec_id=node_exec_id, processed=True
)
status = decision.review_result.status
if status == ReviewStatus.APPROVED:
yield "approved_data", decision.review_result.data
elif status == ReviewStatus.REJECTED:
yield "rejected_data", decision.review_result.data
else:
raise RuntimeError(f"Unexpected review status: {status}")
if result.status == ReviewStatus.APPROVED:
yield "approved_data", result.data
if result.message:
yield "review_message", result.message
elif result.status == ReviewStatus.REJECTED:
yield "rejected_data", result.data
if result.message:
yield "review_message", result.message
if decision.message:
yield "review_message", decision.message

File diff suppressed because it is too large

View File

@@ -391,8 +391,12 @@ class SmartDecisionMakerBlock(Block):
"""
block = sink_node.block
# Use custom name from node metadata if set, otherwise fall back to block.name
custom_name = sink_node.metadata.get("customized_name")
tool_name = custom_name if custom_name else block.name
tool_function: dict[str, Any] = {
"name": SmartDecisionMakerBlock.cleanup(block.name),
"name": SmartDecisionMakerBlock.cleanup(tool_name),
"description": block.description,
}
sink_block_input_schema = block.input_schema
@@ -489,14 +493,24 @@ class SmartDecisionMakerBlock(Block):
f"Sink graph metadata not found: {graph_id} {graph_version}"
)
# Use custom name from node metadata if set, otherwise fall back to graph name
custom_name = sink_node.metadata.get("customized_name")
tool_name = custom_name if custom_name else sink_graph_meta.name
tool_function: dict[str, Any] = {
"name": SmartDecisionMakerBlock.cleanup(sink_graph_meta.name),
"name": SmartDecisionMakerBlock.cleanup(tool_name),
"description": sink_graph_meta.description,
}
properties = {}
field_mapping = {}
for link in links:
field_name = link.sink_name
clean_field_name = SmartDecisionMakerBlock.cleanup(field_name)
field_mapping[clean_field_name] = field_name
sink_block_input_schema = sink_node.input_default["input_schema"]
sink_block_properties = sink_block_input_schema.get("properties", {}).get(
link.sink_name, {}
@@ -506,7 +520,7 @@ class SmartDecisionMakerBlock(Block):
if "description" in sink_block_properties
else f"The {link.sink_name} of the tool"
)
properties[link.sink_name] = {
properties[clean_field_name] = {
"type": "string",
"description": description,
"default": json.dumps(sink_block_properties.get("default", None)),
@@ -519,7 +533,7 @@ class SmartDecisionMakerBlock(Block):
"strict": True,
}
# Store node info for later use in output processing
tool_function["_field_mapping"] = field_mapping
tool_function["_sink_node_id"] = sink_node.id
return {"type": "function", "function": tool_function}
@@ -1147,8 +1161,9 @@ class SmartDecisionMakerBlock(Block):
original_field_name = field_mapping.get(clean_arg_name, clean_arg_name)
arg_value = tool_args.get(clean_arg_name)
sanitized_arg_name = self.cleanup(original_field_name)
emit_key = f"tools_^_{sink_node_id}_~_{sanitized_arg_name}"
# Use original_field_name directly (not sanitized) to match link sink_name
# The field_mapping already translates from LLM's cleaned names to original names
emit_key = f"tools_^_{sink_node_id}_~_{original_field_name}"
logger.debug(
"[SmartDecisionMakerBlock|geid:%s|neid:%s] emit %s",

View File

@@ -1057,3 +1057,153 @@ async def test_smart_decision_maker_traditional_mode_default():
) # Should yield individual tool parameters
assert "tools_^_test-sink-node-id_~_max_keyword_difficulty" in outputs
assert "conversations" in outputs
@pytest.mark.asyncio
async def test_smart_decision_maker_uses_customized_name_for_blocks():
"""Test that SmartDecisionMakerBlock uses customized_name from node metadata for tool names."""
from unittest.mock import MagicMock
from backend.blocks.basic import StoreValueBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.data.graph import Link, Node
# Create a mock node with customized_name in metadata
mock_node = MagicMock(spec=Node)
mock_node.id = "test-node-id"
mock_node.block_id = StoreValueBlock().id
mock_node.metadata = {"customized_name": "My Custom Tool Name"}
mock_node.block = StoreValueBlock()
# Create a mock link
mock_link = MagicMock(spec=Link)
mock_link.sink_name = "input"
# Call the function directly
result = await SmartDecisionMakerBlock._create_block_function_signature(
mock_node, [mock_link]
)
# Verify the tool name uses the customized name (cleaned up)
assert result["type"] == "function"
assert result["function"]["name"] == "my_custom_tool_name" # Cleaned version
assert result["function"]["_sink_node_id"] == "test-node-id"
@pytest.mark.asyncio
async def test_smart_decision_maker_falls_back_to_block_name():
"""Test that SmartDecisionMakerBlock falls back to block.name when no customized_name."""
from unittest.mock import MagicMock
from backend.blocks.basic import StoreValueBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.data.graph import Link, Node
# Create a mock node without customized_name
mock_node = MagicMock(spec=Node)
mock_node.id = "test-node-id"
mock_node.block_id = StoreValueBlock().id
mock_node.metadata = {} # No customized_name
mock_node.block = StoreValueBlock()
# Create a mock link
mock_link = MagicMock(spec=Link)
mock_link.sink_name = "input"
# Call the function directly
result = await SmartDecisionMakerBlock._create_block_function_signature(
mock_node, [mock_link]
)
# Verify the tool name uses the block's default name
assert result["type"] == "function"
assert result["function"]["name"] == "storevalueblock" # Default block name cleaned
assert result["function"]["_sink_node_id"] == "test-node-id"
@pytest.mark.asyncio
async def test_smart_decision_maker_uses_customized_name_for_agents():
"""Test that SmartDecisionMakerBlock uses customized_name from metadata for agent nodes."""
from unittest.mock import AsyncMock, MagicMock, patch
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.data.graph import Link, Node
# Create a mock node with customized_name in metadata
mock_node = MagicMock(spec=Node)
mock_node.id = "test-agent-node-id"
mock_node.metadata = {"customized_name": "My Custom Agent"}
mock_node.input_default = {
"graph_id": "test-graph-id",
"graph_version": 1,
"input_schema": {"properties": {"test_input": {"description": "Test input"}}},
}
# Create a mock link
mock_link = MagicMock(spec=Link)
mock_link.sink_name = "test_input"
# Mock the database client
mock_graph_meta = MagicMock()
mock_graph_meta.name = "Original Agent Name"
mock_graph_meta.description = "Agent description"
mock_db_client = AsyncMock()
mock_db_client.get_graph_metadata.return_value = mock_graph_meta
with patch(
"backend.blocks.smart_decision_maker.get_database_manager_async_client",
return_value=mock_db_client,
):
result = await SmartDecisionMakerBlock._create_agent_function_signature(
mock_node, [mock_link]
)
# Verify the tool name uses the customized name (cleaned up)
assert result["type"] == "function"
assert result["function"]["name"] == "my_custom_agent" # Cleaned version
assert result["function"]["_sink_node_id"] == "test-agent-node-id"
@pytest.mark.asyncio
async def test_smart_decision_maker_agent_falls_back_to_graph_name():
"""Test that agent node falls back to graph name when no customized_name."""
from unittest.mock import AsyncMock, MagicMock, patch
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.data.graph import Link, Node
# Create a mock node without customized_name
mock_node = MagicMock(spec=Node)
mock_node.id = "test-agent-node-id"
mock_node.metadata = {} # No customized_name
mock_node.input_default = {
"graph_id": "test-graph-id",
"graph_version": 1,
"input_schema": {"properties": {"test_input": {"description": "Test input"}}},
}
# Create a mock link
mock_link = MagicMock(spec=Link)
mock_link.sink_name = "test_input"
# Mock the database client
mock_graph_meta = MagicMock()
mock_graph_meta.name = "Original Agent Name"
mock_graph_meta.description = "Agent description"
mock_db_client = AsyncMock()
mock_db_client.get_graph_metadata.return_value = mock_graph_meta
with patch(
"backend.blocks.smart_decision_maker.get_database_manager_async_client",
return_value=mock_db_client,
):
result = await SmartDecisionMakerBlock._create_agent_function_signature(
mock_node, [mock_link]
)
# Verify the tool name uses the graph's default name
assert result["type"] == "function"
assert result["function"]["name"] == "original_agent_name" # Graph name cleaned
assert result["function"]["_sink_node_id"] == "test-agent-node-id"

View File

@@ -15,6 +15,7 @@ async def test_smart_decision_maker_handles_dynamic_dict_fields():
mock_node.block = CreateDictionaryBlock()
mock_node.block_id = CreateDictionaryBlock().id
mock_node.input_default = {}
mock_node.metadata = {}
# Create mock links with dynamic dictionary fields
mock_links = [
@@ -77,6 +78,7 @@ async def test_smart_decision_maker_handles_dynamic_list_fields():
mock_node.block = AddToListBlock()
mock_node.block_id = AddToListBlock().id
mock_node.input_default = {}
mock_node.metadata = {}
# Create mock links with dynamic list fields
mock_links = [

View File

@@ -44,6 +44,7 @@ async def test_create_block_function_signature_with_dict_fields():
mock_node.block = CreateDictionaryBlock()
mock_node.block_id = CreateDictionaryBlock().id
mock_node.input_default = {}
mock_node.metadata = {}
# Create mock links with dynamic dictionary fields (source sanitized, sink original)
mock_links = [
@@ -106,6 +107,7 @@ async def test_create_block_function_signature_with_list_fields():
mock_node.block = AddToListBlock()
mock_node.block_id = AddToListBlock().id
mock_node.input_default = {}
mock_node.metadata = {}
# Create mock links with dynamic list fields
mock_links = [
@@ -159,6 +161,7 @@ async def test_create_block_function_signature_with_object_fields():
mock_node.block = MatchTextPatternBlock()
mock_node.block_id = MatchTextPatternBlock().id
mock_node.input_default = {}
mock_node.metadata = {}
# Create mock links with dynamic object fields
mock_links = [
@@ -208,11 +211,13 @@ async def test_create_tool_node_signatures():
mock_dict_node.block = CreateDictionaryBlock()
mock_dict_node.block_id = CreateDictionaryBlock().id
mock_dict_node.input_default = {}
mock_dict_node.metadata = {}
mock_list_node = Mock()
mock_list_node.block = AddToListBlock()
mock_list_node.block_id = AddToListBlock().id
mock_list_node.input_default = {}
mock_list_node.metadata = {}
# Mock links with dynamic fields
dict_link1 = Mock(
@@ -423,6 +428,7 @@ async def test_mixed_regular_and_dynamic_fields():
mock_node.block.name = "TestBlock"
mock_node.block.description = "A test block"
mock_node.block.input_schema = Mock()
mock_node.metadata = {}
# Mock the get_field_schema to return a proper schema for regular fields
def get_field_schema(field_name):

View File

@@ -50,6 +50,8 @@ from .model import (
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from backend.data.execution import ExecutionContext
from .graph import Link
app_config = Config()
@@ -472,6 +474,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
self.block_type = block_type
self.webhook_config = webhook_config
self.execution_stats: NodeExecutionStats = NodeExecutionStats()
self.requires_human_review: bool = False
if self.webhook_config:
if isinstance(self.webhook_config, BlockWebhookConfig):
@@ -614,7 +617,77 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
block_id=self.id,
) from ex
async def is_block_exec_need_review(
self,
input_data: BlockInput,
*,
user_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
graph_version: int,
execution_context: "ExecutionContext",
**kwargs,
) -> tuple[bool, BlockInput]:
"""
Check if this block execution needs human review and handle the review process.
Returns:
Tuple of (should_pause, input_data_to_use)
- should_pause: True if execution should be paused for review
- input_data_to_use: The input data to use (may be modified by reviewer)
"""
# Skip review if not required or safe mode is disabled
if not self.requires_human_review or not execution_context.safe_mode:
return False, input_data
from backend.blocks.helpers.review import HITLReviewHelper
# Handle the review request and get decision
decision = await HITLReviewHelper.handle_review_decision(
input_data=input_data,
user_id=user_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
execution_context=execution_context,
block_name=self.name,
editable=True,
)
if decision is None:
# We're awaiting review - pause execution
return True, input_data
if not decision.should_proceed:
# Review was rejected, raise an error to stop execution
raise BlockExecutionError(
message=f"Block execution rejected by reviewer: {decision.message}",
block_name=self.name,
block_id=self.id,
)
# Review was approved - use the potentially modified data
# ReviewResult.data must be a dict for block inputs
reviewed_data = decision.review_result.data
if not isinstance(reviewed_data, dict):
raise BlockExecutionError(
message=f"Review data must be a dict for block input, got {type(reviewed_data).__name__}",
block_name=self.name,
block_id=self.id,
)
return False, reviewed_data
async def _execute(self, input_data: BlockInput, **kwargs) -> BlockOutput:
# Check for review requirement and get potentially modified input data
should_pause, input_data = await self.is_block_exec_need_review(
input_data, **kwargs
)
if should_pause:
return
# Validate the input data (original or reviewer-modified) once
if error := self.input_schema.validate_data(input_data):
raise BlockInputError(
message=f"Unable to execute block with invalid input data: {error}",
@@ -622,6 +695,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
block_id=self.id,
)
# Use the validated input data
async for output_name, output_data in self.run(
self.input_schema(**{k: v for k, v in input_data.items() if v is not None}),
**kwargs,

View File

@@ -244,7 +244,10 @@ class BaseGraph(BaseDbModel):
return any(
node.block_id
for node in self.nodes
if node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
if (
node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
or node.block.requires_human_review
)
)
@property

View File

@@ -8,6 +8,7 @@ from .discord import DiscordOAuthHandler
from .github import GitHubOAuthHandler
from .google import GoogleOAuthHandler
from .notion import NotionOAuthHandler
from .reddit import RedditOAuthHandler
from .twitter import TwitterOAuthHandler
if TYPE_CHECKING:
@@ -20,6 +21,7 @@ _ORIGINAL_HANDLERS = [
GitHubOAuthHandler,
GoogleOAuthHandler,
NotionOAuthHandler,
RedditOAuthHandler,
TwitterOAuthHandler,
TodoistOAuthHandler,
]

View File

@@ -0,0 +1,208 @@
import time
import urllib.parse
from typing import ClassVar, Optional
from pydantic import SecretStr
from backend.data.model import OAuth2Credentials
from backend.integrations.oauth.base import BaseOAuthHandler
from backend.integrations.providers import ProviderName
from backend.util.request import Requests
from backend.util.settings import Settings
settings = Settings()
class RedditOAuthHandler(BaseOAuthHandler):
"""
Reddit OAuth 2.0 handler.
Based on the documentation at:
- https://github.com/reddit-archive/reddit/wiki/OAuth2
Notes:
- Reddit requires `duration=permanent` to get refresh tokens
- Access tokens expire after 1 hour (3600 seconds)
- Reddit requires HTTP Basic Auth for token requests
- Reddit requires a unique User-Agent header
"""
PROVIDER_NAME = ProviderName.REDDIT
DEFAULT_SCOPES: ClassVar[list[str]] = [
"identity", # Get username, verify auth
"read", # Access posts and comments
"submit", # Submit new posts and comments
"edit", # Edit own posts and comments
"history", # Access user's post history
"privatemessages", # Access inbox and send private messages
"flair", # Access and set flair on posts/subreddits
]
AUTHORIZE_URL = "https://www.reddit.com/api/v1/authorize"
TOKEN_URL = "https://www.reddit.com/api/v1/access_token"
USERNAME_URL = "https://oauth.reddit.com/api/v1/me"
REVOKE_URL = "https://www.reddit.com/api/v1/revoke_token"
def __init__(self, client_id: str, client_secret: str, redirect_uri: str):
self.client_id = client_id
self.client_secret = client_secret
self.redirect_uri = redirect_uri
def get_login_url(
self, scopes: list[str], state: str, code_challenge: Optional[str]
) -> str:
"""Generate Reddit OAuth 2.0 authorization URL"""
scopes = self.handle_default_scopes(scopes)
params = {
"response_type": "code",
"client_id": self.client_id,
"redirect_uri": self.redirect_uri,
"scope": " ".join(scopes),
"state": state,
"duration": "permanent", # Required for refresh tokens
}
return f"{self.AUTHORIZE_URL}?{urllib.parse.urlencode(params)}"
async def exchange_code_for_tokens(
self, code: str, scopes: list[str], code_verifier: Optional[str]
) -> OAuth2Credentials:
"""Exchange authorization code for access tokens"""
scopes = self.handle_default_scopes(scopes)
headers = {
"Content-Type": "application/x-www-form-urlencoded",
"User-Agent": settings.config.reddit_user_agent,
}
data = {
"grant_type": "authorization_code",
"code": code,
"redirect_uri": self.redirect_uri,
}
# Reddit requires HTTP Basic Auth for token requests
auth = (self.client_id, self.client_secret)
response = await Requests().post(
self.TOKEN_URL, headers=headers, data=data, auth=auth
)
if not response.ok:
error_text = response.text()
raise ValueError(
f"Reddit token exchange failed: {response.status} - {error_text}"
)
tokens = response.json()
if "error" in tokens:
raise ValueError(f"Reddit OAuth error: {tokens.get('error')}")
username = await self._get_username(tokens["access_token"])
return OAuth2Credentials(
provider=self.PROVIDER_NAME,
title=None,
username=username,
access_token=tokens["access_token"],
refresh_token=tokens.get("refresh_token"),
access_token_expires_at=int(time.time()) + tokens.get("expires_in", 3600),
refresh_token_expires_at=None, # Reddit refresh tokens don't expire
scopes=scopes,
)
async def _get_username(self, access_token: str) -> str:
"""Get the username from the access token"""
headers = {
"Authorization": f"Bearer {access_token}",
"User-Agent": settings.config.reddit_user_agent,
}
response = await Requests().get(self.USERNAME_URL, headers=headers)
if not response.ok:
raise ValueError(f"Failed to get Reddit username: {response.status}")
data = response.json()
return data.get("name", "unknown")
async def _refresh_tokens(
self, credentials: OAuth2Credentials
) -> OAuth2Credentials:
"""Refresh access tokens using refresh token"""
if not credentials.refresh_token:
raise ValueError("No refresh token available")
headers = {
"Content-Type": "application/x-www-form-urlencoded",
"User-Agent": settings.config.reddit_user_agent,
}
data = {
"grant_type": "refresh_token",
"refresh_token": credentials.refresh_token.get_secret_value(),
}
auth = (self.client_id, self.client_secret)
response = await Requests().post(
self.TOKEN_URL, headers=headers, data=data, auth=auth
)
if not response.ok:
error_text = response.text()
raise ValueError(
f"Reddit token refresh failed: {response.status} - {error_text}"
)
tokens = response.json()
if "error" in tokens:
raise ValueError(f"Reddit OAuth error: {tokens.get('error')}")
username = await self._get_username(tokens["access_token"])
# Reddit may or may not return a new refresh token
new_refresh_token = tokens.get("refresh_token")
if new_refresh_token:
refresh_token: SecretStr | None = SecretStr(new_refresh_token)
elif credentials.refresh_token:
# Keep the existing refresh token
refresh_token = credentials.refresh_token
else:
refresh_token = None
return OAuth2Credentials(
id=credentials.id,
provider=self.PROVIDER_NAME,
title=credentials.title,
username=username,
access_token=tokens["access_token"],
refresh_token=refresh_token,
access_token_expires_at=int(time.time()) + tokens.get("expires_in", 3600),
refresh_token_expires_at=None,
scopes=credentials.scopes,
)
async def revoke_tokens(self, credentials: OAuth2Credentials) -> bool:
"""Revoke the access token"""
headers = {
"Content-Type": "application/x-www-form-urlencoded",
"User-Agent": settings.config.reddit_user_agent,
}
data = {
"token": credentials.access_token.get_secret_value(),
"token_type_hint": "access_token",
}
auth = (self.client_id, self.client_secret)
response = await Requests().post(
self.REVOKE_URL, headers=headers, data=data, auth=auth
)
# Reddit returns 204 No Content on successful revocation
return response.ok

View File

@@ -264,7 +264,7 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
)
reddit_user_agent: str = Field(
default="AutoGPT:1.0 (by /u/autogpt)",
default="web:AutoGPT:v0.6.0 (by /u/autogpt)",
description="The user agent for the Reddit API",
)

View File

@@ -135,6 +135,9 @@ ignore_patterns = []
[tool.pytest.ini_options]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "session"
# Disable syrupy plugin to avoid conflict with pytest-snapshot
# Both provide --snapshot-update argument causing ArgumentError
addopts = "-p no:syrupy"
filterwarnings = [
"ignore:'audioop' is deprecated:DeprecationWarning:discord.player",
"ignore:invalid escape sequence:DeprecationWarning:tweepy.api",

View File

@@ -92,29 +92,33 @@ export function useMarketplaceUpdate({ agent }: UseMarketplaceUpdateProps) {
const isUserCreator = agent?.owner_user_id === user?.id;
// Check if there's a pending submission for this specific agent version
const submissionsResponse = okData(submissionsData) as any;
const hasPendingSubmissionForCurrentVersion =
isUserCreator &&
submissionsResponse?.submissions?.some(
(submission: StoreSubmission) =>
submission.agent_id === agent.graph_id &&
submission.agent_version === agent.graph_version &&
submission.status === "PENDING",
);
const agentSubmissions =
submissionsResponse?.submissions?.filter(
(submission: StoreSubmission) => submission.agent_id === agent.graph_id,
) || [];
const highestSubmittedVersion =
agentSubmissions.length > 0
? Math.max(
...agentSubmissions.map(
(submission: StoreSubmission) => submission.agent_version,
),
)
: 0;
const hasUnpublishedChanges =
isUserCreator && agent.graph_version > highestSubmittedVersion;
if (!storeAgent) {
return {
hasUpdate: false,
latestVersion: undefined,
isUserCreator,
hasPublishUpdate:
isUserCreator && !hasPendingSubmissionForCurrentVersion,
hasPublishUpdate: agentSubmissions.length > 0 && hasUnpublishedChanges,
};
}
// Get the latest version from the marketplace
// agentGraphVersions array contains graph version numbers as strings, get the highest one
const latestMarketplaceVersion =
storeAgent.agentGraphVersions?.length > 0
? Math.max(
@@ -124,18 +128,11 @@ export function useMarketplaceUpdate({ agent }: UseMarketplaceUpdateProps) {
)
: undefined;
// Show publish update button if:
// 1. User is the creator
// 2. No pending submission for current version
// 3. Either: agent not published yet OR local version is newer than marketplace
const hasPublishUpdate =
isUserCreator &&
!hasPendingSubmissionForCurrentVersion &&
(latestMarketplaceVersion === undefined || // Not published yet
agent.graph_version > latestMarketplaceVersion); // Or local version is newer
agent.graph_version >
Math.max(latestMarketplaceVersion || 0, highestSubmittedVersion);
// If marketplace version is newer than user's version, show update banner
// This applies to both creators and non-creators
const hasMarketplaceUpdate =
latestMarketplaceVersion !== undefined &&
latestMarketplaceVersion > agent.graph_version;

View File

@@ -133,7 +133,7 @@ export function EditAgentForm({
<Input
id={field.name}
label="Changes Summary"
type="text"
type="textarea"
placeholder="Briefly describe what you changed"
error={form.formState.errors.changes_summary?.message}
{...field}

View File

@@ -47,7 +47,7 @@ export const useEditAgentForm = ({
changes_summary: z
.string()
.min(1, "Changes summary is required")
.max(200, "Changes summary must be less than 200 characters"),
.max(500, "Changes summary must be less than 500 characters"),
agentOutputDemo: z
.string()
.refine(validateYouTubeUrl, "Please enter a valid YouTube URL"),

View File

@@ -53,21 +53,24 @@ export function useAgentInfoStep({
useEffect(() => {
if (initialData?.agent_id) {
setAgentId(initialData.agent_id);
const initialImages = [
...(initialData?.thumbnailSrc ? [initialData.thumbnailSrc] : []),
...(initialData.additionalImages || []),
];
setImages(initialImages);
// Update form with initial data
setImages(
Array.from(
new Set([
...(initialData?.thumbnailSrc ? [initialData.thumbnailSrc] : []),
...(initialData.additionalImages || []),
]),
),
);
form.reset({
changesSummary: initialData.changesSummary || "",
changesSummary: isMarketplaceUpdate
? ""
: initialData.changesSummary || "",
title: initialData.title,
subheader: initialData.subheader,
slug: initialData.slug.toLocaleLowerCase().trim(),
youtubeLink: initialData.youtubeLink,
category: initialData.category,
description: initialData.description,
description: isMarketplaceUpdate ? "" : initialData.description,
recommendedScheduleCron: initialData.recommendedScheduleCron || "",
instructions: initialData.instructions || "",
agentOutputDemo: initialData.agentOutputDemo || "",
@@ -149,12 +152,7 @@ export function useAgentInfoStep({
agentId,
images,
isSubmitting,
initialImages: initialData
? [
...(initialData?.thumbnailSrc ? [initialData.thumbnailSrc] : []),
...(initialData.additionalImages || []),
]
: [],
initialImages: images,
initialSelectedImage: initialData?.thumbnailSrc || null,
handleImagesChange,
handleSubmit: form.handleSubmit(handleFormSubmit),

View File

@@ -1,16 +0,0 @@
window.MathJax = {
tex: {
inlineMath: [["\\(", "\\)"]],
displayMath: [["\\[", "\\]"]],
processEscapes: true,
processEnvironments: true
},
options: {
ignoreHtmlClass: ".*|",
processHtmlClass: "arithmatex"
}
};
document$.subscribe(() => {
MathJax.typesetPromise()
})

View File

@@ -1,6 +0,0 @@
document$.subscribe(function () {
var tables = document.querySelectorAll("article table:not([class])")
tables.forEach(function (table) {
new Tablesort(table)
})
})

View File

@@ -1,48 +1,35 @@
# Contributing to the Docs
- We welcome contributions to our documentation! If you would like to contribute, please follow the steps below.
+ We welcome contributions to our documentation! Our docs are hosted on GitBook and synced with GitHub.
- ## Setting up the Docs
+ ## How It Works
- 1. Clone the repository:
+ - Documentation lives in the `docs/` directory on the `gitbook` branch
+ - GitBook automatically syncs changes from GitHub
+ - You can edit docs directly on GitHub or locally
+ ## Editing Docs Locally
+ 1. Clone the repository and switch to the gitbook branch:
  ```shell
- git clone github.com/Significant-Gravitas/AutoGPT.git
+ git clone https://github.com/Significant-Gravitas/AutoGPT.git
  cd AutoGPT
+ git checkout gitbook
  ```
- 1. Install the dependencies:
+ 2. Make your changes to markdown files in `docs/`
- ```shell
- python -m pip install -r docs/requirements.txt
- ```
+ 3. Preview changes:
+    - Push to a branch and create a PR - GitBook will generate a preview
+    - Or use any markdown preview tool locally
- or
+ ## Adding a New Page
- ```shell
- python3 -m pip install -r docs/requirements.txt
- ```
- 1. Start iterating using mkdocs' live server:
- ```shell
- mkdocs serve
- ```
- 1. Open your browser and navigate to `http://127.0.0.1:8000`.
- 1. The server will automatically reload the docs when you save your changes.
- ## Adding a new page
- 1. Create a new markdown file in the `docs/content` directory.
- 1. Add the new page to the `nav` section in the `mkdocs.yml` file.
- 1. Add the content to the new markdown file.
- 1. Run `mkdocs serve` to see your changes.
- ## Checking links
- To check for broken links in the documentation, run `mkdocs build` and look for warnings in the console output.
+ 1. Create a new markdown file in the appropriate `docs/` subdirectory
+ 2. Add the new page to the relevant `SUMMARY.md` file to include it in the navigation
+ 3. Submit a pull request to the `gitbook` branch
  ## Submitting a Pull Request
- When you're ready to submit your changes, please create a pull request. We will review your changes and merge them if they are appropriate.
+ When you're ready to submit your changes, create a pull request targeting the `gitbook` branch. We will review your changes and merge them if appropriate.


@@ -1,194 +0,0 @@
site_name: AutoGPT Documentation
site_url: https://docs.agpt.co/
repo_url: https://github.com/Significant-Gravitas/AutoGPT
repo_name: AutoGPT
edit_uri: edit/master/docs/content
docs_dir: content
nav:
- Home: index.md
- The AutoGPT Platform 🆕:
- Getting Started:
- Setup AutoGPT (Local-Host): platform/getting-started.md
- Edit an Agent: platform/edit-agent.md
- Delete an Agent: platform/delete-agent.md
- Download & Import an Agent: platform/download-agent-from-marketplace-local.md
- Create a Basic Agent: platform/create-basic-agent.md
- Submit an Agent to the Marketplace: platform/submit-agent-to-marketplace.md
- Advanced Setup: platform/advanced_setup.md
- Agent Blocks: platform/agent-blocks.md
- Build your own Blocks: platform/new_blocks.md
- Block SDK Guide: platform/block-sdk-guide.md
- Using Ollama: platform/ollama.md
- Using AI/ML API: platform/aimlapi.md
- Using D-ID: platform/d_id.md
- Blocks: platform/blocks/blocks.md
- API:
- Introduction: platform/integrating/api-guide.md
- OAuth & SSO: platform/integrating/oauth-guide.md
- Contributing:
- Tests: platform/contributing/tests.md
- OAuth Flows: platform/contributing/oauth-integration-flow.md
- AutoGPT Classic:
- Introduction: classic/index.md
- Setup:
- Setting up AutoGPT: classic/setup/index.md
- Set up with Docker: classic/setup/docker.md
- For Developers: classic/setup/for-developers.md
- Configuration:
- Options: classic/configuration/options.md
- Search: classic/configuration/search.md
- Voice: classic/configuration/voice.md
- Usage: classic/usage.md
- Help us improve AutoGPT:
- Share your debug logs with us: classic/share-your-logs.md
- Contribution guide: contributing.md
- Running tests: classic/testing.md
- Code of Conduct: code-of-conduct.md
- Benchmark:
- Readme: https://github.com/Significant-Gravitas/AutoGPT/blob/master/classic/benchmark/README.md
- Forge:
- Introduction: forge/get-started.md
- Components:
- Introduction: forge/components/introduction.md
- Agents: forge/components/agents.md
- Components: forge/components/components.md
- Protocols: forge/components/protocols.md
- Commands: forge/components/commands.md
- Built in Components: forge/components/built-in-components.md
- Creating Components: forge/components/creating-components.md
- Frontend:
- Readme: https://github.com/Significant-Gravitas/AutoGPT/blob/master/classic/frontend/README.md
- Contribute:
- Introduction: contribute/index.md
- Testing: ../../autogpt_platform/backend/TESTING.md
# - Challenges:
# - Introduction: challenges/introduction.md
# - List of Challenges:
# - Memory:
# - Introduction: challenges/memory/introduction.md
# - Memory Challenge A: challenges/memory/challenge_a.md
# - Memory Challenge B: challenges/memory/challenge_b.md
# - Memory Challenge C: challenges/memory/challenge_c.md
# - Memory Challenge D: challenges/memory/challenge_d.md
# - Information retrieval:
# - Introduction: challenges/information_retrieval/introduction.md
# - Information Retrieval Challenge A: challenges/information_retrieval/challenge_a.md
# - Information Retrieval Challenge B: challenges/information_retrieval/challenge_b.md
# - Submit a Challenge: challenges/submit.md
# - Beat a Challenge: challenges/beat.md
- License: https://github.com/Significant-Gravitas/AutoGPT/blob/master/LICENSE
theme:
name: material
custom_dir: overrides
language: en
icon:
repo: fontawesome/brands/github
logo: material/book-open-variant
edit: material/pencil
view: material/eye
favicon: assets/favicon.png
features:
- navigation.sections
- navigation.footer
- navigation.top
- navigation.tracking
- navigation.tabs
# - navigation.path
- toc.follow
- toc.integrate
- content.action.edit
- content.action.view
- content.code.copy
- content.code.annotate
- content.tabs.link
palette:
# Palette toggle for light mode
- media: "(prefers-color-scheme: light)"
scheme: default
toggle:
icon: material/weather-night
name: Switch to dark mode
# Palette toggle for dark mode
- media: "(prefers-color-scheme: dark)"
scheme: slate
toggle:
icon: material/weather-sunny
name: Switch to light mode
markdown_extensions:
# Python Markdown
- abbr
- admonition
- attr_list
- def_list
- footnotes
- md_in_html
- toc:
permalink: true
- tables
# Python Markdown Extensions
- pymdownx.arithmatex:
generic: true
- pymdownx.betterem:
smart_enable: all
- pymdownx.critic
- pymdownx.caret
- pymdownx.details
- pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:material.extensions.emoji.to_svg
- pymdownx.highlight
- pymdownx.inlinehilite
- pymdownx.keys
- pymdownx.mark
- pymdownx.smartsymbols
- pymdownx.snippets:
base_path: ['.','../']
check_paths: true
dedent_subsections: true
- pymdownx.superfences:
custom_fences:
- name: mermaid
class: mermaid
format: !!python/name:pymdownx.superfences.fence_code_format
- pymdownx.tabbed:
alternate_style: true
- pymdownx.tasklist:
custom_checkbox: true
- pymdownx.tilde
plugins:
- table-reader
- search
- git-revision-date-localized:
enable_creation_date: true
extra:
social:
- icon: fontawesome/brands/github
link: https://github.com/Significant-Gravitas/AutoGPT
- icon: fontawesome/brands/x-twitter
link: https://x.com/Auto_GPT
- icon: fontawesome/brands/instagram
link: https://www.instagram.com/autogpt/
- icon: fontawesome/brands/discord
link: https://discord.gg/autogpt
extra:
analytics:
provider: google
property: G-XKPNKB9XG6
extra_javascript:
- https://unpkg.com/tablesort@5.3.0/dist/tablesort.min.js
- _javascript/tablesort.js
- _javascript/mathjax.js
- https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js?features=es6
- https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js


@@ -1,5 +0,0 @@
# Netlify config for AutoGPT docs
[build]
publish = "public/"
command = "mkdocs build -d public"

Binary file not shown (deleted image, 47 KiB).


@@ -1,61 +0,0 @@
{% extends "base.html" %}
{% block extrahead %}
<style>
svg.ms-w-12 {
width: 2rem !important;
}
svg.ms-h-12 {
height: 2rem !important;
}
/* Temporary solution for the font sizes */
#headlessui-dialog-panel-2 p{
font-size: 16px;
}
#headlessui-dialog-panel-2 textarea{
font-size: 16px;
}
#headlessui-dialog-panel-2 span{
font-size: 14px;
}
#headlessui-dialog-panel-2 div{
font-size: 16px;
}
#headlessui-dialog-panel-2 a{
font-size: 14px;
line-height: 1rem;
}
#headlessui-dialog-panel-2 button{
border: 1px solid #0000001d;
line-height: 1rem;
}
/* Temporary solution for the font size */
#headlessui-dialog-panel-2 > div > div > div > div.ms-w-full.ms-rounded-xl.ms-flex.ms-flex-col.ms-relative > div > div > div > div > div.ms-flex.ms-flex-row.ms-items-center.ms-justify-between.ms-gap-1 > p{
font-size: 14px;
line-height: 1rem;
}
</style>
<div id="my-component-root" class="mendable-search"></div>
<script src="https://unpkg.com/@mendable/search@0.0.155/dist/umd/mendable-bundle.min.js"></script>
{% raw %}
<script>
Mendable.initialize({
anon_key: 'a0bd44db-eb3b-412f-8924-b31c58244a64',
type: 'floatingButton',
style : { accentColor: '#3F51B5', darkMode: false },
floatingButtonStyle:{
backgroundColor: '#3F51B5',
color: '#fff',
},
icon: "https://mendable-storage.s3.amazonaws.com/autoGPT.gif",
botIcon: "https://mendable-storage.s3.amazonaws.com/autoGPTbot.gif",
})
</script>
{% endraw %}
{% endblock %}

docs/platform/SUMMARY.md (new file, 34 lines)

@@ -0,0 +1,34 @@
# Table of contents
* [What is the AutoGPT Platform?](what-is-autogpt-platform.md)
## Getting Started
* [Setting Up AutoGPT (Local Host)](getting-started.md)
* [AutoGPT Platform Installer](installer.md)
* [Edit an Agent](edit-agent.md)
* [Delete an Agent](delete-agent.md)
* [Download & Import an Agent](download-agent-from-marketplace-local.md)
* [Create a Basic Agent](create-basic-agent.md)
* [Submit an Agent to the Marketplace](submit-agent-to-marketplace.md)
## Advanced Setup
* [Advanced Setup](advanced_setup.md)
## Building Blocks
* [Agent Blocks Overview](agent-blocks.md)
* [Build your own Blocks](new_blocks.md)
* [Block SDK Guide](block-sdk-guide.md)
## Using AI Services
* [Using Ollama](ollama.md)
* [Using AI/ML API](aimlapi.md)
* [Using D-ID](d_id.md)
## API & Integrations
* [API Introduction](integrating/api-guide.md)
* [OAuth & SSO](integrating/oauth-guide.md)


@@ -227,7 +227,7 @@ backend/blocks/my_provider/
## Best Practices
- 1. **Error Handling**: Error output pin is already defined on BlockSchemaOutput
+ 1. **Error Handling**: Use `BlockInputError` for validation failures and `BlockExecutionError` for runtime errors (import from `backend.util.exceptions`). These inherit from `ValueError` so the executor treats them as user-fixable. See [Error Handling in new_blocks.md](new_blocks.md#error-handling) for details.
2. **Credentials**: Use the provider's `credentials_field()` method
3. **Validation**: Use SchemaField constraints (ge, le, min_length, etc.)
4. **Categories**: Choose appropriate categories for discoverability


@@ -616,7 +616,59 @@ custom_requests = Requests(
### Error Handling
- All blocks should have an error output that catches all reasonable errors that a user can handle, wrap them in a ValueError, and re-raise. Don't catch things the system admin would need to fix like being out of money or unreachable addresses.
+ Blocks should raise appropriate exceptions for errors that users can fix. The executor classifies errors based on whether they inherit from `ValueError` - these are treated as "expected failures" (user-fixable) rather than system errors.
#### Block Exception Classes
Import from `backend.util.exceptions`:
```python
from backend.util.exceptions import BlockInputError, BlockExecutionError
```
| Exception | Use Case | Example |
|-----------|----------|---------|
| `BlockInputError` | Invalid user input, validation failures, missing required fields | Bad API key format, invalid URL, missing credentials |
| `BlockExecutionError` | Runtime failures the user can address | API errors, auth failures, resource not found, rate limits |
| `ValueError` | Simple cases (auto-wrapped to `BlockExecutionError`) | Basic validation errors |
#### Raising Exceptions
```python
from backend.util.exceptions import BlockInputError, BlockExecutionError
class MyBlock(Block):
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# Input validation - use BlockInputError
if not input_data.api_key:
raise BlockInputError(
message="API key is required",
block_name=self.name,
block_id=self.id,
)
try:
result = await self.call_api(input_data)
yield "result", result
except AuthenticationError as e:
# API/runtime errors - use BlockExecutionError
raise BlockExecutionError(
message=f"Authentication failed: {e}",
block_name=self.name,
block_id=self.id,
) from e
```
#### What NOT to Catch
Don't catch errors that require system admin intervention:
- Out of money/credits
- Unreachable infrastructure
- Database connection failures
- Internal server errors from your own services
Let these propagate as unexpected errors so they get proper attention.
### Data Models


@@ -0,0 +1,82 @@
# What is the AutoGPT Platform?
The AutoGPT Platform is a groundbreaking system that revolutionizes AI utilization for businesses and individuals. It enables the creation, deployment, and management of continuous agents that work tirelessly on your behalf, bringing unprecedented efficiency and innovation to your workflows.
## Key Features
* **Seamless Integration and Low-Code Workflows**: Rapidly create complex workflows without extensive coding knowledge.
* **Autonomous Operation and Continuous Agents**: Deploy cloud-based assistants that run indefinitely, activating on relevant triggers.
* **Intelligent Automation and Maximum Efficiency**: Streamline workflows by automating repetitive processes.
* **Reliable Performance and Predictable Execution**: Enjoy consistent and dependable long-running processes.
## Platform Architecture
The AutoGPT Platform consists of two main components:
### 1. AutoGPT Server
The powerhouse of our platform, containing:
* **Source Code**: Core logic driving agents and automation processes.
* **Infrastructure**: Robust systems ensuring reliable and scalable performance.
* **Marketplace**: A comprehensive marketplace for pre-built agents.
### 2. AutoGPT Frontend
The user interface where you interact with the platform:
* **Agent Builder**: Design and configure your own AI agents.
* **Workflow Management**: Build, modify, and optimize automation workflows.
* **Deployment Controls**: Manage the lifecycle of your agents.
* **Ready-to-Use Agents**: Select from pre-configured agents.
* **Agent Interaction**: Run and interact with agents through a user-friendly interface.
* **Monitoring and Analytics**: Track agent performance and gain insights.
## Platform Components
### Agents and Workflows
An agent is essentially an automated workflow that you design to perform specific tasks or processes. In the platform, you can create highly customized workflows to build agents for a wide range of tasks, including:
* Data processing and analysis
* Task scheduling and management
* Communication and notification systems
* Integration between different software tools
* AI-powered decision making and content generation
### Blocks as Integrations
Blocks represent actions and are the building blocks of your workflows, including:
* Connections to external services
* Data processing tools
* AI models for various tasks
* Custom scripts or functions
* Conditional logic and decision-making components
You can learn more under: [Build your own Blocks](new_blocks.md)
## Available Language Models
The platform comes pre-integrated with cutting-edge LLM providers:
* OpenAI - <https://openai.com/>
* Anthropic - <https://www.anthropic.com/>
* Groq - <https://groq.com/>
* Llama - <https://www.llama.com/>
* AI/ML API - <https://aimlapi.com/>
* AI/ML API provides 300+ AI models, including DeepSeek, Gemini, and ChatGPT, served with enterprise-grade rate limits and uptime.
## License Overview
We've adopted a dual-license approach to balance open collaboration with sustainable development:
* **MIT License**: The majority of the AutoGPT repository remains under this license.
* **Polyform Shield License**: Applies to the new `autogpt_platform` folder.
This strategy allows us to share previously closed-source components, fostering a vibrant ecosystem of developers and users.
## Ready to Get Started?
* Read the [Getting Started docs](getting-started.md) to self-host.
* [Join the waitlist](https://agpt.co/waitlist) for the cloud-hosted beta.


@@ -1,8 +0,0 @@
mkdocs
mkdocs-material
mkdocs-table-reader-plugin
pymdown-extensions
mkdocs-git-revision-date-localized-plugin
zipp>=3.19.1 # not directly required, pinned by Snyk to avoid a vulnerability
urllib3>=2.2.2 # not directly required, pinned by Snyk to avoid a vulnerability
requests>=2.32.4 # not directly required, pinned by Snyk to avoid a vulnerability