Compare commits

..

18 Commits

Author SHA1 Message Date
Zamil Majdy
7c90b16209 Merge branch 'dev' into feat/ai-generated-mode 2026-01-12 15:41:14 -06:00
Zamil Majdy
3ec2f1953f refactor(backend): make is_ai_generated parameter mandatory in GraphSettings.from_graph()
- Removed default value from is_ai_generated parameter
- Forces all callers to explicitly specify whether graph is AI-generated
- Explicitly set is_ai_generated=False for store installations (curated user-published agents)
- Makes the code more explicit and prevents accidental defaults
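A minimal sketch of the contract this enforces; the `from_graph` body and keyword-only signature are assumptions based on the surrounding calls, not the actual implementation:

```python
# Hedged sketch, not the real GraphSettings: shows the mandatory
# is_ai_generated parameter with no default value.
from pydantic import BaseModel

class GraphSettings(BaseModel):
    human_in_the_loop_safe_mode: bool | None = None
    is_ai_generated_graph: bool = False

    @classmethod
    def from_graph(cls, graph, *, is_ai_generated: bool) -> "GraphSettings":
        # No default: every caller must state whether the graph is AI-generated.
        return cls(
            # True when the graph contains HITL blocks, else left as None
            human_in_the_loop_safe_mode=graph.has_human_in_the_loop or None,
            is_ai_generated_graph=is_ai_generated,
        )
```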
2026-01-12 15:35:24 -06:00
Zamil Majdy
8d3dfe2cf8 fix(backend): preserve is_ai_generated_graph flag on version update
When updating a library agent to a new version, the is_ai_generated_graph
flag was being reset to False (the default), disabling human review for
AI-generated graphs on subsequent runs.

Fix: Pass the existing is_ai_generated_graph value from library.settings
to GraphSettings.from_graph() to preserve the flag across version updates.

Uses from_graph() to reduce duplication while preserving all settings.
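A hedged sketch of the fix; the `library.settings.is_ai_generated_graph` path follows the commit message, and the wrapper function is illustrative:

```python
def settings_for_new_version(new_graph, library) -> GraphSettings:
    # Carry the existing flag forward instead of letting it reset to the
    # False default when the library agent moves to a new graph version.
    return GraphSettings.from_graph(
        new_graph,
        is_ai_generated=library.settings.is_ai_generated_graph,
    )
```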
2026-01-12 15:22:13 -06:00
Zamil Majdy
7133580d43 feat(backend): add is_ai_generated_graph setting and cleanup human_in_the_loop_safe_mode duplication
- Add is_ai_generated_graph field to GraphSettings with default false
- Add GraphSettings.from_graph() class method for initialization
- Add is_ai_generated_graph to ExecutionContext
- Update block review logic: required_human_review blocks only pause for review if safe_mode=True AND is_ai_generated_graph=True
- HITL blocks continue to respect only safe_mode (unchanged)
- Expose is_ai_generated in CreateGraph API model and create_new_graph endpoint
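A sketch of the gating described above; `safe_mode` and `is_ai_generated_graph` come from `ExecutionContext`, while `is_hitl_block` is a placeholder for however dedicated HITL blocks are identified:

```python
# Hedged sketch of the review gate, assuming the attribute names above.
def needs_review(block, ctx) -> bool:
    if block.requires_human_review:
        # Opt-in blocks pause only for AI-generated graphs in safe mode.
        return ctx.safe_mode and ctx.is_ai_generated_graph
    if block.is_hitl_block:  # placeholder check; HITL behavior unchanged
        return ctx.safe_mode
    return False
```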

Cleanup:
- Remove redundant update_library_agent_settings wrapper function (25 lines)
- Inline human_in_the_loop_safe_mode initialization logic using GraphSettings.from_graph()
- Remove unnecessary branching and comments
- Total: 68 lines removed, 34 lines added

Tests: All 33 tests passing (26 API + 7 executor)
2026-01-12 15:12:41 -06:00
Toran Bruce Richards
db8b43bb3d feat(blocks): Add WordPress Get All Posts block and Publish Post draft toggle (#11003)
**Implements issue #11002**

This PR adds WordPress post management functionality and improves error
handling in DataForSEO blocks.

### Changes 🏗️

1. **New WordPress Blocks:**
- Added `WordPressGetAllPostsBlock` - Fetches posts from WordPress sites
with filtering and pagination support
- Enhanced `WordPressCreatePostBlock` with `publish_as_draft` toggle to
control post publication status

2. **WordPress API Enhancements:**
- Added `get_posts()` function in `_api.py` to retrieve posts with
filtering by status
- Added `PostsResponse` model for handling WordPress posts list API
responses
- Support for pagination with `number` and `offset` parameters (max 100
posts per request)
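A hedged sketch of the pagination this enables; the `get_posts()` signature and the `PostsResponse` fields (`found`, `posts`) are assumptions based on the PR description:

```python
# Page through all posts 100 at a time via `number` and `offset`.
async def fetch_all_posts(credentials, site: str, status: str = "publish"):
    offset, page_size = 0, 100  # API caps at 100 posts per request
    while True:
        resp = await get_posts(
            credentials, site, status=status, number=page_size, offset=offset
        )
        for post in resp.posts:
            yield post
        if offset + page_size >= resp.found:  # no further pages
            break
        offset += page_size
```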

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  
  **Test Plan:**
- [x] Test `WordPressGetAllPostsBlock` with valid WordPress credentials
  - [x] Verify filtering posts by status (publish, draft, pending, etc.)
  - [x] Test pagination with different number and offset values
- [x] Test `WordPressCreatePostBlock` with publish_as_draft=True to
create draft posts
- [x] Test `WordPressCreatePostBlock` with publish_as_draft=False to
publish posts publicly

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

**Note:** No configuration changes were required for this PR.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Added a WordPress “Get All Posts” block to fetch posts with optional
status filtering and pagination; returns total found and post details.
* **Enhancements**
* WordPress “Create Post” block now supports a “Publish as draft”
option, allowing posts to be created as drafts or published immediately.
* WordPress blocks are now surfaced consistently in the block catalog
for easier use.
* **Error Handling**
* Clearer error messages when fetching posts fails, aiding
troubleshooting.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces WordPress post listing and improves post creation and API
robustness.
> 
> - Adds `WordPressGetAllPostsBlock` to fetch posts with optional
`status` filter and pagination (`number`, `offset`); outputs `found`,
`posts`, and streams each `post`
> - Enhances `WordPressCreatePostBlock` with `publish_as_draft` input
and adds `site` to outputs; sets `status` accordingly
> - WordPress API updates in `_api.py`: new `get_posts`, `Post`,
`PostsResponse`, and `normalize_site`; apply
`Requests(raise_for_status=False)` across OAuth/token/info and post
creation; better error propagation
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
10be1c4709. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <Torantulino@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 19:57:47 +00:00
Abhimanyu Yadav
923d8baedc feat(frontend): add JsonTextField component for complex nested form data (#11752)
### Changes 🏗️

- Added a new `JsonTextField` component to handle complex nested JSON
types (objects/arrays inside other objects/arrays)
- Created helper functions for JSON parsing, validation, and formatting
- Implemented `useJsonTextField` hook to manage state and validation
- Enhanced `generateUiSchemaForCustomFields` to detect nested complex
types and render them as JSON text fields
- Updated `TextInputExpanderModal` to support JSON-specific styling
- Added `JSON_TEXT_FIELD_ID` constant to custom registry for field
identification

This change improves the user experience by preventing deeply nested
form UIs. Instead, complex nested structures are presented as editable
JSON text fields with proper validation and formatting.

### Before

![Screenshot 2026-01-12 at
1.07.54 PM.png](https://app.graphite.com/user-attachments/assets/dc2b96cc-562a-4e6b-8278-76de941e3bd9.png)

### After

![Screenshot 2026-01-12 at
12.35.19 PM.png](https://app.graphite.com/user-attachments/assets/ea0028a5-c119-43c3-8100-b103484e0b54.png)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test with simple JSON objects in forms
  - [x] Test with nested arrays and objects
  - [x] Test with anyOf/oneOf schemas containing complex types
  - [x] Test the expander modal with JSON content

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* New JSON text field with expandable modal editor, inline validation,
and helpful placeholders.
* Complex nested objects/arrays now render as JSON fields to simplify
editing.
* Modal editor uses monospace, smaller text when editing JSON for
improved readability.

* **Chores**
* Added a non-functional runtime debug log (no user-facing behavior
changes).

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-12 12:22:41 +00:00
Abhimanyu Yadav
a55b2e02dc feat(frontend): enhance CredentialsInput and CredentialRow components with variant support (#11753)
### Changes 🏗️

- Added a new `variant` prop to `CredentialsInput` component with
options "default" or "node"
- Implemented compact styling for the "node" variant in `CredentialRow`
component
- Modified layout and overflow handling for credential display in node
context
- Added conditional rendering of masked key display based on variant
- Passed the variant prop through the component hierarchy
- Applied the "node" variant to the `CredentialsField` component with
appropriate styling

Before

![Screenshot 2026-01-12 at
4.39.35 PM.png](https://app.graphite.com/user-attachments/assets/2b605b2d-7abf-4e8a-adc5-6a6e8b712ef7.png)

After

![Screenshot 2026-01-12 at
4.55.39 PM.png](https://app.graphite.com/user-attachments/assets/20bb1452-870a-4111-a246-c4e3a3b456ea.png)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified credential selection works correctly in node context
  - [x] Confirmed compact styling is applied properly in node variant
  - [x] Tested overflow handling for long credential names
  - [x] Verified both default and node variants display correctly

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Credential input and selection components now support multiple
configurable visual variants, enabling better text display handling,
optimized layouts, and improved visual consistency across different
application contexts and specific use cases.

* **Style**
* Credential field displays now feature enhanced text truncation and
overflow management for a more polished and consistent user interface
experience.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-12 12:22:20 +00:00
Abhimanyu Yadav
6b6648b290 feat(frontend): add Table component with TableField renderer for tabular data input (#11751)
### Changes 🏗️

- Added a new `Table` component for handling tabular data input
- Created supporting hooks and helper functions for the Table component
- Added Storybook stories to showcase different Table configurations
- Implemented a custom `TableField` renderer for JSON Schema forms
- Updated type display info to support the new "table" format
- Added schema matcher to detect and render table fields appropriately

![Screenshot 2026-01-12 at
11.29.04 AM.png](https://app.graphite.com/user-attachments/assets/71469d59-469f-4cb0-882b-a49791fe948d.png)

![Screenshot 2026-01-12 at
11.28.54 AM.png](https://app.graphite.com/user-attachments/assets/81193f32-0e16-435e-bb66-5d2aea98266a.png)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified Table component renders correctly with various
configurations
  - [x] Tested adding and removing rows in the Table
- [x] Confirmed data changes are properly tracked and reported via
onChange
  - [x] Verified TableField renderer works with JSON Schema forms
  - [x] Checked that table format is properly detected in the schema

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

## Release Notes

* **New Features**
* Added a Table component for displaying and editing tabular data with
support for adding/deleting rows, read-only mode, and customizable
labels.
* Added support for rendering array fields as tables in form inputs with
configurable columns and values.

* **Tests**
* Added comprehensive Storybook stories demonstrating various Table
configurations and behaviors.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-12 10:32:14 +00:00
Abhimanyu Yadav
c0a9c0410b feat(frontend): add MultiSelectField component and improve node title cursor styling (#11744)
## Changes 🏗️

- Added a new `MultiSelectField` component for handling multiple boolean
selections in a dropdown format
- Implemented `useMultiSelectField` hook to manage the state and logic
of the multi-select component
- Added support for custom fields in `AnyOfField` by checking if the
option schema matches a custom field
- Added `isMultiSelectSchema` utility function to detect schemas
suitable for the multi-select component
- Added hover cursor styling to node headers to indicate text
editability

![Screenshot 2026-01-10 at
11.15.12 AM.png](https://app.graphite.com/user-attachments/assets/8254497b-604f-4ccc-a40b-eb8994c073b4.png)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified that multi-select fields render correctly in the UI
  - [x] Confirmed that selecting multiple options works as expected
  - [x] Tested that the node header shows the text cursor on hover
- [x] Verified that AnyOf fields correctly use custom field renderers
when applicable

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added a multi-select field allowing selection of multiple options with
improved selection UI.
* AnyOf options can now resolve and render custom field types, improving
form composition when schemas map to custom controls.

* **Style**
  * Tooltip header cursor updated for clearer hover feedback.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-12 09:48:58 +00:00
Abhimanyu Yadav
17a77b02c7 fix(frontend): exclude schemas with enum from anyOf detection (#11743)
### Changes 🏗️

Fixed the `isAnyOfSchema` function in schema-utils.ts to exclude schemas
that have an `enum` property. This prevents incorrect schema processing
for enums that also have anyOf definitions. Added a console.log
statement in FormRenderer.tsx to help debug schema preprocessing.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified that forms with enum values render correctly
- [x] Confirmed that anyOf schemas are properly identified and processed
- [x] Tested with various schema combinations to ensure the fix doesn't
break existing functionality

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

## Bug Fixes
* Improved validation logic for form field schemas to correctly handle
edge cases when multiple constraint types are defined.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-12 09:48:47 +00:00
Zamil Majdy
701fce83ca fix(backend): add missing metadata attribute to mock nodes in SmartDecisionMaker tests (#11750)
This PR fixes failing SmartDecisionMaker tests by adding missing
`metadata` attribute to mock nodes.

### Changes 🏗️

Mock nodes in SmartDecisionMaker tests were missing the `metadata = {}`
attribute, which was introduced in commit 4a52b7eca for the
customized_name feature. This caused tests to fail with:

```
TypeError: expected string or bytes-like object, got 'Mock'
```

**Files fixed**:
- `backend/blocks/test/test_smart_decision_maker_dict.py`: Added
`metadata = {}` to mock nodes in all 3 tests
- `backend/blocks/test/test_smart_decision_maker_dynamic_fields.py`:
Added `metadata = {}` to mock nodes in all 8 tests

**Root cause**: The `_create_block_function_signature` method calls
`sink_node.metadata.get("customized_name")`, but mock nodes in tests
didn't have the metadata attribute initialized.
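A minimal sketch of the fix applied to each test (mock setup is illustrative):

```python
from unittest.mock import Mock

sink_node = Mock()
sink_node.metadata = {}  # previously unset, so .get() returned a Mock
# Now metadata.get("customized_name") returns None and the code falls
# back to the block's static name instead of receiving a Mock object.
```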

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run `poetry run pytest
backend/blocks/test/test_smart_decision_maker_dict.py -xvs` - 3 passed
- [x] Run `poetry run pytest
backend/blocks/test/test_smart_decision_maker_dynamic_fields.py -xvs` -
8 passed
  - [x] All tests pass successfully

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

## Release Notes

* **Tests**
* Updated test infrastructure to enhance mock object configuration for
improved test reliability and consistency across test suites.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-11 17:00:36 -06:00
Zamil Majdy
78d89d0faf Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2026-01-11 13:09:23 -06:00
Zamil Majdy
f482eb668b hotfix(backend): resolve tool pin name mismatch in SmartDecisionMakerBlock (#11749)
## Root Cause

Execution a40bdb4a-964d-4684-94e8-b148eb6bcfc2 and all similar
executions have been failing since Nov 12, 2025 when tool pin routing
was refactored to use node IDs. The SmartDecisionMakerBlock was
double-sanitizing field names when emitting tool call outputs:

```python
# Original field name from link: "Max Keyword Difficulty"
original_field_name = field_mapping.get(clean_arg_name)  # Retrieved correctly
sanitized_arg_name = self.cleanup(original_field_name)   # Sanitized AGAIN!
emit_key = f"tools_^_{node_id}_~_{sanitized_arg_name}"   # Emits "max_keyword_difficulty"
```

But the parser expected original names from graph links:
```python
# Parser expects: "Max Keyword Difficulty" (from link.sink_name)
# Emit provides: "max_keyword_difficulty" (sanitized)
# Result: Mismatch → Tool never executes
```

### Changes 🏗️

**1. Fixed Emit Logic** (`smart_decision_maker.py` line 1135)
- Removed double sanitization: `sanitized_arg_name =
self.cleanup(original_field_name)`
- Now emits with original field names: `emit_key =
f"tools_^_{node_id}_~_{original_field_name}"`

**2. Made Agent Nodes Consistent** (`smart_decision_maker.py` lines
497-530)
- Added `field_mapping` to agent function signatures (was missing)
- Agent signatures now sanitize property keys for Anthropic API (like
block signatures)
- Stores field_mapping for use during emit
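A hedged sketch of the corrected emit path (the fallback default is an assumption; the key format follows the snippet above):

```python
# Look the original link field name up once and emit it unchanged;
# no second cleanup() pass, so the key matches link.sink_name.
original_field_name = field_mapping.get(clean_arg_name, clean_arg_name)
emit_key = f"tools_^_{node_id}_~_{original_field_name}"
```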

### Impact

**Fixes:**
- All graphs with multi-word field names (e.g., "Max Keyword Difficulty", "Minimum Volume")
- All graphs with special characters in field names (e.g., "API-Key")
- Both block nodes AND agent nodes now work consistently

**Unaffected:**
- Single-word lowercase field names (e.g., "keyword", "url") - these
were already working

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified parse_execution_output handles exact match correctly
  - [x] Verified emit uses original field names
  - [x] Verified field_mapping works for both block and agent nodes
- [x] Re-run execution a40bdb4a-964d-4684-94e8-b148eb6bcfc2 after
deployment to verify fix

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
(no changes)
- [x] `docker-compose.yml` is updated or already compatible with my
changes (no changes)
- [x] No configuration changes in this PR

### Test Plan

1. **Unit test validation** (completed):
   - Field name cleanup: "Max Keyword Difficulty" → "max_keyword_difficulty"
   - Parse with exact match: Success
   - Parse with mismatch: Returns None

2. **Production validation** (to be done after deployment):
   - Re-run execution a40bdb4a-964d-4684-94e8-b148eb6bcfc2
- Verify AgentExecutor (node 767682f5-694f-4b2a-bf52-fbdcad6a4a4f)
executes successfully
   - Verify execution completes with high correctness score (not 0.20)
   - Monitor for any regressions in existing graphs

### Files Changed

- `backend/blocks/smart_decision_maker.py`: Remove double sanitization,
add agent field_mapping

### Related Issues

- Resolves execution failure a40bdb4a-964d-4684-94e8-b148eb6bcfc2
- Fixes bug introduced in commit 536e2a5ec (Nov 12, 2025)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Improved field name mapping consistency in the SmartDecisionMaker
block to ensure proper handling of field names throughout function
signatures and tool execution workflows.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-12 02:08:12 +07:00
Nicholas Tindle
4a52b7eca0 fix(backend): use customized block names in smart decision maker
The SmartDecisionMakerBlock now respects the customized_name field from
node metadata when generating tool function signatures for the LLM.

Previously, the block always used the static block.name from the block
class definition, ignoring any custom names users set in the builder UI.

Changes:
- _create_block_function_signature: Check sink_node.metadata for
  customized_name before falling back to block.name
- _create_agent_function_signature: Check sink_node.metadata for
  customized_name before falling back to sink_graph_meta.name
- Added 4 unit tests for the customized_name feature
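A minimal sketch of the block-side lookup (the wrapper function is illustrative; the agent path falls back to `sink_graph_meta.name` instead):

```python
def resolve_tool_name(sink_node, block) -> str:
    # Prefer the user's customized name from node metadata, then fall
    # back to the static block name.
    return sink_node.metadata.get("customized_name") or block.name
```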

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 16:51:39 -07:00
Zamil Majdy
97847f59f7 feat(backend): add human-in-the-loop review system for blocks requiring approval (#11732)
## Summary
Introduces a comprehensive Human-In-The-Loop (HITL) review system that
allows any block to require human approval before execution. This
extends the existing HITL infrastructure to support automatic review
requests for potentially dangerous operations.

## 🚀 Key Features

### **Automatic HITL for Any Block**
- **Simple opt-in**: Set `self.requires_human_review = True` in any
block constructor
- **Safe mode integration**: Only activates when
`execution_context.safe_mode = True`
- **Seamless workflow**: Blocks pause execution → Human reviews via
existing UI → Execution continues or stops

### **Unified Review Infrastructure**
- **Shared HITLReviewHelper**: Clean, reusable helper class for all
review operations
- **Single API**: `handle_review_decision()` method with structured
return type
- **Type-safe**: Proper typing with non-nullable
`ReviewDecision.review_result`

### **Smart Graph Detection** 
- **Updated `has_human_in_the_loop`**: Now detects both dedicated HITL
blocks and blocks with `requires_human_review = True`
- **Frontend awareness**: UI can properly indicate graphs requiring
human intervention

## 🏗️ Implementation

### **Block Usage**
```python
class MyBlock(Block):
    def __init__(self):
        super().__init__(...)
        self.requires_human_review = True  # Enable automatic HITL
        
    async def run(self, input_data, **kwargs):
        # If we reach here, either safe mode is off OR human approved
        # No additional HITL code needed - handled automatically by base class
        yield "result", "Operation completed"
```

### **Review Workflow**
1. **Block execution starts** → Base class checks
`requires_human_review` flag
2. **Safe mode enabled** → Creates review entry, pauses execution 
3. **Human reviews** → Uses existing review UI to approve/reject
4. **Execution resumes** → Continues if approved, raises error if
rejected
5. **Safe mode disabled** → Executes normally without review
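A hedged sketch of how the base class might apply this flow; `HITLReviewHelper.handle_review_decision` and `review_result` are named in this PR, but the parameters, status values, and error type here are assumptions:

```python
async def maybe_pause_for_review(block, execution_context, input_data):
    # Only blocks that opt in, and only when safe mode is enabled.
    if not (block.requires_human_review and execution_context.safe_mode):
        return input_data
    decision = await HITLReviewHelper.handle_review_decision(
        node_exec_id=execution_context.node_exec_id,  # assumed parameter
        input_data=input_data,
    )
    if decision.status == "REJECTED":  # status values are assumptions
        raise RuntimeError("Execution rejected by human reviewer")
    return decision.review_result  # reviewed (possibly edited) input
```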

## 🔧 Technical Improvements

### **Code Quality Enhancements**
- **Better naming**: `risky_block` → `requires_human_review` (clearer
intent)
- **Type safety**: Non-nullable `ReviewDecision.review_result`
(eliminates Optional checks)
- **Exhaustive handling**: Proper error handling for unexpected review
statuses
- **Clean exception handling**: Removed redundant try-catch-log-reraise
patterns

### **Architecture Fixes**
- **Circular import resolution**: Fixed `ExecutionContext` import issues
breaking 444+ block tests
- **Early returns**: Cleaner control flow without nested conditionals
- **Defensive programming**: Handles edge cases with clear error
messages

## 📊 Changes Made

### **Core Files**
- **`Block.requires_human_review`**: New flag for marking blocks
requiring approval
- **`HITLReviewHelper`**: Shared helper class with clean, testable API
- **`HumanInTheLoopBlock`**: Refactored to use shared infrastructure
- **`Graph.has_human_in_the_loop`**: Updated to include review-requiring
blocks

### **Quality Improvements**
- **Type hints**: Proper typing throughout with runtime compatibility
- **Error handling**: Exhaustive status handling with descriptive errors
- **Code reduction**: -16 lines through removal of redundant exception
handling
- **Test compatibility**: All 444/445 block tests pass

## Testing & Validation

- **All tests pass**: 444/445 block tests passing
- **Type checking**: All pyright/mypy checks pass
- **Formatting**: All linting and formatting checks pass
- **Circular imports**: Resolved import issues that were breaking tests
- **Backward compatibility**: Existing HITL functionality unchanged

## 🎯 Use Cases

This enables automatic human oversight for blocks performing:
- **File operations**: Deletion, modification, system access
- **External API calls**: Payments, data modifications, destructive
operations
- **System commands**: Shell execution, configuration changes
- **Data processing**: Sensitive data handling, compliance-required
operations

## 🔄 Migration Path

- **Existing code**: No changes required - fully backward compatible
- **New blocks**: Simply set `self.requires_human_review = True` to enable automatic HITL
- **Safe mode**: Controls whether review requests are created (production vs development)

---

This creates a robust, type-safe foundation for human oversight in
automated workflows while maintaining the existing HITL user experience
and API compatibility.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Human-in-the-loop review support so executions can pause for human
review and resume based on decisions.

* **Improvements**
* Blocks can opt into requiring human review and will use reviewed input
when proceeding.
* Unified review decision flow with clearer approved/rejected outcomes
and messaging.
* Graph detection expanded to recognize nodes that require human review.

* **Chores**
  * Test config adjusted to avoid pytest plugin conflicts.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-09 21:14:37 +00:00
Zamil Majdy
22ca8955c5 fix(backend): library agent creation and version update improvements (#11731)
## Summary
Fixes library agent creation and version update logic to properly handle
both user-created and marketplace agents.

## Changes
- **Remove useGraphIsActiveVersion filter** from
`update_agent_version_in_library` to allow both manual and auto updates
- **Set useGraphIsActiveVersion correctly**:
- `False` for marketplace agents (require manual updates to avoid
breaking workflows)
- `True` for user-created agents (can safely auto-update since user
controls source)
- Update function documentation to reflect new behavior
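A minimal sketch of the rule (the helper is illustrative; the real code sets the Prisma field directly at creation time):

```python
def use_graph_is_active_version(is_created_by_user: bool) -> bool:
    # User-created agents can track the active graph version automatically;
    # marketplace installs require an explicit, manual update.
    return is_created_by_user
```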

## Problem Solved
- Marketplace agents can now be updated manually via API
- User-created agents maintain auto-update capability  
- Resolves Sentry error AUTOGPT-SERVER-722 about "Expected a record,
found none"
- Fixes store submission modal issues

## Test Plan
- [x] Verify marketplace agents are created with
`useGraphIsActiveVersion: False`
- [x] Verify user agents are created with `useGraphIsActiveVersion:
True`
- [x] Confirm `update_agent_version_in_library` works for both types
- [x] Test store submission flow works without modal issues

## Review Notes
This change ensures proper separation between user-controlled agents
(auto-update) and marketplace agents (manual update), while allowing the
API to service both use cases.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

## Release Notes

* **New Features**
* Enhanced agent publishing workflow with improved version tracking and
change detection for marketplace updates

* **Bug Fixes**
  * Improved error handling when updating agent versions in the library
  * Better detection of unpublished changes before publishing agents

* **Improvements**
* Changes Summary field now supports longer descriptions (up to 500
characters) with multi-line editing capability

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-09 21:14:05 +00:00
Nicholas Tindle
43cbe2e011 feat!(blocks): Add Reddit OAuth2 integration and advanced Reddit blocks (#11623)
Replaces user/password Reddit credentials with OAuth2, adds
RedditOAuthHandler, and updates Reddit blocks to support OAuth2
authentication. Introduces new blocks for creating posts, fetching post
details, searching, editing posts, and retrieving subreddit info.
Updates test credentials and input handling to use OAuth2 tokens.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Rebuild the Reddit blocks to support OAuth2 rather than requiring users to provide their username and password. This is done by swapping from script-based to web-based authentication on the Reddit side, facilitated by Reddit's approval of an OAuth app on the account `ntindle`.
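A hedged sketch of how a block might build a `praw` client from the stored OAuth2 credentials (attribute names on `credentials` are assumptions):

```python
import praw

def make_reddit_client(credentials, user_agent: str) -> praw.Reddit:
    # praw refreshes the access token itself when given a refresh token.
    return praw.Reddit(
        client_id=credentials.client_id,
        client_secret=credentials.client_secret,
        refresh_token=credentials.refresh_token,  # from the OAuth2 flow
        user_agent=user_agent,
    )
```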
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Build a super agent
  - [x] Upload the super agent and a video of it working

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces full Reddit OAuth2 support and substantially expands Reddit
capabilities across the platform.
> 
> - Adds `RedditOAuthHandler` with token exchange, refresh, revoke;
registers handler in `integrations/oauth/__init__.py`
> - Refactors Reddit blocks to use `OAuth2Credentials` and `praw` via
refresh tokens; updates models (e.g., `post_id`, richer outputs) and
adds `strip_reddit_prefix`
> - New blocks: create/edit/delete posts, post/get/delete comments,
reply to comments, get post details, user posts (self/others), search,
inbox, subreddit info/rules/flairs, send messages
> - Updates default `settings.config.reddit_user_agent` and test
credentials; minor `.branchlet.json` addition
> - Docs: clarifies block error-handling with
`BlockInputError`/`BlockExecutionError` guidance
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
4f1f26c7e7. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

## Release Notes

* **New Features**
* Added OAuth2-based authentication for Reddit integration, replacing
legacy credential methods
* Expanded Reddit capabilities with new blocks for creating posts,
retrieving post details, managing comments, accessing inbox, and
fetching user/subreddit information
* Enhanced data models to support richer Reddit interactions and
chainable workflows

* **Documentation**
* Updated error handling guidance to distinguish between validation
errors and runtime errors with improved exception patterns

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
2026-01-09 20:53:03 +00:00
Nicholas Tindle
a318832414 feat(docs): update dev from gitbook changes (#11740)
<!-- Clearly explain the need for these changes: -->
The gitbook branch has changes that need to be synced to dev.
### Changes 🏗️
Pull changes from gitbook into dev
<!-- Concisely describe all of the changes made in this pull request:
-->

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Migrates documentation to GitBook and removes the old MkDocs setup.
> 
> - Removes MkDocs configuration and infra: `docs/mkdocs.yml`,
`docs/netlify.toml`, `docs/overrides/main.html`,
`docs/requirements.txt`, and JS assets (`_javascript/mathjax.js`,
`_javascript/tablesort.js`)
> - Updates `docs/content/contribute/index.md` to describe GitBook
workflow (gitbook branch, editing, previews, and `SUMMARY.md`)
> - Adds GitBook navigation file `docs/platform/SUMMARY.md` and a new
platform overview page `docs/platform/what-is-autogpt-platform.md`
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
e7e118b5a8. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Documentation**
* Updated contribution guide for new documentation platform and workflow
  * Added new platform overview and navigation documentation

* **Chores**
  * Removed MkDocs configuration and related dependencies
  * Removed deprecated JavaScript integrations and deployment overrides

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 19:22:05 +00:00
117 changed files with 4789 additions and 2111 deletions

.branchlet.json (new file)

@@ -0,0 +1,37 @@
{
"worktreeCopyPatterns": [
".env*",
".vscode/**",
".auth/**",
".claude/**",
"autogpt_platform/.env*",
"autogpt_platform/backend/.env*",
"autogpt_platform/frontend/.env*",
"autogpt_platform/frontend/.auth/**",
"autogpt_platform/db/docker/.env*"
],
"worktreeCopyIgnores": [
"**/node_modules/**",
"**/dist/**",
"**/.git/**",
"**/Thumbs.db",
"**/.DS_Store",
"**/.next/**",
"**/__pycache__/**",
"**/.ruff_cache/**",
"**/.pytest_cache/**",
"**/*.pyc",
"**/playwright-report/**",
"**/logs/**",
"**/site/**"
],
"worktreePathTemplate": "$BASE_PATH.worktree",
"postCreateCmd": [
"cd autogpt_platform/autogpt_libs && poetry install",
"cd autogpt_platform/backend && poetry install && poetry run prisma generate",
"cd autogpt_platform/frontend && pnpm install",
"cd docs && pip install -r requirements.txt"
],
"terminalCommand": "code .",
"deleteBranchWithWorktree": false
}


@@ -401,28 +401,11 @@ async def add_generated_agent_image(
)
def _initialize_graph_settings(graph: graph_db.GraphModel) -> GraphSettings:
"""
Initialize GraphSettings based on graph content.
Args:
graph: The graph to analyze
Returns:
GraphSettings with appropriate human_in_the_loop_safe_mode value
"""
if graph.has_human_in_the_loop:
# Graph has HITL blocks - set safe mode to True by default
return GraphSettings(human_in_the_loop_safe_mode=True)
else:
# Graph has no HITL blocks - keep None
return GraphSettings(human_in_the_loop_safe_mode=None)
async def create_library_agent(
graph: graph_db.GraphModel,
user_id: str,
create_library_agents_for_sub_graphs: bool = True,
is_ai_generated: bool = False,
) -> list[library_model.LibraryAgent]:
"""
Adds an agent to the user's library (LibraryAgent table).
@@ -431,6 +414,7 @@ async def create_library_agent(
agent: The agent/Graph to add to the library.
user_id: The user to whom the agent will be added.
create_library_agents_for_sub_graphs: If True, creates LibraryAgent records for sub-graphs as well.
is_ai_generated: Whether this graph was AI-generated.
Returns:
The newly created LibraryAgent records.
@@ -465,7 +449,9 @@ async def create_library_agent(
}
},
settings=SafeJson(
_initialize_graph_settings(graph_entry).model_dump()
GraphSettings.from_graph(
graph_entry, is_ai_generated=is_ai_generated
).model_dump()
),
),
include=library_agent_include(
@@ -489,7 +475,7 @@ async def update_agent_version_in_library(
agent_graph_version: int,
) -> library_model.LibraryAgent:
"""
Updates the agent version in the library if useGraphIsActiveVersion is True.
Updates the agent version in the library for any agent owned by the user.
Args:
user_id: Owner of the LibraryAgent.
@@ -498,20 +484,31 @@ async def update_agent_version_in_library(
Raises:
DatabaseError: If there's an error with the update.
NotFoundError: If no library agent is found for this user and agent.
"""
logger.debug(
f"Updating agent version in library for user #{user_id}, "
f"agent #{agent_graph_id} v{agent_graph_version}"
)
try:
library_agent = await prisma.models.LibraryAgent.prisma().find_first_or_raise(
async with transaction() as tx:
library_agent = await prisma.models.LibraryAgent.prisma(tx).find_first_or_raise(
where={
"userId": user_id,
"agentGraphId": agent_graph_id,
"useGraphIsActiveVersion": True,
},
)
lib = await prisma.models.LibraryAgent.prisma().update(
# Delete any conflicting LibraryAgent for the target version
await prisma.models.LibraryAgent.prisma(tx).delete_many(
where={
"userId": user_id,
"agentGraphId": agent_graph_id,
"agentGraphVersion": agent_graph_version,
"id": {"not": library_agent.id},
}
)
lib = await prisma.models.LibraryAgent.prisma(tx).update(
where={"id": library_agent.id},
data={
"AgentGraph": {
@@ -525,13 +522,13 @@ async def update_agent_version_in_library(
},
include={"AgentGraph": True},
)
if lib is None:
raise NotFoundError(f"Library agent {library_agent.id} not found")
return library_model.LibraryAgent.from_db(lib)
except prisma.errors.PrismaError as e:
logger.error(f"Database error updating agent version in library: {e}")
raise DatabaseError("Failed to update agent version in library") from e
if lib is None:
raise NotFoundError(
f"Failed to update library agent for {agent_graph_id} v{agent_graph_version}"
)
return library_model.LibraryAgent.from_db(lib)
async def update_library_agent(
@@ -616,33 +613,6 @@ async def update_library_agent(
raise DatabaseError("Failed to update library agent") from e
async def update_library_agent_settings(
user_id: str,
agent_id: str,
settings: GraphSettings,
) -> library_model.LibraryAgent:
"""
Updates the settings for a specific LibraryAgent.
Args:
user_id: The owner of the LibraryAgent.
agent_id: The ID of the LibraryAgent to update.
settings: New GraphSettings to apply.
Returns:
The updated LibraryAgent.
Raises:
NotFoundError: If the specified LibraryAgent does not exist.
DatabaseError: If there's an error in the update operation.
"""
return await update_library_agent(
library_agent_id=agent_id,
user_id=user_id,
settings=settings,
)
async def delete_library_agent(
library_agent_id: str, user_id: str, soft_delete: bool = True
) -> None:
@@ -825,8 +795,11 @@ async def add_store_agent_to_library(
}
},
"isCreatedByUser": False,
"useGraphIsActiveVersion": False,
"settings": SafeJson(
_initialize_graph_settings(graph_model).model_dump()
GraphSettings.from_graph(
graph_model, is_ai_generated=False
).model_dump()
),
},
include=library_agent_include(


@@ -1,72 +0,0 @@
#!/usr/bin/env python3
"""
CLI script to backfill embeddings for store agents.
Usage:
poetry run python -m backend.server.v2.store.backfill_embeddings [--batch-size N]
"""
import argparse
import asyncio
import sys
import prisma
from backend.api.features.store.embeddings import (
backfill_missing_embeddings,
get_embedding_stats,
)
async def main(batch_size: int = 100) -> int:
"""Run the backfill process."""
# Initialize Prisma client
client = prisma.Prisma()
await client.connect()
prisma.register(client)
try:
# Get current stats
print("Current embedding stats:")
stats = await get_embedding_stats()
print(f" Total approved: {stats['total_approved']}")
print(f" With embeddings: {stats['with_embeddings']}")
print(f" Without embeddings: {stats['without_embeddings']}")
print(f" Coverage: {stats['coverage_percent']}%")
if stats["without_embeddings"] == 0:
print("\nAll agents already have embeddings. Nothing to do.")
return 0
# Run backfill
print(f"\nBackfilling up to {batch_size} embeddings...")
result = await backfill_missing_embeddings(batch_size=batch_size)
print(f" Processed: {result['processed']}")
print(f" Success: {result['success']}")
print(f" Failed: {result['failed']}")
# Get final stats
print("\nFinal embedding stats:")
stats = await get_embedding_stats()
print(f" Total approved: {stats['total_approved']}")
print(f" With embeddings: {stats['with_embeddings']}")
print(f" Without embeddings: {stats['without_embeddings']}")
print(f" Coverage: {stats['coverage_percent']}%")
return 0 if result["failed"] == 0 else 1
finally:
await client.disconnect()
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Backfill embeddings for store agents")
parser.add_argument(
"--batch-size",
type=int,
default=100,
help="Number of embeddings to generate (default: 100)",
)
args = parser.parse_args()
sys.exit(asyncio.run(main(batch_size=args.batch_size)))


@@ -1,5 +1,6 @@
import asyncio
import logging
import typing
from datetime import datetime, timezone
from typing import Literal
@@ -9,7 +10,7 @@ import prisma.errors
import prisma.models
import prisma.types
from backend.data.db import transaction
from backend.data.db import query_raw_with_schema, transaction
from backend.data.graph import (
GraphMeta,
GraphModel,
@@ -29,8 +30,6 @@ from backend.util.settings import Settings
from . import exceptions as store_exceptions
from . import model as store_model
from .embeddings import ensure_embedding
from .hybrid_search import hybrid_search
logger = logging.getLogger(__name__)
settings = Settings()
@@ -57,62 +56,122 @@ async def get_store_agents(
f"Getting store agents. featured={featured}, creators={creators}, sorted_by={sorted_by}, search={search_query}, category={category}, page={page}"
)
search_used_hybrid = False
store_agents: list[store_model.StoreAgent] = []
total = 0
total_pages = 0
try:
# If search_query is provided, try hybrid search (embeddings + tsvector)
# If search_query is provided, use full-text search
if search_query:
try:
# Use hybrid search combining semantic and lexical signals
agents, total = await hybrid_search(
query=search_query,
featured=featured,
creators=creators,
category=category,
sorted_by="relevance", # Use hybrid scoring for relevance
page=page,
page_size=page_size,
)
search_used_hybrid = True
offset = (page - 1) * page_size
# Convert hybrid search results (dict format)
total_pages = (total + page_size - 1) // page_size
store_agents: list[store_model.StoreAgent] = []
for agent in agents:
try:
store_agent = store_model.StoreAgent(
slug=agent["slug"],
agent_name=agent["agent_name"],
agent_image=(
agent["agent_image"][0] if agent["agent_image"] else ""
),
creator=agent["creator_username"] or "Needs Profile",
creator_avatar=agent["creator_avatar"] or "",
sub_heading=agent["sub_heading"],
description=agent["description"],
runs=agent["runs"],
rating=agent["rating"],
)
store_agents.append(store_agent)
except Exception as e:
logger.error(
f"Error parsing Store agent from hybrid search results: {e}"
)
continue
# Whitelist allowed order_by columns
ALLOWED_ORDER_BY = {
"rating": "rating DESC, rank DESC",
"runs": "runs DESC, rank DESC",
"name": "agent_name ASC, rank ASC",
"updated_at": "updated_at DESC, rank DESC",
}
except Exception as hybrid_error:
# If hybrid search fails (e.g., missing embeddings table),
# fallback to basic search logic below
logger.warning(
f"Hybrid search failed, falling back to basic search: {hybrid_error}"
)
search_used_hybrid = False
# Validate and get order clause
if sorted_by and sorted_by in ALLOWED_ORDER_BY:
order_by_clause = ALLOWED_ORDER_BY[sorted_by]
else:
order_by_clause = "updated_at DESC, rank DESC"
if not search_used_hybrid:
# Fallback path - use basic search or no search
# Build WHERE conditions and parameters list
where_parts: list[str] = []
params: list[typing.Any] = [search_query] # $1 - search term
param_index = 2 # Start at $2 for next parameter
# Always filter for available agents
where_parts.append("is_available = true")
if featured:
where_parts.append("featured = true")
if creators and creators:
# Use ANY with array parameter
where_parts.append(f"creator_username = ANY(${param_index})")
params.append(creators)
param_index += 1
if category and category:
where_parts.append(f"${param_index} = ANY(categories)")
params.append(category)
param_index += 1
sql_where_clause: str = " AND ".join(where_parts) if where_parts else "1=1"
# Add pagination params
params.extend([page_size, offset])
limit_param = f"${param_index}"
offset_param = f"${param_index + 1}"
# Execute full-text search query with parameterized values
sql_query = f"""
SELECT
slug,
agent_name,
agent_image,
creator_username,
creator_avatar,
sub_heading,
description,
runs,
rating,
categories,
featured,
is_available,
updated_at,
ts_rank_cd(search, query) AS rank
FROM {{schema_prefix}}"StoreAgent",
plainto_tsquery('english', $1) AS query
WHERE {sql_where_clause}
AND search @@ query
ORDER BY {order_by_clause}
LIMIT {limit_param} OFFSET {offset_param}
"""
# Count query for pagination - only uses search term parameter
count_query = f"""
SELECT COUNT(*) as count
FROM {{schema_prefix}}"StoreAgent",
plainto_tsquery('english', $1) AS query
WHERE {sql_where_clause}
AND search @@ query
"""
# Execute both queries with parameters
agents = await query_raw_with_schema(sql_query, *params)
# For count, use params without pagination (last 2 params)
count_params = params[:-2]
count_result = await query_raw_with_schema(count_query, *count_params)
total = count_result[0]["count"] if count_result else 0
total_pages = (total + page_size - 1) // page_size
# Convert raw results to StoreAgent models
store_agents: list[store_model.StoreAgent] = []
for agent in agents:
try:
store_agent = store_model.StoreAgent(
slug=agent["slug"],
agent_name=agent["agent_name"],
agent_image=(
agent["agent_image"][0] if agent["agent_image"] else ""
),
creator=agent["creator_username"] or "Needs Profile",
creator_avatar=agent["creator_avatar"] or "",
sub_heading=agent["sub_heading"],
description=agent["description"],
runs=agent["runs"],
rating=agent["rating"],
)
store_agents.append(store_agent)
except Exception as e:
logger.error(f"Error parsing Store agent from search results: {e}")
continue
else:
# Non-search query path (original logic)
where_clause: prisma.types.StoreAgentWhereInput = {"is_available": True}
if featured:
where_clause["featured"] = featured
@@ -121,14 +180,6 @@ async def get_store_agents(
if category:
where_clause["categories"] = {"has": category}
# Add basic text search if search_query provided but hybrid failed
if search_query:
where_clause["OR"] = [
{"agent_name": {"contains": search_query, "mode": "insensitive"}},
{"sub_heading": {"contains": search_query, "mode": "insensitive"}},
{"description": {"contains": search_query, "mode": "insensitive"}},
]
order_by = []
if sorted_by == "rating":
order_by.append({"rating": "desc"})
@@ -1549,22 +1600,6 @@ async def review_store_submission(
},
)
# Generate embedding for approved listing (non-blocking)
try:
await ensure_embedding(
version_id=store_listing_version_id,
name=store_listing_version.name,
description=store_listing_version.description,
sub_heading=store_listing_version.subHeading,
categories=store_listing_version.categories or [],
)
except Exception as e:
# Don't fail approval if embedding generation fails
logger.warning(
f"Failed to generate embedding for approved listing "
f"{store_listing_version_id}: {e}"
)
# If rejecting an approved agent, update the StoreListing accordingly
if is_rejecting_approved:
# Check if there are other approved versions


@@ -1,533 +0,0 @@
"""
Unified Content Embeddings Service
Handles generation and storage of OpenAI embeddings for all content types
(store listings, blocks, documentation, library agents) to enable semantic/hybrid search.
"""
import asyncio
import logging
from typing import Any
import prisma
from openai import OpenAI
from prisma.enums import ContentType
from backend.util.json import dumps
from backend.util.settings import Settings
logger = logging.getLogger(__name__)
# OpenAI embedding model configuration
EMBEDDING_MODEL = "text-embedding-3-small"
EMBEDDING_DIM = 1536
def build_searchable_text(
name: str,
description: str,
sub_heading: str,
categories: list[str],
) -> str:
"""
Build searchable text from listing version fields.
Combines relevant fields into a single string for embedding.
"""
parts = []
# Name is important - include it
if name:
parts.append(name)
# Sub-heading provides context
if sub_heading:
parts.append(sub_heading)
# Description is the main content
if description:
parts.append(description)
# Categories help with semantic matching
if categories:
parts.append(" ".join(categories))
return " ".join(parts)
async def generate_embedding(text: str) -> list[float] | None:
"""
Generate embedding for text using OpenAI API.
Returns None if embedding generation fails.
"""
try:
settings = Settings()
api_key = settings.secrets.openai_internal_api_key
if not api_key:
logger.warning("openai_internal_api_key not set, cannot generate embedding")
return None
client = OpenAI(api_key=api_key)
# Truncate text to avoid token limits (~32k chars for safety)
truncated_text = text[:32000]
response = client.embeddings.create(
model=EMBEDDING_MODEL,
input=truncated_text,
)
embedding = response.data[0].embedding
logger.debug(f"Generated embedding with {len(embedding)} dimensions")
return embedding
except Exception as e:
logger.error(f"Failed to generate embedding: {e}")
return None
async def store_embedding(
version_id: str,
embedding: list[float],
tx: prisma.Prisma | None = None,
) -> bool:
"""
Store embedding in the database.
BACKWARD COMPATIBILITY: Maintained for existing store listing usage.
Uses raw SQL since Prisma doesn't natively support pgvector.
"""
return await store_content_embedding(
content_type=ContentType.STORE_AGENT,
content_id=version_id,
embedding=embedding,
searchable_text="", # Will be populated from existing data
metadata=None,
user_id=None, # Store agents are public
tx=tx,
)
async def store_content_embedding(
content_type: ContentType,
content_id: str,
embedding: list[float],
searchable_text: str,
metadata: dict | None = None,
user_id: str | None = None,
tx: prisma.Prisma | None = None,
) -> bool:
"""
Store embedding in the unified content embeddings table.
New function for unified content embedding storage.
Uses raw SQL since Prisma doesn't natively support pgvector.
"""
try:
client = tx if tx else prisma.get_client()
# Convert embedding to PostgreSQL vector format
embedding_str = "[" + ",".join(str(x) for x in embedding) + "]"
metadata_json = dumps(metadata or {})
# Upsert the embedding
await client.execute_raw(
"""
INSERT INTO platform."UnifiedContentEmbedding" (
"contentType", "contentId", "userId", "embedding", "searchableText", "metadata", "createdAt", "updatedAt"
)
VALUES ($1, $2, $3, $4::vector, $5, $6::jsonb, NOW(), NOW())
ON CONFLICT ("contentType", "contentId", "userId")
DO UPDATE SET
"embedding" = $4::vector,
"searchableText" = $5,
"metadata" = $6::jsonb,
"updatedAt" = NOW()
""",
content_type,
content_id,
user_id,
embedding_str,
searchable_text,
metadata_json,
)
logger.info(f"Stored embedding for {content_type}:{content_id}")
return True
except Exception as e:
logger.error(f"Failed to store embedding for {content_type}:{content_id}: {e}")
return False
async def get_embedding(version_id: str) -> dict[str, Any] | None:
"""
Retrieve embedding record for a listing version.
BACKWARD COMPATIBILITY: Maintained for existing store listing usage.
Returns dict with storeListingVersionId, embedding, timestamps or None if not found.
"""
result = await get_content_embedding(
ContentType.STORE_AGENT, version_id, user_id=None
)
if result:
# Transform to old format for backward compatibility
return {
"storeListingVersionId": result["contentId"],
"embedding": result["embedding"],
"createdAt": result["createdAt"],
"updatedAt": result["updatedAt"],
}
return None
async def get_content_embedding(
content_type: ContentType, content_id: str, user_id: str | None = None
) -> dict[str, Any] | None:
"""
Retrieve embedding record for any content type.
New function for unified content embedding retrieval.
Returns dict with contentType, contentId, embedding, timestamps or None if not found.
"""
try:
client = prisma.get_client()
result = await client.query_raw(
"""
SELECT
"contentType",
"contentId",
"userId",
"embedding"::text as "embedding",
"searchableText",
"metadata",
"createdAt",
"updatedAt"
FROM platform."UnifiedContentEmbedding"
WHERE "contentType" = $1 AND "contentId" = $2 AND ("userId" = $3 OR ($3 IS NULL AND "userId" IS NULL))
""",
content_type,
content_id,
user_id,
)
if result and len(result) > 0:
return result[0]
return None
except Exception as e:
logger.error(f"Failed to get embedding for {content_type}:{content_id}: {e}")
return None
async def ensure_embedding(
version_id: str,
name: str,
description: str,
sub_heading: str,
categories: list[str],
force: bool = False,
tx: prisma.Prisma | None = None,
) -> bool:
"""
Ensure an embedding exists for the listing version.
Creates embedding if missing. Use force=True to regenerate.
Backward-compatible wrapper for store listings.
Args:
version_id: The StoreListingVersion ID
name: Agent name
description: Agent description
sub_heading: Agent sub-heading
categories: Agent categories
force: Force regeneration even if embedding exists
tx: Optional transaction client
Returns:
True if embedding exists/was created, False on failure
"""
try:
# Check if embedding already exists
if not force:
existing = await get_embedding(version_id)
if existing and existing.get("embedding"):
logger.debug(f"Embedding for version {version_id} already exists")
return True
# Build searchable text for embedding
searchable_text = build_searchable_text(
name, description, sub_heading, categories
)
# Generate new embedding
embedding = await generate_embedding(searchable_text)
if embedding is None:
logger.warning(f"Could not generate embedding for version {version_id}")
return False
# Store the embedding with metadata using new function
metadata = {
"name": name,
"subHeading": sub_heading,
"categories": categories,
}
return await store_content_embedding(
content_type=ContentType.STORE_AGENT,
content_id=version_id,
embedding=embedding,
searchable_text=searchable_text,
metadata=metadata,
user_id=None, # Store agents are public
tx=tx,
)
except Exception as e:
logger.error(f"Failed to ensure embedding for version {version_id}: {e}")
return False
async def delete_embedding(version_id: str) -> bool:
"""
Delete embedding for a listing version.
BACKWARD COMPATIBILITY: Maintained for existing store listing usage.
Note: This is usually handled automatically by CASCADE delete,
but provided for manual cleanup if needed.
"""
return await delete_content_embedding(ContentType.STORE_AGENT, version_id)
async def delete_content_embedding(content_type: ContentType, content_id: str) -> bool:
"""
Delete embedding for any content type.
New function for unified content embedding deletion.
Note: This is usually handled automatically by CASCADE delete,
but provided for manual cleanup if needed.
"""
try:
client = prisma.get_client()
await client.execute_raw(
"""
DELETE FROM platform."UnifiedContentEmbedding"
WHERE "contentType" = $1 AND "contentId" = $2
""",
content_type,
content_id,
)
logger.info(f"Deleted embedding for {content_type}:{content_id}")
return True
except Exception as e:
logger.error(f"Failed to delete embedding for {content_type}:{content_id}: {e}")
return False
async def get_embedding_stats() -> dict[str, Any]:
"""
Get statistics about embedding coverage.
Returns counts of:
- Total approved listing versions
- Versions with embeddings
- Versions without embeddings
"""
try:
client = prisma.get_client()
# Count approved versions
approved_result = await client.query_raw(
"""
SELECT COUNT(*) as count
FROM platform."StoreListingVersion"
WHERE "submissionStatus" = 'APPROVED'
AND "isDeleted" = false
"""
)
total_approved = approved_result[0]["count"] if approved_result else 0
# Count versions with embeddings
embedded_result = await client.query_raw(
"""
SELECT COUNT(*) as count
FROM platform."StoreListingVersion" slv
JOIN platform."UnifiedContentEmbedding" uce ON slv.id = uce."contentId" AND uce."contentType" = 'STORE_AGENT'
WHERE slv."submissionStatus" = 'APPROVED'
AND slv."isDeleted" = false
"""
)
with_embeddings = embedded_result[0]["count"] if embedded_result else 0
return {
"total_approved": total_approved,
"with_embeddings": with_embeddings,
"without_embeddings": total_approved - with_embeddings,
"coverage_percent": (
round(with_embeddings / total_approved * 100, 1)
if total_approved > 0
else 0
),
}
except Exception as e:
logger.error(f"Failed to get embedding stats: {e}")
return {
"total_approved": 0,
"with_embeddings": 0,
"without_embeddings": 0,
"coverage_percent": 0,
"error": str(e),
}
async def backfill_missing_embeddings(batch_size: int = 10) -> dict[str, Any]:
"""
Generate embeddings for approved listings that don't have them.
Args:
batch_size: Number of embeddings to generate in one call
Returns:
Dict with success/failure counts
"""
try:
client = prisma.get_client()
# Find approved versions without embeddings
missing = await client.query_raw(
"""
SELECT
slv.id,
slv.name,
slv.description,
slv."subHeading",
slv.categories
FROM platform."StoreListingVersion" slv
LEFT JOIN platform."UnifiedContentEmbedding" uce
ON slv.id = uce."contentId" AND uce."contentType" = 'STORE_AGENT'
WHERE slv."submissionStatus" = 'APPROVED'
AND slv."isDeleted" = false
AND uce."contentId" IS NULL
LIMIT $1
""",
batch_size,
)
if not missing:
return {
"processed": 0,
"success": 0,
"failed": 0,
"message": "No missing embeddings",
}
# Process embeddings concurrently for better performance
embedding_tasks = [
ensure_embedding(
version_id=row["id"],
name=row["name"],
description=row["description"],
sub_heading=row["subHeading"],
categories=row["categories"] or [],
)
for row in missing
]
results = await asyncio.gather(*embedding_tasks, return_exceptions=True)
success = sum(1 for result in results if result is True)
failed = len(results) - success
return {
"processed": len(missing),
"success": success,
"failed": failed,
"message": f"Backfilled {success} embeddings, {failed} failed",
}
except Exception as e:
logger.error(f"Failed to backfill embeddings: {e}")
return {
"processed": 0,
"success": 0,
"failed": 0,
"error": str(e),
}
async def embed_query(query: str) -> list[float] | None:
"""
Generate embedding for a search query.
Same as generate_embedding but with clearer intent.
"""
return await generate_embedding(query)
def embedding_to_vector_string(embedding: list[float]) -> str:
"""Convert embedding list to PostgreSQL vector string format."""
return "[" + ",".join(str(x) for x in embedding) + "]"
async def ensure_content_embedding(
content_type: ContentType,
content_id: str,
searchable_text: str,
metadata: dict | None = None,
user_id: str | None = None,
force: bool = False,
tx: prisma.Prisma | None = None,
) -> bool:
"""
Ensure an embedding exists for any content type.
Generic function for creating embeddings for store agents, blocks, docs, etc.
Args:
content_type: ContentType enum value (STORE_AGENT, BLOCK, etc.)
content_id: Unique identifier for the content
searchable_text: Combined text for embedding generation
metadata: Optional metadata to store with embedding
force: Force regeneration even if embedding exists
tx: Optional transaction client
Returns:
True if embedding exists/was created, False on failure
"""
try:
# Check if embedding already exists
if not force:
existing = await get_content_embedding(content_type, content_id, user_id)
if existing and existing.get("embedding"):
logger.debug(
f"Embedding for {content_type}:{content_id} already exists"
)
return True
# Generate new embedding
embedding = await generate_embedding(searchable_text)
if embedding is None:
logger.warning(
f"Could not generate embedding for {content_type}:{content_id}"
)
return False
# Store the embedding
return await store_content_embedding(
content_type=content_type,
content_id=content_id,
embedding=embedding,
searchable_text=searchable_text,
metadata=metadata or {},
user_id=user_id,
tx=tx,
)
except Exception as e:
logger.error(f"Failed to ensure embedding for {content_type}:{content_id}: {e}")
return False
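For reference, a minimal usage sketch of the generic helper above. Everything besides ensure_content_embedding and ContentType is illustrative: the content ID, text, and metadata are made up, and ContentType.BLOCK is assumed to be one of the supported enum values mentioned in the docstring.
import asyncio
from prisma.enums import ContentType

async def demo_embed_block() -> None:
    # Create (or reuse) an embedding for a non-store-agent content item.
    ok = await ensure_content_embedding(
        content_type=ContentType.BLOCK,            # assumed enum member
        content_id="block-123",                    # illustrative ID
        searchable_text="Create a WordPress post", # text to embed
        metadata={"name": "WordPressCreatePostBlock"},
        force=False,                               # keep existing embedding if present
    )
    print("embedding ready:", ok)

asyncio.run(demo_embed_block())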

View File

@@ -1,359 +0,0 @@
from unittest.mock import MagicMock, patch
import prisma
import pytest
from prisma import Prisma
from prisma.enums import ContentType
from backend.api.features.store import embeddings
@pytest.fixture(autouse=True)
async def setup_prisma():
"""Setup Prisma client for tests."""
try:
Prisma()
except prisma.errors.ClientAlreadyRegisteredError:
pass
yield
@pytest.mark.asyncio(loop_scope="session")
async def test_build_searchable_text():
"""Test searchable text building from listing fields."""
result = embeddings.build_searchable_text(
name="AI Assistant",
description="A helpful AI assistant for productivity",
sub_heading="Boost your productivity",
categories=["AI", "Productivity"],
)
expected = "AI Assistant Boost your productivity A helpful AI assistant for productivity AI Productivity"
assert result == expected
@pytest.mark.asyncio(loop_scope="session")
async def test_build_searchable_text_empty_fields():
"""Test searchable text building with empty fields."""
result = embeddings.build_searchable_text(
name="", description="Test description", sub_heading="", categories=[]
)
assert result == "Test description"
@pytest.mark.asyncio(loop_scope="session")
@patch("backend.api.features.store.embeddings.OpenAI")
async def test_generate_embedding_success(mock_openai_class):
"""Test successful embedding generation."""
# Mock OpenAI response
mock_client = MagicMock()
mock_response = MagicMock()
mock_response.data = [MagicMock()]
mock_response.data[0].embedding = [0.1, 0.2, 0.3] * 512 # 1536 dimensions
mock_client.embeddings.create.return_value = mock_response
mock_openai_class.return_value = mock_client
with patch("backend.api.features.store.embeddings.Settings") as mock_settings:
mock_settings.return_value.secrets.openai_internal_api_key = "test-key"
result = await embeddings.generate_embedding("test text")
assert result is not None
assert len(result) == 1536
assert result[0] == 0.1
mock_client.embeddings.create.assert_called_once_with(
model="text-embedding-3-small", input="test text"
)
@pytest.mark.asyncio(loop_scope="session")
@patch("backend.api.features.store.embeddings.OpenAI")
async def test_generate_embedding_no_api_key(mock_openai_class):
"""Test embedding generation without API key."""
with patch("backend.api.features.store.embeddings.Settings") as mock_settings:
mock_settings.return_value.secrets.openai_internal_api_key = ""
result = await embeddings.generate_embedding("test text")
assert result is None
mock_openai_class.assert_not_called()
@pytest.mark.asyncio(loop_scope="session")
@patch("backend.api.features.store.embeddings.OpenAI")
async def test_generate_embedding_api_error(mock_openai_class):
"""Test embedding generation with API error."""
mock_client = MagicMock()
mock_client.embeddings.create.side_effect = Exception("API Error")
mock_openai_class.return_value = mock_client
with patch("backend.api.features.store.embeddings.Settings") as mock_settings:
mock_settings.return_value.secrets.openai_internal_api_key = "test-key"
result = await embeddings.generate_embedding("test text")
assert result is None
@pytest.mark.asyncio(loop_scope="session")
@patch("backend.api.features.store.embeddings.OpenAI")
async def test_generate_embedding_text_truncation(mock_openai_class):
"""Test that long text is properly truncated."""
mock_client = MagicMock()
mock_response = MagicMock()
mock_response.data = [MagicMock()]
mock_response.data[0].embedding = [0.1] * 1536
mock_client.embeddings.create.return_value = mock_response
mock_openai_class.return_value = mock_client
# Create text longer than 32k chars
long_text = "a" * 35000
with patch("backend.api.features.store.embeddings.Settings") as mock_settings:
mock_settings.return_value.secrets.openai_internal_api_key = "test-key"
await embeddings.generate_embedding(long_text)
# Verify truncated text was sent to API
call_args = mock_client.embeddings.create.call_args
assert len(call_args.kwargs["input"]) == 32000
@pytest.mark.asyncio(loop_scope="session")
async def test_store_embedding_success(mocker):
"""Test successful embedding storage."""
mock_client = mocker.AsyncMock()
mock_client.execute_raw = mocker.AsyncMock()
embedding = [0.1, 0.2, 0.3]
result = await embeddings.store_embedding(
version_id="test-version-id", embedding=embedding, tx=mock_client
)
assert result is True
mock_client.execute_raw.assert_called_once()
call_args = mock_client.execute_raw.call_args[0]
assert "test-version-id" in call_args
assert "[0.1,0.2,0.3]" in call_args
assert None in call_args # userId should be None for store agents
@pytest.mark.asyncio(loop_scope="session")
async def test_store_embedding_database_error(mocker):
"""Test embedding storage with database error."""
mock_client = mocker.AsyncMock()
mock_client.execute_raw.side_effect = Exception("Database error")
embedding = [0.1, 0.2, 0.3]
result = await embeddings.store_embedding(
version_id="test-version-id", embedding=embedding, tx=mock_client
)
assert result is False
@pytest.mark.asyncio(loop_scope="session")
async def test_get_embedding_success(mocker):
"""Test successful embedding retrieval."""
mock_client = mocker.AsyncMock()
mock_result = [
{
"contentType": "STORE_AGENT",
"contentId": "test-version-id",
"embedding": "[0.1,0.2,0.3]",
"searchableText": "Test text",
"metadata": {},
"createdAt": "2024-01-01T00:00:00Z",
"updatedAt": "2024-01-01T00:00:00Z",
}
]
mock_client.query_raw.return_value = mock_result
with patch("prisma.get_client", return_value=mock_client):
result = await embeddings.get_embedding("test-version-id")
assert result is not None
assert result["storeListingVersionId"] == "test-version-id"
assert result["embedding"] == "[0.1,0.2,0.3]"
@pytest.mark.asyncio(loop_scope="session")
async def test_get_embedding_not_found(mocker):
"""Test embedding retrieval when not found."""
mock_client = mocker.AsyncMock()
mock_client.query_raw.return_value = []
with patch("prisma.get_client", return_value=mock_client):
result = await embeddings.get_embedding("test-version-id")
assert result is None
@pytest.mark.asyncio(loop_scope="session")
@patch("backend.api.features.store.embeddings.generate_embedding")
@patch("backend.api.features.store.embeddings.store_embedding")
@patch("backend.api.features.store.embeddings.get_embedding")
async def test_ensure_embedding_already_exists(mock_get, mock_store, mock_generate):
"""Test ensure_embedding when embedding already exists."""
mock_get.return_value = {"embedding": "[0.1,0.2,0.3]"}
result = await embeddings.ensure_embedding(
version_id="test-id",
name="Test",
description="Test description",
sub_heading="Test heading",
categories=["test"],
)
assert result is True
mock_generate.assert_not_called()
mock_store.assert_not_called()
@pytest.mark.asyncio(loop_scope="session")
@patch("backend.api.features.store.embeddings.generate_embedding")
@patch("backend.api.features.store.embeddings.store_content_embedding")
@patch("backend.api.features.store.embeddings.get_embedding")
async def test_ensure_embedding_create_new(mock_get, mock_store, mock_generate):
"""Test ensure_embedding creating new embedding."""
mock_get.return_value = None
mock_generate.return_value = [0.1, 0.2, 0.3]
mock_store.return_value = True
result = await embeddings.ensure_embedding(
version_id="test-id",
name="Test",
description="Test description",
sub_heading="Test heading",
categories=["test"],
)
assert result is True
mock_generate.assert_called_once_with("Test Test heading Test description test")
mock_store.assert_called_once_with(
content_type=ContentType.STORE_AGENT,
content_id="test-id",
embedding=[0.1, 0.2, 0.3],
searchable_text="Test Test heading Test description test",
metadata={"name": "Test", "subHeading": "Test heading", "categories": ["test"]},
user_id=None,
tx=None,
)
@pytest.mark.asyncio(loop_scope="session")
@patch("backend.api.features.store.embeddings.generate_embedding")
@patch("backend.api.features.store.embeddings.get_embedding")
async def test_ensure_embedding_generation_fails(mock_get, mock_generate):
"""Test ensure_embedding when generation fails."""
mock_get.return_value = None
mock_generate.return_value = None
result = await embeddings.ensure_embedding(
version_id="test-id",
name="Test",
description="Test description",
sub_heading="Test heading",
categories=["test"],
)
assert result is False
@pytest.mark.asyncio(loop_scope="session")
async def test_get_embedding_stats(mocker):
"""Test embedding statistics retrieval."""
mock_client = mocker.AsyncMock()
# Mock approved count query
mock_approved_result = [{"count": 100}]
# Mock embedded count query
mock_embedded_result = [{"count": 75}]
mock_client.query_raw.side_effect = [mock_approved_result, mock_embedded_result]
with patch("prisma.get_client", return_value=mock_client):
result = await embeddings.get_embedding_stats()
assert result["total_approved"] == 100
assert result["with_embeddings"] == 75
assert result["without_embeddings"] == 25
assert result["coverage_percent"] == 75.0
@pytest.mark.asyncio(loop_scope="session")
@patch("backend.api.features.store.embeddings.ensure_embedding")
async def test_backfill_missing_embeddings_success(mock_ensure, mocker):
"""Test backfill with successful embedding generation."""
mock_client = mocker.AsyncMock()
# Mock missing embeddings query
mock_missing = [
{
"id": "version-1",
"name": "Agent 1",
"description": "Description 1",
"subHeading": "Heading 1",
"categories": ["AI"],
},
{
"id": "version-2",
"name": "Agent 2",
"description": "Description 2",
"subHeading": "Heading 2",
"categories": ["Productivity"],
},
]
mock_client.query_raw.return_value = mock_missing
# Mock ensure_embedding to succeed for first, fail for second
mock_ensure.side_effect = [True, False]
with patch("prisma.get_client", return_value=mock_client):
result = await embeddings.backfill_missing_embeddings(batch_size=5)
assert result["processed"] == 2
assert result["success"] == 1
assert result["failed"] == 1
assert mock_ensure.call_count == 2
@pytest.mark.asyncio(loop_scope="session")
async def test_backfill_missing_embeddings_no_missing(mocker):
"""Test backfill when no embeddings are missing."""
mock_client = mocker.AsyncMock()
mock_client.query_raw.return_value = []
with patch("prisma.get_client", return_value=mock_client):
result = await embeddings.backfill_missing_embeddings(batch_size=5)
assert result["processed"] == 0
assert result["success"] == 0
assert result["failed"] == 0
assert result["message"] == "No missing embeddings"
@pytest.mark.asyncio(loop_scope="session")
async def test_embedding_to_vector_string():
"""Test embedding to PostgreSQL vector string conversion."""
embedding = [0.1, 0.2, 0.3, -0.4]
result = embeddings.embedding_to_vector_string(embedding)
assert result == "[0.1,0.2,0.3,-0.4]"
@pytest.mark.asyncio(loop_scope="session")
async def test_embed_query():
"""Test embed_query function (alias for generate_embedding)."""
with patch(
"backend.api.features.store.embeddings.generate_embedding"
) as mock_generate:
mock_generate.return_value = [0.1, 0.2, 0.3]
result = await embeddings.embed_query("test query")
assert result == [0.1, 0.2, 0.3]
mock_generate.assert_called_once_with("test query")

View File

@@ -1,377 +0,0 @@
"""
Hybrid Search for Store Agents
Combines semantic (embedding) search with lexical (tsvector) search
for improved relevance in marketplace agent discovery.
"""
import logging
from dataclasses import dataclass
from datetime import datetime
from typing import Any, Literal
from backend.api.features.store.embeddings import (
embed_query,
embedding_to_vector_string,
)
from backend.data.db import query_raw_with_schema
logger = logging.getLogger(__name__)
@dataclass
class HybridSearchWeights:
"""Weights for combining search signals."""
semantic: float = 0.35 # Embedding cosine similarity
lexical: float = 0.35 # tsvector ts_rank_cd score
category: float = 0.20 # Category match boost
recency: float = 0.10 # Newer agents ranked higher
DEFAULT_WEIGHTS = HybridSearchWeights()
# Minimum relevance score threshold - agents below this are filtered out
# With weights (0.35 semantic + 0.35 lexical + 0.20 category + 0.10 recency):
- 0.20 means roughly a 57% semantic match (0.20 / 0.35) OR a strong lexical match is required
# - Ensures only genuinely relevant results are returned
# - Recency alone (0.10 max) won't pass the threshold
DEFAULT_MIN_SCORE = 0.20
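# Worked example with the default weights above (scores are illustrative):
#   semantic=0.60, lexical=0.00, category=0.0, recency=1.0 (just updated)
#     combined = 0.35*0.60 + 0.35*0.00 + 0.20*0.0 + 0.10*1.0 = 0.31 -> kept
#   semantic=0.00, lexical=0.00, category=0.0, recency=1.0
#     combined = 0.10 -> filtered out (recency alone never passes)
#   A semantic-only match must score 0.20 / 0.35 ≈ 0.57 to clear the bar.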
@dataclass
class HybridSearchResult:
"""A single search result with score breakdown."""
slug: str
agent_name: str
agent_image: str
creator_username: str
creator_avatar: str
sub_heading: str
description: str
runs: int
rating: float
categories: list[str]
featured: bool
is_available: bool
updated_at: datetime
# Score breakdown (for debugging/tuning)
combined_score: float
semantic_score: float = 0.0
lexical_score: float = 0.0
category_score: float = 0.0
recency_score: float = 0.0
async def hybrid_search(
query: str,
featured: bool = False,
creators: list[str] | None = None,
category: str | None = None,
sorted_by: (
Literal["relevance", "rating", "runs", "name", "updated_at"] | None
) = None,
page: int = 1,
page_size: int = 20,
weights: HybridSearchWeights | None = None,
min_score: float | None = None,
) -> tuple[list[dict[str, Any]], int]:
"""
Perform hybrid search combining semantic and lexical signals.
Args:
query: Search query string
featured: Filter for featured agents only
creators: Filter by creator usernames
category: Filter by category
sorted_by: Sort order (relevance uses hybrid scoring)
page: Page number (1-indexed)
page_size: Results per page
weights: Custom weights for search signals
min_score: Minimum relevance score threshold (0-1). Results below
this score are filtered out. Defaults to DEFAULT_MIN_SCORE.
Returns:
Tuple of (results list, total count). Returns empty list if no
results meet the minimum relevance threshold.
"""
if weights is None:
weights = DEFAULT_WEIGHTS
if min_score is None:
min_score = DEFAULT_MIN_SCORE
offset = (page - 1) * page_size
# Generate query embedding
query_embedding = await embed_query(query)
# Build WHERE clause conditions
where_parts: list[str] = ["sa.is_available = true"]
params: list[Any] = []
param_index = 1
# Add search query for lexical matching
params.append(query)
query_param = f"${param_index}"
param_index += 1
# Add lowercased query for category matching
params.append(query.lower())
query_lower_param = f"${param_index}"
param_index += 1
if featured:
where_parts.append("sa.featured = true")
if creators:
where_parts.append(f"sa.creator_username = ANY(${param_index})")
params.append(creators)
param_index += 1
if category:
where_parts.append(f"${param_index} = ANY(sa.categories)")
params.append(category)
param_index += 1
where_clause = " AND ".join(where_parts)
# Determine if we can use hybrid search (have query embedding)
use_hybrid = query_embedding is not None
if use_hybrid:
# Add embedding parameter
embedding_str = embedding_to_vector_string(query_embedding)
params.append(embedding_str)
embedding_param = f"${param_index}"
param_index += 1
# Optimized hybrid search query:
# 1. Direct join to UnifiedContentEmbedding via contentId=storeListingVersionId (no redundant JOINs)
# 2. UNION ALL approach to enable index usage for both lexical and semantic branches
# 3. COUNT(*) OVER() to get total count in single query
# 4. Simplified category matching with array_to_string
sql_query = f"""
WITH candidates AS (
-- Lexical matches (uses GIN index on search column)
SELECT DISTINCT sa."storeListingVersionId"
FROM {{schema_prefix}}"StoreAgent" sa
WHERE {where_clause}
AND sa.search @@ plainto_tsquery('english', {query_param})
UNION
-- Semantic matches (uses HNSW index on embedding)
SELECT DISTINCT sa."storeListingVersionId"
FROM {{schema_prefix}}"StoreAgent" sa
INNER JOIN {{schema_prefix}}"UnifiedContentEmbedding" uce
ON sa."storeListingVersionId" = uce."contentId" AND uce."contentType" = 'STORE_AGENT'
WHERE {where_clause}
),
search_scores AS (
SELECT
sa.slug,
sa.agent_name,
sa.agent_image,
sa.creator_username,
sa.creator_avatar,
sa.sub_heading,
sa.description,
sa.runs,
sa.rating,
sa.categories,
sa.featured,
sa.is_available,
sa.updated_at,
-- Semantic score: cosine similarity (1 - distance)
COALESCE(1 - (uce.embedding <=> {embedding_param}::vector), 0) as semantic_score,
-- Lexical score: ts_rank_cd (will be normalized later)
COALESCE(ts_rank_cd(sa.search, plainto_tsquery('english', {query_param})), 0) as lexical_raw,
-- Category match: check if query appears in any category
CASE
WHEN LOWER(array_to_string(sa.categories, ' ')) LIKE '%' || {query_lower_param} || '%'
THEN 1.0
ELSE 0.0
END as category_score,
-- Recency score: exponential decay over 90 days
EXP(-EXTRACT(EPOCH FROM (NOW() - sa.updated_at)) / (90 * 24 * 3600)) as recency_score
FROM candidates c
INNER JOIN {{schema_prefix}}"StoreAgent" sa
ON c."storeListingVersionId" = sa."storeListingVersionId"
LEFT JOIN {{schema_prefix}}"UnifiedContentEmbedding" uce
ON sa."storeListingVersionId" = uce."contentId" AND uce."contentType" = 'STORE_AGENT'
),
normalized AS (
SELECT
*,
-- Normalize lexical score by max in result set
CASE
WHEN MAX(lexical_raw) OVER () > 0
THEN lexical_raw / MAX(lexical_raw) OVER ()
ELSE 0
END as lexical_score
FROM search_scores
),
scored AS (
SELECT
slug,
agent_name,
agent_image,
creator_username,
creator_avatar,
sub_heading,
description,
runs,
rating,
categories,
featured,
is_available,
updated_at,
semantic_score,
lexical_score,
category_score,
recency_score,
(
{weights.semantic} * semantic_score +
{weights.lexical} * lexical_score +
{weights.category} * category_score +
{weights.recency} * recency_score
) as combined_score
FROM normalized
),
filtered AS (
SELECT
*,
COUNT(*) OVER () as total_count
FROM scored
WHERE combined_score >= {min_score}
)
SELECT * FROM filtered
ORDER BY combined_score DESC
LIMIT ${param_index} OFFSET ${param_index + 1}
"""
# Add pagination params
params.extend([page_size, offset])
else:
# Fallback to lexical-only search (existing behavior)
logger.warning("Falling back to lexical-only search (no query embedding)")
sql_query = f"""
WITH lexical_scores AS (
SELECT
slug,
agent_name,
agent_image,
creator_username,
creator_avatar,
sub_heading,
description,
runs,
rating,
categories,
featured,
is_available,
updated_at,
0.0 as semantic_score,
ts_rank_cd(search, plainto_tsquery('english', {query_param})) as lexical_raw,
CASE
WHEN LOWER(array_to_string(categories, ' ')) LIKE '%' || {query_lower_param} || '%'
THEN 1.0
ELSE 0.0
END as category_score,
EXP(-EXTRACT(EPOCH FROM (NOW() - updated_at)) / (90 * 24 * 3600)) as recency_score
FROM {{schema_prefix}}"StoreAgent" sa
WHERE {where_clause}
AND search @@ plainto_tsquery('english', {query_param})
),
normalized AS (
SELECT
*,
CASE
WHEN MAX(lexical_raw) OVER () > 0
THEN lexical_raw / MAX(lexical_raw) OVER ()
ELSE 0
END as lexical_score
FROM lexical_scores
),
scored AS (
SELECT
slug,
agent_name,
agent_image,
creator_username,
creator_avatar,
sub_heading,
description,
runs,
rating,
categories,
featured,
is_available,
updated_at,
semantic_score,
lexical_score,
category_score,
recency_score,
(
{weights.lexical} * lexical_score +
{weights.category} * category_score +
{weights.recency} * recency_score
) as combined_score
FROM normalized
),
filtered AS (
SELECT
*,
COUNT(*) OVER () as total_count
FROM scored
WHERE combined_score >= {min_score}
)
SELECT * FROM filtered
ORDER BY combined_score DESC
LIMIT ${param_index} OFFSET ${param_index + 1}
"""
params.extend([page_size, offset])
try:
# Execute search query - includes total_count via window function
results = await query_raw_with_schema(sql_query, *params)
# Extract total count from first result (all rows have same count)
total = results[0]["total_count"] if results else 0
# Remove total_count from results before returning
for result in results:
result.pop("total_count", None)
logger.info(
f"Hybrid search for '{query}': {len(results)} results, {total} total "
f"(hybrid={use_hybrid})"
)
return results, total
except Exception as e:
logger.error(f"Hybrid search failed: {e}")
raise
async def hybrid_search_simple(
query: str,
page: int = 1,
page_size: int = 20,
) -> tuple[list[dict[str, Any]], int]:
"""
Simplified hybrid search for common use cases.
Uses default weights and no filters.
"""
return await hybrid_search(
query=query,
page=page,
page_size=page_size,
)
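A hedged usage sketch of the two entry points above; the query, the category filter, and the custom weights are illustrative values, not recommendations.
import asyncio

async def demo_search() -> None:
    # Default weights and relevance threshold:
    results, total = await hybrid_search_simple("summarize pdf", page=1, page_size=10)

    # Custom weighting that leans on lexical matching (example values only):
    lexical_heavy = HybridSearchWeights(semantic=0.2, lexical=0.5, category=0.2, recency=0.1)
    results, total = await hybrid_search(
        query="summarize pdf",
        category="Productivity",   # optional category filter
        weights=lexical_heavy,
        min_score=0.15,            # relax the default 0.20 cutoff
    )
    print(total, [r["agent_name"] for r in results])

asyncio.run(demo_search())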

View File

@@ -762,10 +762,10 @@ async def create_new_graph(
graph.reassign_ids(user_id=user_id, reassign_graph_id=True)
graph.validate_graph(for_run=False)
# The return value of the create graph & library function is intentionally not used here,
# as the graph is already valid and no sub-graphs are returned.
await graph_db.create_graph(graph, user_id=user_id)
await library_db.create_library_agent(graph, user_id=user_id)
await library_db.create_library_agent(
graph, user_id=user_id, is_ai_generated=create_graph.is_ai_generated
)
activated_graph = await on_graph_activate(graph, user_id=user_id)
if create_graph.source == "builder":
@@ -889,21 +889,17 @@ async def set_graph_active_version(
async def _update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_model.LibraryAgent:
# Keep the library agent up to date with the new active version
library = await library_db.update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
# If the graph has a HITL node, initialize the setting if it's not already set.
if (
agent_graph.has_human_in_the_loop
and library.settings.human_in_the_loop_safe_mode is None
):
await library_db.update_library_agent_settings(
updated_settings = GraphSettings.from_graph(
agent_graph, is_ai_generated=library.settings.is_ai_generated_graph
)
if updated_settings != library.settings:
library = await library_db.update_library_agent(
library_agent_id=library.id,
user_id=user_id,
agent_id=library.id,
settings=library.settings.model_copy(
update={"human_in_the_loop_safe_mode": True}
),
settings=updated_settings,
)
return library
@@ -920,21 +916,18 @@ async def update_graph_settings(
user_id: Annotated[str, Security(get_user_id)],
) -> GraphSettings:
"""Update graph settings for the user's library agent."""
# Get the library agent for this graph
library_agent = await library_db.get_library_agent_by_graph_id(
graph_id=graph_id, user_id=user_id
)
if not library_agent:
raise HTTPException(404, f"Graph #{graph_id} not found in user's library")
# Update the library agent settings
updated_agent = await library_db.update_library_agent_settings(
updated_agent = await library_db.update_library_agent(
library_agent_id=library_agent.id,
user_id=user_id,
agent_id=library_agent.id,
settings=settings,
)
# Return the updated settings
return GraphSettings.model_validate(updated_agent.settings)

View File

@@ -43,6 +43,7 @@ GraphExecutionSource = Literal["builder", "library", "onboarding"]
class CreateGraph(pydantic.BaseModel):
graph: Graph
source: GraphCreationSource | None = None
is_ai_generated: bool = False
class CreateAPIKeyRequest(pydantic.BaseModel):

View File

@@ -0,0 +1,184 @@
"""
Shared helpers for Human-In-The-Loop (HITL) review functionality.
Used by both the dedicated HumanInTheLoopBlock and blocks that require human review.
"""
import logging
from typing import Any, Optional
from prisma.enums import ReviewStatus
from pydantic import BaseModel
from backend.data.execution import ExecutionContext, ExecutionStatus
from backend.data.human_review import ReviewResult
from backend.executor.manager import async_update_node_execution_status
from backend.util.clients import get_database_manager_async_client
logger = logging.getLogger(__name__)
class ReviewDecision(BaseModel):
"""Result of a review decision."""
should_proceed: bool
message: str
review_result: ReviewResult
class HITLReviewHelper:
"""Helper class for Human-In-The-Loop review operations."""
@staticmethod
async def get_or_create_human_review(**kwargs) -> Optional[ReviewResult]:
"""Create or retrieve a human review from the database."""
return await get_database_manager_async_client().get_or_create_human_review(
**kwargs
)
@staticmethod
async def update_node_execution_status(**kwargs) -> None:
"""Update the execution status of a node."""
await async_update_node_execution_status(
db_client=get_database_manager_async_client(), **kwargs
)
@staticmethod
async def update_review_processed_status(
node_exec_id: str, processed: bool
) -> None:
"""Update the processed status of a review."""
return await get_database_manager_async_client().update_review_processed_status(
node_exec_id, processed
)
@staticmethod
async def _handle_review_request(
input_data: Any,
user_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
graph_version: int,
execution_context: ExecutionContext,
block_name: str = "Block",
editable: bool = False,
) -> Optional[ReviewResult]:
"""
Handle a review request for a block that requires human review.
Args:
input_data: The input data to be reviewed
user_id: ID of the user requesting the review
node_exec_id: ID of the node execution
graph_exec_id: ID of the graph execution
graph_id: ID of the graph
graph_version: Version of the graph
execution_context: Current execution context
block_name: Name of the block requesting review
editable: Whether the reviewer can edit the data
Returns:
ReviewResult if review is complete, None if waiting for human input
Raises:
Exception: If review creation or status update fails
"""
# Skip review if safe mode is disabled - return auto-approved result
if not execution_context.safe_mode:
logger.info(
f"Block {block_name} skipping review for node {node_exec_id} - safe mode disabled"
)
return ReviewResult(
data=input_data,
status=ReviewStatus.APPROVED,
message="Auto-approved (safe mode disabled)",
processed=True,
node_exec_id=node_exec_id,
)
result = await HITLReviewHelper.get_or_create_human_review(
user_id=user_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
input_data=input_data,
message=f"Review required for {block_name} execution",
editable=editable,
)
if result is None:
logger.info(
f"Block {block_name} pausing execution for node {node_exec_id} - awaiting human review"
)
await HITLReviewHelper.update_node_execution_status(
exec_id=node_exec_id,
status=ExecutionStatus.REVIEW,
)
return None # Signal that execution should pause
# Mark review as processed if not already done
if not result.processed:
await HITLReviewHelper.update_review_processed_status(
node_exec_id=node_exec_id, processed=True
)
return result
@staticmethod
async def handle_review_decision(
input_data: Any,
user_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
graph_version: int,
execution_context: ExecutionContext,
block_name: str = "Block",
editable: bool = False,
) -> Optional[ReviewDecision]:
"""
Handle a review request and return the decision in a single call.
Args:
input_data: The input data to be reviewed
user_id: ID of the user requesting the review
node_exec_id: ID of the node execution
graph_exec_id: ID of the graph execution
graph_id: ID of the graph
graph_version: Version of the graph
execution_context: Current execution context
block_name: Name of the block requesting review
editable: Whether the reviewer can edit the data
Returns:
ReviewDecision if review is complete (approved/rejected),
None if execution should pause (awaiting review)
"""
review_result = await HITLReviewHelper._handle_review_request(
input_data=input_data,
user_id=user_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
execution_context=execution_context,
block_name=block_name,
editable=editable,
)
if review_result is None:
# Still awaiting review - return None to pause execution
return None
# Review is complete, determine outcome
should_proceed = review_result.status == ReviewStatus.APPROVED
message = review_result.message or (
"Execution approved by reviewer"
if should_proceed
else "Execution rejected by reviewer"
)
return ReviewDecision(
should_proceed=should_proceed, message=message, review_result=review_result
)
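A minimal sketch of how a block's run method is expected to call the helper above; the payload and block name are placeholders, and the surrounding identifiers (user_id, node_exec_id, and so on) are assumed to be the keyword arguments every block run receives.
# Inside an async run(...) of some block (illustrative):
decision = await HITLReviewHelper.handle_review_decision(
    input_data={"url": "https://example.com"},  # payload the human reviews
    user_id=user_id,
    node_exec_id=node_exec_id,
    graph_exec_id=graph_exec_id,
    graph_id=graph_id,
    graph_version=graph_version,
    execution_context=execution_context,
    block_name="MyBlock",  # placeholder
    editable=True,         # reviewer may edit the payload
)
if decision is None:
    return  # node is parked in REVIEW status; resumes after a human responds
if decision.should_proceed:
    approved = decision.review_result.data  # possibly edited by the reviewer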

View File

@@ -3,6 +3,7 @@ from typing import Any
from prisma.enums import ReviewStatus
from backend.blocks.helpers.review import HITLReviewHelper
from backend.data.block import (
Block,
BlockCategory,
@@ -11,11 +12,9 @@ from backend.data.block import (
BlockSchemaOutput,
BlockType,
)
from backend.data.execution import ExecutionContext, ExecutionStatus
from backend.data.execution import ExecutionContext
from backend.data.human_review import ReviewResult
from backend.data.model import SchemaField
from backend.executor.manager import async_update_node_execution_status
from backend.util.clients import get_database_manager_async_client
logger = logging.getLogger(__name__)
@@ -72,32 +71,26 @@ class HumanInTheLoopBlock(Block):
("approved_data", {"name": "John Doe", "age": 30}),
],
test_mock={
"get_or_create_human_review": lambda *_args, **_kwargs: ReviewResult(
data={"name": "John Doe", "age": 30},
status=ReviewStatus.APPROVED,
message="",
processed=False,
node_exec_id="test-node-exec-id",
),
"update_node_execution_status": lambda *_args, **_kwargs: None,
"update_review_processed_status": lambda *_args, **_kwargs: None,
"handle_review_decision": lambda **kwargs: type(
"ReviewDecision",
(),
{
"should_proceed": True,
"message": "Test approval message",
"review_result": ReviewResult(
data={"name": "John Doe", "age": 30},
status=ReviewStatus.APPROVED,
message="",
processed=False,
node_exec_id="test-node-exec-id",
),
},
)(),
},
)
async def get_or_create_human_review(self, **kwargs):
return await get_database_manager_async_client().get_or_create_human_review(
**kwargs
)
async def update_node_execution_status(self, **kwargs):
return await async_update_node_execution_status(
db_client=get_database_manager_async_client(), **kwargs
)
async def update_review_processed_status(self, node_exec_id: str, processed: bool):
return await get_database_manager_async_client().update_review_processed_status(
node_exec_id, processed
)
async def handle_review_decision(self, **kwargs):
return await HITLReviewHelper.handle_review_decision(**kwargs)
async def run(
self,
@@ -109,7 +102,7 @@ class HumanInTheLoopBlock(Block):
graph_id: str,
graph_version: int,
execution_context: ExecutionContext,
**kwargs,
**_kwargs,
) -> BlockOutput:
if not execution_context.safe_mode:
logger.info(
@@ -119,48 +112,28 @@ class HumanInTheLoopBlock(Block):
yield "review_message", "Auto-approved (safe mode disabled)"
return
try:
result = await self.get_or_create_human_review(
user_id=user_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
input_data=input_data.data,
message=input_data.name,
editable=input_data.editable,
)
except Exception as e:
logger.error(f"Error in HITL block for node {node_exec_id}: {str(e)}")
raise
decision = await self.handle_review_decision(
input_data=input_data.data,
user_id=user_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
execution_context=execution_context,
block_name=self.name,
editable=input_data.editable,
)
if result is None:
logger.info(
f"HITL block pausing execution for node {node_exec_id} - awaiting human review"
)
try:
await self.update_node_execution_status(
exec_id=node_exec_id,
status=ExecutionStatus.REVIEW,
)
return
except Exception as e:
logger.error(
f"Failed to update node status for HITL block {node_exec_id}: {str(e)}"
)
raise
if decision is None:
return
if not result.processed:
await self.update_review_processed_status(
node_exec_id=node_exec_id, processed=True
)
status = decision.review_result.status
if status == ReviewStatus.APPROVED:
yield "approved_data", decision.review_result.data
elif status == ReviewStatus.REJECTED:
yield "rejected_data", decision.review_result.data
else:
raise RuntimeError(f"Unexpected review status: {status}")
if result.status == ReviewStatus.APPROVED:
yield "approved_data", result.data
if result.message:
yield "review_message", result.message
elif result.status == ReviewStatus.REJECTED:
yield "rejected_data", result.data
if result.message:
yield "review_message", result.message
if decision.message:
yield "review_message", decision.message

File diff suppressed because it is too large

View File

@@ -391,8 +391,12 @@ class SmartDecisionMakerBlock(Block):
"""
block = sink_node.block
# Use custom name from node metadata if set, otherwise fall back to block.name
custom_name = sink_node.metadata.get("customized_name")
tool_name = custom_name if custom_name else block.name
tool_function: dict[str, Any] = {
"name": SmartDecisionMakerBlock.cleanup(block.name),
"name": SmartDecisionMakerBlock.cleanup(tool_name),
"description": block.description,
}
sink_block_input_schema = block.input_schema
@@ -489,14 +493,24 @@ class SmartDecisionMakerBlock(Block):
f"Sink graph metadata not found: {graph_id} {graph_version}"
)
# Use custom name from node metadata if set, otherwise fall back to graph name
custom_name = sink_node.metadata.get("customized_name")
tool_name = custom_name if custom_name else sink_graph_meta.name
tool_function: dict[str, Any] = {
"name": SmartDecisionMakerBlock.cleanup(sink_graph_meta.name),
"name": SmartDecisionMakerBlock.cleanup(tool_name),
"description": sink_graph_meta.description,
}
properties = {}
field_mapping = {}
for link in links:
field_name = link.sink_name
clean_field_name = SmartDecisionMakerBlock.cleanup(field_name)
field_mapping[clean_field_name] = field_name
sink_block_input_schema = sink_node.input_default["input_schema"]
sink_block_properties = sink_block_input_schema.get("properties", {}).get(
link.sink_name, {}
@@ -506,7 +520,7 @@ class SmartDecisionMakerBlock(Block):
if "description" in sink_block_properties
else f"The {link.sink_name} of the tool"
)
properties[link.sink_name] = {
properties[clean_field_name] = {
"type": "string",
"description": description,
"default": json.dumps(sink_block_properties.get("default", None)),
@@ -519,7 +533,7 @@ class SmartDecisionMakerBlock(Block):
"strict": True,
}
# Store node info for later use in output processing
tool_function["_field_mapping"] = field_mapping
tool_function["_sink_node_id"] = sink_node.id
return {"type": "function", "function": tool_function}
@@ -1147,8 +1161,9 @@ class SmartDecisionMakerBlock(Block):
original_field_name = field_mapping.get(clean_arg_name, clean_arg_name)
arg_value = tool_args.get(clean_arg_name)
sanitized_arg_name = self.cleanup(original_field_name)
emit_key = f"tools_^_{sink_node_id}_~_{sanitized_arg_name}"
# Use original_field_name directly (not sanitized) to match link sink_name
# The field_mapping already translates from LLM's cleaned names to original names
emit_key = f"tools_^_{sink_node_id}_~_{original_field_name}"
logger.debug(
"[SmartDecisionMakerBlock|geid:%s|neid:%s] emit %s",

View File

@@ -1057,3 +1057,153 @@ async def test_smart_decision_maker_traditional_mode_default():
) # Should yield individual tool parameters
assert "tools_^_test-sink-node-id_~_max_keyword_difficulty" in outputs
assert "conversations" in outputs
@pytest.mark.asyncio
async def test_smart_decision_maker_uses_customized_name_for_blocks():
"""Test that SmartDecisionMakerBlock uses customized_name from node metadata for tool names."""
from unittest.mock import MagicMock
from backend.blocks.basic import StoreValueBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.data.graph import Link, Node
# Create a mock node with customized_name in metadata
mock_node = MagicMock(spec=Node)
mock_node.id = "test-node-id"
mock_node.block_id = StoreValueBlock().id
mock_node.metadata = {"customized_name": "My Custom Tool Name"}
mock_node.block = StoreValueBlock()
# Create a mock link
mock_link = MagicMock(spec=Link)
mock_link.sink_name = "input"
# Call the function directly
result = await SmartDecisionMakerBlock._create_block_function_signature(
mock_node, [mock_link]
)
# Verify the tool name uses the customized name (cleaned up)
assert result["type"] == "function"
assert result["function"]["name"] == "my_custom_tool_name" # Cleaned version
assert result["function"]["_sink_node_id"] == "test-node-id"
@pytest.mark.asyncio
async def test_smart_decision_maker_falls_back_to_block_name():
"""Test that SmartDecisionMakerBlock falls back to block.name when no customized_name."""
from unittest.mock import MagicMock
from backend.blocks.basic import StoreValueBlock
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.data.graph import Link, Node
# Create a mock node without customized_name
mock_node = MagicMock(spec=Node)
mock_node.id = "test-node-id"
mock_node.block_id = StoreValueBlock().id
mock_node.metadata = {} # No customized_name
mock_node.block = StoreValueBlock()
# Create a mock link
mock_link = MagicMock(spec=Link)
mock_link.sink_name = "input"
# Call the function directly
result = await SmartDecisionMakerBlock._create_block_function_signature(
mock_node, [mock_link]
)
# Verify the tool name uses the block's default name
assert result["type"] == "function"
assert result["function"]["name"] == "storevalueblock" # Default block name cleaned
assert result["function"]["_sink_node_id"] == "test-node-id"
@pytest.mark.asyncio
async def test_smart_decision_maker_uses_customized_name_for_agents():
"""Test that SmartDecisionMakerBlock uses customized_name from metadata for agent nodes."""
from unittest.mock import AsyncMock, MagicMock, patch
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.data.graph import Link, Node
# Create a mock node with customized_name in metadata
mock_node = MagicMock(spec=Node)
mock_node.id = "test-agent-node-id"
mock_node.metadata = {"customized_name": "My Custom Agent"}
mock_node.input_default = {
"graph_id": "test-graph-id",
"graph_version": 1,
"input_schema": {"properties": {"test_input": {"description": "Test input"}}},
}
# Create a mock link
mock_link = MagicMock(spec=Link)
mock_link.sink_name = "test_input"
# Mock the database client
mock_graph_meta = MagicMock()
mock_graph_meta.name = "Original Agent Name"
mock_graph_meta.description = "Agent description"
mock_db_client = AsyncMock()
mock_db_client.get_graph_metadata.return_value = mock_graph_meta
with patch(
"backend.blocks.smart_decision_maker.get_database_manager_async_client",
return_value=mock_db_client,
):
result = await SmartDecisionMakerBlock._create_agent_function_signature(
mock_node, [mock_link]
)
# Verify the tool name uses the customized name (cleaned up)
assert result["type"] == "function"
assert result["function"]["name"] == "my_custom_agent" # Cleaned version
assert result["function"]["_sink_node_id"] == "test-agent-node-id"
@pytest.mark.asyncio
async def test_smart_decision_maker_agent_falls_back_to_graph_name():
"""Test that agent node falls back to graph name when no customized_name."""
from unittest.mock import AsyncMock, MagicMock, patch
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
from backend.data.graph import Link, Node
# Create a mock node without customized_name
mock_node = MagicMock(spec=Node)
mock_node.id = "test-agent-node-id"
mock_node.metadata = {} # No customized_name
mock_node.input_default = {
"graph_id": "test-graph-id",
"graph_version": 1,
"input_schema": {"properties": {"test_input": {"description": "Test input"}}},
}
# Create a mock link
mock_link = MagicMock(spec=Link)
mock_link.sink_name = "test_input"
# Mock the database client
mock_graph_meta = MagicMock()
mock_graph_meta.name = "Original Agent Name"
mock_graph_meta.description = "Agent description"
mock_db_client = AsyncMock()
mock_db_client.get_graph_metadata.return_value = mock_graph_meta
with patch(
"backend.blocks.smart_decision_maker.get_database_manager_async_client",
return_value=mock_db_client,
):
result = await SmartDecisionMakerBlock._create_agent_function_signature(
mock_node, [mock_link]
)
# Verify the tool name uses the graph's default name
assert result["type"] == "function"
assert result["function"]["name"] == "original_agent_name" # Graph name cleaned
assert result["function"]["_sink_node_id"] == "test-agent-node-id"

View File

@@ -15,6 +15,7 @@ async def test_smart_decision_maker_handles_dynamic_dict_fields():
mock_node.block = CreateDictionaryBlock()
mock_node.block_id = CreateDictionaryBlock().id
mock_node.input_default = {}
mock_node.metadata = {}
# Create mock links with dynamic dictionary fields
mock_links = [
@@ -77,6 +78,7 @@ async def test_smart_decision_maker_handles_dynamic_list_fields():
mock_node.block = AddToListBlock()
mock_node.block_id = AddToListBlock().id
mock_node.input_default = {}
mock_node.metadata = {}
# Create mock links with dynamic list fields
mock_links = [

View File

@@ -44,6 +44,7 @@ async def test_create_block_function_signature_with_dict_fields():
mock_node.block = CreateDictionaryBlock()
mock_node.block_id = CreateDictionaryBlock().id
mock_node.input_default = {}
mock_node.metadata = {}
# Create mock links with dynamic dictionary fields (source sanitized, sink original)
mock_links = [
@@ -106,6 +107,7 @@ async def test_create_block_function_signature_with_list_fields():
mock_node.block = AddToListBlock()
mock_node.block_id = AddToListBlock().id
mock_node.input_default = {}
mock_node.metadata = {}
# Create mock links with dynamic list fields
mock_links = [
@@ -159,6 +161,7 @@ async def test_create_block_function_signature_with_object_fields():
mock_node.block = MatchTextPatternBlock()
mock_node.block_id = MatchTextPatternBlock().id
mock_node.input_default = {}
mock_node.metadata = {}
# Create mock links with dynamic object fields
mock_links = [
@@ -208,11 +211,13 @@ async def test_create_tool_node_signatures():
mock_dict_node.block = CreateDictionaryBlock()
mock_dict_node.block_id = CreateDictionaryBlock().id
mock_dict_node.input_default = {}
mock_dict_node.metadata = {}
mock_list_node = Mock()
mock_list_node.block = AddToListBlock()
mock_list_node.block_id = AddToListBlock().id
mock_list_node.input_default = {}
mock_list_node.metadata = {}
# Mock links with dynamic fields
dict_link1 = Mock(
@@ -423,6 +428,7 @@ async def test_mixed_regular_and_dynamic_fields():
mock_node.block.name = "TestBlock"
mock_node.block.description = "A test block"
mock_node.block.input_schema = Mock()
mock_node.metadata = {}
# Mock the get_field_schema to return a proper schema for regular fields
def get_field_schema(field_name):

View File

@@ -1,3 +1,3 @@
from .blog import WordPressCreatePostBlock
from .blog import WordPressCreatePostBlock, WordPressGetAllPostsBlock
__all__ = ["WordPressCreatePostBlock"]
__all__ = ["WordPressCreatePostBlock", "WordPressGetAllPostsBlock"]

View File

@@ -161,7 +161,7 @@ async def oauth_exchange_code_for_tokens(
grant_type="authorization_code",
).model_dump(exclude_none=True)
response = await Requests().post(
response = await Requests(raise_for_status=False).post(
f"{WORDPRESS_BASE_URL}oauth2/token",
headers=headers,
data=data,
@@ -205,7 +205,7 @@ async def oauth_refresh_tokens(
grant_type="refresh_token",
).model_dump(exclude_none=True)
response = await Requests().post(
response = await Requests(raise_for_status=False).post(
f"{WORDPRESS_BASE_URL}oauth2/token",
headers=headers,
data=data,
@@ -252,7 +252,7 @@ async def validate_token(
"token": token,
}
response = await Requests().get(
response = await Requests(raise_for_status=False).get(
f"{WORDPRESS_BASE_URL}oauth2/token-info",
params=params,
)
@@ -296,7 +296,7 @@ async def make_api_request(
url = f"{WORDPRESS_BASE_URL.rstrip('/')}{endpoint}"
request_method = getattr(Requests(), method.lower())
request_method = getattr(Requests(raise_for_status=False), method.lower())
response = await request_method(
url,
headers=headers,
@@ -476,6 +476,7 @@ async def create_post(
data["tags"] = ",".join(str(t) for t in data["tags"])
# Make the API request
site = normalize_site(site)
endpoint = f"/rest/v1.1/sites/{site}/posts/new"
headers = {
@@ -483,7 +484,7 @@ async def create_post(
"Content-Type": "application/x-www-form-urlencoded",
}
response = await Requests().post(
response = await Requests(raise_for_status=False).post(
f"{WORDPRESS_BASE_URL.rstrip('/')}{endpoint}",
headers=headers,
data=data,
@@ -499,3 +500,132 @@ async def create_post(
)
error_message = error_data.get("message", response.text)
raise ValueError(f"Failed to create post: {response.status} - {error_message}")
class Post(BaseModel):
"""Response model for individual posts in a posts list response.
This is a simplified version compared to PostResponse, as the list endpoint
returns less detailed information than the create/get single post endpoints.
"""
ID: int
site_ID: int
author: PostAuthor
date: datetime
modified: datetime
title: str
URL: str
short_URL: str
content: str | None = None
excerpt: str | None = None
slug: str
guid: str
status: str
sticky: bool
password: str | None = ""
parent: Union[Dict[str, Any], bool, None] = None
type: str
discussion: Dict[str, Union[str, bool, int]] | None = None
likes_enabled: bool | None = None
sharing_enabled: bool | None = None
like_count: int | None = None
i_like: bool | None = None
is_reblogged: bool | None = None
is_following: bool | None = None
global_ID: str | None = None
featured_image: str | None = None
post_thumbnail: Dict[str, Any] | None = None
format: str | None = None
geo: Union[Dict[str, Any], bool, None] = None
menu_order: int | None = None
page_template: str | None = None
publicize_URLs: List[str] | None = None
terms: Dict[str, Dict[str, Any]] | None = None
tags: Dict[str, Dict[str, Any]] | None = None
categories: Dict[str, Dict[str, Any]] | None = None
attachments: Dict[str, Dict[str, Any]] | None = None
attachment_count: int | None = None
metadata: List[Dict[str, Any]] | None = None
meta: Dict[str, Any] | None = None
capabilities: Dict[str, bool] | None = None
revisions: List[int] | None = None
other_URLs: Dict[str, Any] | None = None
class PostsResponse(BaseModel):
"""Response model for WordPress posts list."""
found: int
posts: List[Post]
meta: Dict[str, Any]
def normalize_site(site: str) -> str:
"""
Normalize a site identifier by stripping protocol and trailing slashes.
Args:
site: Site URL, domain, or ID (e.g., "https://myblog.wordpress.com/", "myblog.wordpress.com", "123456789")
Returns:
Normalized site identifier (domain or ID only)
"""
site = site.strip()
if site.startswith("https://"):
site = site[8:]
elif site.startswith("http://"):
site = site[7:]
return site.rstrip("/")
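# Behavior of the function above, for reference (sites are illustrative):
#   normalize_site("https://myblog.wordpress.com/") -> "myblog.wordpress.com"
#   normalize_site("http://myblog.wordpress.com")   -> "myblog.wordpress.com"
#   normalize_site("123456789")                     -> "123456789"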
async def get_posts(
credentials: Credentials,
site: str,
status: PostStatus | None = None,
number: int = 100,
offset: int = 0,
) -> PostsResponse:
"""
Get posts from a WordPress site.
Args:
credentials: OAuth credentials
site: Site ID or domain (e.g., "myblog.wordpress.com" or "123456789")
status: Filter by post status using PostStatus enum, or None for all
number: Number of posts to retrieve (max 100)
offset: Number of posts to skip (for pagination)
Returns:
PostsResponse with the list of posts
"""
site = normalize_site(site)
endpoint = f"/rest/v1.1/sites/{site}/posts"
headers = {
"Authorization": credentials.auth_header(),
}
params: Dict[str, Any] = {
"number": max(1, min(number, 100)), # 1100 posts per request
"offset": offset,
}
if status:
params["status"] = status.value
response = await Requests(raise_for_status=False).get(
f"{WORDPRESS_BASE_URL.rstrip('/')}{endpoint}",
headers=headers,
params=params,
)
if response.ok:
return PostsResponse.model_validate(response.json())
error_data = (
response.json()
if response.headers.get("content-type", "").startswith("application/json")
else {}
)
error_message = error_data.get("message", response.text)
raise ValueError(f"Failed to get posts: {response.status} - {error_message}")

View File

@@ -9,7 +9,15 @@ from backend.sdk import (
SchemaField,
)
from ._api import CreatePostRequest, PostResponse, PostStatus, create_post
from ._api import (
CreatePostRequest,
Post,
PostResponse,
PostsResponse,
PostStatus,
create_post,
get_posts,
)
from ._config import wordpress
@@ -49,8 +57,15 @@ class WordPressCreatePostBlock(Block):
media_urls: list[str] = SchemaField(
description="URLs of images to sideload and attach to the post", default=[]
)
publish_as_draft: bool = SchemaField(
description="If True, publishes the post as a draft. If False, publishes it publicly.",
default=False,
)
class Output(BlockSchemaOutput):
site: str = SchemaField(
description="The site ID or domain (pass-through for chaining with other blocks)"
)
post_id: int = SchemaField(description="The ID of the created post")
post_url: str = SchemaField(description="The full URL of the created post")
short_url: str = SchemaField(description="The shortened wp.me URL")
@@ -78,7 +93,9 @@ class WordPressCreatePostBlock(Block):
tags=input_data.tags,
featured_image=input_data.featured_image,
media_urls=input_data.media_urls,
status=PostStatus.PUBLISH,
status=(
PostStatus.DRAFT if input_data.publish_as_draft else PostStatus.PUBLISH
),
)
post_response: PostResponse = await create_post(
@@ -87,7 +104,69 @@ class WordPressCreatePostBlock(Block):
post_data=post_request,
)
yield "site", input_data.site
yield "post_id", post_response.ID
yield "post_url", post_response.URL
yield "short_url", post_response.short_URL
yield "post_data", post_response.model_dump()
class WordPressGetAllPostsBlock(Block):
"""
Fetches all posts from a WordPress.com site or Jetpack-enabled site.
Supports filtering by status and pagination.
"""
class Input(BlockSchemaInput):
credentials: CredentialsMetaInput = wordpress.credentials_field()
site: str = SchemaField(
description="Site ID or domain (e.g., 'myblog.wordpress.com' or '123456789')"
)
status: PostStatus | None = SchemaField(
description="Filter by post status, or None for all",
default=None,
)
number: int = SchemaField(
description="Number of posts to retrieve (max 100 per request)", default=20
)
offset: int = SchemaField(
description="Number of posts to skip (for pagination)", default=0
)
class Output(BlockSchemaOutput):
site: str = SchemaField(
description="The site ID or domain (pass-through for chaining with other blocks)"
)
found: int = SchemaField(description="Total number of posts found")
posts: list[Post] = SchemaField(
description="List of post objects with their details"
)
post: Post = SchemaField(
description="Individual post object (yielded for each post)"
)
def __init__(self):
super().__init__(
id="97728fa7-7f6f-4789-ba0c-f2c114119536",
description="Fetch all posts from WordPress.com or Jetpack sites",
categories={BlockCategory.SOCIAL},
input_schema=self.Input,
output_schema=self.Output,
)
async def run(
self, input_data: Input, *, credentials: Credentials, **kwargs
) -> BlockOutput:
posts_response: PostsResponse = await get_posts(
credentials=credentials,
site=input_data.site,
status=input_data.status,
number=input_data.number,
offset=input_data.offset,
)
yield "site", input_data.site
yield "found", posts_response.found
yield "posts", posts_response.posts
for post in posts_response.posts:
yield "post", post

View File

@@ -50,6 +50,8 @@ from .model import (
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from backend.data.execution import ExecutionContext
from .graph import Link
app_config = Config()
@@ -472,6 +474,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
self.block_type = block_type
self.webhook_config = webhook_config
self.execution_stats: NodeExecutionStats = NodeExecutionStats()
self.requires_human_review: bool = False
if self.webhook_config:
if isinstance(self.webhook_config, BlockWebhookConfig):
@@ -614,7 +617,80 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
block_id=self.id,
) from ex
async def is_block_exec_need_review(
self,
input_data: BlockInput,
*,
user_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
graph_version: int,
execution_context: "ExecutionContext",
**kwargs,
) -> tuple[bool, BlockInput]:
"""
Check if this block execution needs human review and handle the review process.
Returns:
Tuple of (should_pause, input_data_to_use)
- should_pause: True if execution should be paused for review
- input_data_to_use: The input data to use (may be modified by reviewer)
"""
if not (
self.requires_human_review
and execution_context.safe_mode
and execution_context.is_ai_generated_graph
):
return False, input_data
from backend.blocks.helpers.review import HITLReviewHelper
# Handle the review request and get decision
decision = await HITLReviewHelper.handle_review_decision(
input_data=input_data,
user_id=user_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
execution_context=execution_context,
block_name=self.name,
editable=True,
)
if decision is None:
# We're awaiting review - pause execution
return True, input_data
if not decision.should_proceed:
# Review was rejected, raise an error to stop execution
raise BlockExecutionError(
message=f"Block execution rejected by reviewer: {decision.message}",
block_name=self.name,
block_id=self.id,
)
# Review was approved - use the potentially modified data
# ReviewResult.data must be a dict for block inputs
reviewed_data = decision.review_result.data
if not isinstance(reviewed_data, dict):
raise BlockExecutionError(
message=f"Review data must be a dict for block input, got {type(reviewed_data).__name__}",
block_name=self.name,
block_id=self.id,
)
return False, reviewed_data
async def _execute(self, input_data: BlockInput, **kwargs) -> BlockOutput:
# Check for review requirement and get potentially modified input data
should_pause, input_data = await self.is_block_exec_need_review(
input_data, **kwargs
)
if should_pause:
return
# Validate the input data (original or reviewer-modified) once
if error := self.input_schema.validate_data(input_data):
raise BlockInputError(
message=f"Unable to execute block with invalid input data: {error}",
@@ -622,6 +698,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
block_id=self.id,
)
# Use the validated input data
async for output_name, output_data in self.run(
self.input_schema(**{k: v for k, v in input_data.items() if v is not None}),
**kwargs,

View File
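To make the new gating explicit: a block with `requires_human_review` set pauses only when the execution context is in safe mode and the graph is flagged as AI-generated. A simplified, illustrative stand-in for that decision:

```python
# Simplified stand-in for the condition in is_block_exec_need_review().
def needs_review(requires_human_review: bool,
                 safe_mode: bool,
                 is_ai_generated_graph: bool) -> bool:
    return requires_human_review and safe_mode and is_ai_generated_graph

assert needs_review(True, True, True) is True    # AI graph in safe mode: pause
assert needs_review(True, True, False) is False  # human-built graph: run
assert needs_review(True, False, True) is False  # safe mode off: run
```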

@@ -82,6 +82,7 @@ class ExecutionContext(BaseModel):
"""
safe_mode: bool = True
is_ai_generated_graph: bool = False
user_timezone: str = "UTC"
root_execution_id: Optional[str] = None
parent_execution_id: Optional[str] = None

View File

@@ -63,6 +63,14 @@ logger = logging.getLogger(__name__)
class GraphSettings(BaseModel):
human_in_the_loop_safe_mode: bool | None = None
is_ai_generated_graph: bool = False
@classmethod
def from_graph(cls, graph: "GraphModel", is_ai_generated: bool) -> "GraphSettings":
return cls(
human_in_the_loop_safe_mode=(True if graph.has_human_in_the_loop else None),
is_ai_generated_graph=is_ai_generated,
)
class Link(BaseDbModel):
@@ -244,7 +252,10 @@ class BaseGraph(BaseDbModel):
return any(
node.block_id
for node in self.nodes
if node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
if (
node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
or node.block.requires_human_review
)
)
@property

View File
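A quick, hedged illustration of the new initializer (`graph` stands in for any `GraphModel`; note that callers must now pass `is_ai_generated` explicitly, since the parameter has no default):

```python
# Illustrative only: initializing settings for an AI-generated graph.
settings = GraphSettings.from_graph(graph, is_ai_generated=True)

# Safe mode is pre-enabled only when the graph contains a HITL block or a
# block that requires human review; otherwise it is left unset (None).
assert settings.is_ai_generated_graph is True
assert settings.human_in_the_loop_safe_mode in (True, None)
```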

@@ -877,6 +877,7 @@ async def add_graph_execution(
if settings.human_in_the_loop_safe_mode is not None
else True
),
is_ai_generated_graph=settings.is_ai_generated_graph,
user_timezone=(
user.timezone if user.timezone != USER_TIMEZONE_NOT_SET else "UTC"
),

View File

@@ -8,6 +8,7 @@ from .discord import DiscordOAuthHandler
from .github import GitHubOAuthHandler
from .google import GoogleOAuthHandler
from .notion import NotionOAuthHandler
from .reddit import RedditOAuthHandler
from .twitter import TwitterOAuthHandler
if TYPE_CHECKING:
@@ -20,6 +21,7 @@ _ORIGINAL_HANDLERS = [
GitHubOAuthHandler,
GoogleOAuthHandler,
NotionOAuthHandler,
RedditOAuthHandler,
TwitterOAuthHandler,
TodoistOAuthHandler,
]

View File

@@ -0,0 +1,208 @@
import time
import urllib.parse
from typing import ClassVar, Optional
from pydantic import SecretStr
from backend.data.model import OAuth2Credentials
from backend.integrations.oauth.base import BaseOAuthHandler
from backend.integrations.providers import ProviderName
from backend.util.request import Requests
from backend.util.settings import Settings
settings = Settings()
class RedditOAuthHandler(BaseOAuthHandler):
"""
Reddit OAuth 2.0 handler.
Based on the documentation at:
- https://github.com/reddit-archive/reddit/wiki/OAuth2
Notes:
- Reddit requires `duration=permanent` to get refresh tokens
- Access tokens expire after 1 hour (3600 seconds)
- Reddit requires HTTP Basic Auth for token requests
- Reddit requires a unique User-Agent header
"""
PROVIDER_NAME = ProviderName.REDDIT
DEFAULT_SCOPES: ClassVar[list[str]] = [
"identity", # Get username, verify auth
"read", # Access posts and comments
"submit", # Submit new posts and comments
"edit", # Edit own posts and comments
"history", # Access user's post history
"privatemessages", # Access inbox and send private messages
"flair", # Access and set flair on posts/subreddits
]
AUTHORIZE_URL = "https://www.reddit.com/api/v1/authorize"
TOKEN_URL = "https://www.reddit.com/api/v1/access_token"
USERNAME_URL = "https://oauth.reddit.com/api/v1/me"
REVOKE_URL = "https://www.reddit.com/api/v1/revoke_token"
def __init__(self, client_id: str, client_secret: str, redirect_uri: str):
self.client_id = client_id
self.client_secret = client_secret
self.redirect_uri = redirect_uri
def get_login_url(
self, scopes: list[str], state: str, code_challenge: Optional[str]
) -> str:
"""Generate Reddit OAuth 2.0 authorization URL"""
scopes = self.handle_default_scopes(scopes)
params = {
"response_type": "code",
"client_id": self.client_id,
"redirect_uri": self.redirect_uri,
"scope": " ".join(scopes),
"state": state,
"duration": "permanent", # Required for refresh tokens
}
return f"{self.AUTHORIZE_URL}?{urllib.parse.urlencode(params)}"
async def exchange_code_for_tokens(
self, code: str, scopes: list[str], code_verifier: Optional[str]
) -> OAuth2Credentials:
"""Exchange authorization code for access tokens"""
scopes = self.handle_default_scopes(scopes)
headers = {
"Content-Type": "application/x-www-form-urlencoded",
"User-Agent": settings.config.reddit_user_agent,
}
data = {
"grant_type": "authorization_code",
"code": code,
"redirect_uri": self.redirect_uri,
}
# Reddit requires HTTP Basic Auth for token requests
auth = (self.client_id, self.client_secret)
response = await Requests().post(
self.TOKEN_URL, headers=headers, data=data, auth=auth
)
if not response.ok:
error_text = response.text()
raise ValueError(
f"Reddit token exchange failed: {response.status} - {error_text}"
)
tokens = response.json()
if "error" in tokens:
raise ValueError(f"Reddit OAuth error: {tokens.get('error')}")
username = await self._get_username(tokens["access_token"])
return OAuth2Credentials(
provider=self.PROVIDER_NAME,
title=None,
username=username,
access_token=tokens["access_token"],
refresh_token=tokens.get("refresh_token"),
access_token_expires_at=int(time.time()) + tokens.get("expires_in", 3600),
refresh_token_expires_at=None, # Reddit refresh tokens don't expire
scopes=scopes,
)
async def _get_username(self, access_token: str) -> str:
"""Get the username from the access token"""
headers = {
"Authorization": f"Bearer {access_token}",
"User-Agent": settings.config.reddit_user_agent,
}
response = await Requests().get(self.USERNAME_URL, headers=headers)
if not response.ok:
raise ValueError(f"Failed to get Reddit username: {response.status}")
data = response.json()
return data.get("name", "unknown")
async def _refresh_tokens(
self, credentials: OAuth2Credentials
) -> OAuth2Credentials:
"""Refresh access tokens using refresh token"""
if not credentials.refresh_token:
raise ValueError("No refresh token available")
headers = {
"Content-Type": "application/x-www-form-urlencoded",
"User-Agent": settings.config.reddit_user_agent,
}
data = {
"grant_type": "refresh_token",
"refresh_token": credentials.refresh_token.get_secret_value(),
}
auth = (self.client_id, self.client_secret)
response = await Requests().post(
self.TOKEN_URL, headers=headers, data=data, auth=auth
)
if not response.ok:
error_text = response.text()
raise ValueError(
f"Reddit token refresh failed: {response.status} - {error_text}"
)
tokens = response.json()
if "error" in tokens:
raise ValueError(f"Reddit OAuth error: {tokens.get('error')}")
username = await self._get_username(tokens["access_token"])
# Reddit may or may not return a new refresh token
new_refresh_token = tokens.get("refresh_token")
if new_refresh_token:
refresh_token: SecretStr | None = SecretStr(new_refresh_token)
elif credentials.refresh_token:
# Keep the existing refresh token
refresh_token = credentials.refresh_token
else:
refresh_token = None
return OAuth2Credentials(
id=credentials.id,
provider=self.PROVIDER_NAME,
title=credentials.title,
username=username,
access_token=tokens["access_token"],
refresh_token=refresh_token,
access_token_expires_at=int(time.time()) + tokens.get("expires_in", 3600),
refresh_token_expires_at=None,
scopes=credentials.scopes,
)
async def revoke_tokens(self, credentials: OAuth2Credentials) -> bool:
"""Revoke the access token"""
headers = {
"Content-Type": "application/x-www-form-urlencoded",
"User-Agent": settings.config.reddit_user_agent,
}
data = {
"token": credentials.access_token.get_secret_value(),
"token_type_hint": "access_token",
}
auth = (self.client_id, self.client_secret)
response = await Requests().post(
self.REVOKE_URL, headers=headers, data=data, auth=auth
)
# Reddit returns 204 No Content on successful revocation
return response.ok

View File
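End to end, the handler above implements the standard authorization-code flow. A hedged sketch of driving it; the constructor arguments and callback handling are placeholders, and only `get_login_url` and `exchange_code_for_tokens` come from the class itself:

```python
# Illustrative wiring only; real deployments route the callback through the
# platform's OAuth endpoints rather than a standalone script.
handler = RedditOAuthHandler(
    client_id="...",      # from your Reddit app settings (placeholder)
    client_secret="...",  # placeholder
    redirect_uri="https://example.com/oauth/callback",  # placeholder
)

# 1. Send the user to Reddit; duration=permanent is added automatically so
#    a refresh token comes back with the access token.
login_url = handler.get_login_url(
    scopes=[], state="opaque-state", code_challenge=None
)

# 2. Reddit redirects back with ?code=...&state=...; after verifying state:
# credentials = await handler.exchange_code_for_tokens(
#     code, scopes=[], code_verifier=None
# )

# 3. Access tokens expire after ~1 hour; the base handler is assumed to call
#    _refresh_tokens() when a stored credential has gone stale.
```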

@@ -264,7 +264,7 @@ class Config(UpdateTrackingModel["Config"], BaseSettings):
)
reddit_user_agent: str = Field(
default="AutoGPT:1.0 (by /u/autogpt)",
default="web:AutoGPT:v0.6.0 (by /u/autogpt)",
description="The user agent for the Reddit API",
)

View File
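The new default follows Reddit's documented User-Agent convention, `<platform>:<app ID>:<version> (by /u/<username>)`; a tiny sketch of assembling one for a self-hosted deployment (all values are placeholders):

```python
# Placeholder values; substitute your own app id and Reddit username.
platform, app_id, version, reddit_user = "web", "AutoGPT", "v0.6.0", "autogpt"
user_agent = f"{platform}:{app_id}:{version} (by /u/{reddit_user})"
assert user_agent == "web:AutoGPT:v0.6.0 (by /u/autogpt)"
```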

@@ -1,35 +0,0 @@
-- CreateExtension in public schema (standard location for pgvector)
CREATE EXTENSION IF NOT EXISTS "vector" WITH SCHEMA "public";
-- Grant usage on public schema to platform users
GRANT USAGE ON SCHEMA public TO postgres;
-- CreateEnum
CREATE TYPE "ContentType" AS ENUM ('STORE_AGENT', 'BLOCK', 'INTEGRATION', 'DOCUMENTATION', 'LIBRARY_AGENT');
-- CreateTable
CREATE TABLE "UnifiedContentEmbedding" (
"id" TEXT NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
"contentType" "ContentType" NOT NULL,
"contentId" TEXT NOT NULL,
"userId" TEXT,
"embedding" public.vector(1536) NOT NULL,
"searchableText" TEXT NOT NULL,
"metadata" JSONB NOT NULL DEFAULT '{}',
CONSTRAINT "UnifiedContentEmbedding_pkey" PRIMARY KEY ("id")
);
-- CreateIndex
CREATE INDEX "UnifiedContentEmbedding_contentType_idx" ON "UnifiedContentEmbedding"("contentType");
-- CreateIndex
CREATE INDEX "UnifiedContentEmbedding_userId_idx" ON "UnifiedContentEmbedding"("userId");
-- CreateIndex
CREATE INDEX "UnifiedContentEmbedding_contentType_userId_idx" ON "UnifiedContentEmbedding"("contentType", "userId");
-- CreateIndex
CREATE UNIQUE INDEX "UnifiedContentEmbedding_contentType_contentId_userId_key" ON "UnifiedContentEmbedding"("contentType", "contentId", "userId");

View File

@@ -1,15 +1,14 @@
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
directUrl = env("DIRECT_URL")
extensions = [pgvector(map: "vector")]
provider = "postgresql"
url = env("DATABASE_URL")
directUrl = env("DIRECT_URL")
}
generator client {
provider = "prisma-client-py"
recursive_type_depth = -1
interface = "asyncio"
previewFeatures = ["views", "fullTextSearch", "postgresqlExtensions"]
previewFeatures = ["views", "fullTextSearch"]
partial_type_generator = "backend/data/partial_types.py"
}
@@ -128,8 +127,8 @@ model BuilderSearchHistory {
updatedAt DateTime @default(now()) @updatedAt
searchQuery String
filter String[] @default([])
byCreator String[] @default([])
filter String[] @default([])
byCreator String[] @default([])
userId String
User User @relation(fields: [userId], references: [id], onDelete: Cascade)
@@ -722,25 +721,26 @@ view StoreAgent {
storeListingVersionId String
updated_at DateTime
slug String
agent_name String
agent_video String?
agent_output_demo String?
agent_image String[]
slug String
agent_name String
agent_video String?
agent_output_demo String?
agent_image String[]
featured Boolean @default(false)
creator_username String?
creator_avatar String?
sub_heading String
description String
categories String[]
runs Int
rating Float
versions String[]
agentGraphVersions String[]
agentGraphId String
is_available Boolean @default(true)
useForOnboarding Boolean @default(false)
featured Boolean @default(false)
creator_username String?
creator_avatar String?
sub_heading String
description String
categories String[]
search Unsupported("tsvector")? @default(dbgenerated("''::tsvector"))
runs Int
rating Float
versions String[]
agentGraphVersions String[]
agentGraphId String
is_available Boolean @default(true)
useForOnboarding Boolean @default(false)
// Materialized views used (refreshed every 15 minutes via pg_cron):
// - mv_agent_run_counts - Pre-aggregated agent execution counts by agentGraphId
@@ -856,14 +856,14 @@ model StoreListingVersion {
AgentGraph AgentGraph @relation(fields: [agentGraphId, agentGraphVersion], references: [id, version])
// Content fields
name String
subHeading String
videoUrl String?
agentOutputDemoUrl String?
imageUrls String[]
description String
instructions String?
categories String[]
name String
subHeading String
videoUrl String?
agentOutputDemoUrl String?
imageUrls String[]
description String
instructions String?
categories String[]
isFeatured Boolean @default(false)
@@ -899,9 +899,6 @@ model StoreListingVersion {
// Reviews for this specific version
Reviews StoreListingReview[]
// Note: Embeddings now stored in UnifiedContentEmbedding table
// Use contentType=STORE_AGENT and contentId=storeListingVersionId
@@unique([storeListingId, version])
@@index([storeListingId, submissionStatus, isAvailable])
@@index([submissionStatus])
@@ -909,42 +906,6 @@ model StoreListingVersion {
@@index([agentGraphId, agentGraphVersion]) // Non-unique index for efficient lookups
}
// Content type enum for unified search across store agents, blocks, docs
// Note: BLOCK/INTEGRATION are file-based (Python classes), not DB records
// DOCUMENTATION are file-based (.md files), not DB records
// Only STORE_AGENT and LIBRARY_AGENT are stored in database
enum ContentType {
STORE_AGENT // Database: StoreListingVersion
BLOCK // File-based: Python classes in /backend/blocks/
INTEGRATION // File-based: Python classes (blocks with credentials)
DOCUMENTATION // File-based: .md/.mdx files
LIBRARY_AGENT // Database: User's personal agents
}
// Unified embeddings table for all searchable content types
// Supports both public content (userId=null) and user-specific content (userId=userID)
model UnifiedContentEmbedding {
id String @id @default(uuid())
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
// Content identification
contentType ContentType
contentId String // DB ID (storeListingVersionId) or file identifier (block.id, file_path)
userId String? // NULL for public content (store, blocks, docs), userId for private content (library agents)
// Search data
embedding Unsupported("public.vector(1536)") // pgvector embedding from public schema
searchableText String // Combined text for search and fallback
metadata Json @default("{}") // Content-specific metadata
@@unique([contentType, contentId, userId]) // Allow same content for different users
@@index([contentType])
@@index([userId])
@@index([contentType, userId])
}
model StoreListingReview {
id String @id @default(uuid())
createdAt DateTime @default(now())
@@ -1037,16 +998,16 @@ model OAuthApplication {
updatedAt DateTime @updatedAt
// Application metadata
name String
description String?
logoUrl String? // URL to app logo stored in GCS
clientId String @unique
clientSecret String // Hashed with Scrypt (same as API keys)
clientSecretSalt String // Salt for Scrypt hashing
name String
description String?
logoUrl String? // URL to app logo stored in GCS
clientId String @unique
clientSecret String // Hashed with Scrypt (same as API keys)
clientSecretSalt String // Salt for Scrypt hashing
// OAuth configuration
redirectUris String[] // Allowed callback URLs
grantTypes String[] @default(["authorization_code", "refresh_token"])
grantTypes String[] @default(["authorization_code", "refresh_token"])
scopes APIKeyPermission[] // Which permissions the app can request
// Application management

View File

@@ -68,7 +68,10 @@ export const NodeHeader = ({ data, nodeId }: Props) => {
<Tooltip>
<TooltipTrigger asChild>
<div>
<Text variant="large-semibold" className="line-clamp-1">
<Text
variant="large-semibold"
className="line-clamp-1 hover:cursor-text"
>
{beautifyString(title).replace("Block", "").trim()}
</Text>
</div>

View File

@@ -89,6 +89,18 @@ export function extractOptions(
// get display type and color for schema types [need for type display next to field name]
export const getTypeDisplayInfo = (schema: any) => {
if (
schema?.type === "array" &&
"format" in schema &&
schema.format === "table"
) {
return {
displayType: "table",
colorClass: "!text-indigo-500",
hexColor: "#6366f1",
};
}
if (schema?.type === "string" && schema?.format) {
const formatMap: Record<
string,

View File

@@ -36,6 +36,7 @@ type Props = {
readOnly?: boolean;
isOptional?: boolean;
showTitle?: boolean;
variant?: "default" | "node";
};
export function CredentialsInput({
@@ -48,6 +49,7 @@ export function CredentialsInput({
readOnly = false,
isOptional = false,
showTitle = true,
variant = "default",
}: Props) {
const hookData = useCredentialsInput({
schema,
@@ -123,6 +125,7 @@ export function CredentialsInput({
onClearCredential={() => onSelectCredential(undefined)}
readOnly={readOnly}
allowNone={isOptional}
variant={variant}
/>
) : (
<div className="mb-4 space-y-2">

View File

@@ -30,6 +30,8 @@ type CredentialRowProps = {
readOnly?: boolean;
showCaret?: boolean;
asSelectTrigger?: boolean;
/** When "node", applies compact styling for node context */
variant?: "default" | "node";
};
export function CredentialRow({
@@ -41,14 +43,22 @@ export function CredentialRow({
readOnly = false,
showCaret = false,
asSelectTrigger = false,
variant = "default",
}: CredentialRowProps) {
const ProviderIcon = providerIcons[provider] || fallbackIcon;
const isNodeVariant = variant === "node";
return (
<div
className={cn(
"flex items-center gap-3 rounded-medium border border-zinc-200 bg-white p-3 transition-colors",
asSelectTrigger ? "border-0 bg-transparent" : readOnly ? "w-fit" : "",
asSelectTrigger && isNodeVariant
? "min-w-0 flex-1 overflow-hidden border-0 bg-transparent"
: asSelectTrigger
? "border-0 bg-transparent"
: readOnly
? "w-fit"
: "",
)}
onClick={readOnly || showCaret || asSelectTrigger ? undefined : onSelect}
style={
@@ -61,19 +71,31 @@ export function CredentialRow({
<ProviderIcon className="h-3 w-3 text-white" />
</div>
<IconKey className="h-5 w-5 shrink-0 text-zinc-800" />
<div className="flex min-w-0 flex-1 flex-nowrap items-center gap-4">
<div
className={cn(
"flex min-w-0 flex-1 flex-nowrap items-center gap-4",
isNodeVariant && "overflow-hidden",
)}
>
<Text
variant="body"
className="line-clamp-1 flex-[0_0_50%] text-ellipsis tracking-tight"
className={cn(
"tracking-tight",
isNodeVariant
? "truncate"
: "line-clamp-1 flex-[0_0_50%] text-ellipsis",
)}
>
{getCredentialDisplayName(credential, displayName)}
</Text>
<Text
variant="large"
className="lex-[0_0_40%] relative top-1 hidden overflow-hidden whitespace-nowrap font-mono tracking-tight md:block"
>
{"*".repeat(MASKED_KEY_LENGTH)}
</Text>
{!(asSelectTrigger && isNodeVariant) && (
<Text
variant="large"
className="relative top-1 hidden overflow-hidden whitespace-nowrap font-mono tracking-tight md:block"
>
{"*".repeat(MASKED_KEY_LENGTH)}
</Text>
)}
</div>
{showCaret && !asSelectTrigger && (
<CaretDown className="h-4 w-4 shrink-0 text-gray-400" />

View File

@@ -7,6 +7,7 @@ import {
} from "@/components/__legacy__/ui/select";
import { Text } from "@/components/atoms/Text/Text";
import { CredentialsMetaInput } from "@/lib/autogpt-server-api/types";
import { cn } from "@/lib/utils";
import { useEffect } from "react";
import { getCredentialDisplayName } from "../../helpers";
import { CredentialRow } from "../CredentialRow/CredentialRow";
@@ -26,6 +27,8 @@ interface Props {
onClearCredential?: () => void;
readOnly?: boolean;
allowNone?: boolean;
/** When "node", applies compact styling for node context */
variant?: "default" | "node";
}
export function CredentialsSelect({
@@ -37,6 +40,7 @@ export function CredentialsSelect({
onClearCredential,
readOnly = false,
allowNone = true,
variant = "default",
}: Props) {
// Auto-select first credential if none is selected (only if allowNone is false)
useEffect(() => {
@@ -59,7 +63,12 @@ export function CredentialsSelect({
value={selectedCredentials?.id || (allowNone ? "__none__" : "")}
onValueChange={handleValueChange}
>
<SelectTrigger className="h-auto min-h-12 w-full rounded-medium border-zinc-200 p-0 pr-4 shadow-none">
<SelectTrigger
className={cn(
"h-auto min-h-12 w-full rounded-medium border-zinc-200 p-0 pr-4 shadow-none",
variant === "node" && "overflow-hidden",
)}
>
{selectedCredentials ? (
<SelectValue key={selectedCredentials.id} asChild>
<CredentialRow
@@ -75,6 +84,7 @@ export function CredentialsSelect({
onDelete={() => {}}
readOnly={readOnly}
asSelectTrigger={true}
variant={variant}
/>
</SelectValue>
) : (

View File

@@ -92,29 +92,33 @@ export function useMarketplaceUpdate({ agent }: UseMarketplaceUpdateProps) {
const isUserCreator = agent?.owner_user_id === user?.id;
// Check if there's a pending submission for this specific agent version
const submissionsResponse = okData(submissionsData) as any;
const hasPendingSubmissionForCurrentVersion =
isUserCreator &&
submissionsResponse?.submissions?.some(
(submission: StoreSubmission) =>
submission.agent_id === agent.graph_id &&
submission.agent_version === agent.graph_version &&
submission.status === "PENDING",
);
const agentSubmissions =
submissionsResponse?.submissions?.filter(
(submission: StoreSubmission) => submission.agent_id === agent.graph_id,
) || [];
const highestSubmittedVersion =
agentSubmissions.length > 0
? Math.max(
...agentSubmissions.map(
(submission: StoreSubmission) => submission.agent_version,
),
)
: 0;
const hasUnpublishedChanges =
isUserCreator && agent.graph_version > highestSubmittedVersion;
if (!storeAgent) {
return {
hasUpdate: false,
latestVersion: undefined,
isUserCreator,
hasPublishUpdate:
isUserCreator && !hasPendingSubmissionForCurrentVersion,
hasPublishUpdate: agentSubmissions.length > 0 && hasUnpublishedChanges,
};
}
// Get the latest version from the marketplace
// agentGraphVersions array contains graph version numbers as strings, get the highest one
const latestMarketplaceVersion =
storeAgent.agentGraphVersions?.length > 0
? Math.max(
@@ -124,18 +128,11 @@ export function useMarketplaceUpdate({ agent }: UseMarketplaceUpdateProps) {
)
: undefined;
// Show publish update button if:
// 1. User is the creator
// 2. No pending submission for current version
// 3. Either: agent not published yet OR local version is newer than marketplace
const hasPublishUpdate =
isUserCreator &&
!hasPendingSubmissionForCurrentVersion &&
(latestMarketplaceVersion === undefined || // Not published yet
agent.graph_version > latestMarketplaceVersion); // Or local version is newer
agent.graph_version >
Math.max(latestMarketplaceVersion || 0, highestSubmittedVersion);
// If marketplace version is newer than user's version, show update banner
// This applies to both creators and non-creators
const hasMarketplaceUpdate =
latestMarketplaceVersion !== undefined &&
latestMarketplaceVersion > agent.graph_version;

View File
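The rewritten condition above reduces to simple version arithmetic: the publish-update prompt shows for a creator whose local graph version is newer than both the marketplace version and the highest version they have already submitted. A worked example with made-up numbers:

```python
# Hypothetical numbers: marketplace has v3, the newest submission was v4,
# and the local agent is at v5, so the publish-update button appears.
latest_marketplace_version = 3   # None when never published
highest_submitted_version = 4    # 0 when never submitted
local_graph_version = 5
has_publish_update = local_graph_version > max(
    latest_marketplace_version or 0, highest_submitted_version
)
assert has_publish_update is True
```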

@@ -133,7 +133,7 @@ export function EditAgentForm({
<Input
id={field.name}
label="Changes Summary"
type="text"
type="textarea"
placeholder="Briefly describe what you changed"
error={form.formState.errors.changes_summary?.message}
{...field}

View File

@@ -47,7 +47,7 @@ export const useEditAgentForm = ({
changes_summary: z
.string()
.min(1, "Changes summary is required")
.max(200, "Changes summary must be less than 200 characters"),
.max(500, "Changes summary must be less than 500 characters"),
agentOutputDemo: z
.string()
.refine(validateYouTubeUrl, "Please enter a valid YouTube URL"),

View File

@@ -53,21 +53,24 @@ export function useAgentInfoStep({
useEffect(() => {
if (initialData?.agent_id) {
setAgentId(initialData.agent_id);
const initialImages = [
...(initialData?.thumbnailSrc ? [initialData.thumbnailSrc] : []),
...(initialData.additionalImages || []),
];
setImages(initialImages);
// Update form with initial data
setImages(
Array.from(
new Set([
...(initialData?.thumbnailSrc ? [initialData.thumbnailSrc] : []),
...(initialData.additionalImages || []),
]),
),
);
form.reset({
changesSummary: initialData.changesSummary || "",
changesSummary: isMarketplaceUpdate
? ""
: initialData.changesSummary || "",
title: initialData.title,
subheader: initialData.subheader,
slug: initialData.slug.toLocaleLowerCase().trim(),
youtubeLink: initialData.youtubeLink,
category: initialData.category,
description: initialData.description,
description: isMarketplaceUpdate ? "" : initialData.description,
recommendedScheduleCron: initialData.recommendedScheduleCron || "",
instructions: initialData.instructions || "",
agentOutputDemo: initialData.agentOutputDemo || "",
@@ -149,12 +152,7 @@ export function useAgentInfoStep({
agentId,
images,
isSubmitting,
initialImages: initialData
? [
...(initialData?.thumbnailSrc ? [initialData.thumbnailSrc] : []),
...(initialData.additionalImages || []),
]
: [],
initialImages: images,
initialSelectedImage: initialData?.thumbnailSrc || null,
handleImagesChange,
handleSubmit: form.handleSubmit(handleFormSubmit),

View File

@@ -0,0 +1,116 @@
import type { Meta, StoryObj } from "@storybook/nextjs";
import { TooltipProvider } from "@/components/atoms/Tooltip/BaseTooltip";
import { Table } from "./Table";
const meta = {
title: "Molecules/Table",
component: Table,
decorators: [
(Story) => (
<TooltipProvider>
<Story />
</TooltipProvider>
),
],
parameters: {
layout: "centered",
},
tags: ["autodocs"],
argTypes: {
allowAddRow: {
control: "boolean",
description: "Whether to show the Add row button",
},
allowDeleteRow: {
control: "boolean",
description: "Whether to show delete buttons for each row",
},
readOnly: {
control: "boolean",
description:
"Whether the table is read-only (renders text instead of inputs)",
},
addRowLabel: {
control: "text",
description: "Label for the Add row button",
},
},
} satisfies Meta<typeof Table>;
export default meta;
type Story = StoryObj<typeof meta>;
export const Default: Story = {
args: {
columns: ["name", "email", "role"],
allowAddRow: true,
allowDeleteRow: true,
},
};
export const WithDefaultValues: Story = {
args: {
columns: ["name", "email", "role"],
defaultValues: [
{ name: "John Doe", email: "john@example.com", role: "Admin" },
{ name: "Jane Smith", email: "jane@example.com", role: "User" },
{ name: "Bob Wilson", email: "bob@example.com", role: "Editor" },
],
allowAddRow: true,
allowDeleteRow: true,
},
};
export const ReadOnly: Story = {
args: {
columns: ["name", "email"],
defaultValues: [
{ name: "John Doe", email: "john@example.com" },
{ name: "Jane Smith", email: "jane@example.com" },
],
readOnly: true,
},
};
export const NoAddOrDelete: Story = {
args: {
columns: ["name", "email"],
defaultValues: [
{ name: "John Doe", email: "john@example.com" },
{ name: "Jane Smith", email: "jane@example.com" },
],
allowAddRow: false,
allowDeleteRow: false,
},
};
export const SingleColumn: Story = {
args: {
columns: ["item"],
allowAddRow: true,
allowDeleteRow: true,
addRowLabel: "Add item",
},
};
export const CustomAddLabel: Story = {
args: {
columns: ["key", "value"],
allowAddRow: true,
allowDeleteRow: true,
addRowLabel: "Add new entry",
},
};
export const KeyValuePairs: Story = {
args: {
columns: ["key", "value"],
defaultValues: [
{ key: "API_KEY", value: "sk-..." },
{ key: "DATABASE_URL", value: "postgres://..." },
],
allowAddRow: true,
allowDeleteRow: true,
addRowLabel: "Add variable",
},
};

View File

@@ -0,0 +1,133 @@
import * as React from "react";
import {
Table as BaseTable,
TableBody,
TableCell,
TableHead,
TableHeader,
TableRow,
} from "@/components/__legacy__/ui/table";
import { Button } from "@/components/atoms/Button/Button";
import { Input } from "@/components/atoms/Input/Input";
import { Text } from "@/components/atoms/Text/Text";
import { Plus, Trash2 } from "lucide-react";
import { cn } from "@/lib/utils";
import { useTable, RowData } from "./useTable";
import { formatColumnTitle, formatPlaceholder } from "./helpers";
export interface TableProps {
columns: string[];
defaultValues?: RowData[];
onChange?: (rows: RowData[]) => void;
allowAddRow?: boolean;
allowDeleteRow?: boolean;
addRowLabel?: string;
className?: string;
readOnly?: boolean;
}
export function Table({
columns,
defaultValues,
onChange,
allowAddRow = true,
allowDeleteRow = true,
addRowLabel = "Add row",
className,
readOnly = false,
}: TableProps) {
const { rows, handleAddRow, handleDeleteRow, handleCellChange } = useTable({
columns,
defaultValues,
onChange,
});
const showDeleteColumn = allowDeleteRow && !readOnly;
const showAddButton = allowAddRow && !readOnly;
return (
<div className={cn("flex flex-col gap-3", className)}>
<div className="overflow-hidden rounded-xl border border-zinc-200 bg-white">
<BaseTable>
<TableHeader>
<TableRow className="border-b border-zinc-100 bg-zinc-50/50">
{columns.map((column) => (
<TableHead
key={column}
className="h-10 px-3 text-sm font-medium text-zinc-600"
>
{formatColumnTitle(column)}
</TableHead>
))}
{showDeleteColumn && <TableHead className="w-[50px]" />}
</TableRow>
</TableHeader>
<TableBody>
{rows.map((row, rowIndex) => (
<TableRow key={rowIndex} className="border-none">
{columns.map((column) => (
<TableCell key={`${rowIndex}-${column}`} className="p-2">
{readOnly ? (
<Text
variant="body"
className="px-3 py-2 text-sm text-zinc-800"
>
{row[column] || "-"}
</Text>
) : (
<Input
id={`table-${rowIndex}-${column}`}
label={formatColumnTitle(column)}
hideLabel
value={row[column] ?? ""}
onChange={(e) =>
handleCellChange(rowIndex, column, e.target.value)
}
placeholder={formatPlaceholder(column)}
size="small"
wrapperClassName="mb-0"
/>
)}
</TableCell>
))}
{showDeleteColumn && (
<TableCell className="p-2">
<Button
variant="icon"
size="icon"
onClick={() => handleDeleteRow(rowIndex)}
aria-label="Delete row"
className="text-zinc-400 transition-colors hover:text-red-500"
>
<Trash2 className="h-4 w-4" />
</Button>
</TableCell>
)}
</TableRow>
))}
{showAddButton && (
<TableRow>
<TableCell
colSpan={columns.length + (showDeleteColumn ? 1 : 0)}
className="p-2"
>
<Button
variant="outline"
size="small"
onClick={handleAddRow}
leftIcon={<Plus className="h-4 w-4" />}
className="w-fit"
>
{addRowLabel}
</Button>
</TableCell>
</TableRow>
)}
</TableBody>
</BaseTable>
</div>
</div>
);
}
export { type RowData } from "./useTable";

View File

@@ -0,0 +1,7 @@
export const formatColumnTitle = (key: string): string => {
return key.charAt(0).toUpperCase() + key.slice(1);
};
export const formatPlaceholder = (key: string): string => {
return `Enter ${key.toLowerCase()}`;
};

View File

@@ -0,0 +1,81 @@
import { useState, useEffect } from "react";
export type RowData = Record<string, string>;
interface UseTableOptions {
columns: string[];
defaultValues?: RowData[];
onChange?: (rows: RowData[]) => void;
}
export function useTable({
columns,
defaultValues,
onChange,
}: UseTableOptions) {
const createEmptyRow = (): RowData => {
const emptyRow: RowData = {};
columns.forEach((column) => {
emptyRow[column] = "";
});
return emptyRow;
};
const [rows, setRows] = useState<RowData[]>(() => {
if (defaultValues && defaultValues.length > 0) {
return defaultValues;
}
return [];
});
useEffect(() => {
if (defaultValues !== undefined) {
setRows(defaultValues);
}
}, [defaultValues]);
const updateRows = (newRows: RowData[]) => {
setRows(newRows);
onChange?.(newRows);
};
const handleAddRow = () => {
const newRows = [...rows, createEmptyRow()];
updateRows(newRows);
};
const handleDeleteRow = (rowIndex: number) => {
const newRows = rows.filter((_, index) => index !== rowIndex);
updateRows(newRows);
};
const handleCellChange = (
rowIndex: number,
columnKey: string,
value: string,
) => {
const newRows = rows.map((row, index) => {
if (index === rowIndex) {
return {
...row,
[columnKey]: value,
};
}
return row;
});
updateRows(newRows);
};
const clearAll = () => {
updateRows([]);
};
return {
rows,
handleAddRow,
handleDeleteRow,
handleCellChange,
clearAll,
createEmptyRow,
};
}

View File

@@ -30,6 +30,8 @@ export const FormRenderer = ({
return generateUiSchemaForCustomFields(preprocessedSchema, uiSchema);
}, [preprocessedSchema, uiSchema]);
console.log("preprocessedSchema", preprocessedSchema);
return (
<div className={"mb-6 mt-4"}>
<Form

View File

@@ -5,19 +5,14 @@ import { useAnyOfField } from "./useAnyOfField";
import { getHandleId, updateUiOption } from "../../helpers";
import { useEdgeStore } from "@/app/(platform)/build/stores/edgeStore";
import { ANY_OF_FLAG } from "../../constants";
import { findCustomFieldId } from "../../registry";
export const AnyOfField = (props: FieldProps) => {
const { registry, schema } = props;
const { fields } = registry;
const { SchemaField: _SchemaField } = fields;
const { nodeId } = registry.formContext;
const { isInputConnected } = useEdgeStore();
const uiOptions = getUiOptions(props.uiSchema, props.globalUiOptions);
const Widget = getWidget({ type: "string" }, "select", registry.widgets);
const {
handleOptionChange,
enumOptions,
@@ -26,6 +21,15 @@ export const AnyOfField = (props: FieldProps) => {
field_id,
} = useAnyOfField(props);
const parentCustomFieldId = findCustomFieldId(schema);
if (parentCustomFieldId) {
return null;
}
const uiOptions = getUiOptions(props.uiSchema, props.globalUiOptions);
const Widget = getWidget({ type: "string" }, "select", registry.widgets);
const handleId = getHandleId({
uiOptions,
id: field_id + ANY_OF_FLAG,
@@ -40,12 +44,21 @@ export const AnyOfField = (props: FieldProps) => {
const isHandleConnected = isInputConnected(nodeId, handleId);
// anyOf can now render custom fields when the option schema matches a custom field
const optionCustomFieldId = optionSchema
? findCustomFieldId(optionSchema)
: null;
const optionUiSchema = optionCustomFieldId
? { ...updatedUiSchema, "ui:field": optionCustomFieldId }
: updatedUiSchema;
const optionsSchemaField =
(optionSchema && optionSchema.type !== "null" && (
<_SchemaField
{...props}
schema={optionSchema}
uiSchema={updatedUiSchema}
uiSchema={optionUiSchema}
/>
)) ||
null;

View File

@@ -17,6 +17,7 @@ interface InputExpanderModalProps {
defaultValue: string;
description?: string;
placeholder?: string;
inputType?: "text" | "json";
}
export const InputExpanderModal: FC<InputExpanderModalProps> = ({
@@ -27,6 +28,7 @@ export const InputExpanderModal: FC<InputExpanderModalProps> = ({
defaultValue,
description,
placeholder,
inputType = "text",
}) => {
const [tempValue, setTempValue] = useState(defaultValue);
const [isCopied, setIsCopied] = useState(false);
@@ -78,7 +80,10 @@ export const InputExpanderModal: FC<InputExpanderModalProps> = ({
hideLabel
id="input-expander-modal"
value={tempValue}
className="!min-h-[300px] rounded-2xlarge"
className={cn(
"!min-h-[300px] rounded-2xlarge",
inputType === "json" && "font-mono text-sm",
)}
onChange={(e) => setTempValue(e.target.value)}
placeholder={placeholder || "Enter text..."}
autoFocus

View File

@@ -88,6 +88,8 @@ export const CredentialsField = (props: FieldProps) => {
showTitle={false}
readOnly={formContext?.readOnly}
isOptional={!isRequired}
className="w-full"
variant="node"
/>
{/* Optional credentials toggle - only show in builder canvas, not run dialogs */}

View File

@@ -0,0 +1,124 @@
"use client";
import { FieldProps, getTemplate, getUiOptions } from "@rjsf/utils";
import { Input } from "@/components/atoms/Input/Input";
import { Button } from "@/components/atoms/Button/Button";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { ArrowsOutIcon } from "@phosphor-icons/react";
import { InputExpanderModal } from "../../base/standard/widgets/TextInput/TextInputExpanderModal";
import { getHandleId, updateUiOption } from "../../helpers";
import { useJsonTextField } from "./useJsonTextField";
import { getPlaceholder } from "./helpers";
export const JsonTextField = (props: FieldProps) => {
const {
formData,
onChange,
schema,
registry,
uiSchema,
required,
name,
fieldPathId,
} = props;
const uiOptions = getUiOptions(uiSchema);
const TitleFieldTemplate = getTemplate(
"TitleFieldTemplate",
registry,
uiOptions,
);
const fieldId = fieldPathId?.$id ?? props.id ?? "json-field";
const handleId = getHandleId({
uiOptions,
id: fieldId,
schema: schema,
});
const updatedUiSchema = updateUiOption(uiSchema, {
handleId: handleId,
});
const {
textValue,
isModalOpen,
handleChange,
handleModalOpen,
handleModalClose,
handleModalSave,
} = useJsonTextField({
formData,
onChange,
path: fieldPathId?.path,
});
const placeholder = getPlaceholder(schema);
const title = schema.title || name || "JSON Value";
return (
<div className="flex flex-col gap-2">
<TitleFieldTemplate
id={fieldId}
title={title}
required={required}
schema={schema}
uiSchema={updatedUiSchema}
registry={registry}
/>
<div className="nodrag relative flex items-center gap-2">
<Input
id={fieldId}
hideLabel={true}
type="textarea"
label=""
size="small"
wrapperClassName="mb-0 flex-1 "
value={textValue}
onChange={handleChange}
placeholder={placeholder}
required={required}
disabled={props.disabled}
className="min-h-[60px] pr-8 font-mono text-xs"
/>
<Tooltip delayDuration={0}>
<TooltipTrigger asChild>
<Button
variant="ghost"
size="icon"
onClick={handleModalOpen}
type="button"
className="p-1"
>
<ArrowsOutIcon className="size-4" />
</Button>
</TooltipTrigger>
<TooltipContent>Expand input</TooltipContent>
</Tooltip>
</div>
{schema.description && (
<span className="text-xs text-gray-500">{schema.description}</span>
)}
<InputExpanderModal
isOpen={isModalOpen}
onClose={handleModalClose}
onSave={handleModalSave}
title={`Edit ${title}`}
description={schema.description || "Enter valid JSON"}
defaultValue={textValue}
placeholder={placeholder}
inputType="json"
/>
</div>
);
};
export default JsonTextField;

View File

@@ -0,0 +1,67 @@
import { RJSFSchema } from "@rjsf/utils";
/**
* Converts form data to a JSON string for display
* @param formData - The data to stringify
* @returns JSON string or empty string if data is null/undefined
*/
export function stringifyFormData(formData: unknown): string {
if (formData === undefined || formData === null) {
return "";
}
try {
return JSON.stringify(formData, null, 2);
} catch {
return "";
}
}
/**
* Parses a JSON string into an object/array
* @param value - The JSON string to parse
* @returns Parsed value or undefined if parsing fails or empty
*/
export function parseJsonValue(value: string): unknown | undefined {
const trimmed = value.trim();
if (trimmed === "") {
return undefined;
}
try {
return JSON.parse(trimmed);
} catch {
return undefined;
}
}
/**
* Gets the appropriate placeholder text based on schema type
* @param schema - The JSON schema
* @returns Placeholder string
*/
export function getPlaceholder(schema: RJSFSchema): string {
if (schema.type === "array") {
return '["item1", "item2"] or [{"key": "value"}]';
}
if (schema.type === "object") {
return '{"key": "value"}';
}
return "Enter JSON value...";
}
/**
* Checks if a JSON string is valid
* @param value - The JSON string to validate
* @returns true if valid JSON, false otherwise
*/
export function isValidJson(value: string): boolean {
if (value.trim() === "") {
return true; // Empty is considered valid (will be undefined)
}
try {
JSON.parse(value);
return true;
} catch {
return false;
}
}

View File

@@ -0,0 +1,107 @@
import { useState, useEffect, useCallback } from "react";
import { FieldProps } from "@rjsf/utils";
import { stringifyFormData, parseJsonValue, isValidJson } from "./helpers";
type FieldOnChange = FieldProps["onChange"];
type FieldPathId = FieldProps["fieldPathId"];
interface UseJsonTextFieldOptions {
formData: unknown;
onChange: FieldOnChange;
path?: FieldPathId["path"];
}
interface UseJsonTextFieldReturn {
textValue: string;
isModalOpen: boolean;
hasError: boolean;
handleChange: (
e: React.ChangeEvent<HTMLInputElement | HTMLTextAreaElement>,
) => void;
handleModalOpen: () => void;
handleModalClose: () => void;
handleModalSave: (value: string) => void;
}
/**
* Custom hook for managing JSON text field state and handlers
*/
export function useJsonTextField({
formData,
onChange,
path,
}: UseJsonTextFieldOptions): UseJsonTextFieldReturn {
const [textValue, setTextValue] = useState(() => stringifyFormData(formData));
const [isModalOpen, setIsModalOpen] = useState(false);
const [hasError, setHasError] = useState(false);
// Update text value when formData changes externally
useEffect(() => {
const newValue = stringifyFormData(formData);
setTextValue(newValue);
setHasError(false);
}, [formData]);
const handleChange = useCallback(
(e: React.ChangeEvent<HTMLInputElement | HTMLTextAreaElement>) => {
const value = e.target.value;
setTextValue(value);
// Validate JSON and update error state
const valid = isValidJson(value);
setHasError(!valid);
// Try to parse and update formData
if (value.trim() === "") {
onChange(undefined, path ?? []);
return;
}
const parsed = parseJsonValue(value);
if (parsed !== undefined) {
onChange(parsed, path ?? []);
}
},
[onChange, path],
);
const handleModalOpen = useCallback(() => {
setIsModalOpen(true);
}, []);
const handleModalClose = useCallback(() => {
setIsModalOpen(false);
}, []);
const handleModalSave = useCallback(
(value: string) => {
setTextValue(value);
setIsModalOpen(false);
// Validate and update
const valid = isValidJson(value);
setHasError(!valid);
if (value.trim() === "") {
onChange(undefined, path ?? []);
return;
}
const parsed = parseJsonValue(value);
if (parsed !== undefined) {
onChange(parsed, path ?? []);
}
},
[onChange, path],
);
return {
textValue,
isModalOpen,
hasError,
handleChange,
handleModalOpen,
handleModalClose,
handleModalSave,
};
}

View File

@@ -0,0 +1,57 @@
import React from "react";
import { FieldProps, getUiOptions } from "@rjsf/utils";
import { BlockIOObjectSubSchema } from "@/lib/autogpt-server-api/types";
import {
MultiSelector,
MultiSelectorContent,
MultiSelectorInput,
MultiSelectorItem,
MultiSelectorList,
MultiSelectorTrigger,
} from "@/components/__legacy__/ui/multiselect";
import { cn } from "@/lib/utils";
import { useMultiSelectField } from "./useMultiSelectField";
export const MultiSelectField = (props: FieldProps) => {
const { schema, formData, onChange, fieldPathId } = props;
const uiOptions = getUiOptions(props.uiSchema);
const { optionSchema, options, selection, createChangeHandler } =
useMultiSelectField({
schema: schema as BlockIOObjectSubSchema,
formData,
});
const handleValuesChange = createChangeHandler(onChange, fieldPathId);
const displayName = schema.title || "options";
return (
<div className={cn("flex flex-col", uiOptions.className)}>
<MultiSelector
className="nodrag"
values={selection}
onValuesChange={handleValuesChange}
>
<MultiSelectorTrigger className="rounded-3xl border border-zinc-200 bg-white px-2 shadow-none">
<MultiSelectorInput
placeholder={
(schema as any).placeholder ?? `Select ${displayName}...`
}
/>
</MultiSelectorTrigger>
<MultiSelectorContent className="nowheel">
<MultiSelectorList>
{options
.map((key) => ({ ...optionSchema[key], key }))
.map(({ key, title, description }) => (
<MultiSelectorItem key={key} value={key} title={description}>
{title ?? key}
</MultiSelectorItem>
))}
</MultiSelectorList>
</MultiSelectorContent>
</MultiSelector>
</div>
);
};

View File

@@ -0,0 +1 @@
export { MultiSelectField } from "./MultiSelectField";

View File

@@ -0,0 +1,65 @@
import { FieldProps } from "@rjsf/utils";
import { BlockIOObjectSubSchema } from "@/lib/autogpt-server-api/types";
type FormData = Record<string, boolean> | null | undefined;
interface UseMultiSelectFieldOptions {
schema: BlockIOObjectSubSchema;
formData: FormData;
}
export function useMultiSelectField({
schema,
formData,
}: UseMultiSelectFieldOptions) {
const getOptionSchema = (): Record<string, BlockIOObjectSubSchema> => {
if (schema.properties) {
return schema.properties as Record<string, BlockIOObjectSubSchema>;
}
if (
"anyOf" in schema &&
Array.isArray(schema.anyOf) &&
schema.anyOf.length > 0 &&
"properties" in schema.anyOf[0]
) {
return (schema.anyOf[0] as BlockIOObjectSubSchema).properties as Record<
string,
BlockIOObjectSubSchema
>;
}
return {};
};
const optionSchema = getOptionSchema();
const options = Object.keys(optionSchema);
const getSelection = (): string[] => {
if (!formData || typeof formData !== "object") {
return [];
}
return Object.entries(formData)
.filter(([_, value]) => value === true)
.map(([key]) => key);
};
const selection = getSelection();
const createChangeHandler =
(
onChange: FieldProps["onChange"],
fieldPathId: FieldProps["fieldPathId"],
) =>
(values: string[]) => {
const newValue = Object.fromEntries(
options.map((opt) => [opt, values.includes(opt)]),
);
onChange(newValue, fieldPathId?.path);
};
return {
optionSchema,
options,
selection,
createChangeHandler,
};
}

View File

@@ -0,0 +1,52 @@
import { descriptionId, FieldProps, getTemplate, titleId } from "@rjsf/utils";
import { Table, RowData } from "@/components/molecules/Table/Table";
import { useMemo } from "react";
export const TableField = (props: FieldProps) => {
const { schema, formData, onChange, fieldPathId, registry, uiSchema } = props;
const itemSchema = schema.items as any;
const properties = itemSchema?.properties || {};
const columns: string[] = useMemo(() => {
return Object.keys(properties);
}, [properties]);
const handleChange = (rows: RowData[]) => {
onChange(rows, fieldPathId?.path.slice(0, -1));
};
const TitleFieldTemplate = getTemplate("TitleFieldTemplate", registry);
const DescriptionFieldTemplate = getTemplate(
"DescriptionFieldTemplate",
registry,
);
return (
<div className="flex flex-col gap-2">
<TitleFieldTemplate
id={titleId(fieldPathId)}
title={schema.title || ""}
required={true}
schema={schema}
uiSchema={uiSchema}
registry={registry}
/>
<DescriptionFieldTemplate
id={descriptionId(fieldPathId)}
description={schema.description || ""}
schema={schema}
registry={registry}
/>
<Table
columns={columns}
defaultValues={formData}
onChange={handleChange}
allowAddRow={true}
allowDeleteRow={true}
addRowLabel="Add row"
/>
</div>
);
};

View File

@@ -1,6 +1,10 @@
import { FieldProps, RJSFSchema, RegistryFieldsType } from "@rjsf/utils";
import { CredentialsField } from "./CredentialField/CredentialField";
import { GoogleDrivePickerField } from "./GoogleDrivePickerField/GoogleDrivePickerField";
import { JsonTextField } from "./JsonTextField/JsonTextField";
import { MultiSelectField } from "./MultiSelectField/MultiSelectField";
import { isMultiSelectSchema } from "../utils/schema-utils";
import { TableField } from "./TableField/TableField";
export interface CustomFieldDefinition {
id: string;
@@ -8,6 +12,9 @@ export interface CustomFieldDefinition {
component: (props: FieldProps<any, RJSFSchema, any>) => JSX.Element | null;
}
/** Field ID for JsonTextField - used to render nested complex types as text input */
export const JSON_TEXT_FIELD_ID = "custom/json_text_field";
export const CUSTOM_FIELDS: CustomFieldDefinition[] = [
{
id: "custom/credential_field",
@@ -30,6 +37,28 @@ export const CUSTOM_FIELDS: CustomFieldDefinition[] = [
},
component: GoogleDrivePickerField,
},
{
id: "custom/json_text_field",
// Not matched by schema - assigned via uiSchema for nested complex types
matcher: () => false,
component: JsonTextField,
},
{
id: "custom/multi_select_field",
matcher: isMultiSelectSchema,
component: MultiSelectField,
},
{
id: "custom/table_field",
matcher: (schema: any) => {
return (
schema.type === "array" &&
"format" in schema &&
schema.format === "table"
);
},
component: TableField,
},
];
export function findCustomFieldId(schema: any): string | null {

View File

@@ -1,19 +1,46 @@
import { RJSFSchema, UiSchema } from "@rjsf/utils";
import { findCustomFieldId } from "../custom/custom-registry";
import {
findCustomFieldId,
JSON_TEXT_FIELD_ID,
} from "../custom/custom-registry";
function isComplexType(schema: RJSFSchema): boolean {
return schema.type === "object" || schema.type === "array";
}
function hasComplexAnyOfOptions(schema: RJSFSchema): boolean {
const options = schema.anyOf || schema.oneOf;
if (!Array.isArray(options)) return false;
return options.some(
(opt: any) =>
opt &&
typeof opt === "object" &&
(opt.type === "object" || opt.type === "array"),
);
}
/**
* Generates uiSchema with ui:field settings for custom fields based on schema matchers.
* This is the standard RJSF way to route fields to custom components.
*
* Nested complex types (arrays/objects inside arrays/objects) are rendered as JsonTextField
* to avoid deeply nested form UIs. Users can enter raw JSON for these fields.
*
* @param schema - The JSON schema
* @param existingUiSchema - Existing uiSchema to merge with
* @param insideComplexType - Whether we're already inside a complex type (object/array)
*/
export function generateUiSchemaForCustomFields(
schema: RJSFSchema,
existingUiSchema: UiSchema = {},
insideComplexType: boolean = false,
): UiSchema {
const uiSchema: UiSchema = { ...existingUiSchema };
if (schema.properties) {
for (const [key, propSchema] of Object.entries(schema.properties)) {
if (propSchema && typeof propSchema === "object") {
// First check for custom field matchers (credentials, google drive, etc.)
const customFieldId = findCustomFieldId(propSchema);
if (customFieldId) {
@@ -21,8 +48,33 @@ export function generateUiSchemaForCustomFields(
...(uiSchema[key] as object),
"ui:field": customFieldId,
};
// Skip further processing for custom fields
continue;
}
// Handle nested complex types - render as JsonTextField
if (insideComplexType && isComplexType(propSchema as RJSFSchema)) {
uiSchema[key] = {
...(uiSchema[key] as object),
"ui:field": JSON_TEXT_FIELD_ID,
};
// Don't recurse further - this field is now a text input
continue;
}
// Handle anyOf/oneOf inside complex types
if (
insideComplexType &&
hasComplexAnyOfOptions(propSchema as RJSFSchema)
) {
uiSchema[key] = {
...(uiSchema[key] as object),
"ui:field": JSON_TEXT_FIELD_ID,
};
continue;
}
// Recurse into object properties
if (
propSchema.type === "object" &&
propSchema.properties &&
@@ -31,6 +83,7 @@ export function generateUiSchemaForCustomFields(
const nestedUiSchema = generateUiSchemaForCustomFields(
propSchema as RJSFSchema,
(uiSchema[key] as UiSchema) || {},
true, // Now inside a complex type
);
uiSchema[key] = {
...(uiSchema[key] as object),
@@ -38,9 +91,11 @@ export function generateUiSchemaForCustomFields(
};
}
// Handle arrays
if (propSchema.type === "array" && propSchema.items) {
const itemsSchema = propSchema.items as RJSFSchema;
if (itemsSchema && typeof itemsSchema === "object") {
// Check for custom field on array items
const itemsCustomFieldId = findCustomFieldId(itemsSchema);
if (itemsCustomFieldId) {
uiSchema[key] = {
@@ -49,10 +104,28 @@ export function generateUiSchemaForCustomFields(
"ui:field": itemsCustomFieldId,
},
};
} else if (isComplexType(itemsSchema)) {
// Array items that are complex types become JsonTextField
uiSchema[key] = {
...(uiSchema[key] as object),
items: {
"ui:field": JSON_TEXT_FIELD_ID,
},
};
} else if (hasComplexAnyOfOptions(itemsSchema)) {
// Array items with anyOf containing complex types become JsonTextField
uiSchema[key] = {
...(uiSchema[key] as object),
items: {
"ui:field": JSON_TEXT_FIELD_ID,
},
};
} else if (itemsSchema.properties) {
// Recurse into object items (but they're now inside a complex type)
const itemsUiSchema = generateUiSchemaForCustomFields(
itemsSchema,
((uiSchema[key] as UiSchema)?.items as UiSchema) || {},
true, // Inside complex type (array)
);
if (Object.keys(itemsUiSchema).length > 0) {
uiSchema[key] = {
@@ -63,6 +136,61 @@ export function generateUiSchemaForCustomFields(
}
}
}
// Handle anyOf/oneOf at root level - process complex options
if (!insideComplexType) {
const anyOfOptions = propSchema.anyOf || propSchema.oneOf;
if (Array.isArray(anyOfOptions)) {
for (let i = 0; i < anyOfOptions.length; i++) {
const option = anyOfOptions[i] as RJSFSchema;
if (option && typeof option === "object") {
// Handle anyOf array options with complex items
if (option.type === "array" && option.items) {
const itemsSchema = option.items as RJSFSchema;
if (itemsSchema && typeof itemsSchema === "object") {
// Array items that are complex types become JsonTextField
if (isComplexType(itemsSchema)) {
uiSchema[key] = {
...(uiSchema[key] as object),
items: {
"ui:field": JSON_TEXT_FIELD_ID,
},
};
} else if (hasComplexAnyOfOptions(itemsSchema)) {
uiSchema[key] = {
...(uiSchema[key] as object),
items: {
"ui:field": JSON_TEXT_FIELD_ID,
},
};
}
}
}
// Recurse into anyOf object options with properties
if (
option.type === "object" &&
option.properties &&
typeof option.properties === "object"
) {
const optionUiSchema = generateUiSchemaForCustomFields(
option,
{},
true, // Inside complex type (anyOf object option)
);
if (Object.keys(optionUiSchema).length > 0) {
// Store under the property key - RJSF will apply it
uiSchema[key] = {
...(uiSchema[key] as object),
...optionUiSchema,
};
}
}
}
}
}
}
}
}
}

View File
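To make the routing concrete, a hedged example of what the generator produces for a top-level array of objects (the schema and key names are invented for illustration), written as plain JSON-style literals:

```python
# Hypothetical input: an object whose "rows" property is an array of objects.
schema = {
    "type": "object",
    "properties": {
        "rows": {
            "type": "array",
            "items": {"type": "object", "properties": {"k": {"type": "string"}}},
        },
    },
}

# Expected shape of the generated uiSchema: the complex array items are
# routed to the JSON text field ("custom/json_text_field" per the registry).
expected_ui_schema = {
    "rows": {"items": {"ui:field": "custom/json_text_field"}},
}
```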

@@ -1,7 +1,11 @@
import { getUiOptions, RJSFSchema, UiSchema } from "@rjsf/utils";
export function isAnyOfSchema(schema: RJSFSchema | undefined): boolean {
return Array.isArray(schema?.anyOf) && schema!.anyOf.length > 0;
return (
Array.isArray(schema?.anyOf) &&
schema!.anyOf.length > 0 &&
schema?.enum === undefined
);
}
export const isAnyOfChild = (
@@ -33,3 +37,21 @@ export function isOptionalType(schema: RJSFSchema | undefined): {
export function isAnyOfSelector(name: string) {
return name.includes("anyof_select");
}
export function isMultiSelectSchema(schema: RJSFSchema | undefined): boolean {
if (typeof schema !== "object" || schema === null) {
return false;
}
if ("anyOf" in schema || "oneOf" in schema) {
return false;
}
return !!(
schema.type === "object" &&
schema.properties &&
Object.values(schema.properties).every(
(prop: any) => prop.type === "boolean",
)
);
}

View File

@@ -1,16 +0,0 @@
window.MathJax = {
tex: {
inlineMath: [["\\(", "\\)"]],
displayMath: [["\\[", "\\]"]],
processEscapes: true,
processEnvironments: true
},
options: {
ignoreHtmlClass: ".*|",
processHtmlClass: "arithmatex"
}
};
document$.subscribe(() => {
MathJax.typesetPromise()
})

View File

@@ -1,6 +0,0 @@
document$.subscribe(function () {
var tables = document.querySelectorAll("article table:not([class])")
tables.forEach(function (table) {
new Tablesort(table)
})
})

View File

@@ -1,48 +1,35 @@
# Contributing to the Docs
We welcome contributions to our documentation! If you would like to contribute, please follow the steps below.
We welcome contributions to our documentation! Our docs are hosted on GitBook and synced with GitHub.
## Setting up the Docs
## How It Works
1. Clone the repository:
- Documentation lives in the `docs/` directory on the `gitbook` branch
- GitBook automatically syncs changes from GitHub
- You can edit docs directly on GitHub or locally
## Editing Docs Locally
1. Clone the repository and switch to the gitbook branch:
```shell
git clone github.com/Significant-Gravitas/AutoGPT.git
git clone https://github.com/Significant-Gravitas/AutoGPT.git
cd AutoGPT
git checkout gitbook
```
1. Install the dependencies:
2. Make your changes to markdown files in `docs/`
```shell
python -m pip install -r docs/requirements.txt
```
3. Preview changes:
- Push to a branch and create a PR - GitBook will generate a preview
- Or use any markdown preview tool locally
or
## Adding a New Page
```shell
python3 -m pip install -r docs/requirements.txt
```
1. Start iterating using mkdocs' live server:
```shell
mkdocs serve
```
1. Open your browser and navigate to `http://127.0.0.1:8000`.
1. The server will automatically reload the docs when you save your changes.
## Adding a new page
1. Create a new markdown file in the `docs/content` directory.
1. Add the new page to the `nav` section in the `mkdocs.yml` file.
1. Add the content to the new markdown file.
1. Run `mkdocs serve` to see your changes.
## Checking links
To check for broken links in the documentation, run `mkdocs build` and look for warnings in the console output.
1. Create a new markdown file in the appropriate `docs/` subdirectory
2. Add the new page to the relevant `SUMMARY.md` file to include it in the navigation
3. Submit a pull request to the `gitbook` branch
## Submitting a Pull Request
When you're ready to submit your changes, please create a pull request. We will review your changes and merge them if they are appropriate.
When you're ready to submit your changes, create a pull request targeting the `gitbook` branch. We will review your changes and merge them if appropriate.

View File

@@ -1,194 +0,0 @@
site_name: AutoGPT Documentation
site_url: https://docs.agpt.co/
repo_url: https://github.com/Significant-Gravitas/AutoGPT
repo_name: AutoGPT
edit_uri: edit/master/docs/content
docs_dir: content
nav:
- Home: index.md
- The AutoGPT Platform 🆕:
- Getting Started:
- Setup AutoGPT (Local-Host): platform/getting-started.md
- Edit an Agent: platform/edit-agent.md
- Delete an Agent: platform/delete-agent.md
- Download & Import an Agent: platform/download-agent-from-marketplace-local.md
- Create a Basic Agent: platform/create-basic-agent.md
- Submit an Agent to the Marketplace: platform/submit-agent-to-marketplace.md
- Advanced Setup: platform/advanced_setup.md
- Agent Blocks: platform/agent-blocks.md
- Build your own Blocks: platform/new_blocks.md
- Block SDK Guide: platform/block-sdk-guide.md
- Using Ollama: platform/ollama.md
- Using AI/ML API: platform/aimlapi.md
- Using D-ID: platform/d_id.md
- Blocks: platform/blocks/blocks.md
- API:
- Introduction: platform/integrating/api-guide.md
- OAuth & SSO: platform/integrating/oauth-guide.md
- Contributing:
- Tests: platform/contributing/tests.md
- OAuth Flows: platform/contributing/oauth-integration-flow.md
- AutoGPT Classic:
- Introduction: classic/index.md
- Setup:
- Setting up AutoGPT: classic/setup/index.md
- Set up with Docker: classic/setup/docker.md
- For Developers: classic/setup/for-developers.md
- Configuration:
- Options: classic/configuration/options.md
- Search: classic/configuration/search.md
- Voice: classic/configuration/voice.md
- Usage: classic/usage.md
- Help us improve AutoGPT:
- Share your debug logs with us: classic/share-your-logs.md
- Contribution guide: contributing.md
- Running tests: classic/testing.md
- Code of Conduct: code-of-conduct.md
- Benchmark:
- Readme: https://github.com/Significant-Gravitas/AutoGPT/blob/master/classic/benchmark/README.md
- Forge:
- Introduction: forge/get-started.md
- Components:
- Introduction: forge/components/introduction.md
- Agents: forge/components/agents.md
- Components: forge/components/components.md
- Protocols: forge/components/protocols.md
- Commands: forge/components/commands.md
- Built in Components: forge/components/built-in-components.md
- Creating Components: forge/components/creating-components.md
- Frontend:
- Readme: https://github.com/Significant-Gravitas/AutoGPT/blob/master/classic/frontend/README.md
- Contribute:
- Introduction: contribute/index.md
- Testing: ../../autogpt_platform/backend/TESTING.md
# - Challenges:
# - Introduction: challenges/introduction.md
# - List of Challenges:
# - Memory:
# - Introduction: challenges/memory/introduction.md
# - Memory Challenge A: challenges/memory/challenge_a.md
# - Memory Challenge B: challenges/memory/challenge_b.md
# - Memory Challenge C: challenges/memory/challenge_c.md
# - Memory Challenge D: challenges/memory/challenge_d.md
# - Information retrieval:
# - Introduction: challenges/information_retrieval/introduction.md
# - Information Retrieval Challenge A: challenges/information_retrieval/challenge_a.md
# - Information Retrieval Challenge B: challenges/information_retrieval/challenge_b.md
# - Submit a Challenge: challenges/submit.md
# - Beat a Challenge: challenges/beat.md
- License: https://github.com/Significant-Gravitas/AutoGPT/blob/master/LICENSE
theme:
name: material
custom_dir: overrides
language: en
icon:
repo: fontawesome/brands/github
logo: material/book-open-variant
edit: material/pencil
view: material/eye
favicon: assets/favicon.png
features:
- navigation.sections
- navigation.footer
- navigation.top
- navigation.tracking
- navigation.tabs
# - navigation.path
- toc.follow
- toc.integrate
- content.action.edit
- content.action.view
- content.code.copy
- content.code.annotate
- content.tabs.link
palette:
# Palette toggle for light mode
- media: "(prefers-color-scheme: light)"
scheme: default
toggle:
icon: material/weather-night
name: Switch to dark mode
# Palette toggle for dark mode
- media: "(prefers-color-scheme: dark)"
scheme: slate
toggle:
icon: material/weather-sunny
name: Switch to light mode
markdown_extensions:
# Python Markdown
- abbr
- admonition
- attr_list
- def_list
- footnotes
- md_in_html
- toc:
permalink: true
- tables
# Python Markdown Extensions
- pymdownx.arithmatex:
generic: true
- pymdownx.betterem:
smart_enable: all
- pymdownx.critic
- pymdownx.caret
- pymdownx.details
- pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:material.extensions.emoji.to_svg
- pymdownx.highlight
- pymdownx.inlinehilite
- pymdownx.keys
- pymdownx.mark
- pymdownx.smartsymbols
- pymdownx.snippets:
base_path: ['.','../']
check_paths: true
dedent_subsections: true
- pymdownx.superfences:
custom_fences:
- name: mermaid
class: mermaid
format: !!python/name:pymdownx.superfences.fence_code_format
- pymdownx.tabbed:
alternate_style: true
- pymdownx.tasklist:
custom_checkbox: true
- pymdownx.tilde
plugins:
- table-reader
- search
- git-revision-date-localized:
enable_creation_date: true
extra:
social:
- icon: fontawesome/brands/github
link: https://github.com/Significant-Gravitas/AutoGPT
- icon: fontawesome/brands/x-twitter
link: https://x.com/Auto_GPT
- icon: fontawesome/brands/instagram
link: https://www.instagram.com/autogpt/
- icon: fontawesome/brands/discord
link: https://discord.gg/autogpt
extra:
analytics:
provider: google
property: G-XKPNKB9XG6
extra_javascript:
- https://unpkg.com/tablesort@5.3.0/dist/tablesort.min.js
- _javascript/tablesort.js
- _javascript/mathjax.js
- https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js?features=es6
- https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js


@@ -1,5 +0,0 @@
# Netlify config for AutoGPT docs
[build]
publish = "public/"
command = "mkdocs build -d public"

Binary file not shown (image removed, 47 KiB).

@@ -1,61 +0,0 @@
{% extends "base.html" %}
{% block extrahead %}
<style>
  svg.ms-w-12 {
    width: 2rem !important;
  }
  svg.ms-h-12 {
    height: 2rem !important;
  }
  /* Temporary solution for the font sizes */
  #headlessui-dialog-panel-2 p {
    font-size: 16px;
  }
  #headlessui-dialog-panel-2 textarea {
    font-size: 16px;
  }
  #headlessui-dialog-panel-2 span {
    font-size: 14px;
  }
  #headlessui-dialog-panel-2 div {
    font-size: 16px;
  }
  #headlessui-dialog-panel-2 a {
    font-size: 14px;
    line-height: 1rem;
  }
  #headlessui-dialog-panel-2 button {
    border: 1px solid #0000001d;
    line-height: 1rem;
  }
  /* Temporary solution for the font size */
  #headlessui-dialog-panel-2 > div > div > div > div.ms-w-full.ms-rounded-xl.ms-flex.ms-flex-col.ms-relative > div > div > div > div > div.ms-flex.ms-flex-row.ms-items-center.ms-justify-between.ms-gap-1 > p {
    font-size: 14px;
    line-height: 1rem;
  }
</style>
<div id="my-component-root" class="mendable-search"></div>
<script src="https://unpkg.com/@mendable/search@0.0.155/dist/umd/mendable-bundle.min.js"></script>
{% raw %}
<script>
  Mendable.initialize({
    anon_key: 'a0bd44db-eb3b-412f-8924-b31c58244a64',
    type: 'floatingButton',
    style: { accentColor: '#3F51B5', darkMode: false },
    floatingButtonStyle: {
      backgroundColor: '#3F51B5',
      color: '#fff',
    },
    icon: "https://mendable-storage.s3.amazonaws.com/autoGPT.gif",
    botIcon: "https://mendable-storage.s3.amazonaws.com/autoGPTbot.gif",
  })
</script>
{% endraw %}
{% endblock %}

docs/platform/SUMMARY.md (new file, 34 lines)

@@ -0,0 +1,34 @@
# Table of contents
* [What is the AutoGPT Platform?](what-is-autogpt-platform.md)
## Getting Started
* [Setting Up Auto-GPT (Local Host)](getting-started.md)
* [AutoGPT Platform Installer](installer.md)
* [Edit an Agent](edit-agent.md)
* [Delete an Agent](delete-agent.md)
* [Download & Import an Agent](download-agent-from-marketplace-local.md)
* [Create a Basic Agent](create-basic-agent.md)
* [Submit an Agent to the Marketplace](submit-agent-to-marketplace.md)
## Advanced Setup
* [Advanced Setup](advanced_setup.md)
## Building Blocks
* [Agent Blocks Overview](agent-blocks.md)
* [Build your own Blocks](new_blocks.md)
* [Block SDK Guide](block-sdk-guide.md)
## Using AI Services
* [Using Ollama](ollama.md)
* [Using AI/ML API](aimlapi.md)
* [Using D-ID](d_id.md)
## API & Integrations
* [API Introduction](integrating/api-guide.md)
* [OAuth & SSO](integrating/oauth-guide.md)


@@ -227,7 +227,7 @@ backend/blocks/my_provider/
 ## Best Practices
-1. **Error Handling**: Error output pin is already defined on BlockSchemaOutput
+1. **Error Handling**: Use `BlockInputError` for validation failures and `BlockExecutionError` for runtime errors (import from `backend.util.exceptions`). These inherit from `ValueError` so the executor treats them as user-fixable. See [Error Handling in new_blocks.md](new_blocks.md#error-handling) for details.
 2. **Credentials**: Use the provider's `credentials_field()` method
 3. **Validation**: Use SchemaField constraints (ge, le, min_length, etc.)
 4. **Categories**: Choose appropriate categories for discoverability
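To make the error-handling item above concrete, here is a minimal, hypothetical sketch of the convention this hunk describes: `BlockInputError` for user-fixable validation problems and `BlockExecutionError` for runtime failures, both imported from `backend.util.exceptions`. Only the exception classes and their import path come from the diff; the block class, input shape, and `_call_provider` helper are illustrative stand-ins, not the actual block API.

```python
# Hypothetical sketch only: the two exception classes and their import path
# come from the diff above; everything else here is an illustrative stand-in.
from backend.util.exceptions import BlockExecutionError, BlockInputError


class MyProviderBlock:  # stand-in; real blocks subclass the SDK's Block class
    def run(self, input_data, **kwargs):
        # User-fixable validation problem -> BlockInputError (a ValueError)
        if not input_data.url.startswith("https://"):
            raise BlockInputError("url must be an https:// address")
        try:
            result = self._call_provider(input_data.url)  # hypothetical helper
        except TimeoutError as exc:
            # Failure while executing the block -> BlockExecutionError
            raise BlockExecutionError(f"provider request timed out: {exc}") from exc
        yield "result", result

    def _call_provider(self, url: str) -> dict:
        raise NotImplementedError  # placeholder for a real provider API call
```

Because both exceptions inherit from `ValueError`, the executor can surface them as user-correctable errors rather than platform faults.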

Some files were not shown because too many files have changed in this diff.