Compare commits

...

17 Commits

Author SHA1 Message Date
Zamil Majdy
2a55923ec0 Merge dev to get GraphSettings fix 2026-01-21 09:31:17 -05:00
Zamil Majdy
b714c0c221 fix(backend): handle null values in GraphSettings validation (#11812)
## Summary
- Fixes AUTOGPT-SERVER-76H - Error parsing LibraryAgent from database
due to null values in GraphSettings fields
- When parsing LibraryAgent settings from the database, null values for
`human_in_the_loop_safe_mode` and `sensitive_action_safe_mode` were
causing Pydantic validation errors
- Adds `BeforeValidator` annotations to coerce null values to their
defaults (True and False respectively)
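
A minimal sketch of the approach, assuming Pydantic v2 (the field names and defaults come from this commit; the `_none_to` helper is illustrative):

```python
from typing import Annotated

from pydantic import BaseModel, BeforeValidator


def _none_to(default: bool):
    # Coerce a null/None value loaded from the database to the field default.
    def coerce(value):
        return default if value is None else value

    return coerce


class GraphSettings(BaseModel):
    human_in_the_loop_safe_mode: Annotated[bool, BeforeValidator(_none_to(True))] = True
    sensitive_action_safe_mode: Annotated[bool, BeforeValidator(_none_to(False))] = False


# null values from the DB no longer raise a validation error:
assert GraphSettings(human_in_the_loop_safe_mode=None).human_in_the_loop_safe_mode is True
assert GraphSettings(sensitive_action_safe_mode=None).sensitive_action_safe_mode is False
```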

## Test plan
- [x] Verified with unit tests that GraphSettings can now handle
None/null values
- [x] Backend tests pass
- [x] Manually tested with all scenarios (None, empty dict, explicit
values)
2026-01-21 08:40:38 -05:00
Krzysztof Czerwinski
ebabc4287e feat(platform): New LLM Picker UI (#11726)
Add new LLM Picker for the new Builder.

### Changes 🏗️

- Enrich `LlmModelMeta` (in `llm.py`) with human-readable model, creator,
and provider names plus a price tier (note: this is a temporary measure;
all `LlmModelMeta` will be removed completely once the LLM Registry is ready)
- Add provider icons
- Add custom input field `LlmModelField` and its components & helpers

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] LLM model picker works correctly in the new Builder
  - [x] Legacy LLM model picker works in the old Builder
2026-01-21 10:52:55 +00:00
Zamil Majdy
ad50f57a2b chore: add migration for nodeId field in PendingHumanReview
Adds a database migration for the nodeId column, which tracks
the node ID in the graph definition for auto-approval tracking.
2026-01-20 23:03:03 -05:00
Zamil Majdy
aebd961ef5 fix: implement node-specific auto-approval for human reviews
Instead of disabling all safe modes when approving all future actions,
the executor now tracks the specific node IDs that should be auto-approved.
This means clicking "Approve all future actions" will only auto-approve
future reviews from the same blocks, not all reviews.

Changes:
- Add nodeId field to PendingHumanReview schema
- Add auto_approved_node_ids set to ExecutionContext
- Update review helper to check auto_approved_node_ids
- Change API from disable_future_reviews to auto_approve_node_ids
- Update frontend to pass node_ids when bulk approving
- Address PR feedback: remove barrel file, JSDoc comments, and cleanup
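
A minimal sketch of the node-scoped check, using the names this commit lists (`auto_approved_node_ids` on `ExecutionContext`); the helper function itself is hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class ExecutionContext:
    # Node IDs the user bulk-approved for the remainder of this run.
    auto_approved_node_ids: set[str] = field(default_factory=set)


def needs_human_review(ctx: ExecutionContext, node_id: str, safe_mode: bool) -> bool:
    # Hypothetical review-helper logic: pause unless safe mode is off or
    # this exact node was already approved via "Approve all future actions".
    if not safe_mode:
        return False
    return node_id not in ctx.auto_approved_node_ids
```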
2026-01-20 22:15:51 -05:00
Zamil Majdy
bcccaa16cc fix: remove unused props from AIAgentSafetyPopup component
Removes hasSensitiveAction and hasHumanInTheLoop props that were only
used by the hook, not the component itself, fixing an ESLint unused-vars error.
2026-01-20 21:05:39 -05:00
Zamil Majdy
d5ddc41b18 feat: add bulk approval option for human reviews
Add "Approve all future actions" button to the review UI that:
- Approves all current pending reviews
- Disables safe mode for the remainder of the execution run
- Shows helper text about turning auto-approval on/off in settings

Backend changes:
- Add disable_future_reviews flag to ReviewRequest model
- Pass ExecutionContext with disabled safe modes when resuming

Frontend changes:
- Add "Approve all future actions" button to PendingReviewsList
- Include helper text per PRD requirements

Implements SECRT-1795
2026-01-20 20:45:50 -05:00
Zamil Majdy
95eab5b7eb feat: add one-time safety popup for AI-generated agent runs
Show a one-time safety popup the first time a user runs an agent with
sensitive actions or human-in-the-loop blocks. The popup explains that
agents may take real-world actions and that safety checks are enabled.

- Add AI_AGENT_SAFETY_POPUP_SHOWN localStorage key
- Create AIAgentSafetyPopup component with hook
- Integrate popup into RunAgentModal before first run

Implements SECRT-1798
2026-01-20 20:40:18 -05:00
Zamil Majdy
832d6e1696 fix: correct safe mode checks for sensitive action blocks
- Add skip_safe_mode_check parameter to HITLReviewHelper to avoid
  checking the wrong safe mode flag for sensitive action blocks
- Simplify SafeModeToggle and FloatingSafeModeToggle by removing
  unnecessary intermediate variables and isHITLStateUndetermined checks
2026-01-20 20:33:55 -05:00
Zamil Majdy
8b25e62959 feat(backend,frontend): add explicit safe mode toggles for HITL and sensitive actions (#11756)
## Summary

This PR introduces two explicit safe mode toggles for controlling agent
execution behavior, providing clearer and more granular control over
when agents should pause for human review.

### Key Changes

**New Safe Mode Settings:**
- **`human_in_the_loop_safe_mode`** (bool, default `true`) - Controls
whether human-in-the-loop (HITL) blocks pause for review
- **`sensitive_action_safe_mode`** (bool, default `false`) - Controls
whether sensitive action blocks pause for review

**New Computed Properties on LibraryAgent:**
- `has_human_in_the_loop` - Indicates if agent contains HITL blocks
- `has_sensitive_action` - Indicates if agent contains sensitive action
blocks

**Block Changes:**
- Renamed `requires_human_review` to `is_sensitive_action` on blocks for
clarity
- Blocks marked as `is_sensitive_action=True` pause only when
`sensitive_action_safe_mode=True`
- HITL blocks pause when `human_in_the_loop_safe_mode=True`
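
A minimal sketch of the pause decision these rules describe (the function name and signature are illustrative):

```python
def should_pause_for_review(
    is_hitl_block: bool,
    is_sensitive_action: bool,
    human_in_the_loop_safe_mode: bool,
    sensitive_action_safe_mode: bool,
) -> bool:
    # HITL blocks pause whenever HITL safe mode is enabled (default: True).
    if is_hitl_block and human_in_the_loop_safe_mode:
        return True
    # Sensitive-action blocks pause only when their own toggle is enabled
    # (default: False; AI-generated agents default it to True).
    if is_sensitive_action and sensitive_action_safe_mode:
        return True
    return False
```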

**Frontend Changes:**
- Two separate toggles in Agent Settings based on block types present
- Toggle visibility based on `has_human_in_the_loop` and
`has_sensitive_action` computed properties
- Settings cog hidden if neither toggle applies
- Proper state management for both toggles with defaults

**AI-Generated Agent Behavior:**
- AI-generated agents set `sensitive_action_safe_mode=True` by default
- This ensures sensitive actions are reviewed for AI-generated content

## Changes

**Backend:**
- `backend/data/graph.py` - Updated `GraphSettings` with two boolean
toggles (non-optional with defaults), added `has_sensitive_action`
computed property
- `backend/data/block.py` - Renamed `requires_human_review` to
`is_sensitive_action`, updated review logic
- `backend/data/execution.py` - Updated `ExecutionContext` with both
safe mode fields
- `backend/api/features/library/model.py` - Added
`has_human_in_the_loop` and `has_sensitive_action` to `LibraryAgent`
- `backend/api/features/library/db.py` - Updated to use
`sensitive_action_safe_mode` parameter
- `backend/executor/utils.py` - Simplified execution context creation

**Frontend:**
- `useAgentSafeMode.ts` - Rewritten to support two independent toggles
- `AgentSettingsModal.tsx` - Shows two separate toggles
- `SelectedSettingsView.tsx` - Shows two separate toggles
- Regenerated API types with new schema

## Test Plan

- [x] All backend tests pass (Python 3.11, 3.12, 3.13)
- [x] All frontend tests pass
- [x] Backend format and lint pass
- [x] Frontend format and lint pass
- [x] Pre-commit hooks pass

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-01-21 00:56:02 +00:00
Zamil Majdy
35a13e3df5 fix(backend): Use explicit schema qualification for pgvector types (#11805)
## Summary
- Fix intermittent "type 'vector' does not exist" errors when using
PgBouncer in transaction mode
- The issue was that `SET search_path` and the actual query could run on
different backend connections
- Use explicit schema qualification (`{schema}.vector`,
`OPERATOR({schema}.<=>)`) instead of relying on search_path
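
A minimal sketch of the qualified query construction, assuming the `platform` schema; the table and column names are illustrative. Because PgBouncer in transaction mode can run `SET search_path` and the query on different backend connections, the type cast and operator are qualified inline instead:

```python
def build_similarity_query(schema: str = "platform") -> str:
    # Qualify the vector type and the <=> operator explicitly so the query
    # does not depend on search_path surviving a PgBouncer connection swap.
    return f"""
        SELECT id, embedding OPERATOR({schema}.<=>) $1::{schema}.vector AS distance
        FROM {schema}.embeddings
        ORDER BY distance
        LIMIT 10
    """
```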

## Test plan
- [x] Tested vector type cast on local: `'[1,2,3]'::platform.vector`
works
- [x] Tested OPERATOR syntax on local: `OPERATOR(platform.<=>)` works
- [x] Tested on dev via kubectl exec: both work correctly
- [ ] Deploy to dev and verify backfill_missing_embeddings endpoint no
longer errors

## Related Issues
Fixes: AUTOGPT-SERVER-763, AUTOGPT-SERVER-764

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 22:18:16 +00:00
Mewael Tsegay Desta
2169b433c9 feat(backend/blocks): add ConcatenateListsBlock (#11567)
# feat(backend/blocks): add ConcatenateListsBlock

## Description

This PR implements a new block `ConcatenateListsBlock` that concatenates
multiple lists into a single list. This addresses the "good first issue"
for implementing a list concatenation block in the platform/blocks area.

The block takes a list of lists as input and combines all elements in
order into a single concatenated list. This is useful for workflows that
need to merge data from multiple sources or combine results from
different operations.

### Changes 🏗️

- **Added `ConcatenateListsBlock` class** in `autogpt_platform/backend/backend/blocks/data_manipulation.py`
  - Input: `lists: List[List[Any]]` - accepts a list of lists to concatenate
  - Output: `concatenated_list: List[Any]` - returns a single concatenated list
  - Error output: `error: str` - provides clear error messages for invalid input types
  - Block ID: `3cf9298b-5817-4141-9d80-7c2cc5199c8e`
  - Category: `BlockCategory.BASIC` (consistent with other list manipulation blocks)
  
- **Added comprehensive test suite** in `autogpt_platform/backend/test/blocks/test_concatenate_lists.py`
  - Tests using built-in `test_input`/`test_output` validation
  - Manual test cases covering edge cases (empty lists, single list, empty input)
  - Error handling tests for invalid input types
  - Category consistency verification
  - All tests passing

- **Implementation details:**
  - Uses `extend()` method for efficient list concatenation
  - Preserves element order from all input lists
  - **Runtime type validation**: explicitly checks `isinstance(lst, list)` before calling `extend()` to prevent:
    - Strings being iterated character-by-character (e.g., `extend("abc")` → `['a', 'b', 'c']`)
    - Non-iterable types causing `TypeError` (e.g., `extend(1)`)
  - Clear error messages indicating which index has invalid input
  - Handles edge cases: empty lists, empty input, single list, None values
  - Follows existing block patterns and conventions
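
A condensed sketch of the core logic described above (a standalone function; the real block wraps this in the platform's block schema and emits the `error` output instead of raising):

```python
from typing import Any


def concatenate_lists(lists: list[list[Any]]) -> list[Any]:
    concatenated: list[Any] = []
    for index, item in enumerate(lists):
        if item is None:
            # None entries are skipped rather than treated as errors.
            continue
        if not isinstance(item, list):
            # Reject strings/non-iterables before extend() so "abc" is not
            # spread into ['a', 'b', 'c'] and ints do not raise TypeError.
            raise ValueError(f"Input at index {index} is not a list: {type(item).__name__}")
        concatenated.extend(item)  # preserves element order across inputs
    return concatenated


assert concatenate_lists([[1, 2, 3], [4, 5, 6]]) == [1, 2, 3, 4, 5, 6]
assert concatenate_lists([[1, 2], []]) == [1, 2]
```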

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run `poetry run pytest test/blocks/test_concatenate_lists.py -v` - all tests pass
  - [x] Verified block can be imported and instantiated
  - [x] Tested with built-in test cases (4 test scenarios)
  - [x] Tested manual edge cases (empty lists, single list, empty input)
  - [x] Tested error handling for invalid input types
  - [x] Verified category is `BASIC` for consistency
  - [x] Verified no linting errors
  - [x] Confirmed block follows same patterns as other blocks in `data_manipulation.py`

#### Code Quality:
- [x] Code follows existing patterns and conventions
- [x] Type hints are properly used
- [x] Documentation strings are clear and descriptive
- [x] Runtime type validation implemented
- [x] Error handling with clear error messages
- [x] No linting errors
- [x] Prisma client generated successfully

### Testing

**Test Results:**
```
test/blocks/test_concatenate_lists.py::test_concatenate_lists_block_builtin_tests PASSED
test/blocks/test_concatenate_lists.py::test_concatenate_lists_manual PASSED

============================== 2 passed in 8.35s ==============================
```

**Test Coverage:**
- Basic concatenation: `[[1, 2, 3], [4, 5, 6]]` → `[1, 2, 3, 4, 5, 6]`
- Mixed types: `[["a", "b"], ["c"], ["d", "e", "f"]]` → `["a", "b", "c",
"d", "e", "f"]`
- Empty list handling: `[[1, 2], []]` → `[1, 2]`
- Empty input: `[]` → `[]`
- Single list: `[[1, 2, 3]]` → `[1, 2, 3]`
- Error handling: Invalid input types (strings, non-lists) produce clear
error messages
- Category verification: Confirmed `BlockCategory.BASIC` for consistency

### Review Feedback Addressed

- **Category Consistency**: Changed from `BlockCategory.DATA` to
`BlockCategory.BASIC` to match other list manipulation blocks
(`AddToListBlock`, `FindInListBlock`, etc.)
- **Type Robustness**: Added explicit runtime validation with
`isinstance(lst, list)` check before calling `extend()` to prevent:
  - Strings being iterated character-by-character
  - Non-iterable types causing `TypeError`
- **Error Handling**: Added `error` output field with clear, descriptive
error messages indicating which index has invalid input
- **Test Coverage**: Added test case for error handling with invalid
input types

### Related Issues

- Addresses: "Implement block to concatenate lists" (good first issue,
platform/blocks, hacktoberfest)

### Notes

- This is a straightforward data manipulation block that doesn't require
external dependencies
- The block will be automatically discovered by the block loading system
- No database or configuration changes required
- Compatible with existing workflow system
- All review feedback has been addressed and incorporated


---

> [!NOTE]
> Adds a new list utility and updates docs.
> 
> - **New block**: `ConcatenateListsBlock` in
`backend/blocks/data_manipulation.py`
> - Input `lists: List[List[Any]]`; outputs `concatenated_list` or
`error`
> - Skips `None` entries; emits error for non-list items; preserves
order
> - **Docs**: Adds "Concatenate Lists" section to
`docs/integrations/basic.md` and links it in
`docs/integrations/README.md`
> - **Contributor guide**: New `docs/CLAUDE.md` with manual doc section
guidelines
> 

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 18:04:12 +00:00
Nicholas Tindle
fa0b7029dd fix(platform): make chat credentials type selection deterministic (#11795)
## Background

When using chat to run blocks/agents that support multiple credential
types (e.g., GitHub blocks support both `api_key` and `oauth2`), users
reported that the credentials setup UI would randomly show either "Add
API key" or "Connect account (OAuth)" - seemingly at random between
requests or server restarts.

## Root Cause

The bug was in how the backend selected which credential type to return
when building the missing credentials response:

```python
cred_type = next(iter(field_info.supported_types), "api_key")
```

The problem is that `supported_types` is a **frozenset**. When you call
`iter()` on a frozenset and take `next()`, the iteration order is
**non-deterministic** due to Python's hash randomization. This means:
- `frozenset({'api_key', 'oauth2'})` could iterate as either
`['api_key', 'oauth2']` or `['oauth2', 'api_key']`
- The order varies between Python process restarts and sometimes between
requests
- This caused the UI to randomly show different credential options
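
A minimal sketch of the deterministic selection the fix applies (the helper name below is illustrative; per the Changes section, the PR's `_serialize_missing_credential()` does the sorting):

```python
def pick_credential_types(supported_types: frozenset[str]) -> tuple[str, list[str]]:
    # sorted() yields a stable order regardless of hash randomization,
    # unlike next(iter(frozenset)).
    types = sorted(supported_types)
    # "type" (first entry) keeps backwards compatibility; "types" carries all.
    return (types[0] if types else "api_key"), types


assert pick_credential_types(frozenset({"oauth2", "api_key"})) == ("api_key", ["api_key", "oauth2"])
```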

### Changes 🏗️

**Backend (`utils.py`, `run_block.py`, `run_agent.py`):**
- Added `_serialize_missing_credential()` helper that uses `sorted()`
for deterministic ordering
- Added `build_missing_credentials_from_graph()` and
`build_missing_credentials_from_field_info()` utilities
- Now returns both `type` (first sorted type, for backwards compat) and
`types` (full array with ALL supported types)

**Frontend (`helpers.ts`, `ChatCredentialsSetup.tsx`,
`useChatMessage.ts`):**
- Updated to read the `types` array from backend response
- Changed `credentialType` (single) to `credentialTypes` (array)
throughout the chat credentials flow
- Passes all supported types to `CredentialsInput` via
`credentials_types` schema field

### Result

Now `useCredentials.ts` correctly sets both `supportsApiKey=true` AND
`supportsOAuth2=true` when both are supported, ensuring:
1. **Deterministic behavior** - no more random type selection
2. **All saved credentials shown** - credentials of any supported type
appear in the selection list

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified GitHub block shows consistent credential options across
page reloads
- [x] Verified both OAuth and API key credentials appear in selection
when user has both saved
- [x] Verified backend returns `types: ["api_key", "oauth2"]` array
(checked via Python REPL)

---

> [!NOTE]
> Ensures deterministic credential type selection and surfaces all
supported types end-to-end.
> 
> - Backend: add `_serialize_missing_credential`,
`build_missing_credentials_from_graph/field_info`;
`run_agent`/`run_block` now return missing credentials with stable
ordering and both `type` (first) and `types` (all).
> - Frontend: chat helpers and UI (`helpers.ts`,
`ChatCredentialsSetup.tsx`, `useChatMessage.ts`) now read `types`,
switch from single `credentialType` to `credentialTypes`, and pass all
supported `credentials_types` in schemas.
> 

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-20 16:19:57 +00:00
Abhimanyu Yadav
c20ca47bb0 feat(frontend): enhance RunGraph and RunInputDialog components with loading states and improved UI (#11808)
### Changes 🏗️

- Enhanced UI for the Run Graph button with improved loading states and
animations
- Added color-coded edges in the flow editor based on output data types
- Improved the layout of the Run Input Dialog with a two-column grid
design
- Refined the styling of flow editor controls with consistent icon sizes
and colors
- Updated tutorial icons with better color and size customization
- Fixed credential field display to show provider name with "credential"
suffix
- Optimized draft saving by excluding node position changes to prevent
excessive saves when dragging nodes

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified that the Run Graph button shows proper loading states
  - [x] Confirmed that edges display correct colors based on data types
- [x] Tested the Run Input Dialog layout with various input
configurations
  - [x] Checked that flow editor controls display consistently
  - [x] Verified that tutorial icons render properly
  - [x] Confirmed credential fields show proper provider names
- [x] Tested that dragging nodes doesn't trigger unnecessary draft saves
2026-01-20 15:50:23 +00:00
Abhimanyu Yadav
7756e2d12d refactor(frontend): refactor credentials input with unified CredentialsGroupedView component (#11801)
### Changes 🏗️

- Refactored the credentials input handling in the RunInputDialog to use
the shared CredentialsGroupedView component
- Moved CredentialsGroupedView from agent library to a shared component
location for reuse
- Fixed source name handling in edge creation to properly handle tool
source names
- Improved node output UI by replacing custom expand/collapse with
Accordion component
- Fixed timing of hardcoded values synchronization with handle IDs to
ensure proper loading
- Enabled NEW_FLOW_EDITOR and BUILDER_VIEW_SWITCH feature flags by
default

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified credentials input works in both agent run dialog and
builder run dialog
  - [x] Confirmed node output accordion works correctly
- [x] Tested flow editor with tools to ensure source name handling works
properly
  - [x] Verified hardcoded values sync correctly with handle IDs

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2026-01-20 12:20:25 +00:00
Swifty
bc75d70e7d refactor(backend): Improve Langfuse tracing with v3 SDK patterns and @observe decorators (#11803)

This PR improves the Langfuse tracing implementation in the chat feature
by adopting the v3 SDK patterns, resulting in cleaner code and better
observability.

### Changes 🏗️

- **Simplified Langfuse client usage**: Replace manual client
initialization with `langfuse.get_client()` global singleton
- **Use v3 context managers**: Switch to
`start_as_current_observation()` and `propagate_attributes()` for
automatic trace propagation
- **Auto-instrument OpenAI calls**: Use `langfuse.openai` wrapper for
automatic LLM call tracing instead of manual generation tracking
- **Add `@observe` decorators**: All chat tools now have
`@observe(as_type="tool")` decorators for automatic tool execution
tracing:
  - `add_understanding`
  - `view_agent_output` (renamed from `agent_output`)
  - `create_agent`
  - `edit_agent`
  - `find_agent`
  - `find_block`
  - `find_library_agent`
  - `get_doc_page`
  - `run_agent`
  - `run_block`
  - `search_docs`
- **Remove manual trace lifecycle**: Eliminated the verbose `finally`
block that manually ended traces/generations
- **Rename tool**: `agent_output` → `view_agent_output` for clarity
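
A condensed sketch of the v3 patterns adopted here (the imports and call shapes match the diff further down; tool bodies are elided):

```python
from langfuse import get_client, observe, propagate_attributes

langfuse = get_client()  # global singleton replaces manual client construction


@observe(as_type="tool", name="find_block")
async def find_block(query: str) -> str:
    ...  # traced automatically as a nested tool observation


def trace_request(session_id: str, user_id: str, message: str) -> None:
    # Root span for the request; attributes propagate to child observations,
    # so no manual trace/generation lifecycle management is needed.
    with langfuse.start_as_current_observation(
        as_type="span", name="user-copilot-request", input=message
    ):
        with propagate_attributes(session_id=session_id, user_id=user_id, tags=["copilot"]):
            ...  # OpenAI calls made via langfuse.openai are auto-instrumented
```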

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified chat feature works with Langfuse tracing enabled
- [x] Confirmed traces appear correctly in Langfuse dashboard with tool
spans
  - [x] Tested tool execution flows show up as nested observations

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - uses existing Langfuse environment
variables.
2026-01-19 20:56:51 +00:00
Nicholas Tindle
c1a1767034 feat(docs): Add block documentation auto-generation system (#11707)
- Add generate_block_docs.py script that introspects block code to
generate markdown
- Support manual content preservation via <!-- MANUAL: --> markers
- Add migrate_block_docs.py to preserve existing manual content from git
HEAD
- Add CI workflow (docs-block-sync.yml) to fail if docs drift from code
- Add Claude PR review workflow (docs-claude-review.yml) for doc changes
- Add manual LLM enhancement workflow (docs-enhance.yml)
- Add GitBook configuration (.gitbook.yaml, SUMMARY.md)
- Fix non-deterministic category ordering (categories is a set)
- Add comprehensive test suite (32 tests)
- Generate docs for 444 blocks with 66 preserved manual sections

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>


### Changes 🏗️


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Extensively test code generation for the docs pages



---

> [!NOTE]
> Introduces an automated documentation pipeline for blocks and
integrates it into CI.
> 
> - Adds `scripts/generate_block_docs.py` (+ tests) to introspect blocks
and generate `docs/integrations/**`, preserving `<!-- MANUAL: -->`
sections
> - New CI workflows: **docs-block-sync** (fails if docs drift),
**docs-claude-review** (AI review for block/docs PRs), and
**docs-enhance** (optional LLM improvements)
> - Updates existing Claude workflows to use `CLAUDE_CODE_OAUTH_TOKEN`
instead of `ANTHROPIC_API_KEY`
> - Improves numerous block descriptions/typos and links across backend
blocks to standardize docs output
> - Commits initial generated docs including
`docs/integrations/README.md` and many provider/category pages
> 

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 07:03:19 +00:00
278 changed files with 23736 additions and 2784 deletions

View File

@@ -93,5 +93,5 @@ jobs:
Error logs:
${{ toJSON(fromJSON(steps.failure_details.outputs.result).errorLogs) }}
- anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
+ claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: "--allowedTools 'Edit,MultiEdit,Write,Read,Glob,Grep,LS,Bash(git:*),Bash(bun:*),Bash(npm:*),Bash(npx:*),Bash(gh:*)'"

View File

@@ -7,7 +7,7 @@
# - Provide actionable recommendations for the development team
#
# Triggered on: Dependabot PRs (opened, synchronize)
- # Requirements: ANTHROPIC_API_KEY secret must be configured
+ # Requirements: CLAUDE_CODE_OAUTH_TOKEN secret must be configured
name: Claude Dependabot PR Review
@@ -308,7 +308,7 @@ jobs:
id: claude_review
uses: anthropics/claude-code-action@v1
with:
- anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
+ claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*)"
prompt: |

View File

@@ -323,7 +323,7 @@ jobs:
id: claude
uses: anthropics/claude-code-action@v1
with:
- anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
+ claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowedTools "Bash(npm:*),Bash(pnpm:*),Bash(poetry:*),Bash(git:*),Edit,Replace,NotebookEditCell,mcp__github_inline_comment__create_inline_comment,Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*), Bash(gh pr edit:*)"
--model opus

.github/workflows/docs-block-sync.yml (new file, +78 lines)
View File

@@ -0,0 +1,78 @@
name: Block Documentation Sync Check
on:
push:
branches: [master, dev]
paths:
- "autogpt_platform/backend/backend/blocks/**"
- "docs/integrations/**"
- "autogpt_platform/backend/scripts/generate_block_docs.py"
- ".github/workflows/docs-block-sync.yml"
pull_request:
branches: [master, dev]
paths:
- "autogpt_platform/backend/backend/blocks/**"
- "docs/integrations/**"
- "autogpt_platform/backend/scripts/generate_block_docs.py"
- ".github/workflows/docs-block-sync.yml"
jobs:
check-docs-sync:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
restore-keys: |
poetry-${{ runner.os }}-
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install dependencies
working-directory: autogpt_platform/backend
run: |
poetry install --only main
poetry run prisma generate
- name: Check block documentation is in sync
working-directory: autogpt_platform/backend
run: |
echo "Checking if block documentation is in sync with code..."
poetry run python scripts/generate_block_docs.py --check
- name: Show diff if out of sync
if: failure()
working-directory: autogpt_platform/backend
run: |
echo "::error::Block documentation is out of sync with code!"
echo ""
echo "To fix this, run the following command locally:"
echo " cd autogpt_platform/backend && poetry run python scripts/generate_block_docs.py"
echo ""
echo "Then commit the updated documentation files."
echo ""
echo "Regenerating docs to show diff..."
poetry run python scripts/generate_block_docs.py
echo ""
echo "Changes detected:"
git diff ../../docs/integrations/ || true

View File

@@ -0,0 +1,95 @@
name: Claude Block Docs Review
on:
pull_request:
types: [opened, synchronize]
paths:
- "docs/integrations/**"
- "autogpt_platform/backend/backend/blocks/**"
jobs:
claude-review:
# Only run for PRs from members/collaborators
if: |
github.event.pull_request.author_association == 'OWNER' ||
github.event.pull_request.author_association == 'MEMBER' ||
github.event.pull_request.author_association == 'COLLABORATOR'
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
pull-requests: write
id-token: write
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
restore-keys: |
poetry-${{ runner.os }}-
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install dependencies
working-directory: autogpt_platform/backend
run: |
poetry install --only main
poetry run prisma generate
- name: Run Claude Code Review
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowedTools "Read,Glob,Grep,Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*)"
prompt: |
You are reviewing a PR that modifies block documentation or block code for AutoGPT.
## Your Task
Review the changes in this PR and provide constructive feedback. Focus on:
1. **Documentation Accuracy**: For any block code changes, verify that:
- Input/output tables in docs match the actual block schemas
- Description text accurately reflects what the block does
- Any new blocks have corresponding documentation
2. **Manual Content Quality**: Check manual sections (marked with `<!-- MANUAL: -->` markers):
- "How it works" sections should have clear technical explanations
- "Possible use case" sections should have practical, real-world examples
- Content should be helpful for users trying to understand the blocks
3. **Template Compliance**: Ensure docs follow the standard template:
- What it is (brief intro)
- What it does (description)
- How it works (technical explanation)
- Inputs table
- Outputs table
- Possible use case
4. **Cross-references**: Check that links and anchors are correct
## Review Process
1. First, get the PR diff to see what changed: `gh pr diff ${{ github.event.pull_request.number }}`
2. Read any modified block files to understand the implementation
3. Read corresponding documentation files to verify accuracy
4. Provide your feedback as a PR comment
Be constructive and specific. If everything looks good, say so!
If there are issues, explain what's wrong and suggest how to fix it.

.github/workflows/docs-enhance.yml (new file, +194 lines)
View File

@@ -0,0 +1,194 @@
name: Enhance Block Documentation
on:
workflow_dispatch:
inputs:
block_pattern:
description: 'Block file pattern to enhance (e.g., "google/*.md" or "*" for all blocks)'
required: true
default: '*'
type: string
dry_run:
description: 'Dry run mode - show proposed changes without committing'
type: boolean
default: true
max_blocks:
description: 'Maximum number of blocks to process (0 for unlimited)'
type: number
default: 10
jobs:
enhance-docs:
runs-on: ubuntu-latest
timeout-minutes: 45
permissions:
contents: write
pull-requests: write
id-token: write
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Set up Python dependency cache
uses: actions/cache@v4
with:
path: ~/.cache/pypoetry
key: poetry-${{ runner.os }}-${{ hashFiles('autogpt_platform/backend/poetry.lock') }}
restore-keys: |
poetry-${{ runner.os }}-
- name: Install Poetry
run: |
cd autogpt_platform/backend
HEAD_POETRY_VERSION=$(python3 ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
curl -sSL https://install.python-poetry.org | POETRY_VERSION=$HEAD_POETRY_VERSION python3 -
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install dependencies
working-directory: autogpt_platform/backend
run: |
poetry install --only main
poetry run prisma generate
- name: Run Claude Enhancement
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowedTools "Read,Edit,Glob,Grep,Write,Bash(git:*),Bash(gh:*),Bash(find:*),Bash(ls:*)"
prompt: |
You are enhancing block documentation for AutoGPT. Your task is to improve the MANUAL sections
of block documentation files by reading the actual block implementations and writing helpful content.
## Configuration
- Block pattern: ${{ inputs.block_pattern }}
- Dry run: ${{ inputs.dry_run }}
- Max blocks to process: ${{ inputs.max_blocks }}
## Your Task
1. **Find Documentation Files**
Find block documentation files matching the pattern in `docs/integrations/`
Pattern: ${{ inputs.block_pattern }}
Use: `find docs/integrations -name "*.md" -type f`
2. **For Each Documentation File** (up to ${{ inputs.max_blocks }} files):
a. Read the documentation file
b. Identify which block(s) it documents (look for the block class name)
c. Find and read the corresponding block implementation in `autogpt_platform/backend/backend/blocks/`
d. Improve the MANUAL sections:
**"How it works" section** (within `<!-- MANUAL: how_it_works -->` markers):
- Explain the technical flow of the block
- Describe what APIs or services it connects to
- Note any important configuration or prerequisites
- Keep it concise but informative (2-4 paragraphs)
**"Possible use case" section** (within `<!-- MANUAL: use_case -->` markers):
- Provide 2-3 practical, real-world examples
- Make them specific and actionable
- Show how this block could be used in an automation workflow
3. **Important Rules**
- ONLY modify content within `<!-- MANUAL: -->` and `<!-- END MANUAL -->` markers
- Do NOT modify auto-generated sections (inputs/outputs tables, descriptions)
- Keep content accurate based on the actual block implementation
- Write for users who may not be technical experts
4. **Output**
${{ inputs.dry_run == true && 'DRY RUN MODE: Show proposed changes for each file but do NOT actually edit the files. Describe what you would change.' || 'LIVE MODE: Actually edit the files to improve the documentation.' }}
## Example Improvements
**Before (How it works):**
```
_Add technical explanation here._
```
**After (How it works):**
```
This block connects to the GitHub API to retrieve issue information. When executed,
it authenticates using your GitHub credentials and fetches issue details including
title, body, labels, and assignees.
The block requires a valid GitHub OAuth connection with repository access permissions.
It supports both public and private repositories you have access to.
```
**Before (Possible use case):**
```
_Add practical use case examples here._
```
**After (Possible use case):**
```
**Customer Support Automation**: Monitor a GitHub repository for new issues with
the "bug" label, then automatically create a ticket in your support system and
notify the on-call engineer via Slack.
**Release Notes Generation**: When a new release is published, gather all closed
issues since the last release and generate a summary for your changelog.
```
Begin by finding and listing the documentation files to process.
- name: Create PR with enhanced documentation
if: ${{ inputs.dry_run == false }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Check if there are changes
if git diff --quiet docs/integrations/; then
echo "No changes to commit"
exit 0
fi
# Configure git
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
# Create branch and commit
BRANCH_NAME="docs/enhance-blocks-$(date +%Y%m%d-%H%M%S)"
git checkout -b "$BRANCH_NAME"
git add docs/integrations/
git commit -m "docs: enhance block documentation with LLM-generated content
Pattern: ${{ inputs.block_pattern }}
Max blocks: ${{ inputs.max_blocks }}
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
# Push and create PR
git push -u origin "$BRANCH_NAME"
gh pr create \
--title "docs: LLM-enhanced block documentation" \
--body "## Summary
This PR contains LLM-enhanced documentation for block files matching pattern: \`${{ inputs.block_pattern }}\`
The following manual sections were improved:
- **How it works**: Technical explanations based on block implementations
- **Possible use case**: Practical, real-world examples
## Review Checklist
- [ ] Content is accurate based on block implementations
- [ ] Examples are practical and helpful
- [ ] No auto-generated sections were modified
---
🤖 Generated with [Claude Code](https://claude.com/claude-code)" \
--base dev

View File

@@ -4,14 +4,9 @@ from collections.abc import AsyncGenerator
from typing import Any
import orjson
- from langfuse import Langfuse
- from openai import (
-     APIConnectionError,
-     APIError,
-     APIStatusError,
-     AsyncOpenAI,
-     RateLimitError,
- )
+ from langfuse import get_client, propagate_attributes
+ from langfuse.openai import openai  # type: ignore
+ from openai import APIConnectionError, APIError, APIStatusError, RateLimitError
from openai.types.chat import ChatCompletionChunk, ChatCompletionToolParam
from backend.data.understanding import (
@@ -21,7 +16,6 @@ from backend.data.understanding import (
from backend.util.exceptions import NotFoundError
from backend.util.settings import Settings
from . import db as chat_db
from .config import ChatConfig
from .model import (
ChatMessage,
@@ -50,10 +44,10 @@ logger = logging.getLogger(__name__)
config = ChatConfig()
settings = Settings()
- client = AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
+ client = openai.AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
# Langfuse client (lazy initialization)
- _langfuse_client: Langfuse | None = None
+ langfuse = get_client()
class LangfuseNotConfiguredError(Exception):
@@ -69,65 +63,6 @@ def _is_langfuse_configured() -> bool:
)
def _get_langfuse_client() -> Langfuse:
"""Get or create the Langfuse client for prompt management and tracing."""
global _langfuse_client
if _langfuse_client is None:
if not _is_langfuse_configured():
raise LangfuseNotConfiguredError(
"Langfuse is not configured. The chat feature requires Langfuse for prompt management. "
"Please set the LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY environment variables."
)
_langfuse_client = Langfuse(
public_key=settings.secrets.langfuse_public_key,
secret_key=settings.secrets.langfuse_secret_key,
host=settings.secrets.langfuse_host or "https://cloud.langfuse.com",
)
return _langfuse_client
def _get_environment() -> str:
"""Get the current environment name for Langfuse tagging."""
return settings.config.app_env.value
def _get_langfuse_prompt() -> str:
"""Fetch the latest production prompt from Langfuse.
Returns:
The compiled prompt text from Langfuse.
Raises:
Exception: If Langfuse is unavailable or prompt fetch fails.
"""
try:
langfuse = _get_langfuse_client()
# cache_ttl_seconds=0 disables SDK caching to always get the latest prompt
prompt = langfuse.get_prompt(config.langfuse_prompt_name, cache_ttl_seconds=0)
compiled = prompt.compile()
logger.info(
f"Fetched prompt '{config.langfuse_prompt_name}' from Langfuse "
f"(version: {prompt.version})"
)
return compiled
except Exception as e:
logger.error(f"Failed to fetch prompt from Langfuse: {e}")
raise
async def _is_first_session(user_id: str) -> bool:
"""Check if this is the user's first chat session.
Returns True if the user has 1 or fewer sessions (meaning this is their first).
"""
try:
session_count = await chat_db.get_user_session_count(user_id)
return session_count <= 1
except Exception as e:
logger.warning(f"Failed to check session count for user {user_id}: {e}")
return False # Default to non-onboarding if we can't check
async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
"""Build the full system prompt including business understanding if available.
@@ -139,8 +74,6 @@ async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
Tuple of (compiled prompt string, Langfuse prompt object for tracing)
"""
langfuse = _get_langfuse_client()
# cache_ttl_seconds=0 disables SDK caching to always get the latest prompt
prompt = langfuse.get_prompt(config.langfuse_prompt_name, cache_ttl_seconds=0)
@@ -158,7 +91,7 @@ async def _build_system_prompt(user_id: str | None) -> tuple[str, Any]:
context = "This is the first time you are meeting the user. Greet them and introduce them to the platform"
compiled = prompt.compile(users_information=context)
return compiled, prompt
return compiled, understanding
async def _generate_session_title(message: str) -> str | None:
@@ -217,6 +150,7 @@ async def assign_user_to_session(
async def stream_chat_completion(
session_id: str,
message: str | None = None,
tool_call_response: str | None = None,
is_user_message: bool = True,
user_id: str | None = None,
retry_count: int = 0,
@@ -256,11 +190,6 @@ async def stream_chat_completion(
yield StreamFinish()
return
# Langfuse observations will be created after session is loaded (need messages for input)
# Initialize to None so finally block can safely check and end them
trace = None
generation = None
# Only fetch from Redis if session not provided (initial call)
if session is None:
session = await get_chat_session(session_id, user_id)
@@ -336,297 +265,259 @@ async def stream_chat_completion(
asyncio.create_task(_update_title())
# Build system prompt with business understanding
system_prompt, langfuse_prompt = await _build_system_prompt(user_id)
# Build input messages including system prompt for complete Langfuse logging
trace_input_messages = [{"role": "system", "content": system_prompt}] + [
m.model_dump() for m in session.messages
]
system_prompt, understanding = await _build_system_prompt(user_id)
# Create Langfuse trace for this LLM call (each call gets its own trace, grouped by session_id)
# Using v3 SDK: start_observation creates a root span, update_trace sets trace-level attributes
try:
langfuse = _get_langfuse_client()
env = _get_environment()
trace = langfuse.start_observation(
name="chat_completion",
input={"messages": trace_input_messages},
metadata={
"environment": env,
"model": config.model,
"message_count": len(session.messages),
"prompt_name": langfuse_prompt.name if langfuse_prompt else None,
"prompt_version": langfuse_prompt.version if langfuse_prompt else None,
},
)
# Set trace-level attributes (session_id, user_id, tags)
trace.update_trace(
input = message
if not message and tool_call_response:
input = tool_call_response
langfuse = get_client()
with langfuse.start_as_current_observation(
as_type="span",
name="user-copilot-request",
input=input,
) as span:
with propagate_attributes(
session_id=session_id,
user_id=user_id,
tags=[env, "copilot"],
)
except Exception as e:
logger.warning(f"Failed to create Langfuse trace: {e}")
tags=["copilot"],
metadata={
"users_information": format_understanding_for_prompt(understanding)[
:200
] # langfuse only accepts upto to 200 chars
},
):
# Initialize variables that will be used in finally block (must be defined before try)
assistant_response = ChatMessage(
role="assistant",
content="",
)
accumulated_tool_calls: list[dict[str, Any]] = []
# Wrap main logic in try/finally to ensure Langfuse observations are always ended
try:
has_yielded_end = False
has_yielded_error = False
has_done_tool_call = False
has_received_text = False
text_streaming_ended = False
tool_response_messages: list[ChatMessage] = []
should_retry = False
# Generate unique IDs for AI SDK protocol
import uuid as uuid_module
message_id = str(uuid_module.uuid4())
text_block_id = str(uuid_module.uuid4())
# Yield message start
yield StreamStart(messageId=message_id)
# Create Langfuse generation for each LLM call, linked to the prompt
# Using v3 SDK: start_observation with as_type="generation"
generation = (
trace.start_observation(
as_type="generation",
name="llm_call",
model=config.model,
input={"messages": trace_input_messages},
prompt=langfuse_prompt,
# Initialize variables that will be used in finally block (must be defined before try)
assistant_response = ChatMessage(
role="assistant",
content="",
)
if trace
else None
)
accumulated_tool_calls: list[dict[str, Any]] = []
try:
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
system_prompt=system_prompt,
text_block_id=text_block_id,
):
# Wrap main logic in try/finally to ensure Langfuse observations are always ended
has_yielded_end = False
has_yielded_error = False
has_done_tool_call = False
has_received_text = False
text_streaming_ended = False
tool_response_messages: list[ChatMessage] = []
should_retry = False
if isinstance(chunk, StreamTextStart):
# Emit text-start before first text delta
if not has_received_text:
# Generate unique IDs for AI SDK protocol
import uuid as uuid_module
message_id = str(uuid_module.uuid4())
text_block_id = str(uuid_module.uuid4())
# Yield message start
yield StreamStart(messageId=message_id)
try:
async for chunk in _stream_chat_chunks(
session=session,
tools=tools,
system_prompt=system_prompt,
text_block_id=text_block_id,
):
if isinstance(chunk, StreamTextStart):
# Emit text-start before first text delta
if not has_received_text:
yield chunk
elif isinstance(chunk, StreamTextDelta):
delta = chunk.delta or ""
assert assistant_response.content is not None
assistant_response.content += delta
has_received_text = True
yield chunk
elif isinstance(chunk, StreamTextDelta):
delta = chunk.delta or ""
assert assistant_response.content is not None
assistant_response.content += delta
has_received_text = True
yield chunk
elif isinstance(chunk, StreamTextEnd):
# Emit text-end after text completes
if has_received_text and not text_streaming_ended:
text_streaming_ended = True
yield chunk
elif isinstance(chunk, StreamToolInputStart):
# Emit text-end before first tool call, but only if we've received text
if has_received_text and not text_streaming_ended:
yield StreamTextEnd(id=text_block_id)
text_streaming_ended = True
yield chunk
elif isinstance(chunk, StreamToolInputAvailable):
# Accumulate tool calls in OpenAI format
accumulated_tool_calls.append(
{
"id": chunk.toolCallId,
"type": "function",
"function": {
"name": chunk.toolName,
"arguments": orjson.dumps(chunk.input).decode("utf-8"),
},
}
)
elif isinstance(chunk, StreamToolOutputAvailable):
result_content = (
chunk.output
if isinstance(chunk.output, str)
else orjson.dumps(chunk.output).decode("utf-8")
)
tool_response_messages.append(
ChatMessage(
role="tool",
content=result_content,
tool_call_id=chunk.toolCallId,
)
)
has_done_tool_call = True
# Track if any tool execution failed
if not chunk.success:
logger.warning(
f"Tool {chunk.toolName} (ID: {chunk.toolCallId}) execution failed"
)
yield chunk
elif isinstance(chunk, StreamFinish):
if not has_done_tool_call:
# Emit text-end before finish if we received text but haven't closed it
elif isinstance(chunk, StreamTextEnd):
# Emit text-end after text completes
if has_received_text and not text_streaming_ended:
text_streaming_ended = True
if assistant_response.content:
logger.warn(
f"StreamTextEnd: Attempting to set output {assistant_response.content}"
)
span.update_trace(output=assistant_response.content)
span.update(output=assistant_response.content)
yield chunk
elif isinstance(chunk, StreamToolInputStart):
# Emit text-end before first tool call, but only if we've received text
if has_received_text and not text_streaming_ended:
yield StreamTextEnd(id=text_block_id)
text_streaming_ended = True
has_yielded_end = True
yield chunk
elif isinstance(chunk, StreamError):
has_yielded_error = True
elif isinstance(chunk, StreamUsage):
session.usage.append(
Usage(
prompt_tokens=chunk.promptTokens,
completion_tokens=chunk.completionTokens,
total_tokens=chunk.totalTokens,
elif isinstance(chunk, StreamToolInputAvailable):
# Accumulate tool calls in OpenAI format
accumulated_tool_calls.append(
{
"id": chunk.toolCallId,
"type": "function",
"function": {
"name": chunk.toolName,
"arguments": orjson.dumps(chunk.input).decode(
"utf-8"
),
},
}
)
elif isinstance(chunk, StreamToolOutputAvailable):
result_content = (
chunk.output
if isinstance(chunk.output, str)
else orjson.dumps(chunk.output).decode("utf-8")
)
tool_response_messages.append(
ChatMessage(
role="tool",
content=result_content,
tool_call_id=chunk.toolCallId,
)
)
has_done_tool_call = True
# Track if any tool execution failed
if not chunk.success:
logger.warning(
f"Tool {chunk.toolName} (ID: {chunk.toolCallId}) execution failed"
)
yield chunk
elif isinstance(chunk, StreamFinish):
if not has_done_tool_call:
# Emit text-end before finish if we received text but haven't closed it
if has_received_text and not text_streaming_ended:
yield StreamTextEnd(id=text_block_id)
text_streaming_ended = True
has_yielded_end = True
yield chunk
elif isinstance(chunk, StreamError):
has_yielded_error = True
elif isinstance(chunk, StreamUsage):
session.usage.append(
Usage(
prompt_tokens=chunk.promptTokens,
completion_tokens=chunk.completionTokens,
total_tokens=chunk.totalTokens,
)
)
else:
logger.error(
f"Unknown chunk type: {type(chunk)}", exc_info=True
)
if assistant_response.content:
langfuse.update_current_trace(output=assistant_response.content)
langfuse.update_current_span(output=assistant_response.content)
elif tool_response_messages:
langfuse.update_current_trace(output=str(tool_response_messages))
langfuse.update_current_span(output=str(tool_response_messages))
except Exception as e:
logger.error(f"Error during stream: {e!s}", exc_info=True)
# Check if this is a retryable error (JSON parsing, incomplete tool calls, etc.)
is_retryable = isinstance(
e, (orjson.JSONDecodeError, KeyError, TypeError)
)
if is_retryable and retry_count < config.max_retries:
logger.info(
f"Retryable error encountered. Attempt {retry_count + 1}/{config.max_retries}"
)
should_retry = True
else:
logger.error(f"Unknown chunk type: {type(chunk)}", exc_info=True)
except Exception as e:
logger.error(f"Error during stream: {e!s}", exc_info=True)
# Non-retryable error or max retries exceeded
# Save any partial progress before reporting error
messages_to_save: list[ChatMessage] = []
# Check if this is a retryable error (JSON parsing, incomplete tool calls, etc.)
is_retryable = isinstance(e, (orjson.JSONDecodeError, KeyError, TypeError))
# Add assistant message if it has content or tool calls
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
if is_retryable and retry_count < config.max_retries:
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
session.messages.extend(messages_to_save)
await upsert_chat_session(session)
if not has_yielded_error:
error_message = str(e)
if not is_retryable:
error_message = f"Non-retryable error: {error_message}"
elif retry_count >= config.max_retries:
error_message = f"Max retries ({config.max_retries}) exceeded: {error_message}"
error_response = StreamError(errorText=error_message)
yield error_response
if not has_yielded_end:
yield StreamFinish()
return
# Handle retry outside of exception handler to avoid nesting
if should_retry and retry_count < config.max_retries:
logger.info(
f"Retryable error encountered. Attempt {retry_count + 1}/{config.max_retries}"
f"Retrying stream_chat_completion for session {session_id}, attempt {retry_count + 1}"
)
should_retry = True
else:
# Non-retryable error or max retries exceeded
# Save any partial progress before reporting error
messages_to_save: list[ChatMessage] = []
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
retry_count=retry_count + 1,
session=session,
context=context,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
# Add assistant message if it has content or tool calls
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
session.messages.extend(messages_to_save)
await upsert_chat_session(session)
if not has_yielded_error:
error_message = str(e)
if not is_retryable:
error_message = f"Non-retryable error: {error_message}"
elif retry_count >= config.max_retries:
error_message = f"Max retries ({config.max_retries}) exceeded: {error_message}"
error_response = StreamError(errorText=error_message)
yield error_response
if not has_yielded_end:
yield StreamFinish()
return
# Handle retry outside of exception handler to avoid nesting
if should_retry and retry_count < config.max_retries:
# Normal completion path - save session and handle tool call continuation
logger.info(
f"Retrying stream_chat_completion for session {session_id}, attempt {retry_count + 1}"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
retry_count=retry_count + 1,
session=session,
context=context,
):
yield chunk
return # Exit after retry to avoid double-saving in finally block
# Normal completion path - save session and handle tool call continuation
logger.info(
f"Normal completion path: session={session.session_id}, "
f"current message_count={len(session.messages)}"
)
# Build the messages list in the correct order
messages_to_save: list[ChatMessage] = []
# Add assistant message with tool_calls if any
if accumulated_tool_calls:
assistant_response.tool_calls = accumulated_tool_calls
logger.info(
f"Added {len(accumulated_tool_calls)} tool calls to assistant message"
)
if assistant_response.content or assistant_response.tool_calls:
messages_to_save.append(assistant_response)
logger.info(
f"Saving assistant message with content_len={len(assistant_response.content or '')}, tool_calls={len(assistant_response.tool_calls or [])}"
f"Normal completion path: session={session.session_id}, "
f"current message_count={len(session.messages)}"
)
# Add tool response messages after assistant message
messages_to_save.extend(tool_response_messages)
logger.info(
f"Saving {len(tool_response_messages)} tool response messages, "
f"total_to_save={len(messages_to_save)}"
)
# Build the messages list in the correct order
messages_to_save: list[ChatMessage] = []
session.messages.extend(messages_to_save)
logger.info(
f"Extended session messages, new message_count={len(session.messages)}"
)
await upsert_chat_session(session)
# If we did a tool call, stream the chat completion again to get the next response
if has_done_tool_call:
logger.info(
"Tool call executed, streaming chat completion again to get assistant response"
)
async for chunk in stream_chat_completion(
session_id=session.session_id,
user_id=user_id,
session=session, # Pass session object to avoid Redis refetch
context=context,
):
yield chunk
finally:
# Always end Langfuse observations to prevent resource leaks
# Guard against None and catch errors to avoid masking original exceptions
if generation is not None:
try:
latest_usage = session.usage[-1] if session.usage else None
generation.update(
model=config.model,
output={
"content": assistant_response.content,
"tool_calls": accumulated_tool_calls or None,
},
usage_details=(
{
"input": latest_usage.prompt_tokens,
"output": latest_usage.completion_tokens,
"total": latest_usage.total_tokens,
}
if latest_usage
else None
),
)
generation.end()
except Exception as e:
logger.warning(f"Failed to end Langfuse generation: {e}")
if trace is not None:
try:
if accumulated_tool_calls:
trace.update_trace(output={"tool_calls": accumulated_tool_calls})
else:
trace.update_trace(output={"response": assistant_response.content})
trace.end()
except Exception as e:
logger.warning(f"Failed to end Langfuse trace: {e}")
# Retry configuration for OpenAI API calls
@@ -900,5 +791,4 @@ async def _yield_tool_call(
session=session,
)
logger.info(f"Yielding Tool execution response: {tool_execution_response}")
yield tool_execution_response

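Both the retry path and the tool-call continuation above re-enter `stream_chat_completion` as an async generator and re-yield its chunks, returning immediately afterwards so the save logic only runs once per invocation. A minimal sketch of that pattern under stated assumptions (a hypothetical `fetch_chunks` upstream and `MAX_RETRIES` standing in for `config.max_retries`); note the real service also tracks session state so retried output is not duplicated:

```python
import asyncio
import random
from typing import AsyncIterator

MAX_RETRIES = 2  # stand-in for config.max_retries


async def fetch_chunks(prompt: str) -> AsyncIterator[str]:
    """Hypothetical upstream stream that occasionally drops mid-way."""
    for word in prompt.split():
        await asyncio.sleep(0)
        if random.random() < 0.2:
            raise TimeoutError("upstream stream dropped")
        yield word


async def stream_with_retry(prompt: str, retry_count: int = 0) -> AsyncIterator[str]:
    """Yield chunks; on a retryable failure, re-enter with retry_count + 1."""
    try:
        async for chunk in fetch_chunks(prompt):
            yield chunk
    except TimeoutError:
        if retry_count >= MAX_RETRIES:
            yield f"error: max retries ({MAX_RETRIES}) exceeded"
            return
        # Delegate to a fresh invocation and re-yield its chunks, then return
        # so this frame's completion logic cannot run a second time.
        async for chunk in stream_with_retry(prompt, retry_count + 1):
            yield chunk


async def main() -> None:
    async for chunk in stream_with_retry("hello streaming retry world"):
        print(chunk)


asyncio.run(main())
```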
View File

@@ -30,7 +30,7 @@ TOOL_REGISTRY: dict[str, BaseTool] = {
"find_library_agent": FindLibraryAgentTool(),
"run_agent": RunAgentTool(),
"run_block": RunBlockTool(),
"agent_output": AgentOutputTool(),
"view_agent_output": AgentOutputTool(),
"search_docs": SearchDocsTool(),
"get_doc_page": GetDocPageTool(),
}

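The registry is a plain name-to-instance map, so a rename like `agent_output` → `view_agent_output` has to stay in sync with the tool's own `name` property (changed in the same file below). A minimal sketch of the dispatch pattern with illustrative stand-in classes; keying the registry off `tool.name` is one way to avoid that drift:

```python
import asyncio
from typing import Any


class BaseTool:
    @property
    def name(self) -> str:
        raise NotImplementedError

    async def _execute(self, **kwargs: Any) -> Any:
        raise NotImplementedError


class EchoTool(BaseTool):
    @property
    def name(self) -> str:
        return "echo"

    async def _execute(self, **kwargs: Any) -> Any:
        return kwargs


# Deriving keys from tool.name keeps the registry and the class in sync.
TOOL_REGISTRY: dict[str, BaseTool] = {t.name: t for t in (EchoTool(),)}


async def dispatch(tool_name: str, **kwargs: Any) -> Any:
    tool = TOOL_REGISTRY.get(tool_name)
    if tool is None:
        raise KeyError(f"Unknown tool: {tool_name!r}")
    return await tool._execute(**kwargs)


print(asyncio.run(dispatch("echo", message="hi")))  # {'message': 'hi'}
```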
View File

@@ -3,6 +3,8 @@
import logging
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from backend.data.understanding import (
BusinessUnderstandingInput,
@@ -59,6 +61,7 @@ and automations for the user's specific needs."""
"""Requires authentication to store user-specific data."""
return True
@observe(as_type="tool", name="add_understanding")
async def _execute(
self,
user_id: str | None,

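Every tool in this PR gains `from langfuse import observe` plus an `@observe(as_type="tool", name=...)` on `_execute`, so each tool call shows up as a named tool observation on the active Langfuse trace. A self-contained sketch of the pattern (the decorator call mirrors the diff; the tool class is illustrative, and without Langfuse credentials configured the wrapped call still runs, the SDK just has nowhere to send the span):

```python
import asyncio

from langfuse import observe


class AddUnderstandingTool:
    @observe(as_type="tool", name="add_understanding")
    async def _execute(self, user_id: str | None, **kwargs) -> dict:
        # The decorator records this call (arguments, return value, timing)
        # as a tool-typed observation on the current trace; the body is unchanged.
        return {"user_id": user_id, **kwargs}


print(asyncio.run(AddUnderstandingTool()._execute(user_id="u1", note="demo")))
```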
View File

@@ -218,6 +218,7 @@ async def save_agent_to_library(
library_agents = await library_db.create_library_agent(
graph=created_graph,
user_id=user_id,
sensitive_action_safe_mode=True,
create_library_agents_for_sub_graphs=False,
)

View File

@@ -5,6 +5,7 @@ import re
from datetime import datetime, timedelta, timezone
from typing import Any
from langfuse import observe
from pydantic import BaseModel, field_validator
from backend.api.features.chat.model import ChatSession
@@ -103,7 +104,7 @@ class AgentOutputTool(BaseTool):
@property
def name(self) -> str:
return "agent_output"
return "view_agent_output"
@property
def description(self) -> str:
@@ -328,6 +329,7 @@ class AgentOutputTool(BaseTool):
total_executions=len(available_executions) if available_executions else 1,
)
@observe(as_type="tool", name="view_agent_output")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,8 @@
import logging
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_generator import (
@@ -78,6 +80,7 @@ class CreateAgentTool(BaseTool):
"required": ["description"],
}
@observe(as_type="tool", name="create_agent")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,8 @@
import logging
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_generator import (
@@ -85,6 +87,7 @@ class EditAgentTool(BaseTool):
"required": ["agent_id", "changes"],
}
@observe(as_type="tool", name="edit_agent")
async def _execute(
self,
user_id: str | None,

View File

@@ -2,6 +2,8 @@
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_search import search_agents
@@ -35,6 +37,7 @@ class FindAgentTool(BaseTool):
"required": ["query"],
}
@observe(as_type="tool", name="find_agent")
async def _execute(
self, user_id: str | None, session: ChatSession, **kwargs
) -> ToolResponseBase:

View File

@@ -1,6 +1,7 @@
import logging
from typing import Any
from langfuse import observe
from prisma.enums import ContentType
from backend.api.features.chat.model import ChatSession
@@ -55,6 +56,7 @@ class FindBlockTool(BaseTool):
def requires_auth(self) -> bool:
return True
@observe(as_type="tool", name="find_block")
async def _execute(
self,
user_id: str | None,

View File

@@ -2,6 +2,8 @@
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from .agent_search import search_agents
@@ -41,6 +43,7 @@ class FindLibraryAgentTool(BaseTool):
def requires_auth(self) -> bool:
return True
@observe(as_type="tool", name="find_library_agent")
async def _execute(
self, user_id: str | None, session: ChatSession, **kwargs
) -> ToolResponseBase:

View File

@@ -4,6 +4,8 @@ import logging
from pathlib import Path
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from backend.api.features.chat.tools.base import BaseTool
from backend.api.features.chat.tools.models import (
@@ -71,6 +73,7 @@ class GetDocPageTool(BaseTool):
url_path = path.rsplit(".", 1)[0] if "." in path else path
return f"{DOCS_BASE_URL}/{url_path}"
@observe(as_type="tool", name="get_doc_page")
async def _execute(
self,
user_id: str | None,

View File

@@ -3,6 +3,7 @@
import logging
from typing import Any
from langfuse import observe
from pydantic import BaseModel, Field, field_validator
from backend.api.features.chat.config import ChatConfig
@@ -32,7 +33,7 @@ from .models import (
UserReadiness,
)
from .utils import (
check_user_has_required_credentials,
build_missing_credentials_from_graph,
extract_credentials_from_schema,
fetch_graph_from_store_slug,
get_or_create_library_agent,
@@ -154,6 +155,7 @@ class RunAgentTool(BaseTool):
"""All operations require authentication."""
return True
@observe(as_type="tool", name="run_agent")
async def _execute(
self,
user_id: str | None,
@@ -235,15 +237,13 @@ class RunAgentTool(BaseTool):
# Return credentials needed response with input data info
# The UI handles credential setup automatically, so the message
# focuses on asking about input data
credentials = extract_credentials_from_schema(
graph.credentials_input_schema
requirements_creds_dict = build_missing_credentials_from_graph(
graph, None
)
missing_creds_check = await check_user_has_required_credentials(
user_id, credentials
missing_credentials_dict = build_missing_credentials_from_graph(
graph, graph_credentials
)
missing_credentials_dict = {
c.id: c.model_dump() for c in missing_creds_check
}
requirements_creds_list = list(requirements_creds_dict.values())
return SetupRequirementsResponse(
message=self._build_inputs_message(graph, MSG_WHAT_VALUES_TO_USE),
@@ -257,7 +257,7 @@ class RunAgentTool(BaseTool):
ready_to_run=False,
),
requirements={
"credentials": [c.model_dump() for c in credentials],
"credentials": requirements_creds_list,
"inputs": self._get_inputs_list(graph.input_schema),
"execution_modes": self._get_execution_modes(graph),
},

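Note that `build_missing_credentials_from_graph` is now called twice: once with `None` so the requirements list contains every credential the graph needs, and once with the matched credentials so `missing_credentials` contains only what is still unmet. A small sketch of that contract, with plain dicts standing in for the real credential types:

```python
def build_missing(
    all_fields: dict[str, dict], matched: dict[str, object] | None
) -> dict[str, dict]:
    """Stand-in for build_missing_credentials_from_graph's filtering behaviour."""
    matched_keys = set(matched) if matched else set()
    return {key: info for key, info in all_fields.items() if key not in matched_keys}


fields = {
    "github_credentials": {"types": ["oauth2"]},
    "openai_credentials": {"types": ["api_key"]},
}
requirements = build_missing(fields, None)                         # full requirements
missing = build_missing(fields, {"openai_credentials": object()})  # still unmet
assert set(requirements) == {"github_credentials", "openai_credentials"}
assert set(missing) == {"github_credentials"}
```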
View File

@@ -4,6 +4,8 @@ import logging
from collections import defaultdict
from typing import Any
from langfuse import observe
from backend.api.features.chat.model import ChatSession
from backend.data.block import get_block
from backend.data.execution import ExecutionContext
@@ -20,6 +22,7 @@ from .models import (
ToolResponseBase,
UserReadiness,
)
from .utils import build_missing_credentials_from_field_info
logger = logging.getLogger(__name__)
@@ -127,6 +130,7 @@ class RunBlockTool(BaseTool):
return matched_credentials, missing_credentials
@observe(as_type="tool", name="run_block")
async def _execute(
self,
user_id: str | None,
@@ -186,7 +190,11 @@ class RunBlockTool(BaseTool):
if missing_credentials:
# Return setup requirements response with missing credentials
missing_creds_dict = {c.id: c.model_dump() for c in missing_credentials}
credentials_fields_info = block.input_schema.get_credentials_fields_info()
missing_creds_dict = build_missing_credentials_from_field_info(
credentials_fields_info, set(matched_credentials.keys())
)
missing_creds_list = list(missing_creds_dict.values())
return SetupRequirementsResponse(
message=(
@@ -203,7 +211,7 @@ class RunBlockTool(BaseTool):
ready_to_run=False,
),
requirements={
"credentials": [c.model_dump() for c in missing_credentials],
"credentials": missing_creds_list,
"inputs": self._get_inputs_list(block),
"execution_modes": ["immediate"],
},

View File

@@ -3,6 +3,7 @@
import logging
from typing import Any
from langfuse import observe
from prisma.enums import ContentType
from backend.api.features.chat.model import ChatSession
@@ -87,6 +88,7 @@ class SearchDocsTool(BaseTool):
url_path = path.rsplit(".", 1)[0] if "." in path else path
return f"{DOCS_BASE_URL}/{url_path}"
@observe(as_type="tool", name="search_docs")
async def _execute(
self,
user_id: str | None,

View File

@@ -8,7 +8,7 @@ from backend.api.features.library import model as library_model
from backend.api.features.store import db as store_db
from backend.data import graph as graph_db
from backend.data.graph import GraphModel
from backend.data.model import CredentialsMetaInput
from backend.data.model import CredentialsFieldInfo, CredentialsMetaInput
from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.util.exceptions import NotFoundError
@@ -89,6 +89,59 @@ def extract_credentials_from_schema(
return credentials
def _serialize_missing_credential(
field_key: str, field_info: CredentialsFieldInfo
) -> dict[str, Any]:
"""
Convert credential field info into a serializable dict that preserves all supported
credential types (e.g., api_key + oauth2) so the UI can offer multiple options.
"""
supported_types = sorted(field_info.supported_types)
provider = next(iter(field_info.provider), "unknown")
scopes = sorted(field_info.required_scopes or [])
return {
"id": field_key,
"title": field_key.replace("_", " ").title(),
"provider": provider,
"provider_name": provider.replace("_", " ").title(),
"type": supported_types[0] if supported_types else "api_key",
"types": supported_types,
"scopes": scopes,
}
def build_missing_credentials_from_graph(
graph: GraphModel, matched_credentials: dict[str, CredentialsMetaInput] | None
) -> dict[str, Any]:
"""
Build a missing_credentials mapping from a graph's aggregated credentials inputs,
preserving all supported credential types for each field.
"""
matched_keys = set(matched_credentials.keys()) if matched_credentials else set()
aggregated_fields = graph.aggregate_credentials_inputs()
return {
field_key: _serialize_missing_credential(field_key, field_info)
for field_key, (field_info, _node_fields) in aggregated_fields.items()
if field_key not in matched_keys
}
def build_missing_credentials_from_field_info(
credential_fields: dict[str, CredentialsFieldInfo],
matched_keys: set[str],
) -> dict[str, Any]:
"""
Build missing_credentials mapping from a simple credentials field info dictionary.
"""
return {
field_key: _serialize_missing_credential(field_key, field_info)
for field_key, field_info in credential_fields.items()
if field_key not in matched_keys
}
def extract_credentials_as_dict(
credentials_input_schema: dict[str, Any] | None,
) -> dict[str, CredentialsMetaInput]:

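For reference, the serializer above produces entries shaped roughly like this. The stand-in below mirrors only the attributes `_serialize_missing_credential` actually reads (`supported_types`, `provider`, `required_scopes`); everything else about `CredentialsFieldInfo` is out of scope here:

```python
from dataclasses import dataclass, field


@dataclass
class FakeFieldInfo:
    # Mirrors only the attributes _serialize_missing_credential reads.
    supported_types: set[str]
    provider: set[str]
    required_scopes: set[str] = field(default_factory=set)


info = FakeFieldInfo(
    supported_types={"oauth2", "api_key"},
    provider={"github"},
    required_scopes={"repo"},
)

supported = sorted(info.supported_types)
provider = next(iter(info.provider), "unknown")
entry = {
    "id": "github_credentials",
    "title": "Github Credentials",
    "provider": provider,                              # "github"
    "provider_name": provider.replace("_", " ").title(),  # "Github"
    "type": supported[0],                              # "api_key", first sorted type
    "types": supported,                                # ["api_key", "oauth2"]
    "scopes": sorted(info.required_scopes),            # ["repo"]
}
print(entry)
```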
View File

@@ -41,6 +41,7 @@ class PendingHumanReviewModel(BaseModel):
graph_exec_id: str = Field(description="Graph execution ID")
graph_id: str = Field(description="Graph ID")
graph_version: int = Field(description="Graph version")
node_id: str = Field(description="Node ID in the graph definition")
payload: SafeJsonData = Field(description="The actual data payload awaiting review")
instructions: str | None = Field(
description="Instructions or message for the reviewer", default=None
@@ -81,6 +82,7 @@ class PendingHumanReviewModel(BaseModel):
graph_exec_id=review.graphExecId,
graph_id=review.graphId,
graph_version=review.graphVersion,
node_id=review.nodeId,
payload=review.payload,
instructions=review.instructions,
editable=review.editable,
@@ -179,6 +181,15 @@ class ReviewRequest(BaseModel):
reviews: List[ReviewItem] = Field(
description="All reviews with their approval status, data, and messages"
)
auto_approve_node_ids: List[str] = Field(
default_factory=list,
description=(
"List of node IDs (from the graph definition) to auto-approve for "
"the remainder of this execution. Future reviews from these specific "
"nodes will be automatically approved. This only affects the current "
"execution run."
),
)
@model_validator(mode="after")
def validate_review_completeness(self):

View File

@@ -41,6 +41,7 @@ def sample_pending_review(test_user_id: str) -> PendingHumanReviewModel:
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
node_id="node_def_123",
payload={"data": "test payload", "value": 42},
instructions="Please review this data",
editable=True,
@@ -160,6 +161,7 @@ def test_process_review_action_approve_success(
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
node_id="node_def_123",
payload={"data": "modified payload", "value": 50},
instructions="Please review this data",
editable=True,
@@ -223,6 +225,7 @@ def test_process_review_action_reject_success(
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
node_id="node_def_123",
payload={"data": "test payload"},
instructions="Please review",
editable=True,
@@ -274,6 +277,7 @@ def test_process_review_action_mixed_success(
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
node_id="node_def_456",
payload={"data": "second payload"},
instructions="Second review",
editable=False,
@@ -303,6 +307,7 @@ def test_process_review_action_mixed_success(
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
node_id="node_def_123",
payload={"data": "modified"},
instructions="Please review",
editable=True,
@@ -321,6 +326,7 @@ def test_process_review_action_mixed_success(
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
node_id="node_def_456",
payload={"data": "second payload"},
instructions="Second review",
editable=False,

View File

@@ -5,7 +5,7 @@ import autogpt_libs.auth as autogpt_auth_lib
from fastapi import APIRouter, HTTPException, Query, Security, status
from prisma.enums import ReviewStatus
from backend.data.execution import get_graph_execution_meta
from backend.data.execution import ExecutionContext, get_graph_execution_meta
from backend.data.human_review import (
get_pending_reviews_for_execution,
get_pending_reviews_for_user,
@@ -169,10 +169,23 @@ async def process_review_action(
if not still_has_pending:
# Resume execution
try:
# If auto_approve_node_ids is set, create a context that will
# automatically approve future reviews from these specific nodes
execution_context = None
if request.auto_approve_node_ids:
execution_context = ExecutionContext(
auto_approved_node_ids=set(request.auto_approve_node_ids),
)
logger.info(
f"Auto-approving future reviews for nodes "
f"{request.auto_approve_node_ids} in execution {graph_exec_id}"
)
await add_graph_execution(
graph_id=first_review.graph_id,
user_id=user_id,
graph_exec_id=graph_exec_id,
execution_context=execution_context,
)
logger.info(f"Resumed execution {graph_exec_id}")
except Exception as e:

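A condensed sketch of the resume branch above; the only assumption beyond this diff is that `ExecutionContext` stores the IDs as a plain set (its `auto_approved_node_ids` field appears in the helper changes further down):

```python
from dataclasses import dataclass, field


@dataclass
class ExecutionContext:
    # Nodes whose future reviews are auto-approved, for this run only.
    auto_approved_node_ids: set[str] = field(default_factory=set)


def build_resume_context(auto_approve_node_ids: list[str]) -> ExecutionContext | None:
    """Mirror the router: only attach a context when the client opted in."""
    if not auto_approve_node_ids:
        return None  # resume with the library agent's normal safe-mode settings
    return ExecutionContext(auto_approved_node_ids=set(auto_approve_node_ids))


ctx = build_resume_context(["node_def_123", "node_def_456"])
assert ctx is not None and "node_def_123" in ctx.auto_approved_node_ids
assert build_resume_context([]) is None
```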
View File

@@ -401,27 +401,11 @@ async def add_generated_agent_image(
)
def _initialize_graph_settings(graph: graph_db.GraphModel) -> GraphSettings:
"""
Initialize GraphSettings based on graph content.
Args:
graph: The graph to analyze
Returns:
GraphSettings with appropriate human_in_the_loop_safe_mode value
"""
if graph.has_human_in_the_loop:
# Graph has HITL blocks - set safe mode to True by default
return GraphSettings(human_in_the_loop_safe_mode=True)
else:
# Graph has no HITL blocks - keep None
return GraphSettings(human_in_the_loop_safe_mode=None)
async def create_library_agent(
graph: graph_db.GraphModel,
user_id: str,
hitl_safe_mode: bool = True,
sensitive_action_safe_mode: bool = False,
create_library_agents_for_sub_graphs: bool = True,
) -> list[library_model.LibraryAgent]:
"""
@@ -430,6 +414,8 @@ async def create_library_agent(
Args:
agent: The agent/Graph to add to the library.
user_id: The user to whom the agent will be added.
hitl_safe_mode: Whether HITL blocks require manual review (default True).
sensitive_action_safe_mode: Whether sensitive action blocks require review.
create_library_agents_for_sub_graphs: If True, creates LibraryAgent records for sub-graphs as well.
Returns:
@@ -465,7 +451,11 @@ async def create_library_agent(
}
},
settings=SafeJson(
_initialize_graph_settings(graph_entry).model_dump()
GraphSettings.from_graph(
graph_entry,
hitl_safe_mode=hitl_safe_mode,
sensitive_action_safe_mode=sensitive_action_safe_mode,
).model_dump()
),
),
include=library_agent_include(
@@ -627,33 +617,6 @@ async def update_library_agent(
raise DatabaseError("Failed to update library agent") from e
async def update_library_agent_settings(
user_id: str,
agent_id: str,
settings: GraphSettings,
) -> library_model.LibraryAgent:
"""
Updates the settings for a specific LibraryAgent.
Args:
user_id: The owner of the LibraryAgent.
agent_id: The ID of the LibraryAgent to update.
settings: New GraphSettings to apply.
Returns:
The updated LibraryAgent.
Raises:
NotFoundError: If the specified LibraryAgent does not exist.
DatabaseError: If there's an error in the update operation.
"""
return await update_library_agent(
library_agent_id=agent_id,
user_id=user_id,
settings=settings,
)
async def delete_library_agent(
library_agent_id: str, user_id: str, soft_delete: bool = True
) -> None:
@@ -838,7 +801,7 @@ async def add_store_agent_to_library(
"isCreatedByUser": False,
"useGraphIsActiveVersion": False,
"settings": SafeJson(
_initialize_graph_settings(graph_model).model_dump()
GraphSettings.from_graph(graph_model).model_dump()
),
},
include=library_agent_include(
@@ -1228,8 +1191,15 @@ async def fork_library_agent(
)
new_graph = await on_graph_activate(new_graph, user_id=user_id)
# Create a library agent for the new graph
return (await create_library_agent(new_graph, user_id))[0]
# Create a library agent for the new graph, preserving safe mode settings
return (
await create_library_agent(
new_graph,
user_id,
hitl_safe_mode=original_agent.settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=original_agent.settings.sensitive_action_safe_mode,
)
)[0]
except prisma.errors.PrismaError as e:
logger.error(f"Database error cloning library agent: {e}")
raise DatabaseError("Failed to fork library agent") from e

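`GraphSettings.from_graph` itself is not part of this diff; judging from the removed `_initialize_graph_settings` and the call sites above, it presumably derives per-agent defaults from the graph's block content. A speculative sketch only, since the exact defaulting rules are assumptions:

```python
from types import SimpleNamespace

from pydantic import BaseModel


class GraphSettings(BaseModel):
    human_in_the_loop_safe_mode: bool | None = None
    sensitive_action_safe_mode: bool = False

    @classmethod
    def from_graph(
        cls,
        graph,  # assumed to expose has_human_in_the_loop / has_sensitive_action
        hitl_safe_mode: bool | None = True,
        sensitive_action_safe_mode: bool = False,
    ) -> "GraphSettings":
        return cls(
            # Mirror the removed helper: only graphs that contain HITL blocks
            # get a concrete value, everything else stays None.
            human_in_the_loop_safe_mode=(
                hitl_safe_mode if graph.has_human_in_the_loop else None
            ),
            sensitive_action_safe_mode=(
                sensitive_action_safe_mode if graph.has_sensitive_action else False
            ),
        )


g = SimpleNamespace(has_human_in_the_loop=True, has_sensitive_action=False)
print(GraphSettings.from_graph(g))
# human_in_the_loop_safe_mode=True sensitive_action_safe_mode=False
```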
View File

@@ -73,6 +73,12 @@ class LibraryAgent(pydantic.BaseModel):
has_external_trigger: bool = pydantic.Field(
description="Whether the agent has an external trigger (e.g. webhook) node"
)
has_human_in_the_loop: bool = pydantic.Field(
description="Whether the agent has human-in-the-loop blocks"
)
has_sensitive_action: bool = pydantic.Field(
description="Whether the agent has sensitive action blocks"
)
trigger_setup_info: Optional[GraphTriggerInfo] = None
# Indicates whether there's a new output (based on recent runs)
@@ -180,6 +186,8 @@ class LibraryAgent(pydantic.BaseModel):
graph.credentials_input_schema if sub_graphs is not None else None
),
has_external_trigger=graph.has_external_trigger,
has_human_in_the_loop=graph.has_human_in_the_loop,
has_sensitive_action=graph.has_sensitive_action,
trigger_setup_info=graph.trigger_setup_info,
new_output=new_output,
can_access_graph=can_access_graph,

View File

@@ -52,6 +52,8 @@ async def test_get_library_agents_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
@@ -75,6 +77,8 @@ async def test_get_library_agents_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
@@ -150,6 +154,8 @@ async def test_get_favorite_library_agents_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
@@ -218,6 +224,8 @@ def test_add_agent_to_library_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
new_output=False,
can_access_graph=True,

View File

@@ -154,15 +154,16 @@ async def store_content_embedding(
# Upsert the embedding
# WHERE clause in DO UPDATE prevents PostgreSQL 15 bug with NULLS NOT DISTINCT
# Use {pgvector_schema}.vector for explicit pgvector type qualification
await execute_raw_with_schema(
"""
INSERT INTO {schema_prefix}"UnifiedContentEmbedding" (
"id", "contentType", "contentId", "userId", "embedding", "searchableText", "metadata", "createdAt", "updatedAt"
)
VALUES (gen_random_uuid()::text, $1::{schema_prefix}"ContentType", $2, $3, $4::vector, $5, $6::jsonb, NOW(), NOW())
VALUES (gen_random_uuid()::text, $1::{schema_prefix}"ContentType", $2, $3, $4::{pgvector_schema}.vector, $5, $6::jsonb, NOW(), NOW())
ON CONFLICT ("contentType", "contentId", "userId")
DO UPDATE SET
"embedding" = $4::vector,
"embedding" = $4::{pgvector_schema}.vector,
"searchableText" = $5,
"metadata" = $6::jsonb,
"updatedAt" = NOW()
@@ -177,7 +178,6 @@ async def store_content_embedding(
searchable_text,
metadata_json,
client=client,
set_public_search_path=True,
)
logger.info(f"Stored embedding for {content_type}:{content_id}")
@@ -236,7 +236,6 @@ async def get_content_embedding(
content_type,
content_id,
user_id,
set_public_search_path=True,
)
if result and len(result) > 0:
@@ -871,31 +870,46 @@ async def semantic_search(
# Add content type parameters and build placeholders dynamically
content_type_start_idx = len(params) + 1
content_type_placeholders = ", ".join(
f'${content_type_start_idx + i}::{{{{schema_prefix}}}}"ContentType"'
"$" + str(content_type_start_idx + i) + '::{schema_prefix}"ContentType"'
for i in range(len(content_types))
)
params.extend([ct.value for ct in content_types])
sql = f"""
# Build min_similarity param index before appending
min_similarity_idx = len(params) + 1
params.append(min_similarity)
# Use regular string (not f-string) for template to preserve {schema_prefix} and {schema} placeholders
# Use OPERATOR({pgvector_schema}.<=>) for explicit operator schema qualification
sql = (
"""
SELECT
"contentId" as content_id,
"contentType" as content_type,
"searchableText" as searchable_text,
metadata,
1 - (embedding <=> '{embedding_str}'::vector) as similarity
FROM {{{{schema_prefix}}}}"UnifiedContentEmbedding"
WHERE "contentType" IN ({content_type_placeholders})
{user_filter}
AND 1 - (embedding <=> '{embedding_str}'::vector) >= ${len(params) + 1}
1 - (embedding OPERATOR({pgvector_schema}.<=>) '"""
+ embedding_str
+ """'::{pgvector_schema}.vector) as similarity
FROM {schema_prefix}"UnifiedContentEmbedding"
WHERE "contentType" IN ("""
+ content_type_placeholders
+ """)
"""
+ user_filter
+ """
AND 1 - (embedding OPERATOR({pgvector_schema}.<=>) '"""
+ embedding_str
+ """'::{pgvector_schema}.vector) >= $"""
+ str(min_similarity_idx)
+ """
ORDER BY similarity DESC
LIMIT $1
"""
params.append(min_similarity)
)
try:
results = await query_raw_with_schema(
sql, *params, set_public_search_path=True
)
results = await query_raw_with_schema(sql, *params)
return [
{
"content_id": row["content_id"],
@@ -922,31 +936,41 @@ async def semantic_search(
# Add content type parameters and build placeholders dynamically
content_type_start_idx = len(params_lexical) + 1
content_type_placeholders_lexical = ", ".join(
f'${content_type_start_idx + i}::{{{{schema_prefix}}}}"ContentType"'
"$" + str(content_type_start_idx + i) + '::{schema_prefix}"ContentType"'
for i in range(len(content_types))
)
params_lexical.extend([ct.value for ct in content_types])
sql_lexical = f"""
# Build query param index before appending
query_param_idx = len(params_lexical) + 1
params_lexical.append(f"%{query}%")
# Use regular string (not f-string) for template to preserve {schema_prefix} placeholders
sql_lexical = (
"""
SELECT
"contentId" as content_id,
"contentType" as content_type,
"searchableText" as searchable_text,
metadata,
0.0 as similarity
FROM {{{{schema_prefix}}}}"UnifiedContentEmbedding"
WHERE "contentType" IN ({content_type_placeholders_lexical})
{user_filter}
AND "searchableText" ILIKE ${len(params_lexical) + 1}
FROM {schema_prefix}"UnifiedContentEmbedding"
WHERE "contentType" IN ("""
+ content_type_placeholders_lexical
+ """)
"""
+ user_filter
+ """
AND "searchableText" ILIKE $"""
+ str(query_param_idx)
+ """
ORDER BY "updatedAt" DESC
LIMIT $1
"""
params_lexical.append(f"%{query}%")
)
try:
results = await query_raw_with_schema(
sql_lexical, *params_lexical, set_public_search_path=True
)
results = await query_raw_with_schema(sql_lexical, *params_lexical)
return [
{
"content_id": row["content_id"],

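The pattern repeated throughout this file: the bare `<=>` operator and `::vector` casts resolve through the session `search_path`, which is why the old code had to pass `set_public_search_path=True`. Qualifying the operator with PostgreSQL's `OPERATOR(schema.<=>)` syntax and the type with `schema.vector` makes the query self-sufficient on any pooled connection. Illustrative template strings, using the placeholders consumed by `query_raw_with_schema` in this codebase:

```python
# Before: resolves "vector" and "<=>" via the session search_path, so callers
# had to pass set_public_search_path=True.
fragile = (
    'SELECT 1 - (embedding <=> $1::vector) AS similarity '
    'FROM {schema_prefix}"UnifiedContentEmbedding"'
)

# After: the type and operator are schema-qualified explicitly, so the query
# behaves the same regardless of the connection's search_path.
robust = (
    "SELECT 1 - (embedding OPERATOR({pgvector_schema}.<=>) "
    "$1::{pgvector_schema}.vector) AS similarity "
    'FROM {schema_prefix}"UnifiedContentEmbedding"'
)
```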
View File

@@ -155,18 +155,14 @@ async def test_store_embedding_success(mocker):
)
assert result is True
# execute_raw is called twice: once for SET search_path, once for INSERT
assert mock_client.execute_raw.call_count == 2
# execute_raw is called once for INSERT (no separate SET search_path needed)
assert mock_client.execute_raw.call_count == 1
# First call: SET search_path
first_call_args = mock_client.execute_raw.call_args_list[0][0]
assert "SET search_path" in first_call_args[0]
# Second call: INSERT query with the actual data
second_call_args = mock_client.execute_raw.call_args_list[1][0]
assert "test-version-id" in second_call_args
assert "[0.1,0.2,0.3]" in second_call_args
assert None in second_call_args # userId should be None for store agents
# Verify the INSERT query with the actual data
call_args = mock_client.execute_raw.call_args_list[0][0]
assert "test-version-id" in call_args
assert "[0.1,0.2,0.3]" in call_args
assert None in call_args # userId should be None for store agents
@pytest.mark.asyncio(loop_scope="session")

View File

@@ -12,7 +12,7 @@ from dataclasses import dataclass
from typing import Any, Literal
from prisma.enums import ContentType
from rank_bm25 import BM25Okapi
from rank_bm25 import BM25Okapi # type: ignore[import-untyped]
from backend.api.features.store.embeddings import (
EMBEDDING_DIM,
@@ -295,7 +295,7 @@ async def unified_hybrid_search(
FROM {{schema_prefix}}"UnifiedContentEmbedding" uce
WHERE uce."contentType" = ANY({content_types_param}::{{schema_prefix}}"ContentType"[])
{user_filter}
ORDER BY uce.embedding <=> {embedding_param}::vector
ORDER BY uce.embedding OPERATOR({{pgvector_schema}}.<=>) {embedding_param}::{{pgvector_schema}}.vector
LIMIT 200
)
),
@@ -307,7 +307,7 @@ async def unified_hybrid_search(
uce.metadata,
uce."updatedAt" as updated_at,
-- Semantic score: cosine similarity (1 - distance)
COALESCE(1 - (uce.embedding <=> {embedding_param}::vector), 0) as semantic_score,
COALESCE(1 - (uce.embedding OPERATOR({{pgvector_schema}}.<=>) {embedding_param}::{{pgvector_schema}}.vector), 0) as semantic_score,
-- Lexical score: ts_rank_cd
COALESCE(ts_rank_cd(uce.search, plainto_tsquery('english', {query_param})), 0) as lexical_raw,
-- Category match from metadata
@@ -363,9 +363,7 @@ async def unified_hybrid_search(
LIMIT {limit_param} OFFSET {offset_param}
"""
results = await query_raw_with_schema(
sql_query, *params, set_public_search_path=True
)
results = await query_raw_with_schema(sql_query, *params)
total = results[0]["total_count"] if results else 0
# Apply BM25 reranking
@@ -585,7 +583,7 @@ async def hybrid_search(
WHERE uce."contentType" = 'STORE_AGENT'::{{schema_prefix}}"ContentType"
AND uce."userId" IS NULL
AND {where_clause}
ORDER BY uce.embedding <=> {embedding_param}::vector
ORDER BY uce.embedding OPERATOR({{pgvector_schema}}.<=>) {embedding_param}::{{pgvector_schema}}.vector
LIMIT 200
) uce
),
@@ -607,7 +605,7 @@ async def hybrid_search(
-- Searchable text for BM25 reranking
COALESCE(sa.agent_name, '') || ' ' || COALESCE(sa.sub_heading, '') || ' ' || COALESCE(sa.description, '') as searchable_text,
-- Semantic score
COALESCE(1 - (uce.embedding <=> {embedding_param}::vector), 0) as semantic_score,
COALESCE(1 - (uce.embedding OPERATOR({{pgvector_schema}}.<=>) {embedding_param}::{{pgvector_schema}}.vector), 0) as semantic_score,
-- Lexical score (raw, will normalize)
COALESCE(ts_rank_cd(uce.search, plainto_tsquery('english', {query_param})), 0) as lexical_raw,
-- Category match
@@ -688,9 +686,7 @@ async def hybrid_search(
LIMIT {limit_param} OFFSET {offset_param}
"""
results = await query_raw_with_schema(
sql_query, *params, set_public_search_path=True
)
results = await query_raw_with_schema(sql_query, *params)
total = results[0]["total_count"] if results else 0

View File

@@ -761,10 +761,8 @@ async def create_new_graph(
graph.reassign_ids(user_id=user_id, reassign_graph_id=True)
graph.validate_graph(for_run=False)
# The return value of the create graph & library function is intentionally not used here,
# as the graph is already valid and no sub-graphs are returned.
await graph_db.create_graph(graph, user_id=user_id)
await library_db.create_library_agent(graph, user_id=user_id)
await library_db.create_library_agent(graph, user_id)
activated_graph = await on_graph_activate(graph, user_id=user_id)
if create_graph.source == "builder":
@@ -888,21 +886,19 @@ async def set_graph_active_version(
async def _update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_model.LibraryAgent:
# Keep the library agent up to date with the new active version
library = await library_db.update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
# If the graph has HITL node, initialize the setting if it's not already set.
if (
agent_graph.has_human_in_the_loop
and library.settings.human_in_the_loop_safe_mode is None
):
await library_db.update_library_agent_settings(
updated_settings = GraphSettings.from_graph(
graph=agent_graph,
hitl_safe_mode=library.settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=library.settings.sensitive_action_safe_mode,
)
if updated_settings != library.settings:
library = await library_db.update_library_agent(
library_agent_id=library.id,
user_id=user_id,
agent_id=library.id,
settings=library.settings.model_copy(
update={"human_in_the_loop_safe_mode": True}
),
settings=updated_settings,
)
return library
@@ -919,21 +915,18 @@ async def update_graph_settings(
user_id: Annotated[str, Security(get_user_id)],
) -> GraphSettings:
"""Update graph settings for the user's library agent."""
# Get the library agent for this graph
library_agent = await library_db.get_library_agent_by_graph_id(
graph_id=graph_id, user_id=user_id
)
if not library_agent:
raise HTTPException(404, f"Graph #{graph_id} not found in user's library")
# Update the library agent settings
updated_agent = await library_db.update_library_agent_settings(
updated_agent = await library_db.update_library_agent(
library_agent_id=library_agent.id,
user_id=user_id,
agent_id=library_agent.id,
settings=settings,
)
# Return the updated settings
return GraphSettings.model_validate(updated_agent.settings)

View File

@@ -174,7 +174,7 @@ class AIShortformVideoCreatorBlock(Block):
)
frame_rate: int = SchemaField(description="Frame rate of the video", default=60)
generation_preset: GenerationPreset = SchemaField(
description="Generation preset for visual style - only effects AI generated visuals",
description="Generation preset for visual style - only affects AI-generated visuals",
default=GenerationPreset.LEONARDO,
placeholder=GenerationPreset.LEONARDO,
)

View File

@@ -381,7 +381,7 @@ Each range you add needs to be a string, with the upper and lower numbers of the
organization_locations: Optional[list[str]] = SchemaField(
description="""The location of the company headquarters. You can search across cities, US states, and countries.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appearch in your search results, even if they match other parameters.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appear in your search results, even if they match other parameters.
To exclude companies based on location, use the organization_not_locations parameter.
""",

View File

@@ -34,7 +34,7 @@ Each range you add needs to be a string, with the upper and lower numbers of the
organization_locations: list[str] = SchemaField(
description="""The location of the company headquarters. You can search across cities, US states, and countries.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appearch in your search results, even if they match other parameters.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appear in your search results, even if they match other parameters.
To exclude companies based on location, use the organization_not_locations parameter.
""",

View File

@@ -81,7 +81,7 @@ class StoreValueBlock(Block):
def __init__(self):
super().__init__(
id="1ff065e9-88e8-4358-9d82-8dc91f622ba9",
description="This block forwards an input value as output, allowing reuse without change.",
description="A basic block that stores and forwards a value throughout workflows, allowing it to be reused without changes across multiple blocks.",
categories={BlockCategory.BASIC},
input_schema=StoreValueBlock.Input,
output_schema=StoreValueBlock.Output,
@@ -111,7 +111,7 @@ class PrintToConsoleBlock(Block):
def __init__(self):
super().__init__(
id="f3b1c1b2-4c4f-4f0d-8d2f-4c4f0d8d2f4c",
description="Print the given text to the console, this is used for a debugging purpose.",
description="A debugging block that outputs text to the console for monitoring and troubleshooting workflow execution.",
categories={BlockCategory.BASIC},
input_schema=PrintToConsoleBlock.Input,
output_schema=PrintToConsoleBlock.Output,
@@ -137,7 +137,7 @@ class NoteBlock(Block):
def __init__(self):
super().__init__(
id="cc10ff7b-7753-4ff2-9af6-9399b1a7eddc",
description="This block is used to display a sticky note with the given text.",
description="A visual annotation block that displays a sticky note in the workflow editor for documentation and organization purposes.",
categories={BlockCategory.BASIC},
input_schema=NoteBlock.Input,
output_schema=NoteBlock.Output,

View File

@@ -159,7 +159,7 @@ class FindInDictionaryBlock(Block):
def __init__(self):
super().__init__(
id="0e50422c-6dee-4145-83d6-3a5a392f65de",
description="Lookup the given key in the input dictionary/object/list and return the value.",
description="A block that looks up a value in a dictionary, list, or object by key or index and returns the corresponding value.",
input_schema=FindInDictionaryBlock.Input,
output_schema=FindInDictionaryBlock.Output,
test_input=[
@@ -680,3 +680,58 @@ class ListIsEmptyBlock(Block):
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "is_empty", len(input_data.list) == 0
class ConcatenateListsBlock(Block):
class Input(BlockSchemaInput):
lists: List[List[Any]] = SchemaField(
description="A list of lists to concatenate together. All lists will be combined in order into a single list.",
placeholder="e.g., [[1, 2], [3, 4], [5, 6]]",
)
class Output(BlockSchemaOutput):
concatenated_list: List[Any] = SchemaField(
description="The concatenated list containing all elements from all input lists in order."
)
error: str = SchemaField(
description="Error message if concatenation failed due to invalid input types."
)
def __init__(self):
super().__init__(
id="3cf9298b-5817-4141-9d80-7c2cc5199c8e",
description="Concatenates multiple lists into a single list. All elements from all input lists are combined in order.",
categories={BlockCategory.BASIC},
input_schema=ConcatenateListsBlock.Input,
output_schema=ConcatenateListsBlock.Output,
test_input=[
{"lists": [[1, 2, 3], [4, 5, 6]]},
{"lists": [["a", "b"], ["c"], ["d", "e", "f"]]},
{"lists": [[1, 2], []]},
{"lists": []},
],
test_output=[
("concatenated_list", [1, 2, 3, 4, 5, 6]),
("concatenated_list", ["a", "b", "c", "d", "e", "f"]),
("concatenated_list", [1, 2]),
("concatenated_list", []),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
concatenated = []
for idx, lst in enumerate(input_data.lists):
if lst is None:
# Skip None values to avoid errors
continue
if not isinstance(lst, list):
# Type validation: each item must be a list
# Strings are iterable and would cause extend() to iterate character-by-character
# Non-iterable types would raise TypeError
yield "error", (
f"Invalid input at index {idx}: expected a list, got {type(lst).__name__}. "
f"All items in 'lists' must be lists (e.g., [[1, 2], [3, 4]])."
)
return
concatenated.extend(lst)
yield "concatenated_list", concatenated

View File

@@ -51,7 +51,7 @@ class GithubCommentBlock(Block):
def __init__(self):
super().__init__(
id="a8db4d8d-db1c-4a25-a1b0-416a8c33602b",
description="This block posts a comment on a specified GitHub issue or pull request.",
description="A block that posts comments on GitHub issues or pull requests using the GitHub API.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubCommentBlock.Input,
output_schema=GithubCommentBlock.Output,
@@ -151,7 +151,7 @@ class GithubUpdateCommentBlock(Block):
def __init__(self):
super().__init__(
id="b3f4d747-10e3-4e69-8c51-f2be1d99c9a7",
description="This block updates a comment on a specified GitHub issue or pull request.",
description="A block that updates an existing comment on a GitHub issue or pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubUpdateCommentBlock.Input,
output_schema=GithubUpdateCommentBlock.Output,
@@ -249,7 +249,7 @@ class GithubListCommentsBlock(Block):
def __init__(self):
super().__init__(
id="c4b5fb63-0005-4a11-b35a-0c2467bd6b59",
description="This block lists all comments for a specified GitHub issue or pull request.",
description="A block that retrieves all comments from a GitHub issue or pull request, including comment metadata and content.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListCommentsBlock.Input,
output_schema=GithubListCommentsBlock.Output,
@@ -363,7 +363,7 @@ class GithubMakeIssueBlock(Block):
def __init__(self):
super().__init__(
id="691dad47-f494-44c3-a1e8-05b7990f2dab",
description="This block creates a new issue on a specified GitHub repository.",
description="A block that creates new issues on GitHub repositories with a title and body content.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubMakeIssueBlock.Input,
output_schema=GithubMakeIssueBlock.Output,
@@ -433,7 +433,7 @@ class GithubReadIssueBlock(Block):
def __init__(self):
super().__init__(
id="6443c75d-032a-4772-9c08-230c707c8acc",
description="This block reads the body, title, and user of a specified GitHub issue.",
description="A block that retrieves information about a specific GitHub issue, including its title, body content, and creator.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubReadIssueBlock.Input,
output_schema=GithubReadIssueBlock.Output,
@@ -510,7 +510,7 @@ class GithubListIssuesBlock(Block):
def __init__(self):
super().__init__(
id="c215bfd7-0e57-4573-8f8c-f7d4963dcd74",
description="This block lists all issues for a specified GitHub repository.",
description="A block that retrieves a list of issues from a GitHub repository with their titles and URLs.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubListIssuesBlock.Input,
output_schema=GithubListIssuesBlock.Output,
@@ -597,7 +597,7 @@ class GithubAddLabelBlock(Block):
def __init__(self):
super().__init__(
id="98bd6b77-9506-43d5-b669-6b9733c4b1f1",
description="This block adds a label to a specified GitHub issue or pull request.",
description="A block that adds a label to a GitHub issue or pull request for categorization and organization.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubAddLabelBlock.Input,
output_schema=GithubAddLabelBlock.Output,
@@ -657,7 +657,7 @@ class GithubRemoveLabelBlock(Block):
def __init__(self):
super().__init__(
id="78f050c5-3e3a-48c0-9e5b-ef1ceca5589c",
description="This block removes a label from a specified GitHub issue or pull request.",
description="A block that removes a label from a GitHub issue or pull request.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubRemoveLabelBlock.Input,
output_schema=GithubRemoveLabelBlock.Output,
@@ -720,7 +720,7 @@ class GithubAssignIssueBlock(Block):
def __init__(self):
super().__init__(
id="90507c72-b0ff-413a-886a-23bbbd66f542",
description="This block assigns a user to a specified GitHub issue.",
description="A block that assigns a GitHub user to an issue for task ownership and tracking.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubAssignIssueBlock.Input,
output_schema=GithubAssignIssueBlock.Output,
@@ -786,7 +786,7 @@ class GithubUnassignIssueBlock(Block):
def __init__(self):
super().__init__(
id="d154002a-38f4-46c2-962d-2488f2b05ece",
description="This block unassigns a user from a specified GitHub issue.",
description="A block that removes a user's assignment from a GitHub issue.",
categories={BlockCategory.DEVELOPER_TOOLS},
input_schema=GithubUnassignIssueBlock.Input,
output_schema=GithubUnassignIssueBlock.Output,

View File

@@ -353,7 +353,7 @@ class GmailReadBlock(GmailBase):
def __init__(self):
super().__init__(
id="25310c70-b89b-43ba-b25c-4dfa7e2a481c",
description="This block reads emails from Gmail.",
description="A block that retrieves and reads emails from a Gmail account based on search criteria, returning detailed message information including subject, sender, body, and attachments.",
categories={BlockCategory.COMMUNICATION},
disabled=not GOOGLE_OAUTH_IS_CONFIGURED,
input_schema=GmailReadBlock.Input,
@@ -743,7 +743,7 @@ class GmailListLabelsBlock(GmailBase):
def __init__(self):
super().__init__(
id="3e1c2c1c-c689-4520-b956-1f3bf4e02bb7",
description="This block lists all labels in Gmail.",
description="A block that retrieves all labels (categories) from a Gmail account for organizing and categorizing emails.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailListLabelsBlock.Input,
output_schema=GmailListLabelsBlock.Output,
@@ -807,7 +807,7 @@ class GmailAddLabelBlock(GmailBase):
def __init__(self):
super().__init__(
id="f884b2fb-04f4-4265-9658-14f433926ac9",
description="This block adds a label to a Gmail message.",
description="A block that adds a label to a specific email message in Gmail, creating the label if it doesn't exist.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailAddLabelBlock.Input,
output_schema=GmailAddLabelBlock.Output,
@@ -893,7 +893,7 @@ class GmailRemoveLabelBlock(GmailBase):
def __init__(self):
super().__init__(
id="0afc0526-aba1-4b2b-888e-a22b7c3f359d",
description="This block removes a label from a Gmail message.",
description="A block that removes a label from a specific email message in a Gmail account.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailRemoveLabelBlock.Input,
output_schema=GmailRemoveLabelBlock.Output,
@@ -961,7 +961,7 @@ class GmailGetThreadBlock(GmailBase):
def __init__(self):
super().__init__(
id="21a79166-9df7-4b5f-9f36-96f639d86112",
description="Get a full Gmail thread by ID",
description="A block that retrieves an entire Gmail thread (email conversation) by ID, returning all messages with decoded bodies for reading complete conversations.",
categories={BlockCategory.COMMUNICATION},
input_schema=GmailGetThreadBlock.Input,
output_schema=GmailGetThreadBlock.Output,

View File

@@ -282,7 +282,7 @@ class GoogleSheetsReadBlock(Block):
def __init__(self):
super().__init__(
id="5724e902-3635-47e9-a108-aaa0263a4988",
description="This block reads data from a Google Sheets spreadsheet.",
description="A block that reads data from a Google Sheets spreadsheet using A1 notation range selection.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsReadBlock.Input,
output_schema=GoogleSheetsReadBlock.Output,
@@ -409,7 +409,7 @@ class GoogleSheetsWriteBlock(Block):
def __init__(self):
super().__init__(
id="d9291e87-301d-47a8-91fe-907fb55460e5",
description="This block writes data to a Google Sheets spreadsheet.",
description="A block that writes data to a Google Sheets spreadsheet at a specified A1 notation range.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsWriteBlock.Input,
output_schema=GoogleSheetsWriteBlock.Output,

View File

@@ -55,6 +55,7 @@ class HITLReviewHelper:
async def _handle_review_request(
input_data: Any,
user_id: str,
node_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
@@ -62,6 +63,7 @@ class HITLReviewHelper:
execution_context: ExecutionContext,
block_name: str = "Block",
editable: bool = False,
skip_safe_mode_check: bool = False,
) -> Optional[ReviewResult]:
"""
Handle a review request for a block that requires human review.
@@ -69,6 +71,7 @@ class HITLReviewHelper:
Args:
input_data: The input data to be reviewed
user_id: ID of the user requesting the review
node_id: ID of the node in the graph definition
node_exec_id: ID of the node execution
graph_exec_id: ID of the graph execution
graph_id: ID of the graph
@@ -76,6 +79,8 @@ class HITLReviewHelper:
execution_context: Current execution context
block_name: Name of the block requesting review
editable: Whether the reviewer can edit the data
skip_safe_mode_check: If True, skip the safe mode check (caller already
verified). Used by sensitive action blocks that check their own flag.
Returns:
ReviewResult if review is complete, None if waiting for human input
@@ -84,7 +89,11 @@ class HITLReviewHelper:
Exception: If review creation or status update fails
"""
# Skip review if safe mode is disabled - return auto-approved result
if not execution_context.safe_mode:
# (unless caller already checked and wants to skip this check)
if (
not skip_safe_mode_check
and not execution_context.human_in_the_loop_safe_mode
):
logger.info(
f"Block {block_name} skipping review for node {node_exec_id} - safe mode disabled"
)
@@ -96,12 +105,27 @@ class HITLReviewHelper:
node_exec_id=node_exec_id,
)
# Skip review if this specific node has been auto-approved by the user
if node_id in execution_context.auto_approved_node_ids:
logger.info(
f"Block {block_name} skipping review for node {node_exec_id} - "
f"node {node_id} is auto-approved"
)
return ReviewResult(
data=input_data,
status=ReviewStatus.APPROVED,
message="Auto-approved (user approved all future actions for this block)",
processed=True,
node_exec_id=node_exec_id,
)
result = await HITLReviewHelper.get_or_create_human_review(
user_id=user_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
node_id=node_id,
input_data=input_data,
message=f"Review required for {block_name} execution",
editable=editable,
@@ -129,6 +153,7 @@ class HITLReviewHelper:
async def handle_review_decision(
input_data: Any,
user_id: str,
node_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
@@ -136,6 +161,7 @@ class HITLReviewHelper:
execution_context: ExecutionContext,
block_name: str = "Block",
editable: bool = False,
skip_safe_mode_check: bool = False,
) -> Optional[ReviewDecision]:
"""
Handle a review request and return the decision in a single call.
@@ -143,6 +169,7 @@ class HITLReviewHelper:
Args:
input_data: The input data to be reviewed
user_id: ID of the user requesting the review
node_id: ID of the node in the graph definition
node_exec_id: ID of the node execution
graph_exec_id: ID of the graph execution
graph_id: ID of the graph
@@ -150,6 +177,8 @@ class HITLReviewHelper:
execution_context: Current execution context
block_name: Name of the block requesting review
editable: Whether the reviewer can edit the data
skip_safe_mode_check: If True, skip the safe mode check (caller already
verified). Used by sensitive action blocks that check their own flag.
Returns:
ReviewDecision if review is complete (approved/rejected),
@@ -158,6 +187,7 @@ class HITLReviewHelper:
review_result = await HITLReviewHelper._handle_review_request(
input_data=input_data,
user_id=user_id,
node_id=node_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
@@ -165,6 +195,7 @@ class HITLReviewHelper:
execution_context=execution_context,
block_name=block_name,
editable=editable,
skip_safe_mode_check=skip_safe_mode_check,
)
if review_result is None:

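Putting the checks above in order: the helper now short-circuits twice before creating a pending review. A condensed sketch of that gate (a stand-in context type; the real helper also persists the review and returns a structured `ReviewResult`):

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ExecutionContext:
    human_in_the_loop_safe_mode: bool = True
    auto_approved_node_ids: set[str] = field(default_factory=set)


def review_gate(
    input_data: Any,
    node_id: str,
    ctx: ExecutionContext,
    skip_safe_mode_check: bool = False,
) -> str:
    """Mirror _handle_review_request's short-circuit ordering."""
    # 1. Safe mode disabled entirely (unless the caller already checked its
    #    own flag, as sensitive-action blocks do).
    if not skip_safe_mode_check and not ctx.human_in_the_loop_safe_mode:
        return "auto-approved: safe mode disabled"
    # 2. The user bulk-approved this node for the rest of the run.
    if node_id in ctx.auto_approved_node_ids:
        return "auto-approved: node opted into auto-approval"
    # 3. Otherwise a pending review is created and execution pauses.
    return "pending human review"


ctx = ExecutionContext(auto_approved_node_ids={"node_def_123"})
assert review_gate({}, "node_def_123", ctx) == "auto-approved: node opted into auto-approval"
assert review_gate({}, "node_def_999", ctx) == "pending human review"
```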
View File

@@ -97,6 +97,7 @@ class HumanInTheLoopBlock(Block):
input_data: Input,
*,
user_id: str,
node_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
@@ -104,7 +105,17 @@ class HumanInTheLoopBlock(Block):
execution_context: ExecutionContext,
**_kwargs,
) -> BlockOutput:
if not execution_context.safe_mode:
# Check if this specific node has been auto-approved by the user
if node_id in execution_context.auto_approved_node_ids:
logger.info(
f"HITL block skipping review for node {node_exec_id} - "
f"node {node_id} is auto-approved"
)
yield "approved_data", input_data.data
yield "review_message", "Auto-approved (user approved all future actions for this block)"
return
if not execution_context.human_in_the_loop_safe_mode:
logger.info(
f"HITL block skipping review for node {node_exec_id} - safe mode disabled"
)
@@ -115,6 +126,7 @@ class HumanInTheLoopBlock(Block):
decision = await self.handle_review_decision(
input_data=input_data.data,
user_id=user_id,
node_id=node_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,

View File

@@ -76,7 +76,7 @@ class AgentInputBlock(Block):
super().__init__(
**{
"id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"description": "Base block for user inputs.",
"description": "A block that accepts and processes user input values within a workflow, supporting various input types and validation.",
"input_schema": AgentInputBlock.Input,
"output_schema": AgentInputBlock.Output,
"test_input": [
@@ -168,7 +168,7 @@ class AgentOutputBlock(Block):
def __init__(self):
super().__init__(
id="363ae599-353e-4804-937e-b2ee3cef3da4",
description="Stores the output of the graph for users to see.",
description="A block that records and formats workflow results for display to users, with optional Jinja2 template formatting support.",
input_schema=AgentOutputBlock.Input,
output_schema=AgentOutputBlock.Output,
test_input=[

View File

@@ -79,6 +79,10 @@ class ModelMetadata(NamedTuple):
provider: str
context_window: int
max_output_tokens: int | None
display_name: str
provider_name: str
creator_name: str
price_tier: Literal[1, 2, 3]
class LlmModelMeta(EnumMeta):
@@ -171,6 +175,26 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
V0_1_5_LG = "v0-1.5-lg"
V0_1_0_MD = "v0-1.0-md"
@classmethod
def __get_pydantic_json_schema__(cls, schema, handler):
json_schema = handler(schema)
llm_model_metadata = {}
for model in cls:
model_name = model.value
metadata = model.metadata
llm_model_metadata[model_name] = {
"creator": metadata.creator_name,
"creator_name": metadata.creator_name,
"title": metadata.display_name,
"provider": metadata.provider,
"provider_name": metadata.provider_name,
"name": model_name,
"price_tier": metadata.price_tier,
}
json_schema["llm_model"] = True
json_schema["llm_model_metadata"] = llm_model_metadata
return json_schema
@property
def metadata(self) -> ModelMetadata:
return MODEL_METADATA[self]
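With the hook above, any generated JSON schema that references `LlmModel` carries a metadata map the new LLM Picker can render directly. An illustrative fragment with one entry shown (the enum value is assumed to be the provider's model id; the values follow the MODEL_METADATA table below):

```python
{
    "llm_model": True,
    "llm_model_metadata": {
        "gpt-4o": {
            "creator": "OpenAI",
            "creator_name": "OpenAI",
            "title": "GPT-4o",
            "provider": "openai",
            "provider_name": "OpenAI",
            "name": "gpt-4o",
            "price_tier": 2,
        },
        # ...one entry per LlmModel member
    },
}
```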
@@ -190,119 +214,291 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
MODEL_METADATA = {
# https://platform.openai.com/docs/models
LlmModel.O3: ModelMetadata("openai", 200000, 100000),
LlmModel.O3_MINI: ModelMetadata("openai", 200000, 100000), # o3-mini-2025-01-31
LlmModel.O1: ModelMetadata("openai", 200000, 100000), # o1-2024-12-17
LlmModel.O1_MINI: ModelMetadata("openai", 128000, 65536), # o1-mini-2024-09-12
LlmModel.O3: ModelMetadata("openai", 200000, 100000, "O3", "OpenAI", "OpenAI", 2),
LlmModel.O3_MINI: ModelMetadata(
"openai", 200000, 100000, "O3 Mini", "OpenAI", "OpenAI", 1
), # o3-mini-2025-01-31
LlmModel.O1: ModelMetadata(
"openai", 200000, 100000, "O1", "OpenAI", "OpenAI", 3
), # o1-2024-12-17
LlmModel.O1_MINI: ModelMetadata(
"openai", 128000, 65536, "O1 Mini", "OpenAI", "OpenAI", 2
), # o1-mini-2024-09-12
# GPT-5 models
LlmModel.GPT5_2: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_1: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_MINI: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_NANO: ModelMetadata("openai", 400000, 128000),
LlmModel.GPT5_CHAT: ModelMetadata("openai", 400000, 16384),
LlmModel.GPT41: ModelMetadata("openai", 1047576, 32768),
LlmModel.GPT41_MINI: ModelMetadata("openai", 1047576, 32768),
LlmModel.GPT5_2: ModelMetadata(
"openai", 400000, 128000, "GPT-5.2", "OpenAI", "OpenAI", 3
),
LlmModel.GPT5_1: ModelMetadata(
"openai", 400000, 128000, "GPT-5.1", "OpenAI", "OpenAI", 2
),
LlmModel.GPT5: ModelMetadata(
"openai", 400000, 128000, "GPT-5", "OpenAI", "OpenAI", 1
),
LlmModel.GPT5_MINI: ModelMetadata(
"openai", 400000, 128000, "GPT-5 Mini", "OpenAI", "OpenAI", 1
),
LlmModel.GPT5_NANO: ModelMetadata(
"openai", 400000, 128000, "GPT-5 Nano", "OpenAI", "OpenAI", 1
),
LlmModel.GPT5_CHAT: ModelMetadata(
"openai", 400000, 16384, "GPT-5 Chat Latest", "OpenAI", "OpenAI", 2
),
LlmModel.GPT41: ModelMetadata(
"openai", 1047576, 32768, "GPT-4.1", "OpenAI", "OpenAI", 1
),
LlmModel.GPT41_MINI: ModelMetadata(
"openai", 1047576, 32768, "GPT-4.1 Mini", "OpenAI", "OpenAI", 1
),
LlmModel.GPT4O_MINI: ModelMetadata(
"openai", 128000, 16384
"openai", 128000, 16384, "GPT-4o Mini", "OpenAI", "OpenAI", 1
), # gpt-4o-mini-2024-07-18
LlmModel.GPT4O: ModelMetadata("openai", 128000, 16384), # gpt-4o-2024-08-06
LlmModel.GPT4O: ModelMetadata(
"openai", 128000, 16384, "GPT-4o", "OpenAI", "OpenAI", 2
), # gpt-4o-2024-08-06
LlmModel.GPT4_TURBO: ModelMetadata(
"openai", 128000, 4096
"openai", 128000, 4096, "GPT-4 Turbo", "OpenAI", "OpenAI", 3
), # gpt-4-turbo-2024-04-09
LlmModel.GPT3_5_TURBO: ModelMetadata("openai", 16385, 4096), # gpt-3.5-turbo-0125
LlmModel.GPT3_5_TURBO: ModelMetadata(
"openai", 16385, 4096, "GPT-3.5 Turbo", "OpenAI", "OpenAI", 1
), # gpt-3.5-turbo-0125
# https://docs.anthropic.com/en/docs/about-claude/models
LlmModel.CLAUDE_4_1_OPUS: ModelMetadata(
"anthropic", 200000, 32000
"anthropic", 200000, 32000, "Claude Opus 4.1", "Anthropic", "Anthropic", 3
), # claude-opus-4-1-20250805
LlmModel.CLAUDE_4_OPUS: ModelMetadata(
"anthropic", 200000, 32000
"anthropic", 200000, 32000, "Claude Opus 4", "Anthropic", "Anthropic", 3
), # claude-4-opus-20250514
LlmModel.CLAUDE_4_SONNET: ModelMetadata(
"anthropic", 200000, 64000
"anthropic", 200000, 64000, "Claude Sonnet 4", "Anthropic", "Anthropic", 2
), # claude-4-sonnet-20250514
LlmModel.CLAUDE_4_5_OPUS: ModelMetadata(
"anthropic", 200000, 64000
"anthropic", 200000, 64000, "Claude Opus 4.5", "Anthropic", "Anthropic", 3
), # claude-opus-4-5-20251101
LlmModel.CLAUDE_4_5_SONNET: ModelMetadata(
"anthropic", 200000, 64000
"anthropic", 200000, 64000, "Claude Sonnet 4.5", "Anthropic", "Anthropic", 3
), # claude-sonnet-4-5-20250929
LlmModel.CLAUDE_4_5_HAIKU: ModelMetadata(
"anthropic", 200000, 64000
"anthropic", 200000, 64000, "Claude Haiku 4.5", "Anthropic", "Anthropic", 2
), # claude-haiku-4-5-20251001
LlmModel.CLAUDE_3_7_SONNET: ModelMetadata(
"anthropic", 200000, 64000
"anthropic", 200000, 64000, "Claude 3.7 Sonnet", "Anthropic", "Anthropic", 2
), # claude-3-7-sonnet-20250219
LlmModel.CLAUDE_3_HAIKU: ModelMetadata(
"anthropic", 200000, 4096
"anthropic", 200000, 4096, "Claude 3 Haiku", "Anthropic", "Anthropic", 1
), # claude-3-haiku-20240307
# https://docs.aimlapi.com/api-overview/model-database/text-models
LlmModel.AIML_API_QWEN2_5_72B: ModelMetadata("aiml_api", 32000, 8000),
LlmModel.AIML_API_LLAMA3_1_70B: ModelMetadata("aiml_api", 128000, 40000),
LlmModel.AIML_API_LLAMA3_3_70B: ModelMetadata("aiml_api", 128000, None),
LlmModel.AIML_API_META_LLAMA_3_1_70B: ModelMetadata("aiml_api", 131000, 2000),
LlmModel.AIML_API_LLAMA_3_2_3B: ModelMetadata("aiml_api", 128000, None),
# https://console.groq.com/docs/models
LlmModel.LLAMA3_3_70B: ModelMetadata("groq", 128000, 32768),
LlmModel.LLAMA3_1_8B: ModelMetadata("groq", 128000, 8192),
# https://ollama.com/library
LlmModel.OLLAMA_LLAMA3_3: ModelMetadata("ollama", 8192, None),
LlmModel.OLLAMA_LLAMA3_2: ModelMetadata("ollama", 8192, None),
LlmModel.OLLAMA_LLAMA3_8B: ModelMetadata("ollama", 8192, None),
LlmModel.OLLAMA_LLAMA3_405B: ModelMetadata("ollama", 8192, None),
LlmModel.OLLAMA_DOLPHIN: ModelMetadata("ollama", 32768, None),
# https://openrouter.ai/models
LlmModel.GEMINI_2_5_PRO: ModelMetadata("open_router", 1050000, 8192),
LlmModel.GEMINI_3_PRO_PREVIEW: ModelMetadata("open_router", 1048576, 65535),
LlmModel.GEMINI_2_5_FLASH: ModelMetadata("open_router", 1048576, 65535),
LlmModel.GEMINI_2_0_FLASH: ModelMetadata("open_router", 1048576, 8192),
LlmModel.GEMINI_2_5_FLASH_LITE_PREVIEW: ModelMetadata(
"open_router", 1048576, 65535
LlmModel.AIML_API_QWEN2_5_72B: ModelMetadata(
"aiml_api", 32000, 8000, "Qwen 2.5 72B Instruct Turbo", "AI/ML", "Qwen", 1
),
LlmModel.AIML_API_LLAMA3_1_70B: ModelMetadata(
"aiml_api",
128000,
40000,
"Llama 3.1 Nemotron 70B Instruct",
"AI/ML",
"Nvidia",
1,
),
LlmModel.AIML_API_LLAMA3_3_70B: ModelMetadata(
"aiml_api", 128000, None, "Llama 3.3 70B Instruct Turbo", "AI/ML", "Meta", 1
),
LlmModel.AIML_API_META_LLAMA_3_1_70B: ModelMetadata(
"aiml_api", 131000, 2000, "Llama 3.1 70B Instruct Turbo", "AI/ML", "Meta", 1
),
LlmModel.AIML_API_LLAMA_3_2_3B: ModelMetadata(
"aiml_api", 128000, None, "Llama 3.2 3B Instruct Turbo", "AI/ML", "Meta", 1
),
# https://console.groq.com/docs/models
LlmModel.LLAMA3_3_70B: ModelMetadata(
"groq", 128000, 32768, "Llama 3.3 70B Versatile", "Groq", "Meta", 1
),
LlmModel.LLAMA3_1_8B: ModelMetadata(
"groq", 128000, 8192, "Llama 3.1 8B Instant", "Groq", "Meta", 1
),
# https://ollama.com/library
LlmModel.OLLAMA_LLAMA3_3: ModelMetadata(
"ollama", 8192, None, "Llama 3.3", "Ollama", "Meta", 1
),
LlmModel.OLLAMA_LLAMA3_2: ModelMetadata(
"ollama", 8192, None, "Llama 3.2", "Ollama", "Meta", 1
),
LlmModel.OLLAMA_LLAMA3_8B: ModelMetadata(
"ollama", 8192, None, "Llama 3", "Ollama", "Meta", 1
),
LlmModel.OLLAMA_LLAMA3_405B: ModelMetadata(
"ollama", 8192, None, "Llama 3.1 405B", "Ollama", "Meta", 1
),
LlmModel.OLLAMA_DOLPHIN: ModelMetadata(
"ollama", 32768, None, "Dolphin Mistral Latest", "Ollama", "Mistral AI", 1
),
# https://openrouter.ai/models
LlmModel.GEMINI_2_5_PRO: ModelMetadata(
"open_router",
1050000,
8192,
"Gemini 2.5 Pro Preview 03.25",
"OpenRouter",
"Google",
2,
),
LlmModel.GEMINI_3_PRO_PREVIEW: ModelMetadata(
"open_router", 1048576, 65535, "Gemini 3 Pro Preview", "OpenRouter", "Google", 2
),
LlmModel.GEMINI_2_5_FLASH: ModelMetadata(
"open_router", 1048576, 65535, "Gemini 2.5 Flash", "OpenRouter", "Google", 1
),
LlmModel.GEMINI_2_0_FLASH: ModelMetadata(
"open_router", 1048576, 8192, "Gemini 2.0 Flash 001", "OpenRouter", "Google", 1
),
LlmModel.GEMINI_2_5_FLASH_LITE_PREVIEW: ModelMetadata(
"open_router",
1048576,
65535,
"Gemini 2.5 Flash Lite Preview 06.17",
"OpenRouter",
"Google",
1,
),
LlmModel.GEMINI_2_0_FLASH_LITE: ModelMetadata(
"open_router",
1048576,
8192,
"Gemini 2.0 Flash Lite 001",
"OpenRouter",
"Google",
1,
),
LlmModel.MISTRAL_NEMO: ModelMetadata(
"open_router", 128000, 4096, "Mistral Nemo", "OpenRouter", "Mistral AI", 1
),
LlmModel.COHERE_COMMAND_R_08_2024: ModelMetadata(
"open_router", 128000, 4096, "Command R 08.2024", "OpenRouter", "Cohere", 1
),
LlmModel.COHERE_COMMAND_R_PLUS_08_2024: ModelMetadata(
"open_router", 128000, 4096, "Command R Plus 08.2024", "OpenRouter", "Cohere", 2
),
LlmModel.DEEPSEEK_CHAT: ModelMetadata(
"open_router", 64000, 2048, "DeepSeek Chat", "OpenRouter", "DeepSeek", 1
),
LlmModel.DEEPSEEK_R1_0528: ModelMetadata(
"open_router", 163840, 163840, "DeepSeek R1 0528", "OpenRouter", "DeepSeek", 1
),
LlmModel.PERPLEXITY_SONAR: ModelMetadata(
"open_router", 127000, 8000, "Sonar", "OpenRouter", "Perplexity", 1
),
LlmModel.PERPLEXITY_SONAR_PRO: ModelMetadata(
"open_router", 200000, 8000, "Sonar Pro", "OpenRouter", "Perplexity", 2
),
LlmModel.GEMINI_2_0_FLASH_LITE: ModelMetadata("open_router", 1048576, 8192),
LlmModel.MISTRAL_NEMO: ModelMetadata("open_router", 128000, 4096),
LlmModel.COHERE_COMMAND_R_08_2024: ModelMetadata("open_router", 128000, 4096),
LlmModel.COHERE_COMMAND_R_PLUS_08_2024: ModelMetadata("open_router", 128000, 4096),
LlmModel.DEEPSEEK_CHAT: ModelMetadata("open_router", 64000, 2048),
LlmModel.DEEPSEEK_R1_0528: ModelMetadata("open_router", 163840, 163840),
LlmModel.PERPLEXITY_SONAR: ModelMetadata("open_router", 127000, 8000),
LlmModel.PERPLEXITY_SONAR_PRO: ModelMetadata("open_router", 200000, 8000),
LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: ModelMetadata(
"open_router",
128000,
16000,
"Sonar Deep Research",
"OpenRouter",
"Perplexity",
3,
),
LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_405B: ModelMetadata(
"open_router", 131000, 4096
"open_router",
131000,
4096,
"Hermes 3 Llama 3.1 405B",
"OpenRouter",
"Nous Research",
1,
),
LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_70B: ModelMetadata(
"open_router", 12288, 12288
"open_router",
12288,
12288,
"Hermes 3 Llama 3.1 70B",
"OpenRouter",
"Nous Research",
1,
),
LlmModel.OPENAI_GPT_OSS_120B: ModelMetadata(
"open_router", 131072, 131072, "GPT-OSS 120B", "OpenRouter", "OpenAI", 1
),
LlmModel.OPENAI_GPT_OSS_20B: ModelMetadata(
"open_router", 131072, 32768, "GPT-OSS 20B", "OpenRouter", "OpenAI", 1
),
LlmModel.AMAZON_NOVA_LITE_V1: ModelMetadata(
"open_router", 300000, 5120, "Nova Lite V1", "OpenRouter", "Amazon", 1
),
LlmModel.AMAZON_NOVA_MICRO_V1: ModelMetadata(
"open_router", 128000, 5120, "Nova Micro V1", "OpenRouter", "Amazon", 1
),
LlmModel.AMAZON_NOVA_PRO_V1: ModelMetadata(
"open_router", 300000, 5120, "Nova Pro V1", "OpenRouter", "Amazon", 1
),
LlmModel.MICROSOFT_WIZARDLM_2_8X22B: ModelMetadata(
"open_router", 65536, 4096, "WizardLM 2 8x22B", "OpenRouter", "Microsoft", 1
),
LlmModel.GRYPHE_MYTHOMAX_L2_13B: ModelMetadata(
"open_router", 4096, 4096, "MythoMax L2 13B", "OpenRouter", "Gryphe", 1
),
LlmModel.META_LLAMA_4_SCOUT: ModelMetadata(
"open_router", 131072, 131072, "Llama 4 Scout", "OpenRouter", "Meta", 1
),
LlmModel.META_LLAMA_4_MAVERICK: ModelMetadata(
"open_router", 1048576, 1000000, "Llama 4 Maverick", "OpenRouter", "Meta", 1
),
LlmModel.GROK_4: ModelMetadata(
"open_router", 256000, 256000, "Grok 4", "OpenRouter", "xAI", 3
),
LlmModel.GROK_4_FAST: ModelMetadata(
"open_router", 2000000, 30000, "Grok 4 Fast", "OpenRouter", "xAI", 1
),
LlmModel.GROK_4_1_FAST: ModelMetadata(
"open_router", 2000000, 30000, "Grok 4.1 Fast", "OpenRouter", "xAI", 1
),
LlmModel.GROK_CODE_FAST_1: ModelMetadata(
"open_router", 256000, 10000, "Grok Code Fast 1", "OpenRouter", "xAI", 1
),
LlmModel.KIMI_K2: ModelMetadata(
"open_router", 131000, 131000, "Kimi K2", "OpenRouter", "Moonshot AI", 1
),
LlmModel.QWEN3_235B_A22B_THINKING: ModelMetadata(
"open_router",
262144,
262144,
"Qwen 3 235B A22B Thinking 2507",
"OpenRouter",
"Qwen",
1,
),
LlmModel.QWEN3_CODER: ModelMetadata(
"open_router", 262144, 262144, "Qwen 3 Coder", "OpenRouter", "Qwen", 3
),
LlmModel.OPENAI_GPT_OSS_120B: ModelMetadata("open_router", 131072, 131072),
LlmModel.OPENAI_GPT_OSS_20B: ModelMetadata("open_router", 131072, 32768),
LlmModel.AMAZON_NOVA_LITE_V1: ModelMetadata("open_router", 300000, 5120),
LlmModel.AMAZON_NOVA_MICRO_V1: ModelMetadata("open_router", 128000, 5120),
LlmModel.AMAZON_NOVA_PRO_V1: ModelMetadata("open_router", 300000, 5120),
LlmModel.MICROSOFT_WIZARDLM_2_8X22B: ModelMetadata("open_router", 65536, 4096),
LlmModel.GRYPHE_MYTHOMAX_L2_13B: ModelMetadata("open_router", 4096, 4096),
LlmModel.META_LLAMA_4_SCOUT: ModelMetadata("open_router", 131072, 131072),
LlmModel.META_LLAMA_4_MAVERICK: ModelMetadata("open_router", 1048576, 1000000),
LlmModel.GROK_4: ModelMetadata("open_router", 256000, 256000),
LlmModel.GROK_4_FAST: ModelMetadata("open_router", 2000000, 30000),
LlmModel.GROK_4_1_FAST: ModelMetadata("open_router", 2000000, 30000),
LlmModel.GROK_CODE_FAST_1: ModelMetadata("open_router", 256000, 10000),
LlmModel.KIMI_K2: ModelMetadata("open_router", 131000, 131000),
LlmModel.QWEN3_235B_A22B_THINKING: ModelMetadata("open_router", 262144, 262144),
LlmModel.QWEN3_CODER: ModelMetadata("open_router", 262144, 262144),
# Llama API models
LlmModel.LLAMA_API_LLAMA_4_SCOUT: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA4_MAVERICK: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA3_3_8B: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA3_3_70B: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA_4_SCOUT: ModelMetadata(
"llama_api",
128000,
4028,
"Llama 4 Scout 17B 16E Instruct FP8",
"Llama API",
"Meta",
1,
),
LlmModel.LLAMA_API_LLAMA4_MAVERICK: ModelMetadata(
"llama_api",
128000,
4028,
"Llama 4 Maverick 17B 128E Instruct FP8",
"Llama API",
"Meta",
1,
),
LlmModel.LLAMA_API_LLAMA3_3_8B: ModelMetadata(
"llama_api", 128000, 4028, "Llama 3.3 8B Instruct", "Llama API", "Meta", 1
),
LlmModel.LLAMA_API_LLAMA3_3_70B: ModelMetadata(
"llama_api", 128000, 4028, "Llama 3.3 70B Instruct", "Llama API", "Meta", 1
),
# v0 by Vercel models
LlmModel.V0_1_5_MD: ModelMetadata("v0", 128000, 64000),
LlmModel.V0_1_5_LG: ModelMetadata("v0", 512000, 64000),
LlmModel.V0_1_0_MD: ModelMetadata("v0", 128000, 64000),
LlmModel.V0_1_5_MD: ModelMetadata("v0", 128000, 64000, "v0 1.5 MD", "V0", "V0", 1),
LlmModel.V0_1_5_LG: ModelMetadata("v0", 512000, 64000, "v0 1.5 LG", "V0", "V0", 1),
LlmModel.V0_1_0_MD: ModelMetadata("v0", 128000, 64000, "v0 1.0 MD", "V0", "V0", 1),
}
DEFAULT_LLM_MODEL = LlmModel.GPT5_2
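
For reference, the metadata shape these entries imply — a minimal sketch; the field names below are assumptions, and only the positional order (provider key, context window, max output tokens, display name, provider display name, creator name, price tier) is taken from the calls above:

from typing import NamedTuple

class ModelMetadata(NamedTuple):
    provider: str                              # internal provider key, e.g. "open_router"
    context_window: int                        # max prompt context, in tokens
    max_output_tokens: int | None              # None where the limit is unspecified
    display_name: str | None = None            # human-readable name, e.g. "GPT-5.2"
    provider_display_name: str | None = None   # e.g. "OpenRouter"
    creator_name: str | None = None            # e.g. "Google"
    price_tier: int | None = None              # rough cost bucket, 1 (low) to 3 (high)
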
@@ -854,7 +1050,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="ed55ac19-356e-4243-a6cb-bc599e9b716f",
description="Call a Large Language Model (LLM) to generate formatted object based on the given prompt.",
description="A block that generates structured JSON responses using a Large Language Model (LLM), with schema validation and format enforcement.",
categories={BlockCategory.AI},
input_schema=AIStructuredResponseGeneratorBlock.Input,
output_schema=AIStructuredResponseGeneratorBlock.Output,
@@ -1265,7 +1461,7 @@ class AITextGeneratorBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="1f292d4a-41a4-4977-9684-7c8d560b9f91",
description="Call a Large Language Model (LLM) to generate a string based on the given prompt.",
description="A block that produces text responses using a Large Language Model (LLM) based on customizable prompts and system instructions.",
categories={BlockCategory.AI},
input_schema=AITextGeneratorBlock.Input,
output_schema=AITextGeneratorBlock.Output,
@@ -1361,7 +1557,7 @@ class AITextSummarizerBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="a0a69be1-4528-491c-a85a-a4ab6873e3f0",
description="Utilize a Large Language Model (LLM) to summarize a long text.",
description="A block that summarizes long texts using a Large Language Model (LLM), with configurable focus topics and summary styles.",
categories={BlockCategory.AI, BlockCategory.TEXT},
input_schema=AITextSummarizerBlock.Input,
output_schema=AITextSummarizerBlock.Output,
@@ -1562,7 +1758,7 @@ class AIConversationBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="32a87eab-381e-4dd4-bdb8-4c47151be35a",
description="Advanced LLM call that takes a list of messages and sends them to the language model.",
description="A block that facilitates multi-turn conversations with a Large Language Model (LLM), maintaining context across message exchanges.",
categories={BlockCategory.AI},
input_schema=AIConversationBlock.Input,
output_schema=AIConversationBlock.Output,
@@ -1682,7 +1878,7 @@ class AIListGeneratorBlock(AIBlockBase):
def __init__(self):
super().__init__(
id="9c0b0450-d199-458b-a731-072189dd6593",
description="Generate a list of values based on the given prompt using a Large Language Model (LLM).",
description="A block that creates lists of items based on prompts using a Large Language Model (LLM), with optional source data for context.",
categories={BlockCategory.AI, BlockCategory.TEXT},
input_schema=AIListGeneratorBlock.Input,
output_schema=AIListGeneratorBlock.Output,

View File

@@ -46,7 +46,7 @@ class PublishToMediumBlock(Block):
class Input(BlockSchemaInput):
author_id: BlockSecret = SecretField(
key="medium_author_id",
description="""The Medium AuthorID of the user. You can get this by calling the /me endpoint of the Medium API.\n\ncurl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" https://api.medium.com/v1/me" the response will contain the authorId field.""",
description="""The Medium AuthorID of the user. You can get this by calling the /me endpoint of the Medium API.\n\ncurl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" https://api.medium.com/v1/me\n\nThe response will contain the authorId field.""",
placeholder="Enter the author's Medium AuthorID",
)
title: str = SchemaField(

View File

@@ -50,7 +50,7 @@ class CreateTalkingAvatarVideoBlock(Block):
description="The voice provider to use", default="microsoft"
)
voice_id: str = SchemaField(
description="The voice ID to use, get list of voices [here](https://docs.agpt.co/server/d_id)",
description="The voice ID to use, see [available voice IDs](https://agpt.co/docs/platform/using-ai-services/d_id)",
default="en-US-JennyNeural",
)
presenter_id: str = SchemaField(

View File

@@ -242,7 +242,7 @@ async def test_smart_decision_maker_tracks_llm_stats():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -343,7 +343,7 @@ async def test_smart_decision_maker_parameter_validation():
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -409,7 +409,7 @@ async def test_smart_decision_maker_parameter_validation():
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -471,7 +471,7 @@ async def test_smart_decision_maker_parameter_validation():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -535,7 +535,7 @@ async def test_smart_decision_maker_parameter_validation():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -658,7 +658,7 @@ async def test_smart_decision_maker_raw_response_conversion():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -730,7 +730,7 @@ async def test_smart_decision_maker_raw_response_conversion():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -786,7 +786,7 @@ async def test_smart_decision_maker_raw_response_conversion():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -905,7 +905,7 @@ async def test_smart_decision_maker_agent_mode():
# Create a mock execution context
mock_execution_context = ExecutionContext(
safe_mode=False,
human_in_the_loop_safe_mode=False,
)
# Create a mock execution processor for agent mode tests
@@ -1027,7 +1027,7 @@ async def test_smart_decision_maker_traditional_mode_default():
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests

View File

@@ -386,7 +386,7 @@ async def test_output_yielding_with_dynamic_fields():
outputs = {}
from backend.data.execution import ExecutionContext
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_processor = MagicMock()
async for output_name, output_value in block.run(
@@ -609,7 +609,9 @@ async def test_validation_errors_dont_pollute_conversation():
outputs = {}
from backend.data.execution import ExecutionContext
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(
human_in_the_loop_safe_mode=False
)
# Create a proper mock execution processor for agent mode
from collections import defaultdict

View File

@@ -474,7 +474,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
self.block_type = block_type
self.webhook_config = webhook_config
self.execution_stats: NodeExecutionStats = NodeExecutionStats()
self.requires_human_review: bool = False
self.is_sensitive_action: bool = False
if self.webhook_config:
if isinstance(self.webhook_config, BlockWebhookConfig):
@@ -622,6 +622,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
input_data: BlockInput,
*,
user_id: str,
node_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
@@ -637,8 +638,9 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
- should_pause: True if execution should be paused for review
- input_data_to_use: The input data to use (may be modified by reviewer)
"""
# Skip review if not required or safe mode is disabled
if not self.requires_human_review or not execution_context.safe_mode:
if not (
self.is_sensitive_action and execution_context.sensitive_action_safe_mode
):
return False, input_data
from backend.blocks.helpers.review import HITLReviewHelper
@@ -647,6 +649,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
decision = await HITLReviewHelper.handle_review_decision(
input_data=input_data,
user_id=user_id,
node_id=node_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
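
Net effect of this hunk: a block pauses for review only when it is flagged as a sensitive action and the run has sensitive-action safe mode enabled, and the review is now keyed to the node. A condensed sketch, assuming a wrapper method around the code above (the method name and return handling are illustrative):

async def _maybe_pause_for_review(
    self, input_data, execution_context, *,
    user_id, node_id, node_exec_id, graph_exec_id, graph_id,
):
    # Only sensitive blocks gate, and only when the run opted in.
    if not (
        self.is_sensitive_action and execution_context.sensitive_action_safe_mode
    ):
        return False, input_data  # no pause; run with the original input

    from backend.blocks.helpers.review import HITLReviewHelper

    # node_id is the new argument: it lets "approve all future actions"
    # target repeat reviews coming from the same node.
    return await HITLReviewHelper.handle_review_decision(
        input_data=input_data,
        user_id=user_id,
        node_id=node_id,
        node_exec_id=node_exec_id,
        graph_exec_id=graph_exec_id,
        graph_id=graph_id,
    )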

View File

@@ -99,10 +99,15 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.OPENAI_GPT_OSS_20B: 1,
LlmModel.GEMINI_2_5_PRO: 4,
LlmModel.GEMINI_3_PRO_PREVIEW: 5,
LlmModel.GEMINI_2_5_FLASH: 1,
LlmModel.GEMINI_2_0_FLASH: 1,
LlmModel.GEMINI_2_5_FLASH_LITE_PREVIEW: 1,
LlmModel.GEMINI_2_0_FLASH_LITE: 1,
LlmModel.MISTRAL_NEMO: 1,
LlmModel.COHERE_COMMAND_R_08_2024: 1,
LlmModel.COHERE_COMMAND_R_PLUS_08_2024: 3,
LlmModel.DEEPSEEK_CHAT: 2,
LlmModel.DEEPSEEK_R1_0528: 1,
LlmModel.PERPLEXITY_SONAR: 1,
LlmModel.PERPLEXITY_SONAR_PRO: 5,
LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: 10,
@@ -126,11 +131,6 @@ MODEL_COST: dict[LlmModel, int] = {
LlmModel.KIMI_K2: 1,
LlmModel.QWEN3_235B_A22B_THINKING: 1,
LlmModel.QWEN3_CODER: 9,
LlmModel.GEMINI_2_5_FLASH: 1,
LlmModel.GEMINI_2_0_FLASH: 1,
LlmModel.GEMINI_2_5_FLASH_LITE_PREVIEW: 1,
LlmModel.GEMINI_2_0_FLASH_LITE: 1,
LlmModel.DEEPSEEK_R1_0528: 1,
# v0 by Vercel models
LlmModel.V0_1_5_MD: 1,
LlmModel.V0_1_5_LG: 2,

View File

@@ -38,20 +38,6 @@ POOL_TIMEOUT = os.getenv("DB_POOL_TIMEOUT")
if POOL_TIMEOUT:
DATABASE_URL = add_param(DATABASE_URL, "pool_timeout", POOL_TIMEOUT)
# Add public schema to search_path for pgvector type access
# The vector extension is in the public schema, but search_path is determined by the schema parameter
# Extract the schema from DATABASE_URL or default to 'public' (matching get_database_schema())
parsed_url = urlparse(DATABASE_URL)
url_params = dict(parse_qsl(parsed_url.query))
db_schema = url_params.get("schema", "public")
# Build search_path, avoiding duplicates if db_schema is already 'public'
search_path_schemas = list(
dict.fromkeys([db_schema, "public"])
) # Preserves order, removes duplicates
search_path = ",".join(search_path_schemas)
# This allows using ::vector without schema qualification
DATABASE_URL = add_param(DATABASE_URL, "options", f"-c search_path={search_path}")
HTTP_TIMEOUT = int(POOL_TIMEOUT) if POOL_TIMEOUT else None
prisma = Prisma(
@@ -127,38 +113,48 @@ async def _raw_with_schema(
*args,
execute: bool = False,
client: Prisma | None = None,
set_public_search_path: bool = False,
) -> list[dict] | int:
"""Internal: Execute raw SQL with proper schema handling.
Use query_raw_with_schema() or execute_raw_with_schema() instead.
Supports placeholders:
- {schema_prefix}: Table/type prefix (e.g., "platform".)
- {schema}: Raw schema name for application tables (e.g., platform)
- {pgvector_schema}: Schema where pgvector is installed (defaults to "public")
Args:
query_template: SQL query with {schema_prefix} placeholder
query_template: SQL query with {schema_prefix}, {schema}, and/or {pgvector_schema} placeholders
*args: Query parameters
execute: If False, executes SELECT query. If True, executes INSERT/UPDATE/DELETE.
client: Optional Prisma client for transactions (only used when execute=True).
set_public_search_path: If True, sets search_path to include public schema.
Needed for pgvector types and other public schema objects.
Returns:
- list[dict] if execute=False (query results)
- int if execute=True (number of affected rows)
Example with vector type:
await execute_raw_with_schema(
'INSERT INTO {schema_prefix}"Embedding" (vec) VALUES ($1::{pgvector_schema}.vector)',
embedding_data
)
"""
schema = get_database_schema()
schema_prefix = f'"{schema}".' if schema != "public" else ""
formatted_query = query_template.format(schema_prefix=schema_prefix)
# pgvector extension is typically installed in "public" schema
# On Supabase it may be in "extensions", but "public" is the common default
pgvector_schema = "public"
formatted_query = query_template.format(
schema_prefix=schema_prefix,
schema=schema,
pgvector_schema=pgvector_schema,
)
import prisma as prisma_module
db_client = client if client else prisma_module.get_client()
# Set search_path to include public schema if requested
# Prisma doesn't support the 'options' connection parameter, so we set it per-session
# This is idempotent and safe to call multiple times
if set_public_search_path:
await db_client.execute_raw(f"SET search_path = {schema}, public") # type: ignore
if execute:
result = await db_client.execute_raw(formatted_query, *args) # type: ignore
else:
@@ -167,16 +163,12 @@ async def _raw_with_schema(
return result
async def query_raw_with_schema(
query_template: str, *args, set_public_search_path: bool = False
) -> list[dict]:
async def query_raw_with_schema(query_template: str, *args) -> list[dict]:
"""Execute raw SQL SELECT query with proper schema handling.
Args:
query_template: SQL query with {schema_prefix} placeholder
query_template: SQL query with {schema_prefix} and/or {schema} placeholders
*args: Query parameters
set_public_search_path: If True, sets search_path to include public schema.
Needed for pgvector types and other public schema objects.
Returns:
List of result rows as dictionaries
@@ -187,23 +179,20 @@ async def query_raw_with_schema(
user_id
)
"""
return await _raw_with_schema(query_template, *args, execute=False, set_public_search_path=set_public_search_path) # type: ignore
return await _raw_with_schema(query_template, *args, execute=False) # type: ignore
async def execute_raw_with_schema(
query_template: str,
*args,
client: Prisma | None = None,
set_public_search_path: bool = False,
) -> int:
"""Execute raw SQL command (INSERT/UPDATE/DELETE) with proper schema handling.
Args:
query_template: SQL query with {schema_prefix} placeholder
query_template: SQL query with {schema_prefix} and/or {schema} placeholders
*args: Query parameters
client: Optional Prisma client for transactions
set_public_search_path: If True, sets search_path to include public schema.
Needed for pgvector types and other public schema objects.
Returns:
Number of affected rows
@@ -215,7 +204,7 @@ async def execute_raw_with_schema(
client=tx # Optional transaction client
)
"""
return await _raw_with_schema(query_template, *args, execute=True, client=client, set_public_search_path=set_public_search_path) # type: ignore
return await _raw_with_schema(query_template, *args, execute=True, client=client) # type: ignore
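
With the simplified signatures, callers rely on the placeholders alone. A hypothetical usage combining both docstring patterns above (the table name, user_id, and embedding_data are illustrative):

# SELECT against an application table; {schema_prefix} expands to
# '"platform".' for a non-public schema and to '' otherwise.
rows = await query_raw_with_schema(
    'SELECT "id" FROM {schema_prefix}"AgentGraph" WHERE "userId" = $1',
    user_id,
)

# INSERT that casts to the pgvector type without hard-coding its schema.
await execute_raw_with_schema(
    'INSERT INTO {schema_prefix}"Embedding" (vec) VALUES ($1::{pgvector_schema}.vector)',
    embedding_data,
)
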
class BaseDbModel(BaseModel):

View File

@@ -81,10 +81,12 @@ class ExecutionContext(BaseModel):
This includes information needed by blocks, sub-graphs, and execution management.
"""
safe_mode: bool = True
human_in_the_loop_safe_mode: bool = True
sensitive_action_safe_mode: bool = False
user_timezone: str = "UTC"
root_execution_id: Optional[str] = None
parent_execution_id: Optional[str] = None
auto_approved_node_ids: set[str] = Field(default_factory=set)
# -------------------------- Models -------------------------- #
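
A sketch of how a resumed run might carry node-scoped approvals after a bulk "approve all future actions" (only the field names come from the model above; the node ID is illustrative):

ctx = ExecutionContext(
    human_in_the_loop_safe_mode=True,         # HITL reviews stay enabled
    sensitive_action_safe_mode=True,          # sensitive blocks still gate...
    auto_approved_node_ids={"node_def_123"},  # ...except nodes already bulk-approved
)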

View File

@@ -3,7 +3,7 @@ import logging
import uuid
from collections import defaultdict
from datetime import datetime, timezone
from typing import TYPE_CHECKING, Any, Literal, Optional, cast
from typing import TYPE_CHECKING, Annotated, Any, Literal, Optional, cast
from prisma.enums import SubmissionStatus
from prisma.models import (
@@ -20,7 +20,7 @@ from prisma.types import (
AgentNodeLinkCreateInput,
StoreListingVersionWhereInput,
)
from pydantic import BaseModel, Field, create_model
from pydantic import BaseModel, BeforeValidator, Field, create_model
from pydantic.fields import computed_field
from backend.blocks.agent import AgentExecutorBlock
@@ -62,7 +62,29 @@ logger = logging.getLogger(__name__)
class GraphSettings(BaseModel):
human_in_the_loop_safe_mode: bool | None = None
# Use Annotated with BeforeValidator to coerce None to default values.
# This handles cases where the database has null values for these fields.
human_in_the_loop_safe_mode: Annotated[
bool, BeforeValidator(lambda v: v if v is not None else True)
] = True
sensitive_action_safe_mode: Annotated[
bool, BeforeValidator(lambda v: v if v is not None else False)
] = False
@classmethod
def from_graph(
cls,
graph: "GraphModel",
hitl_safe_mode: bool | None = None,
sensitive_action_safe_mode: bool = False,
) -> "GraphSettings":
# Default to True if not explicitly set
if hitl_safe_mode is None:
hitl_safe_mode = True
return cls(
human_in_the_loop_safe_mode=hitl_safe_mode,
sensitive_action_safe_mode=sensitive_action_safe_mode,
)
class Link(BaseDbModel):
@@ -244,10 +266,14 @@ class BaseGraph(BaseDbModel):
return any(
node.block_id
for node in self.nodes
if (
node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
or node.block.requires_human_review
)
if node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
)
@computed_field
@property
def has_sensitive_action(self) -> bool:
return any(
node.block_id for node in self.nodes if node.block.is_sensitive_action
)
@property
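
The BeforeValidator coercion above is what lets rows stored with SQL NULLs parse cleanly; a quick check of the intended behavior (a sketch, assuming Pydantic v2's model_validate):

settings = GraphSettings.model_validate(
    {"human_in_the_loop_safe_mode": None, "sensitive_action_safe_mode": None}
)
assert settings.human_in_the_loop_safe_mode is True   # None coerced to default True
assert settings.sensitive_action_safe_mode is False   # None coerced to default False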

View File

@@ -38,6 +38,7 @@ async def get_or_create_human_review(
graph_exec_id: str,
graph_id: str,
graph_version: int,
node_id: str,
input_data: SafeJsonData,
message: str,
editable: bool,
@@ -53,6 +54,7 @@ async def get_or_create_human_review(
graph_exec_id: ID of the graph execution
graph_id: ID of the graph template
graph_version: Version of the graph template
node_id: ID of the node in the graph definition
input_data: The data to be reviewed
message: Instructions for the reviewer
editable: Whether the data can be edited
@@ -73,6 +75,7 @@ async def get_or_create_human_review(
"graphExecId": graph_exec_id,
"graphId": graph_id,
"graphVersion": graph_version,
"nodeId": node_id,
"payload": SafeJson(input_data),
"instructions": message,
"editable": editable,

View File

@@ -23,6 +23,7 @@ def sample_db_review():
mock_review.graphExecId = "test_graph_exec_456"
mock_review.graphId = "test_graph_789"
mock_review.graphVersion = 1
mock_review.nodeId = "node_def_123"
mock_review.payload = {"data": "test payload"}
mock_review.instructions = "Please review"
mock_review.editable = True
@@ -55,6 +56,7 @@ async def test_get_or_create_human_review_new(
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
node_id="node_def_123",
input_data={"data": "test payload"},
message="Please review",
editable=True,
@@ -84,6 +86,7 @@ async def test_get_or_create_human_review_approved(
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
node_id="node_def_123",
input_data={"data": "test payload"},
message="Please review",
editable=True,
@@ -183,6 +186,7 @@ async def test_process_all_reviews_for_execution_success(
updated_review.graphExecId = "test_graph_exec_456"
updated_review.graphId = "test_graph_789"
updated_review.graphVersion = 1
updated_review.nodeId = "node_def_123"
updated_review.payload = {"data": "modified"}
updated_review.instructions = "Please review"
updated_review.editable = True
@@ -272,6 +276,7 @@ async def test_process_all_reviews_mixed_approval_rejection(
second_review.graphExecId = "test_graph_exec_456"
second_review.graphId = "test_graph_789"
second_review.graphVersion = 1
second_review.nodeId = "node_def_456"
second_review.payload = {"data": "original"}
second_review.instructions = "Second review"
second_review.editable = True
@@ -296,6 +301,7 @@ async def test_process_all_reviews_mixed_approval_rejection(
approved_review.graphExecId = "test_graph_exec_456"
approved_review.graphId = "test_graph_789"
approved_review.graphVersion = 1
approved_review.nodeId = "node_def_123"
approved_review.payload = {"data": "modified"}
approved_review.instructions = "Please review"
approved_review.editable = True
@@ -313,6 +319,7 @@ async def test_process_all_reviews_mixed_approval_rejection(
rejected_review.graphExecId = "test_graph_exec_456"
rejected_review.graphId = "test_graph_789"
rejected_review.graphVersion = 1
rejected_review.nodeId = "node_def_456"
rejected_review.payload = {"data": "original"}
rejected_review.instructions = "Please review"
rejected_review.editable = True

View File

@@ -328,6 +328,8 @@ async def clear_business_understanding(user_id: str) -> bool:
def format_understanding_for_prompt(understanding: BusinessUnderstanding) -> str:
"""Format business understanding as text for system prompt injection."""
if not understanding:
return ""
sections = []
# User info section

View File

@@ -309,7 +309,7 @@ def ensure_embeddings_coverage():
# Process in batches until no more missing embeddings
while True:
result = db_client.backfill_missing_embeddings(batch_size=10)
result = db_client.backfill_missing_embeddings(batch_size=100)
total_processed += result["processed"]
total_success += result["success"]

View File

@@ -873,11 +873,8 @@ async def add_graph_execution(
settings = await gdb.get_graph_settings(user_id=user_id, graph_id=graph_id)
execution_context = ExecutionContext(
safe_mode=(
settings.human_in_the_loop_safe_mode
if settings.human_in_the_loop_safe_mode is not None
else True
),
human_in_the_loop_safe_mode=settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=settings.sensitive_action_safe_mode,
user_timezone=(
user.timezone if user.timezone != USER_TIMEZONE_NOT_SET else "UTC"
),

View File

@@ -386,6 +386,7 @@ async def test_add_graph_execution_is_repeatable(mocker: MockerFixture):
mock_user.timezone = "UTC"
mock_settings = mocker.MagicMock()
mock_settings.human_in_the_loop_safe_mode = True
mock_settings.sensitive_action_safe_mode = False
mock_udb.get_user_by_id = mocker.AsyncMock(return_value=mock_user)
mock_gdb.get_graph_settings = mocker.AsyncMock(return_value=mock_settings)
@@ -651,6 +652,7 @@ async def test_add_graph_execution_with_nodes_to_skip(mocker: MockerFixture):
mock_user.timezone = "UTC"
mock_settings = mocker.MagicMock()
mock_settings.human_in_the_loop_safe_mode = True
mock_settings.sensitive_action_safe_mode = False
mock_udb.get_user_by_id = mocker.AsyncMock(return_value=mock_user)
mock_gdb.get_graph_settings = mocker.AsyncMock(return_value=mock_settings)

View File

@@ -0,0 +1,2 @@
-- AlterTable
ALTER TABLE "PendingHumanReview" ADD COLUMN "nodeId" TEXT NOT NULL DEFAULT '';

View File

@@ -573,6 +573,7 @@ model PendingHumanReview {
graphExecId String
graphId String
graphVersion Int
nodeId String // The node ID in the graph definition (for auto-approval tracking)
payload Json // The actual payload data to be reviewed
instructions String? // Instructions/message for the reviewer
editable Boolean @default(true) // Whether the reviewer can edit the data

View File

@@ -0,0 +1,864 @@
#!/usr/bin/env python3
"""
Block Documentation Generator
Generates markdown documentation for all blocks from code introspection.
Preserves manually-written content between marker comments.
Usage:
# Generate all docs
poetry run python scripts/generate_block_docs.py
# Check mode for CI (exits 1 if stale)
poetry run python scripts/generate_block_docs.py --check
# Verbose output
poetry run python scripts/generate_block_docs.py -v
"""
import argparse
import inspect
import logging
import re
import sys
from collections import defaultdict
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any
# Add backend to path for imports
backend_dir = Path(__file__).parent.parent
sys.path.insert(0, str(backend_dir))
logger = logging.getLogger(__name__)
# Default output directory relative to repo root
DEFAULT_OUTPUT_DIR = (
Path(__file__).parent.parent.parent.parent / "docs" / "integrations"
)
@dataclass
class FieldDoc:
"""Documentation for a single input/output field."""
name: str
description: str
type_str: str
required: bool
default: Any = None
advanced: bool = False
hidden: bool = False
placeholder: str | None = None
@dataclass
class BlockDoc:
"""Documentation data extracted from a block."""
id: str
name: str
class_name: str
description: str
categories: list[str]
category_descriptions: dict[str, str]
inputs: list[FieldDoc]
outputs: list[FieldDoc]
block_type: str
source_file: str
contributors: list[str] = field(default_factory=list)
# Category to human-readable name mapping
CATEGORY_DISPLAY_NAMES = {
"AI": "AI and Language Models",
"BASIC": "Basic Operations",
"TEXT": "Text Processing",
"SEARCH": "Search and Information Retrieval",
"SOCIAL": "Social Media and Content",
"DEVELOPER_TOOLS": "Developer Tools",
"DATA": "Data Processing",
"LOGIC": "Logic and Control Flow",
"COMMUNICATION": "Communication",
"INPUT": "Input/Output",
"OUTPUT": "Input/Output",
"MULTIMEDIA": "Media Generation",
"PRODUCTIVITY": "Productivity",
"HARDWARE": "Hardware",
"AGENT": "Agent Integration",
"CRM": "CRM Services",
"SAFETY": "AI Safety",
"ISSUE_TRACKING": "Issue Tracking",
"MARKETING": "Marketing",
}
# Category to doc file mapping (for grouping related blocks)
CATEGORY_FILE_MAP = {
"BASIC": "basic",
"TEXT": "text",
"AI": "llm",
"SEARCH": "search",
"DATA": "data",
"LOGIC": "logic",
"COMMUNICATION": "communication",
"MULTIMEDIA": "multimedia",
"PRODUCTIVITY": "productivity",
}
def class_name_to_display_name(class_name: str) -> str:
"""Convert BlockClassName to 'Block Class Name'."""
# Remove 'Block' suffix (only at the end, not all occurrences)
name = class_name.removesuffix("Block")
# Insert space before capitals
name = re.sub(r"([a-z])([A-Z])", r"\1 \2", name)
# Handle consecutive capitals (e.g., 'HTTPRequest' -> 'HTTP Request')
name = re.sub(r"([A-Z]+)([A-Z][a-z])", r"\1 \2", name)
return name.strip()
def type_to_readable(type_schema: dict[str, Any] | Any) -> str:
"""Convert JSON schema type to human-readable string."""
if not isinstance(type_schema, dict):
return str(type_schema) if type_schema else "Any"
if "anyOf" in type_schema:
# Union type - show options
any_of = type_schema["anyOf"]
if not isinstance(any_of, list):
return "Any"
options = []
for opt in any_of:
if isinstance(opt, dict) and opt.get("type") == "null":
continue
options.append(type_to_readable(opt))
if not options:
return "None"
if len(options) == 1:
return options[0]
return " | ".join(options)
if "allOf" in type_schema:
all_of = type_schema["allOf"]
if not isinstance(all_of, list) or not all_of:
return "Any"
return type_to_readable(all_of[0])
schema_type = type_schema.get("type")
if schema_type == "array":
items = type_schema.get("items", {})
item_type = type_to_readable(items)
return f"List[{item_type}]"
if schema_type == "object":
if "additionalProperties" in type_schema:
additional_props = type_schema["additionalProperties"]
# additionalProperties: true means any value type is allowed
if additional_props is True:
return "Dict[str, Any]"
value_type = type_to_readable(additional_props)
return f"Dict[str, {value_type}]"
# Check if it's a specific model
title = type_schema.get("title", "Object")
return title
if schema_type == "string":
if "enum" in type_schema:
return " | ".join(f'"{v}"' for v in type_schema["enum"])
if "format" in type_schema:
return f"str ({type_schema['format']})"
return "str"
if schema_type == "integer":
return "int"
if schema_type == "number":
return "float"
if schema_type == "boolean":
return "bool"
if schema_type == "null":
return "None"
# Fallback
return type_schema.get("title", schema_type or "Any")
def safe_get(d: Any, key: str, default: Any = None) -> Any:
"""Safely get a value from a dict-like object."""
if isinstance(d, dict):
return d.get(key, default)
return default
def file_path_to_title(file_path: str) -> str:
"""Convert file path to a readable title.
Examples:
"github/issues.md" -> "GitHub Issues"
"basic.md" -> "Basic"
"llm.md" -> "LLM"
"google/sheets.md" -> "Google Sheets"
"""
# Special case replacements (applied after title casing)
TITLE_FIXES = {
"Llm": "LLM",
"Github": "GitHub",
"Api": "API",
"Ai": "AI",
"Oauth": "OAuth",
"Url": "URL",
"Ci": "CI",
"Pr": "PR",
"Gmb": "GMB", # Google My Business
"Hubspot": "HubSpot",
"Linkedin": "LinkedIn",
"Tiktok": "TikTok",
"Youtube": "YouTube",
}
def apply_fixes(text: str) -> str:
# Split into words, fix each word, rejoin
words = text.split()
fixed_words = [TITLE_FIXES.get(word, word) for word in words]
return " ".join(fixed_words)
path = Path(file_path)
name = path.stem # e.g., "issues" or "sheets"
# Get parent dir if exists
parent = path.parent.name if path.parent.name != "." else None
# Title case and apply fixes
if parent:
parent_title = apply_fixes(parent.replace("_", " ").title())
name_title = apply_fixes(name.replace("_", " ").title())
return f"{parent_title} {name_title}"
return apply_fixes(name.replace("_", " ").title())
def extract_block_doc(block_cls: type) -> BlockDoc:
"""Extract documentation data from a block class."""
block = block_cls.create()
# Get source file
try:
source_file = inspect.getfile(block_cls)
# Make relative to blocks directory
blocks_dir = Path(source_file).parent
while blocks_dir.name != "blocks" and blocks_dir.parent != blocks_dir:
blocks_dir = blocks_dir.parent
source_file = str(Path(source_file).relative_to(blocks_dir.parent))
except (TypeError, ValueError):
source_file = "unknown"
# Extract input fields
input_schema = block.input_schema.jsonschema()
input_properties = safe_get(input_schema, "properties", {})
if not isinstance(input_properties, dict):
input_properties = {}
required_raw = safe_get(input_schema, "required", [])
# Handle edge cases where required might not be a list
if isinstance(required_raw, (list, set, tuple)):
required_inputs = set(required_raw)
else:
required_inputs = set()
inputs = []
for field_name, field_schema in input_properties.items():
if not isinstance(field_schema, dict):
continue
# Skip credentials fields in docs (they're auto-handled)
if "credentials" in field_name.lower():
continue
inputs.append(
FieldDoc(
name=field_name,
description=safe_get(field_schema, "description", ""),
type_str=type_to_readable(field_schema),
required=field_name in required_inputs,
default=safe_get(field_schema, "default"),
advanced=safe_get(field_schema, "advanced", False) or False,
hidden=safe_get(field_schema, "hidden", False) or False,
placeholder=safe_get(field_schema, "placeholder"),
)
)
# Extract output fields
output_schema = block.output_schema.jsonschema()
output_properties = safe_get(output_schema, "properties", {})
if not isinstance(output_properties, dict):
output_properties = {}
outputs = []
for field_name, field_schema in output_properties.items():
if not isinstance(field_schema, dict):
continue
outputs.append(
FieldDoc(
name=field_name,
description=safe_get(field_schema, "description", ""),
type_str=type_to_readable(field_schema),
required=True, # Outputs are always produced
hidden=safe_get(field_schema, "hidden", False) or False,
)
)
# Get category info (sort for deterministic ordering since it's a set)
categories = []
category_descriptions = {}
for cat in sorted(block.categories, key=lambda c: c.name):
categories.append(cat.name)
category_descriptions[cat.name] = cat.value
# Get contributors
contributors = []
for contrib in block.contributors:
contributors.append(contrib.name if hasattr(contrib, "name") else str(contrib))
return BlockDoc(
id=block.id,
name=class_name_to_display_name(block.name),
class_name=block.name,
description=block.description,
categories=categories,
category_descriptions=category_descriptions,
inputs=inputs,
outputs=outputs,
block_type=block.block_type.value,
source_file=source_file,
contributors=contributors,
)
def generate_anchor(name: str) -> str:
"""Generate markdown anchor from block name."""
return name.lower().replace(" ", "-").replace("(", "").replace(")", "")
def extract_manual_content(existing_content: str) -> dict[str, str]:
"""Extract content between MANUAL markers from existing file."""
manual_sections = {}
# Pattern: <!-- MANUAL: section_name -->content<!-- END MANUAL -->
pattern = r"<!-- MANUAL: (\w+) -->\s*(.*?)\s*<!-- END MANUAL -->"
matches = re.findall(pattern, existing_content, re.DOTALL)
for section_name, content in matches:
manual_sections[section_name] = content.strip()
return manual_sections
def generate_block_markdown(
block: BlockDoc,
manual_content: dict[str, str] | None = None,
) -> str:
"""Generate markdown documentation for a single block."""
manual_content = manual_content or {}
lines = []
# All blocks use ## heading, sections use ### (consistent siblings)
lines.append(f"## {block.name}")
lines.append("")
# What it is (full description)
lines.append("### What it is")
lines.append(block.description or "No description available.")
lines.append("")
# How it works (manual section)
lines.append("### How it works")
how_it_works = manual_content.get(
"how_it_works", "_Add technical explanation here._"
)
lines.append("<!-- MANUAL: how_it_works -->")
lines.append(how_it_works)
lines.append("<!-- END MANUAL -->")
lines.append("")
# Inputs table (auto-generated)
visible_inputs = [f for f in block.inputs if not f.hidden]
if visible_inputs:
lines.append("### Inputs")
lines.append("")
lines.append("| Input | Description | Type | Required |")
lines.append("|-------|-------------|------|----------|")
for inp in visible_inputs:
required = "Yes" if inp.required else "No"
desc = inp.description or "-"
type_str = inp.type_str or "-"
# Normalize newlines and escape pipes for valid table syntax
desc = desc.replace("\n", " ").replace("|", "\\|")
type_str = type_str.replace("|", "\\|")
lines.append(f"| {inp.name} | {desc} | {type_str} | {required} |")
lines.append("")
# Outputs table (auto-generated)
visible_outputs = [f for f in block.outputs if not f.hidden]
if visible_outputs:
lines.append("### Outputs")
lines.append("")
lines.append("| Output | Description | Type |")
lines.append("|--------|-------------|------|")
for out in visible_outputs:
desc = out.description or "-"
type_str = out.type_str or "-"
# Normalize newlines and escape pipes for valid table syntax
desc = desc.replace("\n", " ").replace("|", "\\|")
type_str = type_str.replace("|", "\\|")
lines.append(f"| {out.name} | {desc} | {type_str} |")
lines.append("")
# Possible use case (manual section)
lines.append("### Possible use case")
use_case = manual_content.get("use_case", "_Add practical use case examples here._")
lines.append("<!-- MANUAL: use_case -->")
lines.append(use_case)
lines.append("<!-- END MANUAL -->")
lines.append("")
lines.append("---")
lines.append("")
return "\n".join(lines)
def get_block_file_mapping(blocks: list[BlockDoc]) -> dict[str, list[BlockDoc]]:
"""
Map blocks to their documentation files.
Returns dict of {relative_file_path: [blocks]}
"""
file_mapping = defaultdict(list)
for block in blocks:
# Determine file path based on source file or category
source_path = Path(block.source_file)
# If source is in a subdirectory (e.g., google/gmail.py), use that structure
if len(source_path.parts) > 2: # blocks/subdir/file.py
subdir = source_path.parts[1] # e.g., "google"
# Use the Python filename as the md filename
md_file = source_path.stem + ".md" # e.g., "gmail.md"
file_path = f"{subdir}/{md_file}"
else:
# Use category-based grouping for top-level blocks
primary_category = block.categories[0] if block.categories else "BASIC"
file_name = CATEGORY_FILE_MAP.get(primary_category, "misc")
file_path = f"{file_name}.md"
file_mapping[file_path].append(block)
return dict(file_mapping)
def generate_overview_table(blocks: list[BlockDoc]) -> str:
"""Generate the overview table markdown (blocks.md)."""
lines = []
lines.append("# AutoGPT Blocks Overview")
lines.append("")
lines.append(
'AutoGPT uses a modular approach with various "blocks" to handle different tasks. These blocks are the building blocks of AutoGPT workflows, allowing users to create complex automations by combining simple, specialized components.'
)
lines.append("")
lines.append('!!! info "Creating Your Own Blocks"')
lines.append(" Want to create your own custom blocks? Check out our guides:")
lines.append(" ")
lines.append(
" - [Build your own Blocks](https://docs.agpt.co/platform/new_blocks/) - Step-by-step tutorial with examples"
)
lines.append(
" - [Block SDK Guide](https://docs.agpt.co/platform/block-sdk-guide/) - Advanced SDK patterns with OAuth, webhooks, and provider configuration"
)
lines.append("")
lines.append(
"Below is a comprehensive list of all available blocks, categorized by their primary function. Click on any block name to view its detailed documentation."
)
lines.append("")
# Group blocks by category
by_category = defaultdict(list)
for block in blocks:
primary_cat = block.categories[0] if block.categories else "BASIC"
by_category[primary_cat].append(block)
# Sort categories
category_order = [
"BASIC",
"DATA",
"TEXT",
"AI",
"SEARCH",
"SOCIAL",
"COMMUNICATION",
"DEVELOPER_TOOLS",
"MULTIMEDIA",
"PRODUCTIVITY",
"LOGIC",
"INPUT",
"OUTPUT",
"AGENT",
"CRM",
"SAFETY",
"ISSUE_TRACKING",
"HARDWARE",
"MARKETING",
]
# Track emitted display names to avoid duplicate headers
# (e.g., INPUT and OUTPUT both map to "Input/Output")
emitted_display_names: set[str] = set()
for category in category_order:
if category not in by_category:
continue
display_name = CATEGORY_DISPLAY_NAMES.get(category, category)
# Collect all blocks for this display name (may span multiple categories)
if display_name in emitted_display_names:
# Already emitted header, just add rows to existing table
# Find the position before the last empty line and insert rows
cat_blocks = sorted(by_category[category], key=lambda b: b.name)
# Remove the trailing empty line, add rows, then re-add empty line
lines.pop()
for block in cat_blocks:
file_mapping = get_block_file_mapping([block])
file_path = list(file_mapping.keys())[0]
anchor = generate_anchor(block.name)
short_desc = (
block.description.split(".")[0]
if block.description
else "No description"
)
short_desc = short_desc.replace("\n", " ").replace("|", "\\|")
lines.append(f"| [{block.name}]({file_path}#{anchor}) | {short_desc} |")
lines.append("")
continue
emitted_display_names.add(display_name)
cat_blocks = sorted(by_category[category], key=lambda b: b.name)
lines.append(f"## {display_name}")
lines.append("")
lines.append("| Block Name | Description |")
lines.append("|------------|-------------|")
for block in cat_blocks:
# Determine link path
file_mapping = get_block_file_mapping([block])
file_path = list(file_mapping.keys())[0]
anchor = generate_anchor(block.name)
# Short description (first sentence)
short_desc = (
block.description.split(".")[0]
if block.description
else "No description"
)
short_desc = short_desc.replace("\n", " ").replace("|", "\\|")
lines.append(f"| [{block.name}]({file_path}#{anchor}) | {short_desc} |")
lines.append("")
return "\n".join(lines)
def load_all_blocks_for_docs() -> list[BlockDoc]:
"""Load all blocks and extract documentation."""
from backend.blocks import load_all_blocks
block_classes = load_all_blocks()
blocks = []
for _block_id, block_cls in block_classes.items():
try:
block_doc = extract_block_doc(block_cls)
blocks.append(block_doc)
except Exception as e:
logger.warning(f"Failed to extract docs for {block_cls.__name__}: {e}")
return blocks
def write_block_docs(
output_dir: Path,
blocks: list[BlockDoc],
verbose: bool = False,
) -> dict[str, str]:
"""
Write block documentation files.
Returns dict of {file_path: content} for all generated files.
"""
output_dir = Path(output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
file_mapping = get_block_file_mapping(blocks)
generated_files = {}
for file_path, file_blocks in file_mapping.items():
full_path = output_dir / file_path
# Create subdirectories if needed
full_path.parent.mkdir(parents=True, exist_ok=True)
# Load existing content for manual section preservation
existing_content = ""
if full_path.exists():
existing_content = full_path.read_text()
# Always generate title from file path (with fixes applied)
file_title = file_path_to_title(file_path)
# Extract existing file description if present (preserve manual content)
file_header_pattern = (
r"^# .+?\n<!-- MANUAL: file_description -->\n(.*?)\n<!-- END MANUAL -->"
)
file_header_match = re.search(file_header_pattern, existing_content, re.DOTALL)
if file_header_match:
file_description = file_header_match.group(1)
else:
file_description = "_Add a description of this category of blocks._"
# Generate file header
file_header = f"# {file_title}\n"
file_header += "<!-- MANUAL: file_description -->\n"
file_header += f"{file_description}\n"
file_header += "<!-- END MANUAL -->\n"
# Generate content for each block
content_parts = []
for block in sorted(file_blocks, key=lambda b: b.name):
# Extract manual content specific to this block
# Match block heading (h2) and capture until --- separator
block_pattern = rf"(?:^|\n)## {re.escape(block.name)}\s*\n(.*?)(?=\n---|\Z)"
block_match = re.search(block_pattern, existing_content, re.DOTALL)
if block_match:
manual_content = extract_manual_content(block_match.group(1))
else:
manual_content = {}
content_parts.append(
generate_block_markdown(
block,
manual_content,
)
)
full_content = file_header + "\n" + "\n".join(content_parts)
generated_files[str(file_path)] = full_content
if verbose:
print(f" Writing {file_path} ({len(file_blocks)} blocks)")
full_path.write_text(full_content)
# Generate overview file
overview_content = generate_overview_table(blocks)
overview_path = output_dir / "README.md"
generated_files["README.md"] = overview_content
overview_path.write_text(overview_content)
if verbose:
print(" Writing README.md (overview)")
return generated_files
def check_docs_in_sync(output_dir: Path, blocks: list[BlockDoc]) -> bool:
"""
Check if generated docs match existing docs.
Returns True if in sync, False otherwise.
"""
output_dir = Path(output_dir)
file_mapping = get_block_file_mapping(blocks)
all_match = True
out_of_sync_details: list[tuple[str, list[str]]] = []
for file_path, file_blocks in file_mapping.items():
full_path = output_dir / file_path
if not full_path.exists():
block_names = [b.name for b in sorted(file_blocks, key=lambda b: b.name)]
print(f"MISSING: {file_path}")
print(f" Blocks: {', '.join(block_names)}")
out_of_sync_details.append((file_path, block_names))
all_match = False
continue
existing_content = full_path.read_text()
# Always generate title from file path (with fixes applied)
file_title = file_path_to_title(file_path)
# Extract existing file description if present (preserve manual content)
file_header_pattern = (
r"^# .+?\n<!-- MANUAL: file_description -->\n(.*?)\n<!-- END MANUAL -->"
)
file_header_match = re.search(file_header_pattern, existing_content, re.DOTALL)
if file_header_match:
file_description = file_header_match.group(1)
else:
file_description = "_Add a description of this category of blocks._"
# Generate expected file header
file_header = f"# {file_title}\n"
file_header += "<!-- MANUAL: file_description -->\n"
file_header += f"{file_description}\n"
file_header += "<!-- END MANUAL -->\n"
# Extract manual content from existing file
manual_sections_by_block = {}
for block in file_blocks:
block_pattern = rf"(?:^|\n)## {re.escape(block.name)}\s*\n(.*?)(?=\n---|\Z)"
block_match = re.search(block_pattern, existing_content, re.DOTALL)
if block_match:
manual_sections_by_block[block.name] = extract_manual_content(
block_match.group(1)
)
# Generate expected content and check each block individually
content_parts = []
mismatched_blocks = []
for block in sorted(file_blocks, key=lambda b: b.name):
manual_content = manual_sections_by_block.get(block.name, {})
expected_block_content = generate_block_markdown(
block,
manual_content,
)
content_parts.append(expected_block_content)
# Check if this specific block's section exists and matches
# Include the --- separator to match generate_block_markdown output
block_pattern = rf"(?:^|\n)(## {re.escape(block.name)}\s*\n.*?\n---\n)"
block_match = re.search(block_pattern, existing_content, re.DOTALL)
if not block_match:
mismatched_blocks.append(f"{block.name} (missing)")
elif block_match.group(1).strip() != expected_block_content.strip():
mismatched_blocks.append(block.name)
expected_content = file_header + "\n" + "\n".join(content_parts)
if existing_content.strip() != expected_content.strip():
print(f"OUT OF SYNC: {file_path}")
if mismatched_blocks:
print(f" Affected blocks: {', '.join(mismatched_blocks)}")
out_of_sync_details.append((file_path, mismatched_blocks))
all_match = False
# Check overview
overview_path = output_dir / "README.md"
if overview_path.exists():
existing_overview = overview_path.read_text()
expected_overview = generate_overview_table(blocks)
if existing_overview.strip() != expected_overview.strip():
print("OUT OF SYNC: README.md (overview)")
print(" The blocks overview table needs regeneration")
out_of_sync_details.append(("README.md", ["overview table"]))
all_match = False
else:
print("MISSING: README.md (overview)")
out_of_sync_details.append(("README.md", ["overview table"]))
all_match = False
# Check for unfilled manual sections
unfilled_patterns = [
"_Add a description of this category of blocks._",
"_Add technical explanation here._",
"_Add practical use case examples here._",
]
files_with_unfilled = []
for file_path in file_mapping.keys():
full_path = output_dir / file_path
if full_path.exists():
content = full_path.read_text()
unfilled_count = sum(1 for p in unfilled_patterns if p in content)
if unfilled_count > 0:
files_with_unfilled.append((file_path, unfilled_count))
if files_with_unfilled:
print("\nWARNING: Files with unfilled manual sections:")
for file_path, count in sorted(files_with_unfilled):
print(f" {file_path}: {count} unfilled section(s)")
print(
f"\nTotal: {len(files_with_unfilled)} files with unfilled manual sections"
)
return all_match
def main():
parser = argparse.ArgumentParser(
description="Generate block documentation from code introspection"
)
parser.add_argument(
"--output-dir",
type=Path,
default=DEFAULT_OUTPUT_DIR,
help="Output directory for generated docs",
)
parser.add_argument(
"--check",
action="store_true",
help="Check if docs are in sync (for CI), exit 1 if not",
)
parser.add_argument(
"-v",
"--verbose",
action="store_true",
help="Verbose output",
)
args = parser.parse_args()
logging.basicConfig(
level=logging.DEBUG if args.verbose else logging.INFO,
format="%(levelname)s: %(message)s",
)
print("Loading blocks...")
blocks = load_all_blocks_for_docs()
print(f"Found {len(blocks)} blocks")
if args.check:
print(f"Checking docs in {args.output_dir}...")
in_sync = check_docs_in_sync(args.output_dir, blocks)
if in_sync:
print("All documentation is in sync!")
sys.exit(0)
else:
print("\n" + "=" * 60)
print("Documentation is out of sync!")
print("=" * 60)
print("\nTo fix this, run one of the following:")
print("\n Option 1 - Run locally:")
print(
" cd autogpt_platform/backend && poetry run python scripts/generate_block_docs.py"
)
print("\n Option 2 - Ask Claude Code to run it:")
print(' "Run the block docs generator script to sync documentation"')
print("\n" + "=" * 60)
sys.exit(1)
else:
print(f"Generating docs to {args.output_dir}...")
write_block_docs(
args.output_dir,
blocks,
verbose=args.verbose,
)
print("Done!")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,208 @@
#!/usr/bin/env python3
"""Tests for the block documentation generator."""
import pytest
from scripts.generate_block_docs import (
class_name_to_display_name,
extract_manual_content,
generate_anchor,
type_to_readable,
)
class TestClassNameToDisplayName:
"""Tests for class_name_to_display_name function."""
def test_simple_block_name(self):
assert class_name_to_display_name("PrintBlock") == "Print"
def test_multi_word_block_name(self):
assert class_name_to_display_name("GetWeatherBlock") == "Get Weather"
def test_consecutive_capitals(self):
assert class_name_to_display_name("HTTPRequestBlock") == "HTTP Request"
def test_ai_prefix(self):
assert class_name_to_display_name("AIConditionBlock") == "AI Condition"
def test_no_block_suffix(self):
assert class_name_to_display_name("SomeClass") == "Some Class"
class TestTypeToReadable:
"""Tests for type_to_readable function."""
def test_string_type(self):
assert type_to_readable({"type": "string"}) == "str"
def test_integer_type(self):
assert type_to_readable({"type": "integer"}) == "int"
def test_number_type(self):
assert type_to_readable({"type": "number"}) == "float"
def test_boolean_type(self):
assert type_to_readable({"type": "boolean"}) == "bool"
def test_array_type(self):
result = type_to_readable({"type": "array", "items": {"type": "string"}})
assert result == "List[str]"
def test_object_type(self):
result = type_to_readable({"type": "object", "title": "MyModel"})
assert result == "MyModel"
def test_anyof_with_null(self):
result = type_to_readable({"anyOf": [{"type": "string"}, {"type": "null"}]})
assert result == "str"
def test_anyof_multiple_types(self):
result = type_to_readable({"anyOf": [{"type": "string"}, {"type": "integer"}]})
assert result == "str | int"
def test_enum_type(self):
result = type_to_readable(
{"type": "string", "enum": ["option1", "option2", "option3"]}
)
assert result == '"option1" | "option2" | "option3"'
def test_none_input(self):
assert type_to_readable(None) == "Any"
def test_non_dict_input(self):
assert type_to_readable("string") == "string"
class TestExtractManualContent:
"""Tests for extract_manual_content function."""
def test_extract_how_it_works(self):
content = """
### How it works
<!-- MANUAL: how_it_works -->
This is how it works.
<!-- END MANUAL -->
"""
result = extract_manual_content(content)
assert result == {"how_it_works": "This is how it works."}
def test_extract_use_case(self):
content = """
### Possible use case
<!-- MANUAL: use_case -->
Example use case here.
<!-- END MANUAL -->
"""
result = extract_manual_content(content)
assert result == {"use_case": "Example use case here."}
def test_extract_multiple_sections(self):
content = """
<!-- MANUAL: how_it_works -->
How it works content.
<!-- END MANUAL -->
<!-- MANUAL: use_case -->
Use case content.
<!-- END MANUAL -->
"""
result = extract_manual_content(content)
assert result == {
"how_it_works": "How it works content.",
"use_case": "Use case content.",
}
def test_empty_content(self):
result = extract_manual_content("")
assert result == {}
def test_no_markers(self):
result = extract_manual_content("Some content without markers")
assert result == {}
class TestGenerateAnchor:
"""Tests for generate_anchor function."""
def test_simple_name(self):
assert generate_anchor("Print") == "print"
def test_multi_word_name(self):
assert generate_anchor("Get Weather") == "get-weather"
def test_name_with_parentheses(self):
assert generate_anchor("Something (Optional)") == "something-optional"
def test_already_lowercase(self):
assert generate_anchor("already lowercase") == "already-lowercase"
class TestIntegration:
"""Integration tests that require block loading."""
def test_load_blocks(self):
"""Test that blocks can be loaded successfully."""
import logging
import sys
from pathlib import Path
logging.disable(logging.CRITICAL)
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import load_all_blocks_for_docs
blocks = load_all_blocks_for_docs()
assert len(blocks) > 0, "Should load at least one block"
def test_block_doc_has_required_fields(self):
"""Test that extracted block docs have required fields."""
import logging
import sys
from pathlib import Path
logging.disable(logging.CRITICAL)
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import load_all_blocks_for_docs
blocks = load_all_blocks_for_docs()
block = blocks[0]
assert hasattr(block, "id")
assert hasattr(block, "name")
assert hasattr(block, "description")
assert hasattr(block, "categories")
assert hasattr(block, "inputs")
assert hasattr(block, "outputs")
def test_file_mapping_is_deterministic(self):
"""Test that file mapping produces consistent results."""
import logging
import sys
from pathlib import Path
logging.disable(logging.CRITICAL)
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.generate_block_docs import (
get_block_file_mapping,
load_all_blocks_for_docs,
)
# Load blocks twice and compare mappings
blocks1 = load_all_blocks_for_docs()
blocks2 = load_all_blocks_for_docs()
mapping1 = get_block_file_mapping(blocks1)
mapping2 = get_block_file_mapping(blocks2)
# Check same files are generated
assert set(mapping1.keys()) == set(mapping2.keys())
# Check same block counts per file
for file_path in mapping1:
assert len(mapping1[file_path]) == len(mapping2[file_path])
if __name__ == "__main__":
pytest.main([__file__, "-v"])

View File

@@ -11,6 +11,7 @@
"forked_from_version": null,
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"id": "graph-123",
"input_schema": {
"properties": {},

View File

@@ -11,6 +11,7 @@
"forked_from_version": null,
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"id": "graph-123",
"input_schema": {
"properties": {},

View File

@@ -27,6 +27,8 @@
"properties": {}
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"trigger_setup_info": null,
"new_output": false,
"can_access_graph": true,
@@ -34,7 +36,8 @@
"is_favorite": false,
"recommended_schedule_cron": null,
"settings": {
"human_in_the_loop_safe_mode": null
"human_in_the_loop_safe_mode": true,
"sensitive_action_safe_mode": false
},
"marketplace_listing": null
},
@@ -65,6 +68,8 @@
"properties": {}
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"trigger_setup_info": null,
"new_output": false,
"can_access_graph": false,
@@ -72,7 +77,8 @@
"is_favorite": false,
"recommended_schedule_cron": null,
"settings": {
"human_in_the_loop_safe_mode": null
"human_in_the_loop_safe_mode": true,
"sensitive_action_safe_mode": false
},
"marketplace_listing": null
}

[13 binary image files added (content not shown); individual sizes range from 374 B to 72 KiB]
View File

@@ -5,10 +5,11 @@ import {
TooltipContent,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
import { PlayIcon, StopIcon } from "@phosphor-icons/react";
import { CircleNotchIcon, PlayIcon, StopIcon } from "@phosphor-icons/react";
import { useShallow } from "zustand/react/shallow";
import { RunInputDialog } from "../RunInputDialog/RunInputDialog";
import { useRunGraph } from "./useRunGraph";
import { cn } from "@/lib/utils";
export const RunGraph = ({ flowID }: { flowID: string | null }) => {
const {
@@ -24,6 +25,31 @@ export const RunGraph = ({ flowID }: { flowID: string | null }) => {
useShallow((state) => state.isGraphRunning),
);
const isLoading = isExecutingGraph || isTerminatingGraph || isSaving;
// Determine which icon to show with proper animation
const renderIcon = () => {
const iconClass = cn(
"size-4 transition-transform duration-200 ease-out",
!isLoading && "group-hover:scale-110",
);
if (isLoading) {
return (
<CircleNotchIcon
className={cn(iconClass, "animate-spin")}
weight="bold"
/>
);
}
if (isGraphRunning) {
return <StopIcon className={iconClass} weight="fill" />;
}
return <PlayIcon className={iconClass} weight="fill" />;
};
return (
<>
<Tooltip>
@@ -33,18 +59,18 @@ export const RunGraph = ({ flowID }: { flowID: string | null }) => {
variant={isGraphRunning ? "destructive" : "primary"}
data-id={isGraphRunning ? "stop-graph-button" : "run-graph-button"}
onClick={isGraphRunning ? handleStopGraph : handleRunGraph}
disabled={!flowID || isExecutingGraph || isTerminatingGraph}
loading={isExecutingGraph || isTerminatingGraph || isSaving}
disabled={!flowID || isLoading}
className="group"
>
{!isGraphRunning ? (
<PlayIcon className="size-4" />
) : (
<StopIcon className="size-4" />
)}
{renderIcon()}
</Button>
</TooltipTrigger>
<TooltipContent>
{isGraphRunning ? "Stop agent" : "Run agent"}
{isLoading
? "Processing..."
: isGraphRunning
? "Stop agent"
: "Run agent"}
</TooltipContent>
</Tooltip>
<RunInputDialog

View File

@@ -10,6 +10,7 @@ import { useRunInputDialog } from "./useRunInputDialog";
import { CronSchedulerDialog } from "../CronSchedulerDialog/CronSchedulerDialog";
import { useTutorialStore } from "@/app/(platform)/build/stores/tutorialStore";
import { useEffect } from "react";
import { CredentialsGroupedView } from "@/components/contextual/CredentialsInput/components/CredentialsGroupedView/CredentialsGroupedView";
export const RunInputDialog = ({
isOpen,
@@ -23,19 +24,17 @@ export const RunInputDialog = ({
const hasInputs = useGraphStore((state) => state.hasInputs);
const hasCredentials = useGraphStore((state) => state.hasCredentials);
const inputSchema = useGraphStore((state) => state.inputSchema);
const credentialsSchema = useGraphStore(
(state) => state.credentialsInputSchema,
);
const {
credentialsUiSchema,
credentialFields,
requiredCredentials,
handleManualRun,
handleInputChange,
openCronSchedulerDialog,
setOpenCronSchedulerDialog,
inputValues,
credentialValues,
handleCredentialChange,
handleCredentialFieldChange,
isExecutingGraph,
} = useRunInputDialog({ setIsOpen });
@@ -62,67 +61,67 @@ export const RunInputDialog = ({
isOpen,
set: setIsOpen,
}}
styling={{ maxWidth: "600px", minWidth: "600px" }}
styling={{ maxWidth: "700px", minWidth: "700px" }}
>
<Dialog.Content>
<div className="space-y-6 p-1" data-id="run-input-dialog-content">
{/* Credentials Section */}
{hasCredentials() && (
<div data-id="run-input-credentials-section">
<div className="mb-4">
<Text variant="h4" className="text-gray-900">
Credentials
</Text>
<div
className="grid grid-cols-[1fr_auto] gap-10 p-1"
data-id="run-input-dialog-content"
>
<div className="space-y-6">
{/* Credentials Section */}
{hasCredentials() && credentialFields.length > 0 && (
<div data-id="run-input-credentials-section">
<div className="mb-4">
<Text variant="h4" className="text-gray-900">
Credentials
</Text>
</div>
<div className="px-2" data-id="run-input-credentials-form">
<CredentialsGroupedView
credentialFields={credentialFields}
requiredCredentials={requiredCredentials}
inputCredentials={credentialValues}
inputValues={inputValues}
onCredentialChange={handleCredentialFieldChange}
/>
</div>
</div>
<div className="px-2" data-id="run-input-credentials-form">
<FormRenderer
jsonSchema={credentialsSchema as RJSFSchema}
handleChange={(v) => handleCredentialChange(v.formData)}
uiSchema={credentialsUiSchema}
initialValues={{}}
formContext={{
showHandles: false,
size: "large",
showOptionalToggle: false,
}}
/>
</div>
</div>
)}
)}
{/* Inputs Section */}
{hasInputs() && (
<div data-id="run-input-inputs-section">
<div className="mb-4">
<Text variant="h4" className="text-gray-900">
Inputs
</Text>
{/* Inputs Section */}
{hasInputs() && (
<div data-id="run-input-inputs-section">
<div className="mb-4">
<Text variant="h4" className="text-gray-900">
Inputs
</Text>
</div>
<div data-id="run-input-inputs-form">
<FormRenderer
jsonSchema={inputSchema as RJSFSchema}
handleChange={(v) => handleInputChange(v.formData)}
uiSchema={uiSchema}
initialValues={{}}
formContext={{
showHandles: false,
size: "large",
}}
/>
</div>
</div>
<div data-id="run-input-inputs-form">
<FormRenderer
jsonSchema={inputSchema as RJSFSchema}
handleChange={(v) => handleInputChange(v.formData)}
uiSchema={uiSchema}
initialValues={{}}
formContext={{
showHandles: false,
size: "large",
}}
/>
</div>
</div>
)}
)}
</div>
{/* Action Button */}
<div
className="flex justify-end pt-2"
className="flex flex-col items-end justify-start"
data-id="run-input-actions-section"
>
{purpose === "run" && (
<Button
variant="primary"
size="large"
className="group h-fit min-w-0 gap-2"
className="group h-fit min-w-0 gap-2 px-10"
onClick={handleManualRun}
loading={isExecutingGraph}
data-id="run-input-manual-run-button"
@@ -137,7 +136,7 @@ export const RunInputDialog = ({
<Button
variant="primary"
size="large"
className="group h-fit min-w-0 gap-2"
className="group h-fit min-w-0 gap-2 px-10"
onClick={() => setOpenCronSchedulerDialog(true)}
data-id="run-input-schedule-button"
>

View File

@@ -7,12 +7,11 @@ import {
GraphExecutionMeta,
} from "@/lib/autogpt-server-api";
import { parseAsInteger, parseAsString, useQueryStates } from "nuqs";
import { useMemo, useState } from "react";
import { uiSchema } from "../../../FlowEditor/nodes/uiSchema";
import { isCredentialFieldSchema } from "@/components/renderers/InputRenderer/custom/CredentialField/helpers";
import { useCallback, useMemo, useState } from "react";
import { useNodeStore } from "@/app/(platform)/build/stores/nodeStore";
import { useToast } from "@/components/molecules/Toast/use-toast";
import { useReactFlow } from "@xyflow/react";
import type { CredentialField } from "@/components/contextual/CredentialsInput/components/CredentialsGroupedView/helpers";
export const useRunInputDialog = ({
setIsOpen,
@@ -120,27 +119,32 @@ export const useRunInputDialog = ({
},
});
// We are rendering the credentials field differently compared to other fields.
// In the node, we have the field name as "credential" - so our library catches it and renders it differently.
// But here we have a different name, something like `Firecrawl credentials`, so here we are telling the library that this field is a credential field type.
// Convert credentials schema to credential fields array for CredentialsGroupedView
const credentialFields: CredentialField[] = useMemo(() => {
if (!credentialsSchema?.properties) return [];
return Object.entries(credentialsSchema.properties);
}, [credentialsSchema]);
const credentialsUiSchema = useMemo(() => {
const dynamicUiSchema: any = { ...uiSchema };
// Get required credentials as a Set
const requiredCredentials = useMemo(() => {
return new Set<string>(credentialsSchema?.required || []);
}, [credentialsSchema]);
if (credentialsSchema?.properties) {
Object.keys(credentialsSchema.properties).forEach((fieldName) => {
const fieldSchema = credentialsSchema.properties[fieldName];
if (isCredentialFieldSchema(fieldSchema)) {
dynamicUiSchema[fieldName] = {
...dynamicUiSchema[fieldName],
"ui:field": "custom/credential_field",
};
// Handler for individual credential changes
const handleCredentialFieldChange = useCallback(
(key: string, value?: CredentialsMetaInput) => {
setCredentialValues((prev) => {
if (value) {
return { ...prev, [key]: value };
} else {
const next = { ...prev };
delete next[key];
return next;
}
});
}
return dynamicUiSchema;
}, [credentialsSchema]);
},
[],
);
const handleManualRun = async () => {
// Filter out incomplete credentials (those without a valid id)
@@ -173,12 +177,14 @@ export const useRunInputDialog = ({
};
return {
credentialsUiSchema,
credentialFields,
requiredCredentials,
inputValues,
credentialValues,
isExecutingGraph,
handleInputChange,
handleCredentialChange,
handleCredentialFieldChange,
handleManualRun,
openCronSchedulerDialog,
setOpenCronSchedulerDialog,
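
A usage sketch for the new per-field handler (the schema key and credential metadata below are invented, and CredentialsMetaInput's exact shape is assumed):

// Store a selected credential under its credentials-schema key
handleCredentialFieldChange("firecrawl_credentials", {
  id: "cred-123",
  provider: "firecrawl",
  type: "api_key",
  title: "My Firecrawl key",
} as CredentialsMetaInput);

// Omitting the value deletes the key instead of leaving an empty entry,
// so handleManualRun never sees a half-filled credential
handleCredentialFieldChange("firecrawl_credentials");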

View File

@@ -18,69 +18,110 @@ interface Props {
fullWidth?: boolean;
}
interface SafeModeButtonProps {
isEnabled: boolean;
label: string;
tooltipEnabled: string;
tooltipDisabled: string;
onToggle: () => void;
isPending: boolean;
fullWidth?: boolean;
}
function SafeModeButton({
isEnabled,
label,
tooltipEnabled,
tooltipDisabled,
onToggle,
isPending,
fullWidth = false,
}: SafeModeButtonProps) {
return (
<Tooltip delayDuration={100}>
<TooltipTrigger asChild>
<Button
variant={isEnabled ? "primary" : "outline"}
size="small"
onClick={onToggle}
disabled={isPending}
className={cn("justify-start", fullWidth ? "w-full" : "")}
>
{isEnabled ? (
<>
<ShieldCheckIcon weight="bold" size={16} />
<Text variant="body" className="text-zinc-200">
{label}: ON
</Text>
</>
) : (
<>
<ShieldIcon weight="bold" size={16} />
<Text variant="body" className="text-zinc-600">
{label}: OFF
</Text>
</>
)}
</Button>
</TooltipTrigger>
<TooltipContent>
<div className="text-center">
<div className="font-medium">
{label}: {isEnabled ? "ON" : "OFF"}
</div>
<div className="mt-1 text-xs text-muted-foreground">
{isEnabled ? tooltipEnabled : tooltipDisabled}
</div>
</div>
</TooltipContent>
</Tooltip>
);
}
export function FloatingSafeModeToggle({
graph,
className,
fullWidth = false,
}: Props) {
const {
currentSafeMode,
currentHITLSafeMode,
showHITLToggle,
handleHITLToggle,
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
isPending,
shouldShowToggle,
isStateUndetermined,
handleToggle,
} = useAgentSafeMode(graph);
if (!shouldShowToggle || isStateUndetermined || isPending) {
if (!shouldShowToggle || isPending) {
return null;
}
return (
<div className={cn("fixed z-50", className)}>
<Tooltip delayDuration={100}>
<TooltipTrigger asChild>
<Button
variant={currentSafeMode! ? "primary" : "outline"}
key={graph.id}
size="small"
title={
currentSafeMode!
? "Safe Mode: ON. Human in the loop blocks require manual review"
: "Safe Mode: OFF. Human in the loop blocks proceed automatically"
}
onClick={handleToggle}
className={cn(fullWidth ? "w-full" : "")}
>
{currentSafeMode! ? (
<>
<ShieldCheckIcon weight="bold" size={16} />
<Text variant="body" className="text-zinc-200">
Safe Mode: ON
</Text>
</>
) : (
<>
<ShieldIcon weight="bold" size={16} />
<Text variant="body" className="text-zinc-600">
Safe Mode: OFF
</Text>
</>
)}
</Button>
</TooltipTrigger>
<TooltipContent>
<div className="text-center">
<div className="font-medium">
Safe Mode: {currentSafeMode! ? "ON" : "OFF"}
</div>
<div className="mt-1 text-xs text-muted-foreground">
{currentSafeMode!
? "Human in the loop blocks require manual review"
: "Human in the loop blocks proceed automatically"}
</div>
</div>
</TooltipContent>
</Tooltip>
<div className={cn("fixed z-50 flex flex-col gap-2", className)}>
{showHITLToggle && (
<SafeModeButton
isEnabled={currentHITLSafeMode}
label="Human in the loop block approval"
tooltipEnabled="The agent will pause at human-in-the-loop blocks and wait for your approval"
tooltipDisabled="Human in the loop blocks will proceed automatically"
onToggle={handleHITLToggle}
isPending={isPending}
fullWidth={fullWidth}
/>
)}
{showSensitiveActionToggle && (
<SafeModeButton
isEnabled={currentSensitiveActionSafeMode}
label="Sensitive actions blocks approval"
tooltipEnabled="The agent will pause at sensitive action blocks and wait for your approval"
tooltipDisabled="Sensitive action blocks will proceed automatically"
onToggle={handleSensitiveActionToggle}
isPending={isPending}
fullWidth={fullWidth}
/>
)}
</div>
);
}

View File

@@ -53,14 +53,14 @@ export const CustomControls = memo(
const controls = [
{
id: "zoom-in-button",
icon: <PlusIcon className="size-4" />,
icon: <PlusIcon className="size-3.5 text-zinc-600" />,
label: "Zoom In",
onClick: () => zoomIn(),
className: "h-10 w-10 border-none",
},
{
id: "zoom-out-button",
icon: <MinusIcon className="size-4" />,
icon: <MinusIcon className="size-3.5 text-zinc-600" />,
label: "Zoom Out",
onClick: () => zoomOut(),
className: "h-10 w-10 border-none",
@@ -68,9 +68,9 @@ export const CustomControls = memo(
{
id: "tutorial-button",
icon: isTutorialLoading ? (
<CircleNotchIcon className="size-4 animate-spin" />
<CircleNotchIcon className="size-3.5 animate-spin text-zinc-600" />
) : (
<ChalkboardIcon className="size-4" />
<ChalkboardIcon className="size-3.5 text-zinc-600" />
),
label: isTutorialLoading ? "Loading Tutorial..." : "Start Tutorial",
onClick: handleTutorialClick,
@@ -79,7 +79,7 @@ export const CustomControls = memo(
},
{
id: "fit-view-button",
icon: <FrameCornersIcon className="size-4" />,
icon: <FrameCornersIcon className="size-3.5 text-zinc-600" />,
label: "Fit View",
onClick: () => fitView({ padding: 0.2, duration: 800, maxZoom: 1 }),
className: "h-10 w-10 border-none",
@@ -87,9 +87,9 @@ export const CustomControls = memo(
{
id: "lock-button",
icon: !isLocked ? (
<LockOpenIcon className="size-4" />
<LockOpenIcon className="size-3.5 text-zinc-600" />
) : (
<LockIcon className="size-4" />
<LockIcon className="size-3.5 text-zinc-600" />
),
label: "Toggle Lock",
onClick: () => setIsLocked(!isLocked),

View File

@@ -139,14 +139,6 @@ export const useFlow = () => {
useNodeStore.getState().setNodes([]);
useNodeStore.getState().clearResolutionState();
addNodes(customNodes);
// Sync hardcoded values with handle IDs.
// If a keyvalue field has a key without a value, the backend omits it from hardcoded values.
// But if a handleId exists for that key, it causes inconsistency.
// This ensures hardcoded values stay in sync with handle IDs.
customNodes.forEach((node) => {
useNodeStore.getState().syncHardcodedValuesWithHandleIds(node.id);
});
}
}, [customNodes, addNodes]);
@@ -158,6 +150,14 @@ export const useFlow = () => {
}
}, [graph?.links, addLinks]);
useEffect(() => {
if (customNodes.length > 0 && graph?.links) {
customNodes.forEach((node) => {
useNodeStore.getState().syncHardcodedValuesWithHandleIds(node.id);
});
}
}, [customNodes, graph?.links]);
// update node execution status in nodes
useEffect(() => {
if (

View File

@@ -19,6 +19,8 @@ export type CustomEdgeData = {
beadUp?: number;
beadDown?: number;
beadData?: Map<string, NodeExecutionResult["status"]>;
edgeColorClass?: string;
edgeHexColor?: string;
};
export type CustomEdge = XYEdge<CustomEdgeData, "custom">;
@@ -36,7 +38,6 @@ const CustomEdge = ({
selected,
}: EdgeProps<CustomEdge>) => {
const removeConnection = useEdgeStore((state) => state.removeEdge);
// Subscribe to the brokenEdgeIDs map and check if this edge is broken across any node
const isBroken = useNodeStore((state) => state.isEdgeBroken(id));
const [isHovered, setIsHovered] = useState(false);
@@ -52,6 +53,7 @@ const CustomEdge = ({
const isStatic = data?.isStatic ?? false;
const beadUp = data?.beadUp ?? 0;
const beadDown = data?.beadDown ?? 0;
const edgeColorClass = data?.edgeColorClass;
const handleRemoveEdge = () => {
removeConnection(id);
@@ -70,7 +72,9 @@ const CustomEdge = ({
? "!stroke-red-500 !stroke-[2px] [stroke-dasharray:4]"
: selected
? "stroke-zinc-800"
: "stroke-zinc-500/50 hover:stroke-zinc-500",
: edgeColorClass
? cn(edgeColorClass, "opacity-70 hover:opacity-100")
: "stroke-zinc-500/50 hover:stroke-zinc-500",
)}
/>
<JSBeads

View File

@@ -8,6 +8,7 @@ import { useCallback } from "react";
import { useNodeStore } from "../../../stores/nodeStore";
import { useHistoryStore } from "../../../stores/historyStore";
import { CustomEdge } from "./CustomEdge";
import { getEdgeColorFromOutputType } from "../nodes/helpers";
export const useCustomEdge = () => {
const edges = useEdgeStore((s) => s.edges);
@@ -34,8 +35,13 @@ export const useCustomEdge = () => {
if (exists) return;
const nodes = useNodeStore.getState().nodes;
const isStatic = nodes.find((n) => n.id === conn.source)?.data
?.staticOutput;
const sourceNode = nodes.find((n) => n.id === conn.source);
const isStatic = sourceNode?.data?.staticOutput;
const { colorClass, hexColor } = getEdgeColorFromOutputType(
sourceNode?.data?.outputSchema,
conn.sourceHandle,
);
addEdge({
source: conn.source,
@@ -44,6 +50,8 @@ export const useCustomEdge = () => {
targetHandle: conn.targetHandle,
data: {
isStatic,
edgeColorClass: colorClass,
edgeHexColor: hexColor,
},
});
},

View File

@@ -1,22 +1,21 @@
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import {
Accordion,
AccordionContent,
AccordionItem,
AccordionTrigger,
} from "@/components/molecules/Accordion/Accordion";
import { beautifyString, cn } from "@/lib/utils";
import { CaretDownIcon, CopyIcon, CheckIcon } from "@phosphor-icons/react";
import { CopyIcon, CheckIcon } from "@phosphor-icons/react";
import { NodeDataViewer } from "./components/NodeDataViewer/NodeDataViewer";
import { ContentRenderer } from "./components/ContentRenderer";
import { useNodeOutput } from "./useNodeOutput";
import { ViewMoreData } from "./components/ViewMoreData";
export const NodeDataRenderer = ({ nodeId }: { nodeId: string }) => {
const {
outputData,
isExpanded,
setIsExpanded,
copiedKey,
handleCopy,
executionResultId,
inputData,
} = useNodeOutput(nodeId);
const { outputData, copiedKey, handleCopy, executionResultId, inputData } =
useNodeOutput(nodeId);
if (Object.keys(outputData).length === 0) {
return null;
@@ -25,122 +24,117 @@ export const NodeDataRenderer = ({ nodeId }: { nodeId: string }) => {
return (
<div
data-tutorial-id={`node-output`}
className="flex flex-col gap-3 rounded-b-xl border-t border-zinc-200 px-4 py-4"
className="rounded-b-xl border-t border-zinc-200 px-4 py-2"
>
<div className="flex items-center justify-between">
<Text variant="body-medium" className="!font-semibold text-slate-700">
Node Output
</Text>
<Button
variant="ghost"
size="small"
onClick={() => setIsExpanded(!isExpanded)}
className="h-fit min-w-0 p-1 text-slate-600 hover:text-slate-900"
>
<CaretDownIcon
size={16}
weight="bold"
className={`transition-transform ${isExpanded ? "rotate-180" : ""}`}
/>
</Button>
</div>
<Accordion type="single" collapsible defaultValue="node-output">
<AccordionItem value="node-output" className="border-none">
<AccordionTrigger className="py-2 hover:no-underline">
<Text
variant="body-medium"
className="!font-semibold text-slate-700"
>
Node Output
</Text>
</AccordionTrigger>
<AccordionContent className="pt-2">
<div className="flex max-w-[350px] flex-col gap-4">
<div className="space-y-2">
<Text variant="small-medium">Input</Text>
{isExpanded && (
<>
<div className="flex max-w-[350px] flex-col gap-4">
<div className="space-y-2">
<Text variant="small-medium">Input</Text>
<ContentRenderer value={inputData} shortContent={false} />
<ContentRenderer value={inputData} shortContent={false} />
<div className="mt-1 flex justify-end gap-1">
<NodeDataViewer
data={inputData}
pinName="Input"
execId={executionResultId}
/>
<Button
variant="secondary"
size="small"
onClick={() => handleCopy("input", inputData)}
className={cn(
"h-fit min-w-0 gap-1.5 border border-zinc-200 p-2 text-black hover:text-slate-900",
copiedKey === "input" &&
"border-green-400 bg-green-100 hover:border-green-400 hover:bg-green-200",
)}
>
{copiedKey === "input" ? (
<CheckIcon size={12} className="text-green-600" />
) : (
<CopyIcon size={12} />
)}
</Button>
<div className="mt-1 flex justify-end gap-1">
<NodeDataViewer
data={inputData}
pinName="Input"
execId={executionResultId}
/>
<Button
variant="secondary"
size="small"
onClick={() => handleCopy("input", inputData)}
className={cn(
"h-fit min-w-0 gap-1.5 border border-zinc-200 p-2 text-black hover:text-slate-900",
copiedKey === "input" &&
"border-green-400 bg-green-100 hover:border-green-400 hover:bg-green-200",
)}
>
{copiedKey === "input" ? (
<CheckIcon size={12} className="text-green-600" />
) : (
<CopyIcon size={12} />
)}
</Button>
</div>
</div>
</div>
{Object.entries(outputData)
.slice(0, 2)
.map(([key, value]) => (
<div key={key} className="flex flex-col gap-2">
<div className="flex items-center gap-2">
<Text
variant="small-medium"
className="!font-semibold text-slate-600"
>
Pin:
</Text>
<Text variant="small" className="text-slate-700">
{beautifyString(key)}
</Text>
</div>
<div className="w-full space-y-2">
<Text
variant="small"
className="!font-semibold text-slate-600"
>
Data:
</Text>
<div className="relative space-y-2">
{value.map((item, index) => (
<div key={index}>
<ContentRenderer value={item} shortContent={true} />
{Object.entries(outputData)
.slice(0, 2)
.map(([key, value]) => (
<div key={key} className="flex flex-col gap-2">
<div className="flex items-center gap-2">
<Text
variant="small-medium"
className="!font-semibold text-slate-600"
>
Pin:
</Text>
<Text variant="small" className="text-slate-700">
{beautifyString(key)}
</Text>
</div>
<div className="w-full space-y-2">
<Text
variant="small"
className="!font-semibold text-slate-600"
>
Data:
</Text>
<div className="relative space-y-2">
{value.map((item, index) => (
<div key={index}>
<ContentRenderer value={item} shortContent={true} />
</div>
))}
<div className="mt-1 flex justify-end gap-1">
<NodeDataViewer
data={value}
pinName={key}
execId={executionResultId}
/>
<Button
variant="secondary"
size="small"
onClick={() => handleCopy(key, value)}
className={cn(
"h-fit min-w-0 gap-1.5 border border-zinc-200 p-2 text-black hover:text-slate-900",
copiedKey === key &&
"border-green-400 bg-green-100 hover:border-green-400 hover:bg-green-200",
)}
>
{copiedKey === key ? (
<CheckIcon size={12} className="text-green-600" />
) : (
<CopyIcon size={12} />
)}
</Button>
</div>
))}
<div className="mt-1 flex justify-end gap-1">
<NodeDataViewer
data={value}
pinName={key}
execId={executionResultId}
/>
<Button
variant="secondary"
size="small"
onClick={() => handleCopy(key, value)}
className={cn(
"h-fit min-w-0 gap-1.5 border border-zinc-200 p-2 text-black hover:text-slate-900",
copiedKey === key &&
"border-green-400 bg-green-100 hover:border-green-400 hover:bg-green-200",
)}
>
{copiedKey === key ? (
<CheckIcon size={12} className="text-green-600" />
) : (
<CopyIcon size={12} />
)}
</Button>
</div>
</div>
</div>
</div>
))}
</div>
))}
</div>
{Object.keys(outputData).length > 2 && (
<ViewMoreData outputData={outputData} execId={executionResultId} />
)}
</>
)}
{Object.keys(outputData).length > 2 && (
<ViewMoreData
outputData={outputData}
execId={executionResultId}
/>
)}
</AccordionContent>
</AccordionItem>
</Accordion>
</div>
);
};

View File

@@ -4,7 +4,6 @@ import { useShallow } from "zustand/react/shallow";
import { useState } from "react";
export const useNodeOutput = (nodeId: string) => {
const [isExpanded, setIsExpanded] = useState(true);
const [copiedKey, setCopiedKey] = useState<string | null>(null);
const { toast } = useToast();
@@ -37,13 +36,10 @@ export const useNodeOutput = (nodeId: string) => {
}
};
return {
outputData: outputData,
inputData: inputData,
isExpanded: isExpanded,
setIsExpanded: setIsExpanded,
copiedKey: copiedKey,
setCopiedKey: setCopiedKey,
handleCopy: handleCopy,
outputData,
inputData,
copiedKey,
handleCopy,
executionResultId: nodeExecutionResult?.node_exec_id,
};
};

View File

@@ -187,3 +187,38 @@ export const getTypeDisplayInfo = (schema: any) => {
hexColor,
};
};
export function getEdgeColorFromOutputType(
outputSchema: RJSFSchema | undefined,
sourceHandle: string,
): { colorClass: string; hexColor: string } {
const defaultColor = {
colorClass: "stroke-zinc-500/50",
hexColor: "#6b7280",
};
if (!outputSchema?.properties) return defaultColor;
const properties = outputSchema.properties as Record<string, unknown>;
const handleParts = sourceHandle.split("_#_");
let currentSchema: Record<string, unknown> = properties;
for (let i = 0; i < handleParts.length; i++) {
const part = handleParts[i];
const fieldSchema = currentSchema[part] as Record<string, unknown>;
if (!fieldSchema) return defaultColor;
if (i === handleParts.length - 1) {
const { hexColor, colorClass } = getTypeDisplayInfo(fieldSchema);
return { colorClass: colorClass.replace("!text-", "stroke-"), hexColor };
}
if (fieldSchema.properties) {
currentSchema = fieldSchema.properties as Record<string, unknown>;
} else {
return defaultColor;
}
}
return defaultColor;
}
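
A rough usage sketch for getEdgeColorFromOutputType (the schema and handle ID are illustrative, not taken from the codebase). The "_#_" separator in a source handle encodes a path through nested object properties; the final field's display color becomes the edge stroke:

// Hypothetical output schema for a block whose "result" output is an object
const exampleSchema = {
  properties: {
    result: {
      properties: {
        url: { type: "string" },
      },
    },
  },
} as RJSFSchema;

// "result_#_url" walks properties.result -> properties.url, so the edge
// inherits the string type's color; unknown handles fall back to zinc
const { colorClass, hexColor } = getEdgeColorFromOutputType(
  exampleSchema,
  "result_#_url",
);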

View File

@@ -1,7 +1,32 @@
// These are SVG Phosphor icons
type IconOptions = {
size?: number;
color?: string;
};
const DEFAULT_SIZE = 16;
const DEFAULT_COLOR = "#52525b"; // zinc-600
const iconPaths = {
ClickIcon: `M88,24V16a8,8,0,0,1,16,0v8a8,8,0,0,1-16,0ZM16,104h8a8,8,0,0,0,0-16H16a8,8,0,0,0,0,16ZM124.42,39.16a8,8,0,0,0,10.74-3.58l8-16a8,8,0,0,0-14.31-7.16l-8,16A8,8,0,0,0,124.42,39.16Zm-96,81.69-16,8a8,8,0,0,0,7.16,14.31l16-8a8,8,0,1,0-7.16-14.31ZM219.31,184a16,16,0,0,1,0,22.63l-12.68,12.68a16,16,0,0,1-22.63,0L132.7,168,115,214.09c0,.1-.08.21-.13.32a15.83,15.83,0,0,1-14.6,9.59l-.79,0a15.83,15.83,0,0,1-14.41-11L32.8,52.92A16,16,0,0,1,52.92,32.8L213,85.07a16,16,0,0,1,1.41,29.8l-.32.13L168,132.69ZM208,195.31,156.69,144h0a16,16,0,0,1,4.93-26l.32-.14,45.95-17.64L48,48l52.2,159.86,17.65-46c0-.11.08-.22.13-.33a16,16,0,0,1,11.69-9.34,16.72,16.72,0,0,1,3-.28,16,16,0,0,1,11.3,4.69L195.31,208Z`,
Keyboard: `M224,48H32A16,16,0,0,0,16,64V192a16,16,0,0,0,16,16H224a16,16,0,0,0,16-16V64A16,16,0,0,0,224,48Zm0,144H32V64H224V192Zm-16-64a8,8,0,0,1-8,8H56a8,8,0,0,1,0-16H200A8,8,0,0,1,208,128Zm0-32a8,8,0,0,1-8,8H56a8,8,0,0,1,0-16H200A8,8,0,0,1,208,96ZM72,160a8,8,0,0,1-8,8H56a8,8,0,0,1,0-16h8A8,8,0,0,1,72,160Zm96,0a8,8,0,0,1-8,8H96a8,8,0,0,1,0-16h64A8,8,0,0,1,168,160Zm40,0a8,8,0,0,1-8,8h-8a8,8,0,0,1,0-16h8A8,8,0,0,1,208,160Z`,
Drag: `M188,80a27.79,27.79,0,0,0-13.36,3.4,28,28,0,0,0-46.64-11A28,28,0,0,0,80,92v20H68a28,28,0,0,0-28,28v12a88,88,0,0,0,176,0V108A28,28,0,0,0,188,80Zm12,72a72,72,0,0,1-144,0V140a12,12,0,0,1,12-12H80v24a8,8,0,0,0,16,0V92a12,12,0,0,1,24,0v28a8,8,0,0,0,16,0V92a12,12,0,0,1,24,0v28a8,8,0,0,0,16,0V108a12,12,0,0,1,24,0Z`,
};
function createIcon(path: string, options: IconOptions = {}): string {
const size = options.size ?? DEFAULT_SIZE;
const color = options.color ?? DEFAULT_COLOR;
return `<svg xmlns="http://www.w3.org/2000/svg" width="${size}" height="${size}" fill="${color}" viewBox="0 0 256 256"><path d="${path}"></path></svg>`;
}
export const ICONS = {
ClickIcon: `<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="#000000" viewBox="0 0 256 256"><path d="M88,24V16a8,8,0,0,1,16,0v8a8,8,0,0,1-16,0ZM16,104h8a8,8,0,0,0,0-16H16a8,8,0,0,0,0,16ZM124.42,39.16a8,8,0,0,0,10.74-3.58l8-16a8,8,0,0,0-14.31-7.16l-8,16A8,8,0,0,0,124.42,39.16Zm-96,81.69-16,8a8,8,0,0,0,7.16,14.31l16-8a8,8,0,1,0-7.16-14.31ZM219.31,184a16,16,0,0,1,0,22.63l-12.68,12.68a16,16,0,0,1-22.63,0L132.7,168,115,214.09c0,.1-.08.21-.13.32a15.83,15.83,0,0,1-14.6,9.59l-.79,0a15.83,15.83,0,0,1-14.41-11L32.8,52.92A16,16,0,0,1,52.92,32.8L213,85.07a16,16,0,0,1,1.41,29.8l-.32.13L168,132.69ZM208,195.31,156.69,144h0a16,16,0,0,1,4.93-26l.32-.14,45.95-17.64L48,48l52.2,159.86,17.65-46c0-.11.08-.22.13-.33a16,16,0,0,1,11.69-9.34,16.72,16.72,0,0,1,3-.28,16,16,0,0,1,11.3,4.69L195.31,208Z"></path></svg>`,
Keyboard: `<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="#000000" viewBox="0 0 256 256"><path d="M224,48H32A16,16,0,0,0,16,64V192a16,16,0,0,0,16,16H224a16,16,0,0,0,16-16V64A16,16,0,0,0,224,48Zm0,144H32V64H224V192Zm-16-64a8,8,0,0,1-8,8H56a8,8,0,0,1,0-16H200A8,8,0,0,1,208,128Zm0-32a8,8,0,0,1-8,8H56a8,8,0,0,1,0-16H200A8,8,0,0,1,208,96ZM72,160a8,8,0,0,1-8,8H56a8,8,0,0,1,0-16h8A8,8,0,0,1,72,160Zm96,0a8,8,0,0,1-8,8H96a8,8,0,0,1,0-16h64A8,8,0,0,1,168,160Zm40,0a8,8,0,0,1-8,8h-8a8,8,0,0,1,0-16h8A8,8,0,0,1,208,160Z"></path></svg>`,
Drag: `<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="#000000" viewBox="0 0 256 256"><path d="M188,80a27.79,27.79,0,0,0-13.36,3.4,28,28,0,0,0-46.64-11A28,28,0,0,0,80,92v20H68a28,28,0,0,0-28,28v12a88,88,0,0,0,176,0V108A28,28,0,0,0,188,80Zm12,72a72,72,0,0,1-144,0V140a12,12,0,0,1,12-12H80v24a8,8,0,0,0,16,0V92a12,12,0,0,1,24,0v28a8,8,0,0,0,16,0V92a12,12,0,0,1,24,0v28a8,8,0,0,0,16,0V108a12,12,0,0,1,24,0Z"></path></svg>`,
ClickIcon: createIcon(iconPaths.ClickIcon),
Keyboard: createIcon(iconPaths.Keyboard),
Drag: createIcon(iconPaths.Drag),
};
export function getIcon(
name: keyof typeof iconPaths,
options?: IconOptions,
): string {
return createIcon(iconPaths[name], options);
}
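
For illustration, a call that overrides both defaults (the DOM insertion is a hypothetical consumer; tutorial code would inject the returned string wherever raw SVG markup is needed):

// 20px black icon, overriding the 16px zinc-600 defaults used by ICONS above
const dragIcon = getIcon("Drag", { size: 20, color: "#000000" });
document.querySelector(".tutorial-hint")?.insertAdjacentHTML("beforeend", dragIcon);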

View File

@@ -11,6 +11,7 @@ import {
} from "./helpers";
import { useNodeStore } from "../../../stores/nodeStore";
import { useEdgeStore } from "../../../stores/edgeStore";
import { useTutorialStore } from "../../../stores/tutorialStore";
let isTutorialLoading = false;
let tutorialLoadingCallback: ((loading: boolean) => void) | null = null;
@@ -60,12 +61,14 @@ export const startTutorial = async () => {
handleTutorialComplete();
removeTutorialStyles();
clearPrefetchedBlocks();
useTutorialStore.getState().setIsTutorialRunning(false);
});
tour.on("cancel", () => {
handleTutorialCancel(tour);
removeTutorialStyles();
clearPrefetchedBlocks();
useTutorialStore.getState().setIsTutorialRunning(false);
});
for (const step of tour.steps) {

View File

@@ -61,12 +61,18 @@ export const convertNodesPlusBlockInfoIntoCustomNodes = (
return customNode;
};
const isToolSourceName = (sourceName: string): boolean =>
sourceName.startsWith("tools_^_");
const cleanupSourceName = (sourceName: string): string =>
isToolSourceName(sourceName) ? "tools" : sourceName;
export const linkToCustomEdge = (link: Link): CustomEdge => ({
id: link.id ?? "",
type: "custom" as const,
source: link.source_id,
target: link.sink_id,
sourceHandle: link.source_name,
sourceHandle: cleanupSourceName(link.source_name),
targetHandle: link.sink_name,
data: {
isStatic: link.is_static,
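
The effect of the tool-handle cleanup above, with invented source names: every dynamic "tools_^_*" output collapses onto the node's single "tools" handle, while ordinary names pass through untouched.

cleanupSourceName("tools_^_search_agent"); // -> "tools"
cleanupSourceName("tools_^_browse");       // -> "tools"
cleanupSourceName("result");               // -> "result"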

View File

@@ -267,23 +267,34 @@ export function extractCredentialsNeeded(
| undefined;
if (missingCreds && Object.keys(missingCreds).length > 0) {
const agentName = (setupInfo?.agent_name as string) || "this block";
const credentials = Object.values(missingCreds).map((credInfo) => ({
provider: (credInfo.provider as string) || "unknown",
providerName:
(credInfo.provider_name as string) ||
(credInfo.provider as string) ||
"Unknown Provider",
credentialType:
const credentials = Object.values(missingCreds).map((credInfo) => {
// Normalize to array at boundary - prefer 'types' array, fall back to single 'type'
const typesArray = credInfo.types as
| Array<"api_key" | "oauth2" | "user_password" | "host_scoped">
| undefined;
const singleType =
(credInfo.type as
| "api_key"
| "oauth2"
| "user_password"
| "host_scoped") || "api_key",
title:
(credInfo.title as string) ||
`${(credInfo.provider_name as string) || (credInfo.provider as string)} credentials`,
scopes: credInfo.scopes as string[] | undefined,
}));
| "host_scoped"
| undefined) || "api_key";
const credentialTypes =
typesArray && typesArray.length > 0 ? typesArray : [singleType];
return {
provider: (credInfo.provider as string) || "unknown",
providerName:
(credInfo.provider_name as string) ||
(credInfo.provider as string) ||
"Unknown Provider",
credentialTypes,
title:
(credInfo.title as string) ||
`${(credInfo.provider_name as string) || (credInfo.provider as string)} credentials`,
scopes: credInfo.scopes as string[] | undefined,
};
});
return {
type: "credentials_needed",
toolName,
@@ -358,11 +369,14 @@ export function extractInputsNeeded(
credentials.forEach((cred) => {
const id = cred.id as string;
if (id) {
const credentialTypes = Array.isArray(cred.types)
? cred.types
: [(cred.type as string) || "api_key"];
credentialsSchema[id] = {
type: "object",
properties: {},
credentials_provider: [cred.provider as string],
credentials_types: [(cred.type as string) || "api_key"],
credentials_types: credentialTypes,
credentials_scopes: cred.scopes as string[] | undefined,
};
}
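
Extracted as a self-contained sketch, the boundary normalization above reduces to the following (the function name is mine, not the codebase's):

type CredType = "api_key" | "oauth2" | "user_password" | "host_scoped";

function normalizeCredentialTypes(credInfo: Record<string, unknown>): CredType[] {
  // Prefer the newer 'types' array; fall back to the legacy single 'type',
  // defaulting to "api_key" when neither is present
  const typesArray = credInfo.types as CredType[] | undefined;
  const singleType = (credInfo.type as CredType | undefined) || "api_key";
  return typesArray && typesArray.length > 0 ? typesArray : [singleType];
}

// normalizeCredentialTypes({ types: ["oauth2", "api_key"] }) -> ["oauth2", "api_key"]
// normalizeCredentialTypes({ type: "oauth2" })               -> ["oauth2"]
// normalizeCredentialTypes({})                               -> ["api_key"]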

View File

@@ -9,7 +9,9 @@ import { useChatCredentialsSetup } from "./useChatCredentialsSetup";
export interface CredentialInfo {
provider: string;
providerName: string;
credentialType: "api_key" | "oauth2" | "user_password" | "host_scoped";
credentialTypes: Array<
"api_key" | "oauth2" | "user_password" | "host_scoped"
>;
title: string;
scopes?: string[];
}
@@ -30,7 +32,7 @@ function createSchemaFromCredentialInfo(
type: "object",
properties: {},
credentials_provider: [credential.provider],
credentials_types: [credential.credentialType],
credentials_types: credential.credentialTypes,
credentials_scopes: credential.scopes,
discriminator: undefined,
discriminator_mapping: undefined,

View File

@@ -41,7 +41,9 @@ export type ChatMessageData =
credentials: Array<{
provider: string;
providerName: string;
credentialType: "api_key" | "oauth2" | "user_password" | "host_scoped";
credentialTypes: Array<
"api_key" | "oauth2" | "user_password" | "host_scoped"
>;
title: string;
scopes?: string[];
}>;

View File

@@ -31,10 +31,18 @@ export function AgentSettingsModal({
}
}
const { currentSafeMode, isPending, hasHITLBlocks, handleToggle } =
useAgentSafeMode(agent);
const {
currentHITLSafeMode,
showHITLToggle,
handleHITLToggle,
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
isPending,
shouldShowToggle,
} = useAgentSafeMode(agent);
if (!hasHITLBlocks) return null;
if (!shouldShowToggle) return null;
return (
<Dialog
@@ -57,23 +65,48 @@ export function AgentSettingsModal({
)}
<Dialog.Content>
<div className="space-y-6">
<div className="flex w-full flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">Require human approval</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause and wait for your review before
continuing
</Text>
{showHITLToggle && (
<div className="flex w-full flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Human-in-the-loop approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at human-in-the-loop blocks and wait
for your review before continuing
</Text>
</div>
<Switch
checked={currentHITLSafeMode || false}
onCheckedChange={handleHITLToggle}
disabled={isPending}
className="mt-1"
/>
</div>
<Switch
checked={currentSafeMode || false}
onCheckedChange={handleToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
)}
{showSensitiveActionToggle && (
<div className="flex w-full flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Sensitive action approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at sensitive action blocks and wait for
your review before continuing
</Text>
</div>
<Switch
checked={currentSensitiveActionSafeMode}
onCheckedChange={handleSensitiveActionToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
)}
</div>
</Dialog.Content>
</Dialog>

View File

@@ -14,6 +14,10 @@ import {
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import { useEffect, useRef, useState } from "react";
import { ScheduleAgentModal } from "../ScheduleAgentModal/ScheduleAgentModal";
import {
AIAgentSafetyPopup,
useAIAgentSafetyPopup,
} from "./components/AIAgentSafetyPopup/AIAgentSafetyPopup";
import { ModalHeader } from "./components/ModalHeader/ModalHeader";
import { ModalRunSection } from "./components/ModalRunSection/ModalRunSection";
import { RunActions } from "./components/RunActions/RunActions";
@@ -83,8 +87,17 @@ export function RunAgentModal({
const [isScheduleModalOpen, setIsScheduleModalOpen] = useState(false);
const [hasOverflow, setHasOverflow] = useState(false);
const [isSafetyPopupOpen, setIsSafetyPopupOpen] = useState(false);
const [pendingRunAction, setPendingRunAction] = useState<(() => void) | null>(
null,
);
const contentRef = useRef<HTMLDivElement>(null);
const { shouldShowPopup, dismissPopup } = useAIAgentSafetyPopup(
agent.has_sensitive_action,
agent.has_human_in_the_loop,
);
const hasAnySetupFields =
Object.keys(agentInputFields || {}).length > 0 ||
Object.keys(agentCredentialsInputFields || {}).length > 0;
@@ -165,6 +178,24 @@ export function RunAgentModal({
onScheduleCreated?.(schedule);
}
function handleRunWithSafetyCheck() {
if (shouldShowPopup) {
setPendingRunAction(() => handleRun);
setIsSafetyPopupOpen(true);
} else {
handleRun();
}
}
function handleSafetyPopupAcknowledge() {
setIsSafetyPopupOpen(false);
dismissPopup();
if (pendingRunAction) {
pendingRunAction();
setPendingRunAction(null);
}
}
return (
<>
<Dialog
@@ -248,7 +279,7 @@ export function RunAgentModal({
)}
<RunActions
defaultRunType={defaultRunType}
onRun={handleRun}
onRun={handleRunWithSafetyCheck}
isExecuting={isExecuting}
isSettingUpTrigger={isSettingUpTrigger}
isRunReady={allRequiredInputsAreSet}
@@ -266,6 +297,11 @@ export function RunAgentModal({
</div>
</Dialog.Content>
</Dialog>
<AIAgentSafetyPopup
isOpen={isSafetyPopupOpen}
onAcknowledge={handleSafetyPopupAcknowledge}
/>
</>
);
}

View File

@@ -0,0 +1,95 @@
"use client";
import { Button } from "@/components/atoms/Button/Button";
import { Text } from "@/components/atoms/Text/Text";
import { Dialog } from "@/components/molecules/Dialog/Dialog";
import { Key, storage } from "@/services/storage/local-storage";
import { ShieldCheckIcon } from "@phosphor-icons/react";
import { useCallback, useEffect, useState } from "react";
interface Props {
onAcknowledge: () => void;
isOpen: boolean;
}
export function AIAgentSafetyPopup({ onAcknowledge, isOpen }: Props) {
function handleAcknowledge() {
// Mark popup as shown so it won't appear again
storage.set(Key.AI_AGENT_SAFETY_POPUP_SHOWN, "true");
onAcknowledge();
}
if (!isOpen) return null;
return (
<Dialog
controlled={{ isOpen, set: () => {} }}
styling={{ maxWidth: "480px" }}
>
<Dialog.Content>
<div className="flex flex-col items-center p-6 text-center">
<div className="mb-6 flex h-16 w-16 items-center justify-center rounded-full bg-blue-50">
<ShieldCheckIcon
weight="fill"
size={32}
className="text-blue-600"
/>
</div>
<Text variant="h3" className="mb-4">
Safety Checks Enabled
</Text>
<Text variant="body" className="mb-2 text-zinc-700">
AI-generated agents may take actions that affect your data or
external systems.
</Text>
<Text variant="body" className="mb-8 text-zinc-700">
AutoGPT includes safety checks so you&apos;ll always have the
opportunity to review and approve sensitive actions before they
happen.
</Text>
<Button
variant="primary"
size="large"
className="w-full"
onClick={handleAcknowledge}
>
Got it
</Button>
</div>
</Dialog.Content>
</Dialog>
);
}
export function useAIAgentSafetyPopup(
hasSensitiveAction: boolean,
hasHumanInTheLoop: boolean,
) {
const [shouldShowPopup, setShouldShowPopup] = useState(false);
const [hasChecked, setHasChecked] = useState(false);
useEffect(() => {
// Only check once after mount (to avoid SSR issues)
if (hasChecked) return;
const hasSeenPopup =
storage.get(Key.AI_AGENT_SAFETY_POPUP_SHOWN) === "true";
const isRelevantAgent = hasSensitiveAction || hasHumanInTheLoop;
setShouldShowPopup(!hasSeenPopup && isRelevantAgent);
setHasChecked(true);
}, [hasSensitiveAction, hasHumanInTheLoop, hasChecked]);
const dismissPopup = useCallback(() => {
setShouldShowPopup(false);
}, []);
return {
shouldShowPopup,
dismissPopup,
};
}
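
The once-per-browser gate is just a localStorage check. A minimal standalone sketch of the same logic, using window.localStorage directly instead of the project's storage wrapper (the key name is made up):

const SAFETY_POPUP_KEY = "ai-agent-safety-popup-shown"; // illustrative key

function shouldShowSafetyPopup(
  hasSensitiveAction: boolean,
  hasHumanInTheLoop: boolean,
): boolean {
  // Show only for agents that can pause for review, and only once per browser
  const hasSeen = window.localStorage.getItem(SAFETY_POPUP_KEY) === "true";
  return !hasSeen && (hasSensitiveAction || hasHumanInTheLoop);
}

function markSafetyPopupSeen(): void {
  window.localStorage.setItem(SAFETY_POPUP_KEY, "true");
}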

View File

@@ -1,9 +1,9 @@
import { Input } from "@/components/atoms/Input/Input";
import { CredentialsGroupedView } from "@/components/contextual/CredentialsInput/components/CredentialsGroupedView/CredentialsGroupedView";
import { InformationTooltip } from "@/components/molecules/InformationTooltip/InformationTooltip";
import { useMemo } from "react";
import { RunAgentInputs } from "../../../RunAgentInputs/RunAgentInputs";
import { useRunAgentModalContext } from "../../context";
import { CredentialsGroupedView } from "../CredentialsGroupedView/CredentialsGroupedView";
import { ModalSection } from "../ModalSection/ModalSection";
import { WebhookTriggerBanner } from "../WebhookTriggerBanner/WebhookTriggerBanner";
@@ -19,6 +19,8 @@ export function ModalRunSection() {
setInputValue,
agentInputFields,
agentCredentialsInputFields,
inputCredentials,
setInputCredentialsValue,
} = useRunAgentModalContext();
const inputFields = Object.entries(agentInputFields || {});
@@ -102,6 +104,9 @@ export function ModalRunSection() {
<CredentialsGroupedView
credentialFields={credentialFields}
requiredCredentials={requiredCredentials}
inputCredentials={inputCredentials}
inputValues={inputValues}
onCredentialChange={setInputCredentialsValue}
/>
</ModalSection>
) : null}

View File

@@ -5,48 +5,104 @@ import { Graph } from "@/lib/autogpt-server-api/types";
import { cn } from "@/lib/utils";
import { ShieldCheckIcon, ShieldIcon } from "@phosphor-icons/react";
import { useAgentSafeMode } from "@/hooks/useAgentSafeMode";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
interface Props {
graph: GraphModel | LibraryAgent | Graph;
className?: string;
fullWidth?: boolean;
}
export function SafeModeToggle({ graph }: Props) {
interface SafeModeIconButtonProps {
isEnabled: boolean;
label: string;
tooltipEnabled: string;
tooltipDisabled: string;
onToggle: () => void;
isPending: boolean;
}
function SafeModeIconButton({
isEnabled,
label,
tooltipEnabled,
tooltipDisabled,
onToggle,
isPending,
}: SafeModeIconButtonProps) {
return (
<Tooltip delayDuration={100}>
<TooltipTrigger asChild>
<Button
variant="icon"
size="icon"
aria-label={`${label}: ${isEnabled ? "ON" : "OFF"}. ${isEnabled ? tooltipEnabled : tooltipDisabled}`}
onClick={onToggle}
disabled={isPending}
className={cn(isPending ? "opacity-0" : "opacity-100")}
>
{isEnabled ? (
<ShieldCheckIcon weight="bold" size={16} />
) : (
<ShieldIcon weight="bold" size={16} />
)}
</Button>
</TooltipTrigger>
<TooltipContent>
<div className="text-center">
<div className="font-medium">
{label}: {isEnabled ? "ON" : "OFF"}
</div>
<div className="mt-1 text-xs text-muted-foreground">
{isEnabled ? tooltipEnabled : tooltipDisabled}
</div>
</div>
</TooltipContent>
</Tooltip>
);
}
export function SafeModeToggle({ graph, className }: Props) {
const {
currentSafeMode,
currentHITLSafeMode,
showHITLToggle,
handleHITLToggle,
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
isPending,
shouldShowToggle,
isStateUndetermined,
handleToggle,
} = useAgentSafeMode(graph);
if (!shouldShowToggle || isStateUndetermined) {
if (!shouldShowToggle) {
return null;
}
return (
<Button
variant="icon"
key={graph.id}
size="icon"
aria-label={
currentSafeMode!
? "Safe Mode: ON. Human in the loop blocks require manual review"
: "Safe Mode: OFF. Human in the loop blocks proceed automatically"
}
onClick={handleToggle}
className={cn(isPending ? "opacity-0" : "opacity-100")}
>
{currentSafeMode! ? (
<>
<ShieldCheckIcon weight="bold" size={16} />
</>
) : (
<>
<ShieldIcon weight="bold" size={16} />
</>
<div className={cn("flex gap-1", className)}>
{showHITLToggle && (
<SafeModeIconButton
isEnabled={currentHITLSafeMode}
label="Human-in-the-loop"
tooltipEnabled="The agent will pause at human-in-the-loop blocks and wait for your approval"
tooltipDisabled="Human-in-the-loop blocks will proceed automatically"
onToggle={handleHITLToggle}
isPending={isPending}
/>
)}
</Button>
{showSensitiveActionToggle && (
<SafeModeIconButton
isEnabled={currentSensitiveActionSafeMode}
label="Sensitive actions"
tooltipEnabled="The agent will pause at sensitive action blocks and wait for your approval"
tooltipDisabled="Sensitive action blocks will proceed automatically"
onToggle={handleSensitiveActionToggle}
isPending={isPending}
/>
)}
</div>
);
}

Some files were not shown because too many files have changed in this diff.