Compare commits

...

4 Commits

Author SHA1 Message Date
Zamil Majdy
3f1f663ea7 fix(backend): handle null values in GraphSettings validation
Fixes AUTOGPT-SERVER-76H

When parsing LibraryAgent settings from the database, null values for
human_in_the_loop_safe_mode and sensitive_action_safe_mode were causing
Pydantic validation errors. This adds BeforeValidator annotations to
coerce null values to their defaults (True and False respectively).
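
In model form, the fix is roughly the following minimal sketch (mirroring the `GraphSettings` change in the `backend/data/graph.py` diff further down; the standalone assertions are only illustrative):

```python
from typing import Annotated

from pydantic import BaseModel, BeforeValidator


class GraphSettings(BaseModel):
    # Coerce a stored null back to the field's default before validation,
    # so rows written under the old Optional[bool] schema still parse.
    human_in_the_loop_safe_mode: Annotated[
        bool, BeforeValidator(lambda v: True if v is None else v)
    ] = True
    sensitive_action_safe_mode: Annotated[
        bool, BeforeValidator(lambda v: False if v is None else v)
    ] = False


# A settings row persisted as {"human_in_the_loop_safe_mode": null} now
# validates to the default instead of raising a ValidationError.
settings = GraphSettings.model_validate({"human_in_the_loop_safe_mode": None})
assert settings.human_in_the_loop_safe_mode is True
assert settings.sensitive_action_safe_mode is False
```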
2026-01-20 23:09:45 -05:00
Zamil Majdy
8b25e62959 feat(backend,frontend): add explicit safe mode toggles for HITL and sensitive actions (#11756)
## Summary

This PR introduces two explicit safe mode toggles for controlling agent
execution behavior, providing clearer and more granular control over
when agents should pause for human review.

### Key Changes

**New Safe Mode Settings:**
- **`human_in_the_loop_safe_mode`** (bool, default `true`) - Controls
whether human-in-the-loop (HITL) blocks pause for review
- **`sensitive_action_safe_mode`** (bool, default `false`) - Controls
whether sensitive action blocks pause for review

**New Computed Properties on LibraryAgent:**
- `has_human_in_the_loop` - Indicates if agent contains HITL blocks
- `has_sensitive_action` - Indicates if agent contains sensitive action
blocks
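
Both flags are derived from the graph's nodes, roughly as in this sketch (it mirrors the `BaseGraph` computed properties in the `backend/data/graph.py` diff below; the `_Block`/`_Node`/`_Graph` stubs here are illustrative stand-ins for the real models):

```python
from dataclasses import dataclass


@dataclass
class _Block:
    block_type: str = "STANDARD"
    is_sensitive_action: bool = False


@dataclass
class _Node:
    block: _Block


@dataclass
class _Graph:
    nodes: list[_Node]

    @property
    def has_human_in_the_loop(self) -> bool:
        # True if any node is a human-in-the-loop block.
        return any(n.block.block_type == "HUMAN_IN_THE_LOOP" for n in self.nodes)

    @property
    def has_sensitive_action(self) -> bool:
        # True if any node's block is flagged as a sensitive action.
        return any(n.block.is_sensitive_action for n in self.nodes)


g = _Graph(nodes=[_Node(_Block(is_sensitive_action=True))])
assert g.has_sensitive_action and not g.has_human_in_the_loop
```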

**Block Changes:**
- Renamed `requires_human_review` to `is_sensitive_action` on blocks for
clarity
- Blocks marked as `is_sensitive_action=True` pause only when
`sensitive_action_safe_mode=True`
- HITL blocks pause when `human_in_the_loop_safe_mode=True`
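
Put together, the pause rules reduce to the sketch below (the real checks live in `backend/data/block.py` and the HITL helpers shown in the diffs further down; the two toggle fields come from `ExecutionContext`, while the helper functions here are purely illustrative):

```python
from pydantic import BaseModel


class ExecutionContext(BaseModel):
    # Defaults match the new GraphSettings fields.
    human_in_the_loop_safe_mode: bool = True
    sensitive_action_safe_mode: bool = False


def hitl_block_should_pause(ctx: ExecutionContext) -> bool:
    # HITL blocks pause whenever the HITL toggle is on.
    return ctx.human_in_the_loop_safe_mode


def sensitive_block_should_pause(is_sensitive_action: bool, ctx: ExecutionContext) -> bool:
    # Flagged blocks pause only when the sensitive-action toggle is also on.
    return is_sensitive_action and ctx.sensitive_action_safe_mode


ctx = ExecutionContext()  # defaults: HITL on, sensitive-action off
assert hitl_block_should_pause(ctx) is True
assert sensitive_block_should_pause(True, ctx) is False
assert sensitive_block_should_pause(True, ExecutionContext(sensitive_action_safe_mode=True)) is True
```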

**Frontend Changes:**
- Two separate toggles in Agent Settings based on block types present
- Toggle visibility based on `has_human_in_the_loop` and
`has_sensitive_action` computed properties
- Settings cog hidden if neither toggle applies
- Proper state management for both toggles with defaults

**AI-Generated Agent Behavior:**
- AI-generated agents set `sensitive_action_safe_mode=True` by default
- This ensures sensitive actions are reviewed for AI-generated content

## Changes

**Backend:**
- `backend/data/graph.py` - Updated `GraphSettings` with two boolean
toggles (non-optional with defaults), added `has_sensitive_action`
computed property
- `backend/data/block.py` - Renamed `requires_human_review` to
`is_sensitive_action`, updated review logic
- `backend/data/execution.py` - Updated `ExecutionContext` with both
safe mode fields
- `backend/api/features/library/model.py` - Added
`has_human_in_the_loop` and `has_sensitive_action` to `LibraryAgent`
- `backend/api/features/library/db.py` - Updated to use
`sensitive_action_safe_mode` parameter
- `backend/executor/utils.py` - Simplified execution context creation

**Frontend:**
- `useAgentSafeMode.ts` - Rewritten to support two independent toggles
- `AgentSettingsModal.tsx` - Shows two separate toggles
- `SelectedSettingsView.tsx` - Shows two separate toggles
- Regenerated API types with new schema

## Test Plan

- [x] All backend tests pass (Python 3.11, 3.12, 3.13)
- [x] All frontend tests pass
- [x] Backend format and lint pass
- [x] Frontend format and lint pass
- [x] Pre-commit hooks pass

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2026-01-21 00:56:02 +00:00
Zamil Majdy
35a13e3df5 fix(backend): Use explicit schema qualification for pgvector types (#11805)
## Summary
- Fix intermittent "type 'vector' does not exist" errors when using
PgBouncer in transaction mode
- The issue was that `SET search_path` and the actual query could run on
different backend connections
- Use explicit schema qualification (`{schema}.vector`,
`OPERATOR({schema}.<=>)`) instead of relying on search_path
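
In template form the change looks roughly like this (simplified from the embeddings queries in the diff below; `{schema_prefix}` and `{pgvector_schema}` follow the placeholder conventions of the `*_raw_with_schema` helpers, with `{pgvector_schema}` defaulting to `public`):

```python
# Before: correctness depends on a session-level `SET search_path`, which
# PgBouncer in transaction mode may execute on a different backend
# connection than the query that follows it.
SIMILARITY_BEFORE = """
    SELECT 1 - (embedding <=> $1::vector) AS similarity
    FROM {schema_prefix}"UnifiedContentEmbedding"
"""

# After: the pgvector type and operator are schema-qualified explicitly, so
# the statement is self-contained no matter which connection executes it.
SIMILARITY_AFTER = """
    SELECT 1 - (embedding OPERATOR({pgvector_schema}.<=>) $1::{pgvector_schema}.vector) AS similarity
    FROM {schema_prefix}"UnifiedContentEmbedding"
"""
```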

## Test plan
- [x] Tested vector type cast on local: `'[1,2,3]'::platform.vector`
works
- [x] Tested OPERATOR syntax on local: `OPERATOR(platform.<=>)` works
- [x] Tested on dev via kubectl exec: both work correctly
- [ ] Deploy to dev and verify backfill_missing_embeddings endpoint no
longer errors

## Related Issues
Fixes: AUTOGPT-SERVER-763, AUTOGPT-SERVER-764

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 22:18:16 +00:00
Mewael Tsegay Desta
2169b433c9 feat(backend/blocks): add ConcatenateListsBlock (#11567)
# feat(backend/blocks): add ConcatenateListsBlock

## Description

This PR implements a new block `ConcatenateListsBlock` that concatenates
multiple lists into a single list. This addresses the "good first issue"
for implementing a list concatenation block in the platform/blocks area.

The block takes a list of lists as input and combines all elements in
order into a single concatenated list. This is useful for workflows that
need to merge data from multiple sources or combine results from
different operations.
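
At its core the behavior is plain in-order concatenation, e.g. (the full block, with runtime type validation and an `error` output, appears in the `data_manipulation.py` diff below):

```python
lists = [[1, 2, 3], [4, 5, 6], []]

concatenated: list = []
for lst in lists:
    # Each element is itself expected to be a list; the block validates this
    # at runtime before extending.
    concatenated.extend(lst)

assert concatenated == [1, 2, 3, 4, 5, 6]
```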

### Changes 🏗️

- **Added `ConcatenateListsBlock` class** in `autogpt_platform/backend/backend/blocks/data_manipulation.py`
  - Input: `lists: List[List[Any]]` - accepts a list of lists to concatenate
  - Output: `concatenated_list: List[Any]` - returns a single concatenated list
  - Error output: `error: str` - provides clear error messages for invalid input types
  - Block ID: `3cf9298b-5817-4141-9d80-7c2cc5199c8e`
  - Category: `BlockCategory.BASIC` (consistent with other list manipulation blocks)

- **Added comprehensive test suite** in `autogpt_platform/backend/test/blocks/test_concatenate_lists.py`
  - Tests using built-in `test_input`/`test_output` validation
  - Manual test cases covering edge cases (empty lists, single list, empty input)
  - Error handling tests for invalid input types
  - Category consistency verification
  - All tests passing

- **Implementation details:**
  - Uses `extend()` method for efficient list concatenation
  - Preserves element order from all input lists
  - **Runtime type validation**: Explicitly checks `isinstance(lst, list)` before calling `extend()` to prevent:
    - Strings being iterated character-by-character (e.g., `extend("abc")` → `['a', 'b', 'c']`)
    - Non-iterable types causing `TypeError` (e.g., `extend(1)`)
  - Clear error messages indicating which index has invalid input
  - Handles edge cases: empty lists, empty input, single list, None values
  - Follows existing block patterns and conventions

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run `poetry run pytest test/blocks/test_concatenate_lists.py -v` - all tests pass
  - [x] Verified block can be imported and instantiated
  - [x] Tested with built-in test cases (4 test scenarios)
  - [x] Tested manual edge cases (empty lists, single list, empty input)
  - [x] Tested error handling for invalid input types
  - [x] Verified category is `BASIC` for consistency
  - [x] Verified no linting errors
  - [x] Confirmed block follows same patterns as other blocks in `data_manipulation.py`

#### Code Quality:
- [x] Code follows existing patterns and conventions
- [x] Type hints are properly used
- [x] Documentation strings are clear and descriptive
- [x] Runtime type validation implemented
- [x] Error handling with clear error messages
- [x] No linting errors
- [x] Prisma client generated successfully

### Testing

**Test Results:**
```
test/blocks/test_concatenate_lists.py::test_concatenate_lists_block_builtin_tests PASSED
test/blocks/test_concatenate_lists.py::test_concatenate_lists_manual PASSED

============================== 2 passed in 8.35s ==============================
```

**Test Coverage:**
- Basic concatenation: `[[1, 2, 3], [4, 5, 6]]` → `[1, 2, 3, 4, 5, 6]`
- Mixed types: `[["a", "b"], ["c"], ["d", "e", "f"]]` → `["a", "b", "c",
"d", "e", "f"]`
- Empty list handling: `[[1, 2], []]` → `[1, 2]`
- Empty input: `[]` → `[]`
- Single list: `[[1, 2, 3]]` → `[1, 2, 3]`
- Error handling: Invalid input types (strings, non-lists) produce clear
error messages
- Category verification: Confirmed `BlockCategory.BASIC` for consistency

### Review Feedback Addressed

- **Category Consistency**: Changed from `BlockCategory.DATA` to
`BlockCategory.BASIC` to match other list manipulation blocks
(`AddToListBlock`, `FindInListBlock`, etc.)
- **Type Robustness**: Added explicit runtime validation with
`isinstance(lst, list)` check before calling `extend()` to prevent:
  - Strings being iterated character-by-character
  - Non-iterable types causing `TypeError`
- **Error Handling**: Added `error` output field with clear, descriptive
error messages indicating which index has invalid input
- **Test Coverage**: Added test case for error handling with invalid
input types

### Related Issues

- Addresses: "Implement block to concatenate lists" (good first issue,
platform/blocks, hacktoberfest)

### Notes

- This is a straightforward data manipulation block that doesn't require
external dependencies
- The block will be automatically discovered by the block loading system
- No database or configuration changes required
- Compatible with existing workflow system
- All review feedback has been addressed and incorporated


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds a new list utility and updates docs.
> 
> - **New block**: `ConcatenateListsBlock` in `backend/blocks/data_manipulation.py`
> - Input `lists: List[List[Any]]`; outputs `concatenated_list` or `error`
> - Skips `None` entries; emits error for non-list items; preserves order
> - **Docs**: Adds "Concatenate Lists" section to `docs/integrations/basic.md` and links it in `docs/integrations/README.md`
> - **Contributor guide**: New `docs/CLAUDE.md` with manual doc section guidelines
> 
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 4f56dd86c2.</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 18:04:12 +00:00
32 changed files with 803 additions and 345 deletions

View File

@@ -218,6 +218,7 @@ async def save_agent_to_library(
library_agents = await library_db.create_library_agent(
graph=created_graph,
user_id=user_id,
sensitive_action_safe_mode=True,
create_library_agents_for_sub_graphs=False,
)

View File

@@ -401,27 +401,11 @@ async def add_generated_agent_image(
)
def _initialize_graph_settings(graph: graph_db.GraphModel) -> GraphSettings:
"""
Initialize GraphSettings based on graph content.
Args:
graph: The graph to analyze
Returns:
GraphSettings with appropriate human_in_the_loop_safe_mode value
"""
if graph.has_human_in_the_loop:
# Graph has HITL blocks - set safe mode to True by default
return GraphSettings(human_in_the_loop_safe_mode=True)
else:
# Graph has no HITL blocks - keep None
return GraphSettings(human_in_the_loop_safe_mode=None)
async def create_library_agent(
graph: graph_db.GraphModel,
user_id: str,
hitl_safe_mode: bool = True,
sensitive_action_safe_mode: bool = False,
create_library_agents_for_sub_graphs: bool = True,
) -> list[library_model.LibraryAgent]:
"""
@@ -430,6 +414,8 @@ async def create_library_agent(
Args:
agent: The agent/Graph to add to the library.
user_id: The user to whom the agent will be added.
hitl_safe_mode: Whether HITL blocks require manual review (default True).
sensitive_action_safe_mode: Whether sensitive action blocks require review.
create_library_agents_for_sub_graphs: If True, creates LibraryAgent records for sub-graphs as well.
Returns:
@@ -465,7 +451,11 @@ async def create_library_agent(
}
},
settings=SafeJson(
_initialize_graph_settings(graph_entry).model_dump()
GraphSettings.from_graph(
graph_entry,
hitl_safe_mode=hitl_safe_mode,
sensitive_action_safe_mode=sensitive_action_safe_mode,
).model_dump()
),
),
include=library_agent_include(
@@ -627,33 +617,6 @@ async def update_library_agent(
raise DatabaseError("Failed to update library agent") from e
async def update_library_agent_settings(
user_id: str,
agent_id: str,
settings: GraphSettings,
) -> library_model.LibraryAgent:
"""
Updates the settings for a specific LibraryAgent.
Args:
user_id: The owner of the LibraryAgent.
agent_id: The ID of the LibraryAgent to update.
settings: New GraphSettings to apply.
Returns:
The updated LibraryAgent.
Raises:
NotFoundError: If the specified LibraryAgent does not exist.
DatabaseError: If there's an error in the update operation.
"""
return await update_library_agent(
library_agent_id=agent_id,
user_id=user_id,
settings=settings,
)
async def delete_library_agent(
library_agent_id: str, user_id: str, soft_delete: bool = True
) -> None:
@@ -838,7 +801,7 @@ async def add_store_agent_to_library(
"isCreatedByUser": False,
"useGraphIsActiveVersion": False,
"settings": SafeJson(
_initialize_graph_settings(graph_model).model_dump()
GraphSettings.from_graph(graph_model).model_dump()
),
},
include=library_agent_include(
@@ -1228,8 +1191,15 @@ async def fork_library_agent(
)
new_graph = await on_graph_activate(new_graph, user_id=user_id)
# Create a library agent for the new graph
return (await create_library_agent(new_graph, user_id))[0]
# Create a library agent for the new graph, preserving safe mode settings
return (
await create_library_agent(
new_graph,
user_id,
hitl_safe_mode=original_agent.settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=original_agent.settings.sensitive_action_safe_mode,
)
)[0]
except prisma.errors.PrismaError as e:
logger.error(f"Database error cloning library agent: {e}")
raise DatabaseError("Failed to fork library agent") from e

View File

@@ -73,6 +73,12 @@ class LibraryAgent(pydantic.BaseModel):
has_external_trigger: bool = pydantic.Field(
description="Whether the agent has an external trigger (e.g. webhook) node"
)
has_human_in_the_loop: bool = pydantic.Field(
description="Whether the agent has human-in-the-loop blocks"
)
has_sensitive_action: bool = pydantic.Field(
description="Whether the agent has sensitive action blocks"
)
trigger_setup_info: Optional[GraphTriggerInfo] = None
# Indicates whether there's a new output (based on recent runs)
@@ -180,6 +186,8 @@ class LibraryAgent(pydantic.BaseModel):
graph.credentials_input_schema if sub_graphs is not None else None
),
has_external_trigger=graph.has_external_trigger,
has_human_in_the_loop=graph.has_human_in_the_loop,
has_sensitive_action=graph.has_sensitive_action,
trigger_setup_info=graph.trigger_setup_info,
new_output=new_output,
can_access_graph=can_access_graph,

View File

@@ -52,6 +52,8 @@ async def test_get_library_agents_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
@@ -75,6 +77,8 @@ async def test_get_library_agents_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
@@ -150,6 +154,8 @@ async def test_get_favorite_library_agents_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
recommended_schedule_cron=None,
new_output=False,
@@ -218,6 +224,8 @@ def test_add_agent_to_library_success(
output_schema={"type": "object", "properties": {}},
credentials_input_schema={"type": "object", "properties": {}},
has_external_trigger=False,
has_human_in_the_loop=False,
has_sensitive_action=False,
status=library_model.LibraryAgentStatus.COMPLETED,
new_output=False,
can_access_graph=True,

View File

@@ -154,15 +154,16 @@ async def store_content_embedding(
# Upsert the embedding
# WHERE clause in DO UPDATE prevents PostgreSQL 15 bug with NULLS NOT DISTINCT
# Use {pgvector_schema}.vector for explicit pgvector type qualification
await execute_raw_with_schema(
"""
INSERT INTO {schema_prefix}"UnifiedContentEmbedding" (
"id", "contentType", "contentId", "userId", "embedding", "searchableText", "metadata", "createdAt", "updatedAt"
)
VALUES (gen_random_uuid()::text, $1::{schema_prefix}"ContentType", $2, $3, $4::vector, $5, $6::jsonb, NOW(), NOW())
VALUES (gen_random_uuid()::text, $1::{schema_prefix}"ContentType", $2, $3, $4::{pgvector_schema}.vector, $5, $6::jsonb, NOW(), NOW())
ON CONFLICT ("contentType", "contentId", "userId")
DO UPDATE SET
"embedding" = $4::vector,
"embedding" = $4::{pgvector_schema}.vector,
"searchableText" = $5,
"metadata" = $6::jsonb,
"updatedAt" = NOW()
@@ -177,7 +178,6 @@ async def store_content_embedding(
searchable_text,
metadata_json,
client=client,
set_public_search_path=True,
)
logger.info(f"Stored embedding for {content_type}:{content_id}")
@@ -236,7 +236,6 @@ async def get_content_embedding(
content_type,
content_id,
user_id,
set_public_search_path=True,
)
if result and len(result) > 0:
@@ -871,31 +870,46 @@ async def semantic_search(
# Add content type parameters and build placeholders dynamically
content_type_start_idx = len(params) + 1
content_type_placeholders = ", ".join(
f'${content_type_start_idx + i}::{{{{schema_prefix}}}}"ContentType"'
"$" + str(content_type_start_idx + i) + '::{schema_prefix}"ContentType"'
for i in range(len(content_types))
)
params.extend([ct.value for ct in content_types])
sql = f"""
# Build min_similarity param index before appending
min_similarity_idx = len(params) + 1
params.append(min_similarity)
# Use regular string (not f-string) for template to preserve {schema_prefix} and {schema} placeholders
# Use OPERATOR({pgvector_schema}.<=>) for explicit operator schema qualification
sql = (
"""
SELECT
"contentId" as content_id,
"contentType" as content_type,
"searchableText" as searchable_text,
metadata,
1 - (embedding <=> '{embedding_str}'::vector) as similarity
FROM {{{{schema_prefix}}}}"UnifiedContentEmbedding"
WHERE "contentType" IN ({content_type_placeholders})
{user_filter}
AND 1 - (embedding <=> '{embedding_str}'::vector) >= ${len(params) + 1}
1 - (embedding OPERATOR({pgvector_schema}.<=>) '"""
+ embedding_str
+ """'::{pgvector_schema}.vector) as similarity
FROM {schema_prefix}"UnifiedContentEmbedding"
WHERE "contentType" IN ("""
+ content_type_placeholders
+ """)
"""
+ user_filter
+ """
AND 1 - (embedding OPERATOR({pgvector_schema}.<=>) '"""
+ embedding_str
+ """'::{pgvector_schema}.vector) >= $"""
+ str(min_similarity_idx)
+ """
ORDER BY similarity DESC
LIMIT $1
"""
params.append(min_similarity)
)
try:
results = await query_raw_with_schema(
sql, *params, set_public_search_path=True
)
results = await query_raw_with_schema(sql, *params)
return [
{
"content_id": row["content_id"],
@@ -922,31 +936,41 @@ async def semantic_search(
# Add content type parameters and build placeholders dynamically
content_type_start_idx = len(params_lexical) + 1
content_type_placeholders_lexical = ", ".join(
f'${content_type_start_idx + i}::{{{{schema_prefix}}}}"ContentType"'
"$" + str(content_type_start_idx + i) + '::{schema_prefix}"ContentType"'
for i in range(len(content_types))
)
params_lexical.extend([ct.value for ct in content_types])
sql_lexical = f"""
# Build query param index before appending
query_param_idx = len(params_lexical) + 1
params_lexical.append(f"%{query}%")
# Use regular string (not f-string) for template to preserve {schema_prefix} placeholders
sql_lexical = (
"""
SELECT
"contentId" as content_id,
"contentType" as content_type,
"searchableText" as searchable_text,
metadata,
0.0 as similarity
FROM {{{{schema_prefix}}}}"UnifiedContentEmbedding"
WHERE "contentType" IN ({content_type_placeholders_lexical})
{user_filter}
AND "searchableText" ILIKE ${len(params_lexical) + 1}
FROM {schema_prefix}"UnifiedContentEmbedding"
WHERE "contentType" IN ("""
+ content_type_placeholders_lexical
+ """)
"""
+ user_filter
+ """
AND "searchableText" ILIKE $"""
+ str(query_param_idx)
+ """
ORDER BY "updatedAt" DESC
LIMIT $1
"""
params_lexical.append(f"%{query}%")
)
try:
results = await query_raw_with_schema(
sql_lexical, *params_lexical, set_public_search_path=True
)
results = await query_raw_with_schema(sql_lexical, *params_lexical)
return [
{
"content_id": row["content_id"],

View File

@@ -155,18 +155,14 @@ async def test_store_embedding_success(mocker):
)
assert result is True
# execute_raw is called twice: once for SET search_path, once for INSERT
assert mock_client.execute_raw.call_count == 2
# execute_raw is called once for INSERT (no separate SET search_path needed)
assert mock_client.execute_raw.call_count == 1
# First call: SET search_path
first_call_args = mock_client.execute_raw.call_args_list[0][0]
assert "SET search_path" in first_call_args[0]
# Second call: INSERT query with the actual data
second_call_args = mock_client.execute_raw.call_args_list[1][0]
assert "test-version-id" in second_call_args
assert "[0.1,0.2,0.3]" in second_call_args
assert None in second_call_args # userId should be None for store agents
# Verify the INSERT query with the actual data
call_args = mock_client.execute_raw.call_args_list[0][0]
assert "test-version-id" in call_args
assert "[0.1,0.2,0.3]" in call_args
assert None in call_args # userId should be None for store agents
@pytest.mark.asyncio(loop_scope="session")

View File

@@ -12,7 +12,7 @@ from dataclasses import dataclass
from typing import Any, Literal
from prisma.enums import ContentType
from rank_bm25 import BM25Okapi
from rank_bm25 import BM25Okapi # type: ignore[import-untyped]
from backend.api.features.store.embeddings import (
EMBEDDING_DIM,
@@ -295,7 +295,7 @@ async def unified_hybrid_search(
FROM {{schema_prefix}}"UnifiedContentEmbedding" uce
WHERE uce."contentType" = ANY({content_types_param}::{{schema_prefix}}"ContentType"[])
{user_filter}
ORDER BY uce.embedding <=> {embedding_param}::vector
ORDER BY uce.embedding OPERATOR({{pgvector_schema}}.<=>) {embedding_param}::{{pgvector_schema}}.vector
LIMIT 200
)
),
@@ -307,7 +307,7 @@ async def unified_hybrid_search(
uce.metadata,
uce."updatedAt" as updated_at,
-- Semantic score: cosine similarity (1 - distance)
COALESCE(1 - (uce.embedding <=> {embedding_param}::vector), 0) as semantic_score,
COALESCE(1 - (uce.embedding OPERATOR({{pgvector_schema}}.<=>) {embedding_param}::{{pgvector_schema}}.vector), 0) as semantic_score,
-- Lexical score: ts_rank_cd
COALESCE(ts_rank_cd(uce.search, plainto_tsquery('english', {query_param})), 0) as lexical_raw,
-- Category match from metadata
@@ -363,9 +363,7 @@ async def unified_hybrid_search(
LIMIT {limit_param} OFFSET {offset_param}
"""
results = await query_raw_with_schema(
sql_query, *params, set_public_search_path=True
)
results = await query_raw_with_schema(sql_query, *params)
total = results[0]["total_count"] if results else 0
# Apply BM25 reranking
@@ -585,7 +583,7 @@ async def hybrid_search(
WHERE uce."contentType" = 'STORE_AGENT'::{{schema_prefix}}"ContentType"
AND uce."userId" IS NULL
AND {where_clause}
ORDER BY uce.embedding <=> {embedding_param}::vector
ORDER BY uce.embedding OPERATOR({{pgvector_schema}}.<=>) {embedding_param}::{{pgvector_schema}}.vector
LIMIT 200
) uce
),
@@ -607,7 +605,7 @@ async def hybrid_search(
-- Searchable text for BM25 reranking
COALESCE(sa.agent_name, '') || ' ' || COALESCE(sa.sub_heading, '') || ' ' || COALESCE(sa.description, '') as searchable_text,
-- Semantic score
COALESCE(1 - (uce.embedding <=> {embedding_param}::vector), 0) as semantic_score,
COALESCE(1 - (uce.embedding OPERATOR({{pgvector_schema}}.<=>) {embedding_param}::{{pgvector_schema}}.vector), 0) as semantic_score,
-- Lexical score (raw, will normalize)
COALESCE(ts_rank_cd(uce.search, plainto_tsquery('english', {query_param})), 0) as lexical_raw,
-- Category match
@@ -688,9 +686,7 @@ async def hybrid_search(
LIMIT {limit_param} OFFSET {offset_param}
"""
results = await query_raw_with_schema(
sql_query, *params, set_public_search_path=True
)
results = await query_raw_with_schema(sql_query, *params)
total = results[0]["total_count"] if results else 0

View File

@@ -761,10 +761,8 @@ async def create_new_graph(
graph.reassign_ids(user_id=user_id, reassign_graph_id=True)
graph.validate_graph(for_run=False)
# The return value of the create graph & library function is intentionally not used here,
# as the graph already valid and no sub-graphs are returned back.
await graph_db.create_graph(graph, user_id=user_id)
await library_db.create_library_agent(graph, user_id=user_id)
await library_db.create_library_agent(graph, user_id)
activated_graph = await on_graph_activate(graph, user_id=user_id)
if create_graph.source == "builder":
@@ -888,21 +886,19 @@ async def set_graph_active_version(
async def _update_library_agent_version_and_settings(
user_id: str, agent_graph: graph_db.GraphModel
) -> library_model.LibraryAgent:
# Keep the library agent up to date with the new active version
library = await library_db.update_agent_version_in_library(
user_id, agent_graph.id, agent_graph.version
)
# If the graph has HITL node, initialize the setting if it's not already set.
if (
agent_graph.has_human_in_the_loop
and library.settings.human_in_the_loop_safe_mode is None
):
await library_db.update_library_agent_settings(
updated_settings = GraphSettings.from_graph(
graph=agent_graph,
hitl_safe_mode=library.settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=library.settings.sensitive_action_safe_mode,
)
if updated_settings != library.settings:
library = await library_db.update_library_agent(
library_agent_id=library.id,
user_id=user_id,
agent_id=library.id,
settings=library.settings.model_copy(
update={"human_in_the_loop_safe_mode": True}
),
settings=updated_settings,
)
return library
@@ -919,21 +915,18 @@ async def update_graph_settings(
user_id: Annotated[str, Security(get_user_id)],
) -> GraphSettings:
"""Update graph settings for the user's library agent."""
# Get the library agent for this graph
library_agent = await library_db.get_library_agent_by_graph_id(
graph_id=graph_id, user_id=user_id
)
if not library_agent:
raise HTTPException(404, f"Graph #{graph_id} not found in user's library")
# Update the library agent settings
updated_agent = await library_db.update_library_agent_settings(
updated_agent = await library_db.update_library_agent(
library_agent_id=library_agent.id,
user_id=user_id,
agent_id=library_agent.id,
settings=settings,
)
# Return the updated settings
return GraphSettings.model_validate(updated_agent.settings)

View File

@@ -680,3 +680,58 @@ class ListIsEmptyBlock(Block):
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "is_empty", len(input_data.list) == 0
class ConcatenateListsBlock(Block):
class Input(BlockSchemaInput):
lists: List[List[Any]] = SchemaField(
description="A list of lists to concatenate together. All lists will be combined in order into a single list.",
placeholder="e.g., [[1, 2], [3, 4], [5, 6]]",
)
class Output(BlockSchemaOutput):
concatenated_list: List[Any] = SchemaField(
description="The concatenated list containing all elements from all input lists in order."
)
error: str = SchemaField(
description="Error message if concatenation failed due to invalid input types."
)
def __init__(self):
super().__init__(
id="3cf9298b-5817-4141-9d80-7c2cc5199c8e",
description="Concatenates multiple lists into a single list. All elements from all input lists are combined in order.",
categories={BlockCategory.BASIC},
input_schema=ConcatenateListsBlock.Input,
output_schema=ConcatenateListsBlock.Output,
test_input=[
{"lists": [[1, 2, 3], [4, 5, 6]]},
{"lists": [["a", "b"], ["c"], ["d", "e", "f"]]},
{"lists": [[1, 2], []]},
{"lists": []},
],
test_output=[
("concatenated_list", [1, 2, 3, 4, 5, 6]),
("concatenated_list", ["a", "b", "c", "d", "e", "f"]),
("concatenated_list", [1, 2]),
("concatenated_list", []),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
concatenated = []
for idx, lst in enumerate(input_data.lists):
if lst is None:
# Skip None values to avoid errors
continue
if not isinstance(lst, list):
# Type validation: each item must be a list
# Strings are iterable and would cause extend() to iterate character-by-character
# Non-iterable types would raise TypeError
yield "error", (
f"Invalid input at index {idx}: expected a list, got {type(lst).__name__}. "
f"All items in 'lists' must be lists (e.g., [[1, 2], [3, 4]])."
)
return
concatenated.extend(lst)
yield "concatenated_list", concatenated

View File

@@ -84,7 +84,7 @@ class HITLReviewHelper:
Exception: If review creation or status update fails
"""
# Skip review if safe mode is disabled - return auto-approved result
if not execution_context.safe_mode:
if not execution_context.human_in_the_loop_safe_mode:
logger.info(
f"Block {block_name} skipping review for node {node_exec_id} - safe mode disabled"
)

View File

@@ -104,7 +104,7 @@ class HumanInTheLoopBlock(Block):
execution_context: ExecutionContext,
**_kwargs,
) -> BlockOutput:
if not execution_context.safe_mode:
if not execution_context.human_in_the_loop_safe_mode:
logger.info(
f"HITL block skipping review for node {node_exec_id} - safe mode disabled"
)

View File

@@ -242,7 +242,7 @@ async def test_smart_decision_maker_tracks_llm_stats():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -343,7 +343,7 @@ async def test_smart_decision_maker_parameter_validation():
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -409,7 +409,7 @@ async def test_smart_decision_maker_parameter_validation():
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -471,7 +471,7 @@ async def test_smart_decision_maker_parameter_validation():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -535,7 +535,7 @@ async def test_smart_decision_maker_parameter_validation():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -658,7 +658,7 @@ async def test_smart_decision_maker_raw_response_conversion():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -730,7 +730,7 @@ async def test_smart_decision_maker_raw_response_conversion():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -786,7 +786,7 @@ async def test_smart_decision_maker_raw_response_conversion():
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests
@@ -905,7 +905,7 @@ async def test_smart_decision_maker_agent_mode():
# Create a mock execution context
mock_execution_context = ExecutionContext(
safe_mode=False,
human_in_the_loop_safe_mode=False,
)
# Create a mock execution processor for agent mode tests
@@ -1027,7 +1027,7 @@ async def test_smart_decision_maker_traditional_mode_default():
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
# Create a mock execution processor for tests

View File

@@ -386,7 +386,7 @@ async def test_output_yielding_with_dynamic_fields():
outputs = {}
from backend.data.execution import ExecutionContext
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(human_in_the_loop_safe_mode=False)
mock_execution_processor = MagicMock()
async for output_name, output_value in block.run(
@@ -609,7 +609,9 @@ async def test_validation_errors_dont_pollute_conversation():
outputs = {}
from backend.data.execution import ExecutionContext
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_context = ExecutionContext(
human_in_the_loop_safe_mode=False
)
# Create a proper mock execution processor for agent mode
from collections import defaultdict

View File

@@ -474,7 +474,7 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
self.block_type = block_type
self.webhook_config = webhook_config
self.execution_stats: NodeExecutionStats = NodeExecutionStats()
self.requires_human_review: bool = False
self.is_sensitive_action: bool = False
if self.webhook_config:
if isinstance(self.webhook_config, BlockWebhookConfig):
@@ -637,8 +637,9 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
- should_pause: True if execution should be paused for review
- input_data_to_use: The input data to use (may be modified by reviewer)
"""
# Skip review if not required or safe mode is disabled
if not self.requires_human_review or not execution_context.safe_mode:
if not (
self.is_sensitive_action and execution_context.sensitive_action_safe_mode
):
return False, input_data
from backend.blocks.helpers.review import HITLReviewHelper

View File

@@ -38,20 +38,6 @@ POOL_TIMEOUT = os.getenv("DB_POOL_TIMEOUT")
if POOL_TIMEOUT:
DATABASE_URL = add_param(DATABASE_URL, "pool_timeout", POOL_TIMEOUT)
# Add public schema to search_path for pgvector type access
# The vector extension is in public schema, but search_path is determined by schema parameter
# Extract the schema from DATABASE_URL or default to 'public' (matching get_database_schema())
parsed_url = urlparse(DATABASE_URL)
url_params = dict(parse_qsl(parsed_url.query))
db_schema = url_params.get("schema", "public")
# Build search_path, avoiding duplicates if db_schema is already 'public'
search_path_schemas = list(
dict.fromkeys([db_schema, "public"])
) # Preserves order, removes duplicates
search_path = ",".join(search_path_schemas)
# This allows using ::vector without schema qualification
DATABASE_URL = add_param(DATABASE_URL, "options", f"-c search_path={search_path}")
HTTP_TIMEOUT = int(POOL_TIMEOUT) if POOL_TIMEOUT else None
prisma = Prisma(
@@ -127,38 +113,48 @@ async def _raw_with_schema(
*args,
execute: bool = False,
client: Prisma | None = None,
set_public_search_path: bool = False,
) -> list[dict] | int:
"""Internal: Execute raw SQL with proper schema handling.
Use query_raw_with_schema() or execute_raw_with_schema() instead.
Supports placeholders:
- {schema_prefix}: Table/type prefix (e.g., "platform".)
- {schema}: Raw schema name for application tables (e.g., platform)
- {pgvector_schema}: Schema where pgvector is installed (defaults to "public")
Args:
query_template: SQL query with {schema_prefix} placeholder
query_template: SQL query with {schema_prefix}, {schema}, and/or {pgvector_schema} placeholders
*args: Query parameters
execute: If False, executes SELECT query. If True, executes INSERT/UPDATE/DELETE.
client: Optional Prisma client for transactions (only used when execute=True).
set_public_search_path: If True, sets search_path to include public schema.
Needed for pgvector types and other public schema objects.
Returns:
- list[dict] if execute=False (query results)
- int if execute=True (number of affected rows)
Example with vector type:
await execute_raw_with_schema(
'INSERT INTO {schema_prefix}"Embedding" (vec) VALUES ($1::{pgvector_schema}.vector)',
embedding_data
)
"""
schema = get_database_schema()
schema_prefix = f'"{schema}".' if schema != "public" else ""
formatted_query = query_template.format(schema_prefix=schema_prefix)
# pgvector extension is typically installed in "public" schema
# On Supabase it may be in "extensions" but "public" is the common default
pgvector_schema = "public"
formatted_query = query_template.format(
schema_prefix=schema_prefix,
schema=schema,
pgvector_schema=pgvector_schema,
)
import prisma as prisma_module
db_client = client if client else prisma_module.get_client()
# Set search_path to include public schema if requested
# Prisma doesn't support the 'options' connection parameter, so we set it per-session
# This is idempotent and safe to call multiple times
if set_public_search_path:
await db_client.execute_raw(f"SET search_path = {schema}, public") # type: ignore
if execute:
result = await db_client.execute_raw(formatted_query, *args) # type: ignore
else:
@@ -167,16 +163,12 @@ async def _raw_with_schema(
return result
async def query_raw_with_schema(
query_template: str, *args, set_public_search_path: bool = False
) -> list[dict]:
async def query_raw_with_schema(query_template: str, *args) -> list[dict]:
"""Execute raw SQL SELECT query with proper schema handling.
Args:
query_template: SQL query with {schema_prefix} placeholder
query_template: SQL query with {schema_prefix} and/or {schema} placeholders
*args: Query parameters
set_public_search_path: If True, sets search_path to include public schema.
Needed for pgvector types and other public schema objects.
Returns:
List of result rows as dictionaries
@@ -187,23 +179,20 @@ async def query_raw_with_schema(
user_id
)
"""
return await _raw_with_schema(query_template, *args, execute=False, set_public_search_path=set_public_search_path) # type: ignore
return await _raw_with_schema(query_template, *args, execute=False) # type: ignore
async def execute_raw_with_schema(
query_template: str,
*args,
client: Prisma | None = None,
set_public_search_path: bool = False,
) -> int:
"""Execute raw SQL command (INSERT/UPDATE/DELETE) with proper schema handling.
Args:
query_template: SQL query with {schema_prefix} placeholder
query_template: SQL query with {schema_prefix} and/or {schema} placeholders
*args: Query parameters
client: Optional Prisma client for transactions
set_public_search_path: If True, sets search_path to include public schema.
Needed for pgvector types and other public schema objects.
Returns:
Number of affected rows
@@ -215,7 +204,7 @@ async def execute_raw_with_schema(
client=tx # Optional transaction client
)
"""
return await _raw_with_schema(query_template, *args, execute=True, client=client, set_public_search_path=set_public_search_path) # type: ignore
return await _raw_with_schema(query_template, *args, execute=True, client=client) # type: ignore
class BaseDbModel(BaseModel):

View File

@@ -81,7 +81,8 @@ class ExecutionContext(BaseModel):
This includes information needed by blocks, sub-graphs, and execution management.
"""
safe_mode: bool = True
human_in_the_loop_safe_mode: bool = True
sensitive_action_safe_mode: bool = False
user_timezone: str = "UTC"
root_execution_id: Optional[str] = None
parent_execution_id: Optional[str] = None

View File

@@ -3,7 +3,7 @@ import logging
import uuid
from collections import defaultdict
from datetime import datetime, timezone
from typing import TYPE_CHECKING, Any, Literal, Optional, cast
from typing import TYPE_CHECKING, Annotated, Any, Literal, Optional, cast
from prisma.enums import SubmissionStatus
from prisma.models import (
@@ -20,7 +20,7 @@ from prisma.types import (
AgentNodeLinkCreateInput,
StoreListingVersionWhereInput,
)
from pydantic import BaseModel, Field, create_model
from pydantic import BaseModel, BeforeValidator, Field, create_model
from pydantic.fields import computed_field
from backend.blocks.agent import AgentExecutorBlock
@@ -62,7 +62,29 @@ logger = logging.getLogger(__name__)
class GraphSettings(BaseModel):
human_in_the_loop_safe_mode: bool | None = None
# Use Annotated with BeforeValidator to coerce None to default values.
# This handles cases where the database has null values for these fields.
human_in_the_loop_safe_mode: Annotated[
bool, BeforeValidator(lambda v: v if v is not None else True)
] = True
sensitive_action_safe_mode: Annotated[
bool, BeforeValidator(lambda v: v if v is not None else False)
] = False
@classmethod
def from_graph(
cls,
graph: "GraphModel",
hitl_safe_mode: bool | None = None,
sensitive_action_safe_mode: bool = False,
) -> "GraphSettings":
# Default to True if not explicitly set
if hitl_safe_mode is None:
hitl_safe_mode = True
return cls(
human_in_the_loop_safe_mode=hitl_safe_mode,
sensitive_action_safe_mode=sensitive_action_safe_mode,
)
class Link(BaseDbModel):
@@ -244,10 +266,14 @@ class BaseGraph(BaseDbModel):
return any(
node.block_id
for node in self.nodes
if (
node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
or node.block.requires_human_review
)
if node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
)
@computed_field
@property
def has_sensitive_action(self) -> bool:
return any(
node.block_id for node in self.nodes if node.block.is_sensitive_action
)
@property

View File

@@ -309,7 +309,7 @@ def ensure_embeddings_coverage():
# Process in batches until no more missing embeddings
while True:
result = db_client.backfill_missing_embeddings(batch_size=10)
result = db_client.backfill_missing_embeddings(batch_size=100)
total_processed += result["processed"]
total_success += result["success"]

View File

@@ -873,11 +873,8 @@ async def add_graph_execution(
settings = await gdb.get_graph_settings(user_id=user_id, graph_id=graph_id)
execution_context = ExecutionContext(
safe_mode=(
settings.human_in_the_loop_safe_mode
if settings.human_in_the_loop_safe_mode is not None
else True
),
human_in_the_loop_safe_mode=settings.human_in_the_loop_safe_mode,
sensitive_action_safe_mode=settings.sensitive_action_safe_mode,
user_timezone=(
user.timezone if user.timezone != USER_TIMEZONE_NOT_SET else "UTC"
),

View File

@@ -386,6 +386,7 @@ async def test_add_graph_execution_is_repeatable(mocker: MockerFixture):
mock_user.timezone = "UTC"
mock_settings = mocker.MagicMock()
mock_settings.human_in_the_loop_safe_mode = True
mock_settings.sensitive_action_safe_mode = False
mock_udb.get_user_by_id = mocker.AsyncMock(return_value=mock_user)
mock_gdb.get_graph_settings = mocker.AsyncMock(return_value=mock_settings)
@@ -651,6 +652,7 @@ async def test_add_graph_execution_with_nodes_to_skip(mocker: MockerFixture):
mock_user.timezone = "UTC"
mock_settings = mocker.MagicMock()
mock_settings.human_in_the_loop_safe_mode = True
mock_settings.sensitive_action_safe_mode = False
mock_udb.get_user_by_id = mocker.AsyncMock(return_value=mock_user)
mock_gdb.get_graph_settings = mocker.AsyncMock(return_value=mock_settings)

View File

@@ -11,6 +11,7 @@
"forked_from_version": null,
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"id": "graph-123",
"input_schema": {
"properties": {},

View File

@@ -11,6 +11,7 @@
"forked_from_version": null,
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"id": "graph-123",
"input_schema": {
"properties": {},

View File

@@ -27,6 +27,8 @@
"properties": {}
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"trigger_setup_info": null,
"new_output": false,
"can_access_graph": true,
@@ -34,7 +36,8 @@
"is_favorite": false,
"recommended_schedule_cron": null,
"settings": {
"human_in_the_loop_safe_mode": null
"human_in_the_loop_safe_mode": true,
"sensitive_action_safe_mode": false
},
"marketplace_listing": null
},
@@ -65,6 +68,8 @@
"properties": {}
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"has_sensitive_action": false,
"trigger_setup_info": null,
"new_output": false,
"can_access_graph": false,
@@ -72,7 +77,8 @@
"is_favorite": false,
"recommended_schedule_cron": null,
"settings": {
"human_in_the_loop_safe_mode": null
"human_in_the_loop_safe_mode": true,
"sensitive_action_safe_mode": false
},
"marketplace_listing": null
}

View File

@@ -18,69 +18,118 @@ interface Props {
fullWidth?: boolean;
}
interface SafeModeButtonProps {
isEnabled: boolean;
label: string;
tooltipEnabled: string;
tooltipDisabled: string;
onToggle: () => void;
isPending: boolean;
fullWidth?: boolean;
}
function SafeModeButton({
isEnabled,
label,
tooltipEnabled,
tooltipDisabled,
onToggle,
isPending,
fullWidth = false,
}: SafeModeButtonProps) {
return (
<Tooltip delayDuration={100}>
<TooltipTrigger asChild>
<Button
variant={isEnabled ? "primary" : "outline"}
size="small"
onClick={onToggle}
disabled={isPending}
className={cn("justify-start", fullWidth ? "w-full" : "")}
>
{isEnabled ? (
<>
<ShieldCheckIcon weight="bold" size={16} />
<Text variant="body" className="text-zinc-200">
{label}: ON
</Text>
</>
) : (
<>
<ShieldIcon weight="bold" size={16} />
<Text variant="body" className="text-zinc-600">
{label}: OFF
</Text>
</>
)}
</Button>
</TooltipTrigger>
<TooltipContent>
<div className="text-center">
<div className="font-medium">
{label}: {isEnabled ? "ON" : "OFF"}
</div>
<div className="mt-1 text-xs text-muted-foreground">
{isEnabled ? tooltipEnabled : tooltipDisabled}
</div>
</div>
</TooltipContent>
</Tooltip>
);
}
export function FloatingSafeModeToggle({
graph,
className,
fullWidth = false,
}: Props) {
const {
currentSafeMode,
currentHITLSafeMode,
showHITLToggle,
isHITLStateUndetermined,
handleHITLToggle,
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
isPending,
shouldShowToggle,
isStateUndetermined,
handleToggle,
} = useAgentSafeMode(graph);
if (!shouldShowToggle || isStateUndetermined || isPending) {
if (!shouldShowToggle || isPending) {
return null;
}
const showHITL = showHITLToggle && !isHITLStateUndetermined;
const showSensitive = showSensitiveActionToggle;
if (!showHITL && !showSensitive) {
return null;
}
return (
<div className={cn("fixed z-50", className)}>
<Tooltip delayDuration={100}>
<TooltipTrigger asChild>
<Button
variant={currentSafeMode! ? "primary" : "outline"}
key={graph.id}
size="small"
title={
currentSafeMode!
? "Safe Mode: ON. Human in the loop blocks require manual review"
: "Safe Mode: OFF. Human in the loop blocks proceed automatically"
}
onClick={handleToggle}
className={cn(fullWidth ? "w-full" : "")}
>
{currentSafeMode! ? (
<>
<ShieldCheckIcon weight="bold" size={16} />
<Text variant="body" className="text-zinc-200">
Safe Mode: ON
</Text>
</>
) : (
<>
<ShieldIcon weight="bold" size={16} />
<Text variant="body" className="text-zinc-600">
Safe Mode: OFF
</Text>
</>
)}
</Button>
</TooltipTrigger>
<TooltipContent>
<div className="text-center">
<div className="font-medium">
Safe Mode: {currentSafeMode! ? "ON" : "OFF"}
</div>
<div className="mt-1 text-xs text-muted-foreground">
{currentSafeMode!
? "Human in the loop blocks require manual review"
: "Human in the loop blocks proceed automatically"}
</div>
</div>
</TooltipContent>
</Tooltip>
<div className={cn("fixed z-50 flex flex-col gap-2", className)}>
{showHITL && (
<SafeModeButton
isEnabled={currentHITLSafeMode}
label="Human in the loop block approval"
tooltipEnabled="The agent will pause at human-in-the-loop blocks and wait for your approval"
tooltipDisabled="Human in the loop blocks will proceed automatically"
onToggle={handleHITLToggle}
isPending={isPending}
fullWidth={fullWidth}
/>
)}
{showSensitive && (
<SafeModeButton
isEnabled={currentSensitiveActionSafeMode}
label="Sensitive actions blocks approval"
tooltipEnabled="The agent will pause at sensitive action blocks and wait for your approval"
tooltipDisabled="Sensitive action blocks will proceed automatically"
onToggle={handleSensitiveActionToggle}
isPending={isPending}
fullWidth={fullWidth}
/>
)}
</div>
);
}

View File

@@ -31,10 +31,18 @@ export function AgentSettingsModal({
}
}
const { currentSafeMode, isPending, hasHITLBlocks, handleToggle } =
useAgentSafeMode(agent);
const {
currentHITLSafeMode,
showHITLToggle,
handleHITLToggle,
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
isPending,
shouldShowToggle,
} = useAgentSafeMode(agent);
if (!hasHITLBlocks) return null;
if (!shouldShowToggle) return null;
return (
<Dialog
@@ -57,23 +65,48 @@ export function AgentSettingsModal({
)}
<Dialog.Content>
<div className="space-y-6">
<div className="flex w-full flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">Require human approval</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause and wait for your review before
continuing
</Text>
{showHITLToggle && (
<div className="flex w-full flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Human-in-the-loop approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at human-in-the-loop blocks and wait
for your review before continuing
</Text>
</div>
<Switch
checked={currentHITLSafeMode || false}
onCheckedChange={handleHITLToggle}
disabled={isPending}
className="mt-1"
/>
</div>
<Switch
checked={currentSafeMode || false}
onCheckedChange={handleToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
)}
{showSensitiveActionToggle && (
<div className="flex w-full flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Sensitive action approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at sensitive action blocks and wait for
your review before continuing
</Text>
</div>
<Switch
checked={currentSensitiveActionSafeMode}
onCheckedChange={handleSensitiveActionToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
)}
</div>
</Dialog.Content>
</Dialog>

View File

@@ -5,48 +5,112 @@ import { Graph } from "@/lib/autogpt-server-api/types";
import { cn } from "@/lib/utils";
import { ShieldCheckIcon, ShieldIcon } from "@phosphor-icons/react";
import { useAgentSafeMode } from "@/hooks/useAgentSafeMode";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/atoms/Tooltip/BaseTooltip";
interface Props {
graph: GraphModel | LibraryAgent | Graph;
className?: string;
fullWidth?: boolean;
}
export function SafeModeToggle({ graph }: Props) {
interface SafeModeIconButtonProps {
isEnabled: boolean;
label: string;
tooltipEnabled: string;
tooltipDisabled: string;
onToggle: () => void;
isPending: boolean;
}
function SafeModeIconButton({
isEnabled,
label,
tooltipEnabled,
tooltipDisabled,
onToggle,
isPending,
}: SafeModeIconButtonProps) {
return (
<Tooltip delayDuration={100}>
<TooltipTrigger asChild>
<Button
variant="icon"
size="icon"
aria-label={`${label}: ${isEnabled ? "ON" : "OFF"}. ${isEnabled ? tooltipEnabled : tooltipDisabled}`}
onClick={onToggle}
disabled={isPending}
className={cn(isPending ? "opacity-0" : "opacity-100")}
>
{isEnabled ? (
<ShieldCheckIcon weight="bold" size={16} />
) : (
<ShieldIcon weight="bold" size={16} />
)}
</Button>
</TooltipTrigger>
<TooltipContent>
<div className="text-center">
<div className="font-medium">
{label}: {isEnabled ? "ON" : "OFF"}
</div>
<div className="mt-1 text-xs text-muted-foreground">
{isEnabled ? tooltipEnabled : tooltipDisabled}
</div>
</div>
</TooltipContent>
</Tooltip>
);
}
export function SafeModeToggle({ graph, className }: Props) {
const {
currentSafeMode,
currentHITLSafeMode,
showHITLToggle,
isHITLStateUndetermined,
handleHITLToggle,
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
isPending,
shouldShowToggle,
isStateUndetermined,
handleToggle,
} = useAgentSafeMode(graph);
if (!shouldShowToggle || isStateUndetermined) {
if (!shouldShowToggle || isHITLStateUndetermined) {
return null;
}
const showHITL = showHITLToggle && !isHITLStateUndetermined;
const showSensitive = showSensitiveActionToggle;
if (!showHITL && !showSensitive) {
return null;
}
return (
<Button
variant="icon"
key={graph.id}
size="icon"
aria-label={
currentSafeMode!
? "Safe Mode: ON. Human in the loop blocks require manual review"
: "Safe Mode: OFF. Human in the loop blocks proceed automatically"
}
onClick={handleToggle}
className={cn(isPending ? "opacity-0" : "opacity-100")}
>
{currentSafeMode! ? (
<>
<ShieldCheckIcon weight="bold" size={16} />
</>
) : (
<>
<ShieldIcon weight="bold" size={16} />
</>
<div className={cn("flex gap-1", className)}>
{showHITL && (
<SafeModeIconButton
isEnabled={currentHITLSafeMode}
label="Human-in-the-loop"
tooltipEnabled="The agent will pause at human-in-the-loop blocks and wait for your approval"
tooltipDisabled="Human-in-the-loop blocks will proceed automatically"
onToggle={handleHITLToggle}
isPending={isPending}
/>
)}
</Button>
{showSensitive && (
<SafeModeIconButton
isEnabled={currentSensitiveActionSafeMode}
label="Sensitive actions"
tooltipEnabled="The agent will pause at sensitive action blocks and wait for your approval"
tooltipDisabled="Sensitive action blocks will proceed automatically"
onToggle={handleSensitiveActionToggle}
isPending={isPending}
/>
)}
</div>
);
}

View File

@@ -13,8 +13,16 @@ interface Props {
}
export function SelectedSettingsView({ agent, onClearSelectedRun }: Props) {
const { currentSafeMode, isPending, hasHITLBlocks, handleToggle } =
useAgentSafeMode(agent);
const {
currentHITLSafeMode,
showHITLToggle,
handleHITLToggle,
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
isPending,
shouldShowToggle,
} = useAgentSafeMode(agent);
return (
<SelectedViewLayout agent={agent}>
@@ -34,24 +42,51 @@ export function SelectedSettingsView({ agent, onClearSelectedRun }: Props) {
</div>
<div className={`${AGENT_LIBRARY_SECTION_PADDING_X} space-y-6`}>
{hasHITLBlocks ? (
<div className="flex w-full max-w-2xl flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">Require human approval</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause and wait for your review before
continuing
</Text>
{shouldShowToggle ? (
<>
{showHITLToggle && (
<div className="flex w-full max-w-2xl flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Human-in-the-loop approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at human-in-the-loop blocks and
wait for your review before continuing
</Text>
</div>
<Switch
checked={currentHITLSafeMode || false}
onCheckedChange={handleHITLToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
<Switch
checked={currentSafeMode || false}
onCheckedChange={handleToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
)}
{showSensitiveActionToggle && (
<div className="flex w-full max-w-2xl flex-col items-start gap-4 rounded-xl border border-zinc-100 bg-white p-6">
<div className="flex w-full items-start justify-between gap-4">
<div className="flex-1">
<Text variant="large-semibold">
Sensitive action approval
</Text>
<Text variant="large" className="mt-1 text-zinc-900">
The agent will pause at sensitive action blocks and wait
for your review before continuing
</Text>
</div>
<Switch
checked={currentSensitiveActionSafeMode}
onCheckedChange={handleSensitiveActionToggle}
disabled={isPending}
className="mt-1"
/>
</div>
</div>
)}
</>
) : (
<div className="rounded-xl border border-zinc-100 bg-white p-6">
<Text variant="body" className="text-muted-foreground">

View File

@@ -6383,6 +6383,11 @@
"title": "Has Human In The Loop",
"readOnly": true
},
"has_sensitive_action": {
"type": "boolean",
"title": "Has Sensitive Action",
"readOnly": true
},
"trigger_setup_info": {
"anyOf": [
{ "$ref": "#/components/schemas/GraphTriggerInfo" },
@@ -6399,6 +6404,7 @@
"output_schema",
"has_external_trigger",
"has_human_in_the_loop",
"has_sensitive_action",
"trigger_setup_info"
],
"title": "BaseGraph"
@@ -7629,6 +7635,11 @@
"title": "Has Human In The Loop",
"readOnly": true
},
"has_sensitive_action": {
"type": "boolean",
"title": "Has Sensitive Action",
"readOnly": true
},
"trigger_setup_info": {
"anyOf": [
{ "$ref": "#/components/schemas/GraphTriggerInfo" },
@@ -7652,6 +7663,7 @@
"output_schema",
"has_external_trigger",
"has_human_in_the_loop",
"has_sensitive_action",
"trigger_setup_info",
"credentials_input_schema"
],
@@ -7730,6 +7742,11 @@
"title": "Has Human In The Loop",
"readOnly": true
},
"has_sensitive_action": {
"type": "boolean",
"title": "Has Sensitive Action",
"readOnly": true
},
"trigger_setup_info": {
"anyOf": [
{ "$ref": "#/components/schemas/GraphTriggerInfo" },
@@ -7754,6 +7771,7 @@
"output_schema",
"has_external_trigger",
"has_human_in_the_loop",
"has_sensitive_action",
"trigger_setup_info",
"credentials_input_schema"
],
@@ -7762,8 +7780,14 @@
"GraphSettings": {
"properties": {
"human_in_the_loop_safe_mode": {
"anyOf": [{ "type": "boolean" }, { "type": "null" }],
"title": "Human In The Loop Safe Mode"
"type": "boolean",
"title": "Human In The Loop Safe Mode",
"default": true
},
"sensitive_action_safe_mode": {
"type": "boolean",
"title": "Sensitive Action Safe Mode",
"default": false
}
},
"type": "object",
@@ -7921,6 +7945,16 @@
"title": "Has External Trigger",
"description": "Whether the agent has an external trigger (e.g. webhook) node"
},
"has_human_in_the_loop": {
"type": "boolean",
"title": "Has Human In The Loop",
"description": "Whether the agent has human-in-the-loop blocks"
},
"has_sensitive_action": {
"type": "boolean",
"title": "Has Sensitive Action",
"description": "Whether the agent has sensitive action blocks"
},
"trigger_setup_info": {
"anyOf": [
{ "$ref": "#/components/schemas/GraphTriggerInfo" },
@@ -7967,6 +8001,8 @@
"output_schema",
"credentials_input_schema",
"has_external_trigger",
"has_human_in_the_loop",
"has_sensitive_action",
"new_output",
"can_access_graph",
"is_latest_version",

View File

@@ -20,11 +20,15 @@ function hasHITLBlocks(graph: GraphModel | LibraryAgent | Graph): boolean {
if ("has_human_in_the_loop" in graph) {
return !!graph.has_human_in_the_loop;
}
return false;
}
if (isLibraryAgent(graph)) {
return graph.settings?.human_in_the_loop_safe_mode !== null;
function hasSensitiveActionBlocks(
graph: GraphModel | LibraryAgent | Graph,
): boolean {
if ("has_sensitive_action" in graph) {
return !!graph.has_sensitive_action;
}
return false;
}
@@ -40,7 +44,9 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
const graphId = getGraphId(graph);
const isAgent = isLibraryAgent(graph);
const shouldShowToggle = hasHITLBlocks(graph);
const showHITLToggle = hasHITLBlocks(graph);
const showSensitiveActionToggle = hasSensitiveActionBlocks(graph);
const shouldShowToggle = showHITLToggle || showSensitiveActionToggle;
const { mutateAsync: updateGraphSettings, isPending } =
usePatchV1UpdateGraphSettings();
@@ -56,27 +62,37 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
},
);
const [localSafeMode, setLocalSafeMode] = useState<boolean | null>(null);
const [localHITLSafeMode, setLocalHITLSafeMode] = useState<boolean>(true);
const [localSensitiveActionSafeMode, setLocalSensitiveActionSafeMode] =
useState<boolean>(false);
const [isLocalStateLoaded, setIsLocalStateLoaded] = useState<boolean>(false);
useEffect(() => {
if (!isAgent && libraryAgent) {
const backendValue = libraryAgent.settings?.human_in_the_loop_safe_mode;
if (backendValue !== undefined) {
setLocalSafeMode(backendValue);
}
setLocalHITLSafeMode(
libraryAgent.settings?.human_in_the_loop_safe_mode ?? true,
);
setLocalSensitiveActionSafeMode(
libraryAgent.settings?.sensitive_action_safe_mode ?? false,
);
setIsLocalStateLoaded(true);
}
}, [isAgent, libraryAgent]);
const currentSafeMode = isAgent
? graph.settings?.human_in_the_loop_safe_mode
: localSafeMode;
const currentHITLSafeMode = isAgent
? (graph.settings?.human_in_the_loop_safe_mode ?? true)
: localHITLSafeMode;
const isStateUndetermined = isAgent
? graph.settings?.human_in_the_loop_safe_mode == null
: isLoading || localSafeMode === null;
const currentSensitiveActionSafeMode = isAgent
? (graph.settings?.sensitive_action_safe_mode ?? false)
: localSensitiveActionSafeMode;
const handleToggle = useCallback(async () => {
const newSafeMode = !currentSafeMode;
const isHITLStateUndetermined = isAgent
? false
: isLoading || !isLocalStateLoaded;
const handleHITLToggle = useCallback(async () => {
const newSafeMode = !currentHITLSafeMode;
try {
await updateGraphSettings({
@@ -85,7 +101,7 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
});
if (!isAgent) {
setLocalSafeMode(newSafeMode);
setLocalHITLSafeMode(newSafeMode);
}
if (isAgent) {
@@ -101,37 +117,62 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
queryClient.invalidateQueries({ queryKey: ["v2", "executions"] });
toast({
title: `Safe mode ${newSafeMode ? "enabled" : "disabled"}`,
title: `HITL safe mode ${newSafeMode ? "enabled" : "disabled"}`,
description: newSafeMode
? "Human-in-the-loop blocks will require manual review"
: "Human-in-the-loop blocks will proceed automatically",
duration: 2000,
});
} catch (error) {
const isNotFoundError =
error instanceof Error &&
(error.message.includes("404") || error.message.includes("not found"));
if (!isAgent && isNotFoundError) {
toast({
title: "Safe mode not available",
description:
"To configure safe mode, please save this graph to your library first.",
variant: "destructive",
});
} else {
toast({
title: "Failed to update safe mode",
description:
error instanceof Error
? error.message
: "An unexpected error occurred.",
variant: "destructive",
});
}
handleToggleError(error, isAgent, toast);
}
}, [
currentSafeMode,
currentHITLSafeMode,
graphId,
isAgent,
graph.id,
updateGraphSettings,
queryClient,
toast,
]);
const handleSensitiveActionToggle = useCallback(async () => {
const newSafeMode = !currentSensitiveActionSafeMode;
try {
await updateGraphSettings({
graphId,
data: { sensitive_action_safe_mode: newSafeMode },
});
if (!isAgent) {
setLocalSensitiveActionSafeMode(newSafeMode);
}
if (isAgent) {
queryClient.invalidateQueries({
queryKey: getGetV2GetLibraryAgentQueryOptions(graph.id.toString())
.queryKey,
});
}
queryClient.invalidateQueries({
queryKey: ["v1", "graphs", graphId, "executions"],
});
queryClient.invalidateQueries({ queryKey: ["v2", "executions"] });
toast({
title: `Sensitive action safe mode ${newSafeMode ? "enabled" : "disabled"}`,
description: newSafeMode
? "Sensitive action blocks will require manual review"
: "Sensitive action blocks will proceed automatically",
duration: 2000,
});
} catch (error) {
handleToggleError(error, isAgent, toast);
}
}, [
currentSensitiveActionSafeMode,
graphId,
isAgent,
graph.id,
@@ -141,11 +182,53 @@ export function useAgentSafeMode(graph: GraphModel | LibraryAgent | Graph) {
]);
return {
currentSafeMode,
// HITL safe mode
currentHITLSafeMode,
showHITLToggle,
isHITLStateUndetermined,
handleHITLToggle,
// Sensitive action safe mode
currentSensitiveActionSafeMode,
showSensitiveActionToggle,
handleSensitiveActionToggle,
// General
isPending,
shouldShowToggle,
isStateUndetermined,
handleToggle,
hasHITLBlocks: shouldShowToggle,
// Backwards compatibility
currentSafeMode: currentHITLSafeMode,
isStateUndetermined: isHITLStateUndetermined,
handleToggle: handleHITLToggle,
hasHITLBlocks: showHITLToggle,
};
}
function handleToggleError(
error: unknown,
isAgent: boolean,
toast: ReturnType<typeof useToast>["toast"],
) {
const isNotFoundError =
error instanceof Error &&
(error.message.includes("404") || error.message.includes("not found"));
if (!isAgent && isNotFoundError) {
toast({
title: "Safe mode not available",
description:
"To configure safe mode, please save this graph to your library first.",
variant: "destructive",
});
} else {
toast({
title: "Failed to update safe mode",
description:
error instanceof Error
? error.message
: "An unexpected error occurred.",
variant: "destructive",
});
}
}

docs/CLAUDE.md (new file, 44 additions)
View File

@@ -0,0 +1,44 @@
# Documentation Guidelines
## Block Documentation Manual Sections
When updating manual sections (`<!-- MANUAL: ... -->`) in block documentation files (e.g., `docs/integrations/basic.md`), follow these formats:
### How It Works Section
Provide a technical explanation of how the block functions:
- Describe the processing logic in 1-2 paragraphs
- Mention any validation, error handling, or edge cases
- Use code examples with backticks when helpful (e.g., `[[1, 2], [3, 4]]` becomes `[1, 2, 3, 4]`)
Example:
```markdown
<!-- MANUAL: how_it_works -->
The block iterates through each list in the input and extends a result list with all elements from each one. It processes lists in order, so `[[1, 2], [3, 4]]` becomes `[1, 2, 3, 4]`.
The block includes validation to ensure each item is actually a list. If a non-list value is encountered, the block outputs an error message instead of proceeding.
<!-- END MANUAL -->
```
### Use Case Section
Provide 3 practical use cases in this format:
- **Bold Heading**: Short one-sentence description
Example:
```markdown
<!-- MANUAL: use_case -->
**Paginated API Merging**: Combine results from multiple API pages into a single list for batch processing or display.
**Parallel Task Aggregation**: Merge outputs from parallel workflow branches that each produce a list of results.
**Multi-Source Data Collection**: Combine data collected from different sources (like multiple RSS feeds or API endpoints) into one unified list.
<!-- END MANUAL -->
```
### Style Guidelines
- Keep descriptions concise and action-oriented
- Focus on practical, real-world scenarios
- Use consistent terminology with other blocks
- Avoid overly technical jargon unless necessary

View File

@@ -31,6 +31,7 @@ Below is a comprehensive list of all available blocks, categorized by their prim
| [Agent Time Input](basic.md#agent-time-input) | Block for time input |
| [Agent Toggle Input](basic.md#agent-toggle-input) | Block for boolean toggle input |
| [Block Installation](basic.md#block-installation) | Given a code string, this block allows the verification and installation of a block code into the system |
| [Concatenate Lists](basic.md#concatenate-lists) | Concatenates multiple lists into a single list |
| [Dictionary Is Empty](basic.md#dictionary-is-empty) | Checks if a dictionary is empty |
| [File Store](basic.md#file-store) | Stores the input file in the temporary directory |
| [Find In Dictionary](basic.md#find-in-dictionary) | A block that looks up a value in a dictionary, list, or object by key or index and returns the corresponding value |

View File

@@ -634,6 +634,42 @@ This enables extensibility by allowing custom blocks to be added without modifyi
---
## Concatenate Lists
### What it is
Concatenates multiple lists into a single list. All elements from all input lists are combined in order.
### How it works
<!-- MANUAL: how_it_works -->
The block iterates through each list in the input and extends a result list with all elements from each one. It processes lists in order, so `[[1, 2], [3, 4]]` becomes `[1, 2, 3, 4]`.
The block includes validation to ensure each item is actually a list. If a non-list value (like a string or number) is encountered, the block outputs an error message instead of proceeding. None values are skipped automatically.
<!-- END MANUAL -->
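To make the behavior above concrete, here is a minimal Python sketch of the concatenation logic as described (an illustration only, not the block's actual implementation; the function name and the raised exception are hypothetical):
```python
from typing import Any, Optional

def concatenate_lists(lists: list[Optional[list[Any]]]) -> list[Any]:
    """Combine all input lists into one list, preserving order."""
    result: list[Any] = []
    for item in lists:
        if item is None:
            # None entries are skipped, mirroring the behavior described above
            continue
        if not isinstance(item, list):
            # The real block reports this via its error output; the sketch raises instead
            raise TypeError(f"Expected a list, got {type(item).__name__}")
        result.extend(item)
    return result

# [[1, 2], [3, 4]] becomes [1, 2, 3, 4]; a None entry is ignored
assert concatenate_lists([[1, 2], None, [3, 4]]) == [1, 2, 3, 4]
```
In the block itself, the invalid-input case feeds the `error` output listed below rather than raising an exception.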
### Inputs
| Input | Description | Type | Required |
|-------|-------------|------|----------|
| lists | A list of lists to concatenate together. All lists will be combined in order into a single list. | List[List[Any]] | Yes |
### Outputs
| Output | Description | Type |
|--------|-------------|------|
| error | Error message if concatenation failed due to invalid input types. | str |
| concatenated_list | The concatenated list containing all elements from all input lists in order. | List[Any] |
### Possible use case
<!-- MANUAL: use_case -->
**Paginated API Merging**: Combine results from multiple API pages into a single list for batch processing or display.
**Parallel Task Aggregation**: Merge outputs from parallel workflow branches that each produce a list of results.
**Multi-Source Data Collection**: Combine data collected from different sources (like multiple RSS feeds or API endpoints) into one unified list.
<!-- END MANUAL -->
---
## Dictionary Is Empty
### What it is