mirror of
https://github.com/Significant-Gravitas/AutoGPT.git
synced 2026-01-10 07:38:04 -05:00
## Summary

Fix a critical bug where deleted agents continue running scheduled and triggered executions indefinitely, consuming credits without user control.

## Problem

When agents are deleted from user libraries, their schedules and webhook triggers remain active, leading to:

- ❌ Uncontrolled resource consumption
- ❌ "Unknown agent" executions that charge credits
- ❌ No way for users to stop orphaned executions
- ❌ Accumulation of orphaned database records

## Solution

### 1. Prevention: Library Validation Before Execution

- Add `is_graph_in_user_library()` function with efficient database queries
- Validate graph accessibility before all executions in `validate_and_construct_node_execution_input()`
- Use the specific `GraphNotInLibraryError` for clear error handling

### 2. Cleanup: Remove Schedules & Webhooks on Deletion

- Enhance `delete_library_agent()` to clean up associated schedules and webhooks
- Comprehensive cleanup functions for both scheduled and triggered executions
- Proper database transaction handling

### 3. Error-Based Cleanup: Handle Existing Orphaned Resources

- Catch `GraphNotInLibraryError` in scheduler and webhook handlers
- Automatically clean up orphaned resources when execution fails
- Graceful degradation without breaking existing workflows

### 4. Migration: Clean Up Historical Orphans

- SQL migration to remove existing orphaned schedules and webhooks
- Performance index for faster cleanup queries
- Proper logging and error handling

## Key Changes

### Core Library Validation

```python
# backend/data/graph.py - Single source of truth
async def is_graph_in_user_library(
    graph_id: str, user_id: str, graph_version: Optional[int] = None
) -> bool:
    where_clause = {
        "userId": user_id,
        "agentGraphId": graph_id,
        "isDeleted": False,
        "isArchived": False,
    }
    if graph_version is not None:
        where_clause["agentGraphVersion"] = graph_version
    count = await LibraryAgent.prisma().count(where=where_clause)
    return count > 0
```

### Enhanced Agent Deletion

```python
# backend/server/v2/library/db.py
async def delete_library_agent(
    library_agent_id: str, user_id: str, soft_delete: bool = True
) -> None:
    # ... existing deletion logic ...
    await _cleanup_schedules_for_graph(graph_id=graph_id, user_id=user_id)
    await _cleanup_webhooks_for_graph(graph_id=graph_id, user_id=user_id)
```

### Execution Prevention

```python
# backend/executor/utils.py
if not await gdb.is_graph_in_user_library(
    graph_id=graph_id, user_id=user_id, graph_version=graph.version
):
    raise GraphNotInLibraryError(
        f"Graph #{graph_id} is not accessible in your library"
    )
```

### Error-Based Cleanup

```python
# backend/executor/scheduler.py & backend/server/integrations/router.py
except GraphNotInLibraryError:
    logger.warning(f"Execution blocked for deleted/archived graph {graph_id}")
    await _cleanup_orphaned_resources_for_graph(graph_id, user_id)
```

## Technical Implementation

### Database Efficiency

- Use `count()` instead of `find_first()` for faster queries
- Add performance index: `idx_library_agent_user_graph_active`
- Follow existing `prisma.is_connected()` patterns

### Error Handling Hierarchy

- **`GraphNotInLibraryError`**: Specific exception for deleted/archived graphs
- **`NotAuthorizedError`**: Generic authorization errors (preserved for user ID mismatches)
- Clear error messages for better debugging

### Code Organization

- Single source of truth for library validation in `backend/data/graph.py`
- Import from the centralized location to avoid duplication
- Top-level imports following codebase conventions

## Testing & Validation

### Functional Testing

- ✅ Library validation prevents execution of deleted agents
- ✅ Cleanup functions remove schedules and webhooks properly
- ✅ Error-based cleanup handles orphaned resources gracefully
- ✅ Migration removes existing orphaned records

### Integration Testing

- ✅ All existing tests pass (including `test_store_listing_graph`)
- ✅ No breaking changes to existing functionality
- ✅ Proper error propagation and handling

### Performance Testing

- ✅ Efficient database queries with proper indexing
- ✅ Minimal overhead for normal execution flows
- ✅ Cleanup operations don't impact performance

## Impact

### User Experience

- 🎯 **Immediate**: Deleted agents stop running automatically
- 🎯 **Ongoing**: No more unexpected credit charges from orphaned executions
- 🎯 **Cleanup**: Historical orphaned resources are removed

### System Reliability

- 🔒 **Security**: Users can only execute agents they have access to
- 🧹 **Cleanup**: Automatic removal of orphaned database records
- 📈 **Performance**: Efficient validation with minimal overhead

### Developer Experience

- 🎯 **Clear Errors**: Specific exception types for better debugging
- 🔧 **Maintainable**: Centralized library validation logic
- 📚 **Documented**: Comprehensive error handling patterns

## Files Modified

- `backend/data/graph.py` - Library validation function
- `backend/server/v2/library/db.py` - Enhanced agent deletion with cleanup
- `backend/executor/utils.py` - Execution validation and prevention
- `backend/executor/scheduler.py` - Error-based cleanup for schedules
- `backend/server/integrations/router.py` - Error-based cleanup for webhooks
- `backend/util/exceptions.py` - Specific error type for deleted graphs
- `migrations/20251023000000_cleanup_orphaned_schedules_and_webhooks/migration.sql` - Historical cleanup

## Breaking Changes

None. All changes are backward compatible and preserve existing functionality.

## Follow-up Tasks

- [ ] Monitor cleanup effectiveness in production
- [ ] Consider adding metrics for orphaned resource detection
- [ ] Potential optimization of cleanup batch operations

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
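Taken together, the prevention check (section 1) and the error-based cleanup (section 3) can be sketched as a single flow. This is a minimal in-memory illustration of the pattern only, not the actual backend code: `LibraryStore`, `Scheduler`, and their methods are hypothetical stand-ins for the Prisma-backed `LibraryAgent` queries and the real scheduler integration.

```python
class GraphNotInLibraryError(Exception):
    """Raised when a graph is no longer accessible in the user's library."""


class LibraryStore:
    """In-memory stand-in for the LibraryAgent table."""

    def __init__(self):
        # (user_id, graph_id) -> {"isDeleted": bool, "isArchived": bool}
        self._rows = {}

    def add(self, user_id, graph_id):
        self._rows[(user_id, graph_id)] = {"isDeleted": False, "isArchived": False}

    def soft_delete(self, user_id, graph_id):
        self._rows[(user_id, graph_id)]["isDeleted"] = True

    def is_graph_in_user_library(self, graph_id, user_id):
        # Mirrors the count()-based check: only active, non-archived rows count.
        row = self._rows.get((user_id, graph_id))
        return row is not None and not row["isDeleted"] and not row["isArchived"]


class Scheduler:
    """Stand-in scheduler: validate before running, clean up on failure."""

    def __init__(self, library):
        self.library = library
        self.schedules = set()  # (user_id, graph_id) pairs with active schedules

    def run_scheduled(self, user_id, graph_id):
        try:
            # Prevention: refuse to execute graphs outside the user's library.
            if not self.library.is_graph_in_user_library(graph_id, user_id):
                raise GraphNotInLibraryError(f"Graph #{graph_id} is not accessible")
            return "executed"
        except GraphNotInLibraryError:
            # Error-based cleanup: remove the orphaned schedule instead of
            # letting it fire (and charge credits) forever.
            self.schedules.discard((user_id, graph_id))
            return "cleaned_up"
```

The key property, demonstrated below, is that the first scheduled run after deletion both fails safely and removes the orphaned schedule, so there is no second run.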
545 lines
18 KiB
Python
import logging

import fastapi.responses
import pytest

import backend.server.v2.library.model
import backend.server.v2.store.model
from backend.blocks.basic import StoreValueBlock
from backend.blocks.data_manipulation import FindInDictionaryBlock
from backend.blocks.io import AgentInputBlock
from backend.blocks.maths import CalculatorBlock, Operation
from backend.data import execution, graph
from backend.data.model import User
from backend.server.model import CreateGraph
from backend.server.rest_api import AgentServer
from backend.usecases.sample import create_test_graph, create_test_user
from backend.util.test import SpinTestServer, wait_execution

logger = logging.getLogger(__name__)


async def create_graph(s: SpinTestServer, g: graph.Graph, u: User) -> graph.Graph:
    logger.info(f"Creating graph for user {u.id}")
    return await s.agent_server.test_create_graph(CreateGraph(graph=g), u.id)


async def execute_graph(
    agent_server: AgentServer,
    test_graph: graph.Graph,
    test_user: User,
    input_data: dict,
    num_execs: int = 4,
) -> str:
    logger.info(f"Executing graph {test_graph.id} for user {test_user.id}")
    logger.info(f"Input data: {input_data}")

    # --- Test adding new executions --- #
    graph_exec = await agent_server.test_execute_graph(
        user_id=test_user.id,
        graph_id=test_graph.id,
        graph_version=test_graph.version,
        node_input=input_data,
    )
    logger.info(f"Created execution with ID: {graph_exec.id}")

    # Execution queue should be empty
    logger.info("Waiting for execution to complete...")
    result = await wait_execution(test_user.id, graph_exec.id, 30)
    logger.info(f"Execution completed with {len(result)} results")
    assert len(result) == num_execs
    return graph_exec.id


async def assert_sample_graph_executions(
    test_graph: graph.Graph,
    test_user: User,
    graph_exec_id: str,
):
    logger.info(f"Checking execution results for graph {test_graph.id}")
    graph_run = await execution.get_graph_execution(
        test_user.id, graph_exec_id, include_node_executions=True
    )
    assert isinstance(graph_run, execution.GraphExecutionWithNodes)

    output_list = [{"result": ["Hello"]}, {"result": ["World"]}]
    input_list = [
        {
            "name": "input_1",
            "value": "Hello",
        },
        {
            "name": "input_2",
            "value": "World",
            "description": "This is my description of this parameter",
        },
    ]

    # First StoreValueBlock execution
    node_exec = graph_run.node_executions[0]
    logger.info(f"Checking first StoreValueBlock execution: {node_exec}")
    assert node_exec.status == execution.ExecutionStatus.COMPLETED
    assert node_exec.graph_exec_id == graph_exec_id
    assert (
        node_exec.output_data in output_list
    ), f"Output data: {node_exec.output_data} and {output_list}"
    assert (
        node_exec.input_data in input_list
    ), f"Input data: {node_exec.input_data} and {input_list}"
    assert node_exec.node_id in [test_graph.nodes[0].id, test_graph.nodes[1].id]

    # Second StoreValueBlock execution
    node_exec = graph_run.node_executions[1]
    logger.info(f"Checking second StoreValueBlock execution: {node_exec}")
    assert node_exec.status == execution.ExecutionStatus.COMPLETED
    assert node_exec.graph_exec_id == graph_exec_id
    assert (
        node_exec.output_data in output_list
    ), f"Output data: {node_exec.output_data} and {output_list}"
    assert (
        node_exec.input_data in input_list
    ), f"Input data: {node_exec.input_data} and {input_list}"
    assert node_exec.node_id in [test_graph.nodes[0].id, test_graph.nodes[1].id]

    # FillTextTemplateBlock execution
    node_exec = graph_run.node_executions[2]
    logger.info(f"Checking FillTextTemplateBlock execution: {node_exec}")
    assert node_exec.status == execution.ExecutionStatus.COMPLETED
    assert node_exec.graph_exec_id == graph_exec_id
    assert node_exec.output_data == {"output": ["Hello, World!!!"]}
    assert node_exec.input_data == {
        "format": "{{a}}, {{b}}{{c}}",
        "values": {"a": "Hello", "b": "World", "c": "!!!"},
        "values_#_a": "Hello",
        "values_#_b": "World",
        "values_#_c": "!!!",
    }
    assert node_exec.node_id == test_graph.nodes[2].id

    # PrintToConsoleBlock execution
    node_exec = graph_run.node_executions[3]
    logger.info(f"Checking PrintToConsoleBlock execution: {node_exec}")
    assert node_exec.status == execution.ExecutionStatus.COMPLETED
    assert node_exec.graph_exec_id == graph_exec_id
    assert node_exec.output_data == {"output": ["Hello, World!!!"]}
    assert node_exec.input_data == {"input": "Hello, World!!!"}
    assert node_exec.node_id == test_graph.nodes[3].id


@pytest.mark.asyncio(loop_scope="session")
async def test_agent_execution(server: SpinTestServer):
    logger.info("Starting test_agent_execution")
    test_user = await create_test_user()
    test_graph = await create_graph(server, create_test_graph(), test_user)
    data = {"input_1": "Hello", "input_2": "World"}
    graph_exec_id = await execute_graph(
        server.agent_server,
        test_graph,
        test_user,
        data,
        4,
    )
    await assert_sample_graph_executions(test_graph, test_user, graph_exec_id)
    logger.info("Completed test_agent_execution")


@pytest.mark.asyncio(loop_scope="session")
async def test_input_pin_always_waited(server: SpinTestServer):
    """
    This test asserts that a linked input pin is always waited on before execution:
    even when a default value is defined on that pin, the default must be ignored.

    Test scenario:
    StoreValueBlock1
            \\ input
             >------- FindInDictionaryBlock | input_default: key: "", input: {}
            // key
    StoreValueBlock2
    """
    logger.info("Starting test_input_pin_always_waited")
    nodes = [
        graph.Node(
            block_id=StoreValueBlock().id,
            input_default={"input": {"key1": "value1", "key2": "value2"}},
        ),
        graph.Node(
            block_id=StoreValueBlock().id,
            input_default={"input": "key2"},
        ),
        graph.Node(
            block_id=FindInDictionaryBlock().id,
            input_default={"key": "", "input": {}},
        ),
    ]
    links = [
        graph.Link(
            source_id=nodes[0].id,
            sink_id=nodes[2].id,
            source_name="output",
            sink_name="input",
        ),
        graph.Link(
            source_id=nodes[1].id,
            sink_id=nodes[2].id,
            source_name="output",
            sink_name="key",
        ),
    ]
    test_graph = graph.Graph(
        name="TestGraph",
        description="Test graph",
        nodes=nodes,
        links=links,
    )
    test_user = await create_test_user()
    test_graph = await create_graph(server, test_graph, test_user)
    graph_exec_id = await execute_graph(
        server.agent_server, test_graph, test_user, {}, 3
    )

    logger.info("Checking execution results")
    graph_exec = await execution.get_graph_execution(
        test_user.id, graph_exec_id, include_node_executions=True
    )
    assert isinstance(graph_exec, execution.GraphExecutionWithNodes)
    assert len(graph_exec.node_executions) == 3
    # FindInDictionaryBlock must wait for both linked pins, ignoring its defaults,
    # and extract "key2" from {"key1": "value1", "key2": "value2"}.
    assert graph_exec.node_executions[2].status == execution.ExecutionStatus.COMPLETED
    assert graph_exec.node_executions[2].output_data == {"output": ["value2"]}
    logger.info("Completed test_input_pin_always_waited")


@pytest.mark.asyncio(loop_scope="session")
async def test_static_input_link_on_graph(server: SpinTestServer):
    """
    This test asserts the behaviour of a static (reusable) input link.

    Test scenario:
    *StoreValueBlock1*===a=========\\
    *StoreValueBlock2*===a=====\\   ||
    *StoreValueBlock3*===a===*MathBlock*====b / static====*StoreValueBlock5*
    *StoreValueBlock4*=========================================//

    In this test, three inputs will be waiting on the MathBlock input pin `a`.
    Later, another output is produced on input pin `b`, which is a static link;
    this single input completes all three pending executions.
    """
    logger.info("Starting test_static_input_link_on_graph")
    nodes = [
        graph.Node(block_id=StoreValueBlock().id, input_default={"input": 4}),  # a
        graph.Node(block_id=StoreValueBlock().id, input_default={"input": 4}),  # a
        graph.Node(block_id=StoreValueBlock().id, input_default={"input": 4}),  # a
        graph.Node(block_id=StoreValueBlock().id, input_default={"input": 5}),  # b
        graph.Node(block_id=StoreValueBlock().id),
        graph.Node(
            block_id=CalculatorBlock().id,
            input_default={"operation": Operation.ADD.value},
        ),
    ]
    links = [
        graph.Link(
            source_id=nodes[0].id,
            sink_id=nodes[5].id,
            source_name="output",
            sink_name="a",
        ),
        graph.Link(
            source_id=nodes[1].id,
            sink_id=nodes[5].id,
            source_name="output",
            sink_name="a",
        ),
        graph.Link(
            source_id=nodes[2].id,
            sink_id=nodes[5].id,
            source_name="output",
            sink_name="a",
        ),
        graph.Link(
            source_id=nodes[3].id,
            sink_id=nodes[4].id,
            source_name="output",
            sink_name="input",
        ),
        graph.Link(
            source_id=nodes[4].id,
            sink_id=nodes[5].id,
            source_name="output",
            sink_name="b",
            is_static=True,  # This is the static link under test.
        ),
    ]
    test_graph = graph.Graph(
        name="TestGraph",
        description="Test graph",
        nodes=nodes,
        links=links,
    )
    test_user = await create_test_user()
    test_graph = await create_graph(server, test_graph, test_user)
    graph_exec_id = await execute_graph(
        server.agent_server, test_graph, test_user, {}, 8
    )
    logger.info("Checking execution results")
    graph_exec = await execution.get_graph_execution(
        test_user.id, graph_exec_id, include_node_executions=True
    )
    assert isinstance(graph_exec, execution.GraphExecutionWithNodes)
    assert len(graph_exec.node_executions) == 8
    # The last 3 executions will each compute a + b = 4 + 5 = 9
    for i, exec_data in enumerate(graph_exec.node_executions[-3:]):
        logger.info(f"Checking execution {i + 1} of last 3: {exec_data}")
        assert exec_data.status == execution.ExecutionStatus.COMPLETED
        assert exec_data.output_data == {"result": [9]}
    logger.info("Completed test_static_input_link_on_graph")


@pytest.mark.asyncio(loop_scope="session")
async def test_execute_preset(server: SpinTestServer):
    """
    Test executing a preset.

    This test ensures that:
    1. A preset can be successfully executed
    2. The execution results are correct

    Args:
        server (SpinTestServer): The test server instance.
    """
    # Create test graph and user
    nodes = [
        graph.Node(  # 0
            block_id=AgentInputBlock().id,
            input_default={"name": "dictionary"},
        ),
        graph.Node(  # 1
            block_id=AgentInputBlock().id,
            input_default={"name": "selected_value"},
        ),
        graph.Node(  # 2
            block_id=StoreValueBlock().id,
            input_default={"input": {"key1": "Hi", "key2": "Everyone"}},
        ),
        graph.Node(  # 3
            block_id=FindInDictionaryBlock().id,
            input_default={"key": "", "input": {}},
        ),
    ]
    links = [
        graph.Link(
            source_id=nodes[0].id,
            sink_id=nodes[2].id,
            source_name="result",
            sink_name="input",
        ),
        graph.Link(
            source_id=nodes[1].id,
            sink_id=nodes[3].id,
            source_name="result",
            sink_name="key",
        ),
        graph.Link(
            source_id=nodes[2].id,
            sink_id=nodes[3].id,
            source_name="output",
            sink_name="input",
        ),
    ]
    test_graph = graph.Graph(
        name="TestGraph",
        description="Test graph",
        nodes=nodes,
        links=links,
    )
    test_user = await create_test_user()
    test_graph = await create_graph(server, test_graph, test_user)

    # Create preset with initial values
    preset = backend.server.v2.library.model.LibraryAgentPresetCreatable(
        name="Test Preset With Clash",
        description="Test preset with clashing input values",
        graph_id=test_graph.id,
        graph_version=test_graph.version,
        inputs={
            "dictionary": {"key1": "Hello", "key2": "World"},
            "selected_value": "key2",
        },
        credentials={},
        is_active=True,
    )
    created_preset = await server.agent_server.test_create_preset(preset, test_user.id)

    # Execute the preset (no input overrides)
    result = await server.agent_server.test_execute_preset(
        preset_id=created_preset.id,
        user_id=test_user.id,
    )

    # Verify execution
    assert result is not None
    graph_exec_id = result.id

    # Wait for execution to complete
    executions = await wait_execution(test_user.id, graph_exec_id)
    assert len(executions) == 4

    # FindInDictionaryBlock should wait for its input pins and then extract
    # "key2" from {"key1": "Hello", "key2": "World"}.
    assert executions[3].status == execution.ExecutionStatus.COMPLETED
    assert executions[3].output_data == {"output": ["World"]}


@pytest.mark.asyncio(loop_scope="session")
async def test_execute_preset_with_clash(server: SpinTestServer):
    """
    Test executing a preset with clashing input data.
    """
    # Create test graph and user
    nodes = [
        graph.Node(  # 0
            block_id=AgentInputBlock().id,
            input_default={"name": "dictionary"},
        ),
        graph.Node(  # 1
            block_id=AgentInputBlock().id,
            input_default={"name": "selected_value"},
        ),
        graph.Node(  # 2
            block_id=StoreValueBlock().id,
            input_default={"input": {"key1": "Hi", "key2": "Everyone"}},
        ),
        graph.Node(  # 3
            block_id=FindInDictionaryBlock().id,
            input_default={"key": "", "input": {}},
        ),
    ]
    links = [
        graph.Link(
            source_id=nodes[0].id,
            sink_id=nodes[2].id,
            source_name="result",
            sink_name="input",
        ),
        graph.Link(
            source_id=nodes[1].id,
            sink_id=nodes[3].id,
            source_name="result",
            sink_name="key",
        ),
        graph.Link(
            source_id=nodes[2].id,
            sink_id=nodes[3].id,
            source_name="output",
            sink_name="input",
        ),
    ]
    test_graph = graph.Graph(
        name="TestGraph",
        description="Test graph",
        nodes=nodes,
        links=links,
    )
    test_user = await create_test_user()
    test_graph = await create_graph(server, test_graph, test_user)

    # Create preset with initial values
    preset = backend.server.v2.library.model.LibraryAgentPresetCreatable(
        name="Test Preset With Clash",
        description="Test preset with clashing input values",
        graph_id=test_graph.id,
        graph_version=test_graph.version,
        inputs={
            "dictionary": {"key1": "Hello", "key2": "World"},
            "selected_value": "key2",
        },
        credentials={},
        is_active=True,
    )
    created_preset = await server.agent_server.test_create_preset(preset, test_user.id)

    # Execute preset with overriding values
    result = await server.agent_server.test_execute_preset(
        preset_id=created_preset.id,
        inputs={"selected_value": "key1"},
        user_id=test_user.id,
    )

    # Verify execution
    assert result is not None, "Result must not be None"
    graph_exec_id = result.id

    # Wait for execution to complete
    executions = await wait_execution(test_user.id, graph_exec_id)
    assert len(executions) == 4

    # The override changes the selected key to "key1", so FindInDictionaryBlock
    # extracts "key1" from {"key1": "Hello", "key2": "World"}.
    assert executions[3].status == execution.ExecutionStatus.COMPLETED
    assert executions[3].output_data == {"output": ["Hello"]}


@pytest.mark.asyncio(loop_scope="session")
async def test_store_listing_graph(server: SpinTestServer):
    logger.info("Starting test_store_listing_graph")
    test_user = await create_test_user()
    test_graph = await create_graph(server, create_test_graph(), test_user)

    store_submission_request = backend.server.v2.store.model.StoreSubmissionRequest(
        agent_id=test_graph.id,
        agent_version=test_graph.version,
        slug=test_graph.id,
        name="Test name",
        sub_heading="Test sub heading",
        video_url=None,
        image_urls=[],
        description="Test description",
        categories=[],
    )

    store_listing = await server.agent_server.test_create_store_listing(
        store_submission_request, test_user.id
    )

    assert not isinstance(
        store_listing, fastapi.responses.JSONResponse
    ), "Failed to create store listing"

    slv_id = store_listing.store_listing_version_id
    assert slv_id is not None

    admin_user = await create_test_user(alt_user=True)
    await server.agent_server.test_review_store_listing(
        backend.server.v2.store.model.ReviewSubmissionRequest(
            store_listing_version_id=slv_id,
            is_approved=True,
            comments="Test comments",
        ),
        user_id=admin_user.id,
    )

    # Add the approved store listing to the admin user's library so they can execute it
    from backend.server.v2.library.db import add_store_agent_to_library

    await add_store_agent_to_library(
        store_listing_version_id=slv_id, user_id=admin_user.id
    )

    alt_test_user = admin_user

    data = {"input_1": "Hello", "input_2": "World"}
    graph_exec_id = await execute_graph(
        server.agent_server,
        test_graph,
        alt_test_user,
        data,
        4,
    )

    await assert_sample_graph_executions(test_graph, alt_test_user, graph_exec_id)
    logger.info("Completed test_store_listing_graph")