AutoGPT/autogpt_platform/backend/test/conftest.py
Reinier van der Leer 296eee0b4f feat(platform/library): Library v2 > Agent Runs page (#9051)
- Resolves #8780
- Part of #8774

### Changes 🏗️

- Add new UI components
- Add `/agents/[id]` page, with sub-components:
  - `AgentRunsSelectorList`
    - `AgentRunSummaryCard`
      - `AgentRunStatusChip`
  - `AgentRunDetailsView`
  - `AgentRunDraftView`
  - `AgentScheduleDetailsView`

Backend improvements:
- Improve output of execution-related API endpoints: return a single
`GraphExecution` instead of `NodeExecutionResult[]` (sketched below)
- Reduce log spam from Prisma in tests
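
For illustration, the response-model consolidation could look roughly like the sketch below. The field names and `pydantic` shapes here are assumptions for the sake of the example, not the actual AutoGPT platform models:

```python
from datetime import datetime
from typing import Any, Optional

from pydantic import BaseModel


class NodeExecutionResult(BaseModel):
    """Per-node record; endpoints previously returned a flat list of these."""

    node_id: str
    status: str
    output_data: dict[str, Any] = {}


class GraphExecution(BaseModel):
    """Aggregate run object an endpoint can return instead, giving the
    frontend run-level status and I/O in a single payload."""

    id: str
    status: str
    started_at: Optional[datetime] = None
    ended_at: Optional[datetime] = None
    inputs: dict[str, Any] = {}
    outputs: dict[str, Any] = {}
    node_executions: list[NodeExecutionResult] = []
```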

General frontend improvements:
- Hide nav link names on smaller screens to prevent navbar overflow
- Clean up styling and fix sizing of `agptui/Button`

Technical frontend improvements:
- Fix tailwind config size increments
- Rename `font-poppin` -> `font-poppins`
- Clean up component implementations and usages
  - Yeet all occurrences of `variant="default"`
- Remove `default` button variant as duplicate of `outline`; make
`outline` the default
- Fix minor typing issues

DX:
- Add frontend type-check step to `pre-commit` config
- Fix logging setup in conftest.py

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - `/agents/[id]` (new)
    - Go to page -> list of runs loads
    - Create new run -> runs; all I/O is visible
    - Click "Run again" -> runs again with same input
  - `/monitoring` (existing)
    - Go to page -> everything loads
    - Selecting agents and agent runs works

---------

Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-02-19 22:07:03 +00:00


import logging
import os

import pytest

from backend.util.logging import configure_logging

# NOTE: You can run tests with --log-cli-level=INFO to see the logs

# Set up logging
configure_logging()
logger = logging.getLogger(__name__)

# Reduce Prisma log spam unless PRISMA_DEBUG is set
if not os.getenv("PRISMA_DEBUG"):
    prisma_logger = logging.getLogger("prisma")
    prisma_logger.setLevel(logging.INFO)


@pytest.fixture(scope="session")
async def server():
    from backend.util.test import SpinTestServer

    async with SpinTestServer() as server:
        yield server


@pytest.fixture(scope="session", autouse=True)
async def graph_cleanup(server):
    created_graph_ids = []
    original_create_graph = server.agent_server.test_create_graph

    async def create_graph_wrapper(*args, **kwargs):
        created_graph = await original_create_graph(*args, **kwargs)
        # Extract user_id correctly
        user_id = kwargs.get("user_id", args[2] if len(args) > 2 else None)
        created_graph_ids.append((created_graph.id, user_id))
        return created_graph

    try:
        server.agent_server.test_create_graph = create_graph_wrapper
        yield  # This runs the test function
    finally:
        server.agent_server.test_create_graph = original_create_graph
        # Delete the created graphs and assert they were deleted
        for graph_id, user_id in created_graph_ids:
            if user_id:
                resp = await server.agent_server.test_delete_graph(graph_id, user_id)
                num_deleted = resp["version_counts"]
                assert num_deleted > 0, f"Graph {graph_id} was not deleted."
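
For context, a test relying on these fixtures needs no setup or teardown of its own; a minimal sketch (assuming `pytest-asyncio` drives the async fixtures, with a placeholder body) could look like:

```python
import pytest


@pytest.mark.asyncio
async def test_graph_lifecycle(server):
    # `server` is the session-scoped SpinTestServer started in conftest.py.
    # Any graph created via server.agent_server.test_create_graph is tracked
    # by the autouse graph_cleanup fixture and deleted once the test ends.
    ...
```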