mirror of https://github.com/Significant-Gravitas/AutoGPT.git
synced 2026-01-08 22:58:01 -05:00
feat(backend): snapshot test responses (#10039)
This pull request introduces a comprehensive backend testing guide and adds new tests for analytics logging and various API endpoints, focusing on snapshot testing. It also includes the corresponding snapshot files for these tests. Below are the most significant changes:

### Documentation Updates

* Added a detailed `TESTING.md` file to the backend, providing a guide for running tests, snapshot testing, writing API route tests, and best practices. It includes examples for mocking, fixtures, and CI/CD integration.

### Analytics Logging Tests

* Implemented tests for logging raw metrics and analytics in `analytics_test.py`, covering success scenarios, various input values, invalid requests, and complex nested data. These tests use snapshot testing for response validation.
* Added snapshot files for the analytics logging tests, including responses for success cases, various metric values, and complex analytics data.

### Snapshot Files for API Endpoints

* Added snapshot files for various API endpoint tests, such as:
  - Graph-related operations (`graphs_get_single_response`, `graphs_get_all_response`, `blocks_get_all_response`)
  - User-related operations (`auth_get_or_create_user_response`, `auth_update_email_response`)
  - Credit-related operations (`credits_get_balance_response`, `credits_get_auto_top_up_response`, `credits_top_up_request_response`)
  - Graph execution and deletion (`blocks_execute_response`, `graphs_delete_response`)

## Summary

- add pytest-snapshot to backend dev requirements
- snapshot server route response JSONs
- mention how to update stored snapshots

## Testing

- `poetry run format`
- `poetry run test`

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] run `poetry run test`

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
132 autogpt_platform/CLAUDE.md Normal file
@@ -0,0 +1,132 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Repository Overview

AutoGPT Platform is a monorepo containing:

- **Backend** (`/backend`): Python FastAPI server with async support
- **Frontend** (`/frontend`): Next.js React application
- **Shared Libraries** (`/autogpt_libs`): Common Python utilities

## Essential Commands

### Backend Development

```bash
# Install dependencies
cd backend && poetry install

# Run database migrations
poetry run prisma migrate dev

# Start all services (database, redis, rabbitmq)
docker compose up -d

# Run the backend server
poetry run serve

# Run tests
poetry run test

# Run a specific test
poetry run pytest path/to/test_file.py::test_function_name

# Lint and format
poetry run format  # Black + isort
poetry run lint    # ruff
```

More details can be found in TESTING.md.

#### Creating/Updating Snapshots

When you first write a test or when the expected output changes:

```bash
poetry run pytest path/to/test.py --snapshot-update
```

⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.

### Frontend Development

```bash
# Install dependencies
cd frontend && npm install

# Start development server
npm run dev

# Run E2E tests
npm run test

# Run Storybook for component development
npm run storybook

# Build production
npm run build

# Type checking
npm run type-check
```

## Architecture Overview

### Backend Architecture

- **API Layer**: FastAPI with REST and WebSocket endpoints
- **Database**: PostgreSQL with Prisma ORM, includes pgvector for embeddings
- **Queue System**: RabbitMQ for async task processing
- **Execution Engine**: Separate executor service processes agent workflows
- **Authentication**: JWT-based with Supabase integration

### Frontend Architecture

- **Framework**: Next.js App Router with React Server Components
- **State Management**: React hooks + Supabase client for real-time updates
- **Workflow Builder**: Visual graph editor using @xyflow/react
- **UI Components**: Radix UI primitives with Tailwind CSS styling
- **Feature Flags**: LaunchDarkly integration

### Key Concepts

1. **Agent Graphs**: Workflow definitions stored as JSON, executed by the backend
2. **Blocks**: Reusable components in `/backend/blocks/` that perform specific tasks
3. **Integrations**: OAuth and API connections stored per user
4. **Store**: Marketplace for sharing agent templates

### Testing Approach

- Backend uses pytest with snapshot testing for API responses
- Test files are colocated with source files (`*_test.py`)
- Frontend uses Playwright for E2E tests
- Component testing via Storybook

### Database Schema

Key models (defined in `/backend/schema.prisma`):

- `User`: Authentication and profile data
- `AgentGraph`: Workflow definitions with version control
- `AgentGraphExecution`: Execution history and results
- `AgentNode`: Individual nodes in a workflow
- `StoreListing`: Marketplace listings for sharing agents

### Environment Configuration

- Backend: `.env` file in `/backend`
- Frontend: `.env.local` file in `/frontend`
- Both require Supabase credentials and API keys for various services

### Common Development Tasks

**Adding a new block** (a sketch follows below):

1. Create a new file in `/backend/backend/blocks/`
2. Inherit from the `Block` base class
3. Define input/output schemas
4. Implement the `run` method
5. Register in the block registry
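To make the steps concrete, here is a minimal, hypothetical block. The module path `backend.data.block` and the exact `Block`/`BlockSchema`/`BlockOutput` signatures are assumptions inferred from the steps above, not verified against this commit:

```python
# Hypothetical sketch of a new block; base-class API and module path are assumed.
from backend.data.block import Block, BlockOutput, BlockSchema


class WordCountBlock(Block):
    """Counts whitespace-separated words in a piece of text."""

    class Input(BlockSchema):
        text: str  # input schema (step 3)

    class Output(BlockSchema):
        word_count: int  # output schema (step 3)

    def __init__(self):
        super().__init__(
            id="00000000-0000-0000-0000-000000000000",  # placeholder block ID
            input_schema=WordCountBlock.Input,
            output_schema=WordCountBlock.Output,
        )

    def run(self, input_data: Input, **kwargs) -> BlockOutput:
        # A block yields (output_name, value) pairs matching its Output schema (step 4)
        yield "word_count", len(input_data.text.split())
```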
**Modifying the API** (a route sketch follows below):

1. Update the route in `/backend/backend/server/routers/`
2. Add/update Pydantic models in the same directory
3. Write tests alongside the route file
4. Run `poetry run test` to verify
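For steps 1–2, a minimal sketch of a route plus its colocated Pydantic model; the `/health` path and `HealthResponse` model are invented for illustration:

```python
import fastapi
import pydantic

router = fastapi.APIRouter()


class HealthResponse(pydantic.BaseModel):
    # Pydantic response model kept next to the route (step 2)
    status: str


@router.get("/health", response_model=HealthResponse)
def get_health() -> HealthResponse:
    return HealthResponse(status="ok")
```

A colocated `*_test.py` would then exercise `/health` with `fastapi.testclient.TestClient` and a snapshot assertion, as described in TESTING.md.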
**Frontend feature development:**

1. Components go in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for new components
4. Test with Playwright if user-facing
237 autogpt_platform/backend/TESTING.md Normal file
@@ -0,0 +1,237 @@
# Backend Testing Guide

This guide covers testing practices for the AutoGPT Platform backend, with a focus on snapshot testing for API endpoints.

## Table of Contents

- [Overview](#overview)
- [Running Tests](#running-tests)
- [Snapshot Testing](#snapshot-testing)
- [Writing Tests for API Routes](#writing-tests-for-api-routes)
- [Best Practices](#best-practices)

## Overview

The backend uses pytest for testing with the following key libraries:

- `pytest` - Test framework
- `pytest-asyncio` - Async test support
- `pytest-mock` - Mocking support
- `pytest-snapshot` - Snapshot testing for API responses

## Running Tests

### Run all tests

```bash
poetry run test
```

### Run a specific test file

```bash
poetry run pytest path/to/test_file.py
```

### Run with verbose output

```bash
poetry run pytest -v
```

### Run with coverage

```bash
poetry run pytest --cov=backend
```

## Snapshot Testing

Snapshot testing captures the output of your code and compares it against previously saved snapshots. This is particularly useful for testing API responses.

### How Snapshot Testing Works

1. First run: creates snapshot files in `snapshots/` directories
2. Subsequent runs: compares output against saved snapshots
3. Changes detected: the test fails if output differs from the snapshot

### Creating/Updating Snapshots

When you first write a test or when the expected output changes:

```bash
poetry run pytest path/to/test.py --snapshot-update
```

⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.

### Snapshot Test Example

```python
import json

from pytest_snapshot.plugin import Snapshot


def test_api_endpoint(snapshot: Snapshot):
    # `client` is a fastapi.testclient.TestClient; see "Basic Structure" below
    response = client.get("/api/endpoint")

    # Snapshot the response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response.json(), indent=2, sort_keys=True),
        "endpoint_response",
    )
```

### Best Practices for Snapshots

1. **Use descriptive names**: `"user_list_response"` not `"response1"`
2. **Sort JSON keys**: ensures consistent snapshots
3. **Format JSON**: use `indent=2` for readable diffs
4. **Exclude dynamic data**: remove timestamps, IDs, etc. that change between runs

Example of excluding dynamic data:

```python
response_data = response.json()
# Remove dynamic fields for snapshot
response_data.pop("created_at", None)
response_data.pop("id", None)

snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(
    json.dumps(response_data, indent=2, sort_keys=True),
    "static_response_data",
)
```

## Writing Tests for API Routes

### Basic Structure

```python
import json

import fastapi
import fastapi.testclient
import pytest
from pytest_snapshot.plugin import Snapshot

from backend.server.v2.myroute import router

app = fastapi.FastAPI()
app.include_router(router)
client = fastapi.testclient.TestClient(app)


def test_endpoint_success(snapshot: Snapshot):
    response = client.get("/endpoint")
    assert response.status_code == 200

    # Test specific fields
    data = response.json()
    assert data["status"] == "success"

    # Snapshot the full response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(data, indent=2, sort_keys=True),
        "endpoint_success_response",
    )
```

### Testing with Authentication

```python
# Assumes the app/client setup from "Basic Structure" above, plus imports of
# the real dependencies being overridden, e.g.:
#   from autogpt_libs.auth.middleware import auth_middleware
#   from backend.server.utils import get_user_id


def override_auth_middleware():
    return {"sub": "test-user-id"}


def override_get_user_id():
    return "test-user-id"


app.dependency_overrides[auth_middleware] = override_auth_middleware
app.dependency_overrides[get_user_id] = override_get_user_id
```

### Mocking External Services

```python
def test_external_api_call(mocker, snapshot):
    # Mock external service
    mock_response = {"external": "data"}
    mocker.patch(
        "backend.services.external_api.call",
        return_value=mock_response,
    )

    response = client.post("/api/process")
    assert response.status_code == 200

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response.json(), indent=2, sort_keys=True),
        "process_with_external_response",
    )
```

## Best Practices

### 1. Test Organization

- Place tests next to the code: `routes.py` → `routes_test.py`
- Use descriptive test names: `test_create_user_with_invalid_email`
- Group related tests in classes when appropriate

### 2. Test Coverage

- Test the happy path and error cases
- Test edge cases (empty data, invalid formats)
- Test authentication and authorization

### 3. Snapshot Testing Guidelines

- Review all snapshot changes carefully
- Don't snapshot sensitive data
- Keep snapshots focused and minimal
- Update snapshots intentionally, not accidentally

### 4. Async Testing

- Use regular `def` for FastAPI TestClient tests
- Use `async def` with `@pytest.mark.asyncio` for testing async functions directly (see the sketch below)
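To make the distinction concrete, a minimal sketch of both styles. The `add_async` coroutine is invented for illustration, and `client` is the TestClient from "Basic Structure" above:

```python
import pytest


async def add_async(a: int, b: int) -> int:
    # Stand-in coroutine; in real code, import the async function under test
    return a + b


def test_endpoint_sync_style():
    # A plain `def` works through the TestClient even if the route handler is async
    response = client.get("/endpoint")
    assert response.status_code == 200


@pytest.mark.asyncio
async def test_async_function_directly():
    # Calling a coroutine directly requires an async test plus the asyncio marker
    assert await add_async(2, 3) == 5
```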
### 5. Fixtures

Create reusable fixtures for common test data:

```python
@pytest.fixture
def sample_user():
    return {
        "email": "test@example.com",
        "name": "Test User",
    }


def test_create_user(sample_user, snapshot):
    response = client.post("/users", json=sample_user)
    # ... test implementation
```

## CI/CD Integration

The GitHub Actions workflow automatically runs tests on:

- Pull requests
- Pushes to the main branch

Snapshot tests work in CI as follows (see the command sketch below):

1. Snapshot files are committed to the repository
2. CI compares test output against the committed snapshots
3. The build fails if snapshots don't match
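Concretely, the CI step just runs the normal test command without `--snapshot-update`, so any drift from the committed snapshots fails the build (a sketch; the actual workflow step may differ):

```bash
# CI step (sketch): compare against committed snapshots, read-only.
# Never pass --snapshot-update in CI; that would silently accept drift.
poetry run test
```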
## Troubleshooting

### Snapshot Mismatches

- Review the diff carefully
- If changes are expected: `poetry run pytest --snapshot-update`
- If changes are unexpected: fix the code causing the difference

### Async Test Issues

- Ensure async tests use `@pytest.mark.asyncio`
- Use `AsyncMock` for mocking async functions
- FastAPI's TestClient handles async automatically

### Import Errors

- Check that all dependencies are in `pyproject.toml`
- Run `poetry install` to ensure dependencies are installed
- Verify import paths are correct

## Summary

Snapshot testing provides a powerful way to ensure API responses remain consistent. Combined with traditional assertions, it creates a robust test suite that catches regressions while remaining maintainable.

Remember: good tests are as important as good code!
17 autogpt_platform/backend/backend/server/conftest.py Normal file
@@ -0,0 +1,17 @@
"""Common test fixtures for server tests."""

import pytest
from pytest_snapshot.plugin import Snapshot


@pytest.fixture
def configured_snapshot(snapshot: Snapshot) -> Snapshot:
    """Pre-configured snapshot fixture with standard settings."""
    snapshot.snapshot_dir = "snapshots"
    return snapshot


# Test ID constants
TEST_USER_ID = "test-user-id"
ADMIN_USER_ID = "admin-user-id"
TARGET_USER_ID = "target-user-id"
@@ -0,0 +1,139 @@
"""Example of analytics tests with improved error handling and assertions."""

import json
from unittest.mock import AsyncMock, Mock

import fastapi
import fastapi.testclient
import pytest_mock
from pytest_snapshot.plugin import Snapshot

import backend.server.routers.analytics as analytics_routes
from backend.server.conftest import TEST_USER_ID
from backend.server.test_helpers import (
    assert_error_response_structure,
    assert_mock_called_with_partial,
    assert_response_status,
    safe_parse_json,
)
from backend.server.utils import get_user_id

app = fastapi.FastAPI()
app.include_router(analytics_routes.router)

client = fastapi.testclient.TestClient(app)


def override_get_user_id() -> str:
    """Override get_user_id for testing"""
    return TEST_USER_ID


app.dependency_overrides[get_user_id] = override_get_user_id


def test_log_raw_metric_success_improved(
    mocker: pytest_mock.MockFixture,
    configured_snapshot: Snapshot,
) -> None:
    """Test successful raw metric logging with improved assertions."""
    # Mock the analytics function
    mock_result = Mock(id="metric-123-uuid")

    mock_log_metric = mocker.patch(
        "backend.data.analytics.log_raw_metric",
        new_callable=AsyncMock,
        return_value=mock_result,
    )

    request_data = {
        "metric_name": "page_load_time",
        "metric_value": 2.5,
        "data_string": "/dashboard",
    }

    response = client.post("/log_raw_metric", json=request_data)

    # Improved assertions with better error messages
    assert_response_status(response, 200, "Metric logging should succeed")
    response_data = safe_parse_json(response, "Metric response parsing")

    assert response_data == "metric-123-uuid", f"Unexpected response: {response_data}"

    # Verify the function was called with correct parameters
    assert_mock_called_with_partial(
        mock_log_metric,
        user_id=TEST_USER_ID,
        metric_name="page_load_time",
        metric_value=2.5,
        data_string="/dashboard",
    )

    # Snapshot test the response
    configured_snapshot.assert_match(
        json.dumps({"metric_id": response_data}, indent=2, sort_keys=True),
        "analytics_log_metric_success_improved",
    )


def test_log_raw_metric_invalid_request_improved() -> None:
    """Test invalid metric request with improved error assertions."""
    # Test missing required fields
    response = client.post("/log_raw_metric", json={})

    error_data = assert_error_response_structure(
        response, expected_status=422, expected_error_fields=["loc", "msg", "type"]
    )

    # Verify specific error details
    detail = error_data["detail"]
    assert isinstance(detail, list), "Error detail should be a list"
    assert len(detail) > 0, "Should have at least one error"

    # Check that required fields are mentioned in errors
    error_fields = [error["loc"][-1] for error in detail if "loc" in error]
    assert "metric_name" in error_fields, "Should report missing metric_name"
    assert "metric_value" in error_fields, "Should report missing metric_value"
    assert "data_string" in error_fields, "Should report missing data_string"


def test_log_raw_metric_type_validation_improved() -> None:
    """Test metric type validation with improved assertions."""
    invalid_requests = [
        {
            "data": {
                "metric_name": "test",
                "metric_value": "not_a_number",  # Invalid type
                "data_string": "test",
            },
            "expected_error": "Input should be a valid number",
        },
        {
            "data": {
                "metric_name": "",  # Empty string
                "metric_value": 1.0,
                "data_string": "test",
            },
            "expected_error": "String should have at least 1 character",
        },
        {
            "data": {
                "metric_name": "test",
                "metric_value": float("inf"),  # Infinity
                "data_string": "test",
            },
            "expected_error": "ensure this value is finite",
        },
    ]

    for test_case in invalid_requests:
        response = client.post("/log_raw_metric", json=test_case["data"])

        error_data = assert_error_response_structure(response, expected_status=422)

        # Check that expected error is in the response
        error_text = json.dumps(error_data)
        assert (
            test_case["expected_error"] in error_text
            or test_case["expected_error"].lower() in error_text.lower()
        ), f"Expected error '{test_case['expected_error']}' not found in: {error_text}"
@@ -0,0 +1,107 @@
"""Example of parametrized tests for analytics endpoints."""

import json
from unittest.mock import AsyncMock, Mock

import fastapi
import fastapi.testclient
import pytest
import pytest_mock
from pytest_snapshot.plugin import Snapshot

import backend.server.routers.analytics as analytics_routes
from backend.server.conftest import TEST_USER_ID
from backend.server.utils import get_user_id

app = fastapi.FastAPI()
app.include_router(analytics_routes.router)

client = fastapi.testclient.TestClient(app)


def override_get_user_id() -> str:
    """Override get_user_id for testing"""
    return TEST_USER_ID


app.dependency_overrides[get_user_id] = override_get_user_id


@pytest.mark.parametrize(
    "metric_value,metric_name,data_string,test_id",
    [
        (100, "api_calls_count", "external_api", "integer_value"),
        (0, "error_count", "no_errors", "zero_value"),
        (-5.2, "temperature_delta", "cooling", "negative_value"),
        (1.23456789, "precision_test", "float_precision", "float_precision"),
        (999999999, "large_number", "max_value", "large_number"),
        (0.0000001, "tiny_number", "min_value", "tiny_number"),
    ],
)
def test_log_raw_metric_values_parametrized(
    mocker: pytest_mock.MockFixture,
    configured_snapshot: Snapshot,
    metric_value: float,
    metric_name: str,
    data_string: str,
    test_id: str,
) -> None:
    """Test raw metric logging with various metric values using parametrize."""
    # Mock the analytics function
    mock_result = Mock(id=f"metric-{test_id}-uuid")

    mocker.patch(
        "backend.data.analytics.log_raw_metric",
        new_callable=AsyncMock,
        return_value=mock_result,
    )

    request_data = {
        "metric_name": metric_name,
        "metric_value": metric_value,
        "data_string": data_string,
    }

    response = client.post("/log_raw_metric", json=request_data)

    # Better error handling
    assert response.status_code == 200, f"Failed for {test_id}: {response.text}"
    response_data = response.json()

    # Snapshot test the response
    configured_snapshot.assert_match(
        json.dumps(
            {"metric_id": response_data, "test_case": test_id}, indent=2, sort_keys=True
        ),
        f"analytics_metric_{test_id}",
    )


@pytest.mark.parametrize(
    "invalid_data,expected_error",
    [
        ({}, "Field required"),  # Missing all fields
        ({"metric_name": "test"}, "Field required"),  # Missing metric_value
        (
            {"metric_name": "test", "metric_value": "not_a_number"},
            "Input should be a valid number",
        ),  # Invalid type
        (
            {"metric_name": "", "metric_value": 1.0, "data_string": "test"},
            "String should have at least 1 character",
        ),  # Empty name
    ],
)
def test_log_raw_metric_invalid_requests_parametrized(
    invalid_data: dict,
    expected_error: str,
) -> None:
    """Test invalid metric requests with parametrize."""
    response = client.post("/log_raw_metric", json=invalid_data)

    assert response.status_code == 422
    error_detail = response.json()
    assert "detail" in error_detail
    # Verify error message contains expected error
    error_text = json.dumps(error_detail)
    assert expected_error in error_text or expected_error.lower() in error_text.lower()
@@ -0,0 +1,281 @@
import json
from unittest.mock import AsyncMock, Mock

import fastapi
import fastapi.testclient
import pytest_mock
from pytest_snapshot.plugin import Snapshot

import backend.server.routers.analytics as analytics_routes
from backend.server.conftest import TEST_USER_ID
from backend.server.utils import get_user_id

app = fastapi.FastAPI()
app.include_router(analytics_routes.router)

client = fastapi.testclient.TestClient(app)


def override_get_user_id() -> str:
    """Override get_user_id for testing"""
    return TEST_USER_ID


app.dependency_overrides[get_user_id] = override_get_user_id


def test_log_raw_metric_success(
    mocker: pytest_mock.MockFixture,
    configured_snapshot: Snapshot,
) -> None:
    """Test successful raw metric logging"""

    # Mock the analytics function
    mock_result = Mock(id="metric-123-uuid")

    mock_log_metric = mocker.patch(
        "backend.data.analytics.log_raw_metric",
        new_callable=AsyncMock,
        return_value=mock_result,
    )

    request_data = {
        "metric_name": "page_load_time",
        "metric_value": 2.5,
        "data_string": "/dashboard",
    }

    response = client.post("/log_raw_metric", json=request_data)

    assert response.status_code == 200
    response_data = response.json()
    assert response_data == "metric-123-uuid"

    # Verify the function was called with correct parameters
    mock_log_metric.assert_called_once_with(
        user_id=TEST_USER_ID,
        metric_name="page_load_time",
        metric_value=2.5,
        data_string="/dashboard",
    )

    # Snapshot test the response
    configured_snapshot.assert_match(
        json.dumps({"metric_id": response.json()}, indent=2, sort_keys=True),
        "analytics_log_metric_success",
    )


def test_log_raw_metric_various_values(
    mocker: pytest_mock.MockFixture,
    configured_snapshot: Snapshot,
) -> None:
    """Test raw metric logging with various metric values"""

    # Mock the analytics function
    mock_result = Mock(id="metric-456-uuid")

    mocker.patch(
        "backend.data.analytics.log_raw_metric",
        new_callable=AsyncMock,
        return_value=mock_result,
    )

    # Test with integer value
    request_data = {
        "metric_name": "api_calls_count",
        "metric_value": 100,
        "data_string": "external_api",
    }

    response = client.post("/log_raw_metric", json=request_data)
    assert response.status_code == 200

    # Test with zero value
    request_data = {
        "metric_name": "error_count",
        "metric_value": 0,
        "data_string": "no_errors",
    }

    response = client.post("/log_raw_metric", json=request_data)
    assert response.status_code == 200

    # Test with negative value
    request_data = {
        "metric_name": "temperature_delta",
        "metric_value": -5.2,
        "data_string": "cooling",
    }

    response = client.post("/log_raw_metric", json=request_data)
    assert response.status_code == 200

    # Snapshot the last response
    configured_snapshot.assert_match(
        json.dumps({"metric_id": response.json()}, indent=2, sort_keys=True),
        "analytics_log_metric_various_values",
    )


def test_log_raw_analytics_success(
    mocker: pytest_mock.MockFixture,
    configured_snapshot: Snapshot,
) -> None:
    """Test successful raw analytics logging"""

    # Mock the analytics function
    mock_result = Mock(id="analytics-789-uuid")

    mock_log_analytics = mocker.patch(
        "backend.data.analytics.log_raw_analytics",
        new_callable=AsyncMock,
        return_value=mock_result,
    )

    request_data = {
        "type": "user_action",
        "data": {
            "action": "button_click",
            "button_id": "submit_form",
            "timestamp": "2023-01-01T00:00:00Z",
            "metadata": {
                "form_type": "registration",
                "fields_filled": 5,
            },
        },
        "data_index": "button_click_submit_form",
    }

    response = client.post("/log_raw_analytics", json=request_data)

    assert response.status_code == 200
    response_data = response.json()
    assert response_data == "analytics-789-uuid"

    # Verify the function was called with correct parameters
    mock_log_analytics.assert_called_once_with(
        TEST_USER_ID,
        "user_action",
        request_data["data"],
        "button_click_submit_form",
    )

    # Snapshot test the response
    configured_snapshot.assert_match(
        json.dumps({"analytics_id": response_data}, indent=2, sort_keys=True),
        "analytics_log_analytics_success",
    )


def test_log_raw_analytics_complex_data(
    mocker: pytest_mock.MockFixture,
    configured_snapshot: Snapshot,
) -> None:
    """Test raw analytics logging with complex nested data"""

    # Mock the analytics function
    mock_result = Mock(id="analytics-complex-uuid")

    mocker.patch(
        "backend.data.analytics.log_raw_analytics",
        new_callable=AsyncMock,
        return_value=mock_result,
    )

    request_data = {
        "type": "agent_execution",
        "data": {
            "agent_id": "agent_123",
            "execution_id": "exec_456",
            "status": "completed",
            "duration_ms": 3500,
            "nodes_executed": 15,
            "blocks_used": [
                {"block_id": "llm_block", "count": 3},
                {"block_id": "http_block", "count": 5},
                {"block_id": "code_block", "count": 2},
            ],
            "errors": [],
            "metadata": {
                "trigger": "manual",
                "user_tier": "premium",
                "environment": "production",
            },
        },
        "data_index": "agent_123_exec_456",
    }

    response = client.post("/log_raw_analytics", json=request_data)

    assert response.status_code == 200
    response_data = response.json()

    # Snapshot test the complex data structure
    configured_snapshot.assert_match(
        json.dumps(
            {
                "analytics_id": response_data,
                "logged_data": request_data["data"],
            },
            indent=2,
            sort_keys=True,
        ),
        "analytics_log_analytics_complex_data",
    )


def test_log_raw_metric_invalid_request() -> None:
    """Test raw metric logging with invalid request data"""
    # Missing required fields
    response = client.post("/log_raw_metric", json={})
    assert response.status_code == 422

    # Invalid metric_value type
    response = client.post(
        "/log_raw_metric",
        json={
            "metric_name": "test",
            "metric_value": "not_a_number",
            "data_string": "test",
        },
    )
    assert response.status_code == 422

    # Missing data_string
    response = client.post(
        "/log_raw_metric",
        json={
            "metric_name": "test",
            "metric_value": 1.0,
        },
    )
    assert response.status_code == 422


def test_log_raw_analytics_invalid_request() -> None:
    """Test raw analytics logging with invalid request data"""
    # Missing required fields
    response = client.post("/log_raw_analytics", json={})
    assert response.status_code == 422

    # Invalid data type (should be dict)
    response = client.post(
        "/log_raw_analytics",
        json={
            "type": "test",
            "data": "not_a_dict",
            "data_index": "test",
        },
    )
    assert response.status_code == 422

    # Missing data_index
    response = client.post(
        "/log_raw_analytics",
        json={
            "type": "test",
            "data": {"key": "value"},
        },
    )
    assert response.status_code == 422
391 autogpt_platform/backend/backend/server/routers/v1_test.py Normal file
@@ -0,0 +1,391 @@
import json
from unittest.mock import AsyncMock, Mock

import autogpt_libs.auth.depends
import autogpt_libs.auth.middleware  # needed for the auth_middleware override below
import fastapi
import fastapi.testclient
import pytest_mock
from pytest_snapshot.plugin import Snapshot

import backend.server.routers.v1 as v1_routes
from backend.data.credit import AutoTopUpConfig
from backend.data.graph import GraphModel
from backend.server.conftest import TEST_USER_ID
from backend.server.utils import get_user_id

app = fastapi.FastAPI()
app.include_router(v1_routes.v1_router)

client = fastapi.testclient.TestClient(app)


def override_auth_middleware(request: fastapi.Request) -> dict[str, str]:
    """Override auth middleware for testing"""
    return {"sub": TEST_USER_ID, "role": "user", "email": "test@example.com"}


def override_get_user_id() -> str:
    """Override get_user_id for testing"""
    return TEST_USER_ID


app.dependency_overrides[autogpt_libs.auth.middleware.auth_middleware] = (
    override_auth_middleware
)
app.dependency_overrides[get_user_id] = override_get_user_id


# Auth endpoints tests
def test_get_or_create_user_route(
    mocker: pytest_mock.MockFixture,
    configured_snapshot: Snapshot,
) -> None:
    """Test get or create user endpoint"""
    mock_user = Mock()
    mock_user.model_dump.return_value = {
        "id": TEST_USER_ID,
        "email": "test@example.com",
        "name": "Test User",
    }

    mocker.patch(
        "backend.server.routers.v1.get_or_create_user",
        return_value=mock_user,
    )

    response = client.post("/auth/user")

    assert response.status_code == 200
    response_data = response.json()

    configured_snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "auth_user",
    )


def test_update_user_email_route(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test update user email endpoint"""
    mocker.patch(
        "backend.server.routers.v1.update_user_email",
        return_value=None,
    )

    response = client.post("/auth/user/email", json="newemail@example.com")

    assert response.status_code == 200
    response_data = response.json()
    assert response_data["email"] == "newemail@example.com"

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "auth_email",
    )


# Blocks endpoints tests
def test_get_graph_blocks(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test get blocks endpoint"""
    # Mock block
    mock_block = Mock()
    mock_block.to_dict.return_value = {
        "id": "test-block",
        "name": "Test Block",
        "description": "A test block",
        "disabled": False,
    }
    mock_block.id = "test-block"
    mock_block.disabled = False

    # Mock get_blocks
    mocker.patch(
        "backend.server.routers.v1.get_blocks",
        return_value={"test-block": lambda: mock_block},
    )

    # Mock block costs
    mocker.patch(
        "backend.server.routers.v1.get_block_costs",
        return_value={"test-block": [{"cost": 10, "type": "credit"}]},
    )

    response = client.get("/blocks")

    assert response.status_code == 200
    response_data = response.json()
    assert len(response_data) == 1
    assert response_data[0]["id"] == "test-block"

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "blks_all",
    )


def test_execute_graph_block(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test execute block endpoint"""
    # Mock block
    mock_block = Mock()
    mock_block.execute.return_value = [
        ("output1", {"data": "result1"}),
        ("output2", {"data": "result2"}),
    ]

    mocker.patch(
        "backend.server.routers.v1.get_block",
        return_value=mock_block,
    )

    request_data = {
        "input_name": "test_input",
        "input_value": "test_value",
    }

    response = client.post("/blocks/test-block/execute", json=request_data)

    assert response.status_code == 200
    response_data = response.json()

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "blks_exec",
    )


def test_execute_graph_block_not_found(
    mocker: pytest_mock.MockFixture,
) -> None:
    """Test execute block with non-existent block"""
    mocker.patch(
        "backend.server.routers.v1.get_block",
        return_value=None,
    )

    response = client.post("/blocks/nonexistent-block/execute", json={})

    assert response.status_code == 404
    assert "not found" in response.json()["detail"]


# Credits endpoints tests
def test_get_user_credits(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test get user credits endpoint"""
    mock_credit_model = mocker.patch("backend.server.routers.v1._user_credit_model")
    mock_credit_model.get_credits = AsyncMock(return_value=1000)

    response = client.get("/credits")

    assert response.status_code == 200
    response_data = response.json()
    assert response_data["credits"] == 1000

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "cred_bal",
    )


def test_request_top_up(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test request top up endpoint"""
    mock_credit_model = mocker.patch("backend.server.routers.v1._user_credit_model")
    mock_credit_model.top_up_intent = AsyncMock(
        return_value="https://checkout.example.com/session123"
    )

    request_data = {"credit_amount": 500}

    response = client.post("/credits", json=request_data)

    assert response.status_code == 200
    response_data = response.json()
    assert "checkout_url" in response_data

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "cred_topup_req",
    )


def test_get_auto_top_up(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test get auto top-up configuration endpoint"""
    mock_config = AutoTopUpConfig(threshold=100, amount=500)

    mocker.patch(
        "backend.server.routers.v1.get_auto_top_up",
        return_value=mock_config,
    )

    response = client.get("/credits/auto-top-up")

    assert response.status_code == 200
    response_data = response.json()
    assert response_data["threshold"] == 100
    assert response_data["amount"] == 500

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "cred_topup_cfg",
    )


# Graphs endpoints tests
def test_get_graphs(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test get graphs endpoint"""
    mock_graph = GraphModel(
        id="graph-123",
        version=1,
        is_active=True,
        name="Test Graph",
        description="A test graph",
        user_id="test-user-id",
    )

    mocker.patch(
        "backend.server.routers.v1.graph_db.get_graphs",
        return_value=[mock_graph],
    )

    response = client.get("/graphs")

    assert response.status_code == 200
    response_data = response.json()
    assert len(response_data) == 1
    assert response_data[0]["id"] == "graph-123"

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "grphs_all",
    )


def test_get_graph(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test get single graph endpoint"""
    mock_graph = GraphModel(
        id="graph-123",
        version=1,
        is_active=True,
        name="Test Graph",
        description="A test graph",
        user_id="test-user-id",
    )

    mocker.patch(
        "backend.server.routers.v1.graph_db.get_graph",
        return_value=mock_graph,
    )

    response = client.get("/graphs/graph-123")

    assert response.status_code == 200
    response_data = response.json()
    assert response_data["id"] == "graph-123"

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "grph_single",
    )


def test_get_graph_not_found(
    mocker: pytest_mock.MockFixture,
) -> None:
    """Test get graph with non-existent ID"""
    mocker.patch(
        "backend.server.routers.v1.graph_db.get_graph",
        return_value=None,
    )

    response = client.get("/graphs/nonexistent-graph")

    assert response.status_code == 404
    assert "not found" in response.json()["detail"]


def test_delete_graph(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test delete graph endpoint"""
    # Mock active graph for deactivation
    mock_graph = GraphModel(
        id="graph-123",
        version=1,
        is_active=True,
        name="Test Graph",
        description="A test graph",
        user_id="test-user-id",
    )

    mocker.patch(
        "backend.server.routers.v1.graph_db.get_graph",
        return_value=mock_graph,
    )
    mocker.patch(
        "backend.server.routers.v1.on_graph_deactivate",
        return_value=None,
    )
    mocker.patch(
        "backend.server.routers.v1.graph_db.delete_graph",
        return_value=3,  # Number of versions deleted
    )

    response = client.delete("/graphs/graph-123")

    assert response.status_code == 200
    response_data = response.json()
    assert response_data["version_counts"] == 3

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "grphs_del",
    )


# Invalid request tests
def test_invalid_json_request() -> None:
    """Test endpoint with invalid JSON"""
    response = client.post(
        "/auth/user/email",
        content="invalid json",
        headers={"Content-Type": "application/json"},
    )
    assert response.status_code == 422


def test_missing_required_field() -> None:
    """Test endpoint with missing required field"""
    response = client.post("/credits", json={})  # Missing credit_amount
    assert response.status_code == 422
139 autogpt_platform/backend/backend/server/test_fixtures.py Normal file
@@ -0,0 +1,139 @@
"""Common test fixtures with proper setup and teardown."""

from contextlib import asynccontextmanager
from typing import AsyncGenerator
from unittest.mock import Mock, patch

import pytest
from prisma import Prisma


@pytest.fixture
async def test_db_connection() -> AsyncGenerator[Prisma, None]:
    """Provide a test database connection with proper cleanup.

    This fixture ensures the database connection is properly
    closed after the test, even if the test fails.
    """
    db = Prisma()
    try:
        await db.connect()
        yield db
    finally:
        await db.disconnect()


@pytest.fixture
def mock_transaction():
    """Mock database transaction with proper async context manager."""

    @asynccontextmanager
    async def mock_context(*args, **kwargs):
        yield None

    with patch("backend.data.db.locked_transaction", side_effect=mock_context) as mock:
        yield mock


@pytest.fixture
def isolated_app_state():
    """Fixture that ensures app state is isolated between tests."""
    # Example: Save original state
    # from backend.server.app import app
    # original_overrides = app.dependency_overrides.copy()

    # try:
    #     yield app
    # finally:
    #     # Restore original state
    #     app.dependency_overrides = original_overrides

    # For now, just yield None as this is an example
    yield None


@pytest.fixture
def cleanup_files():
    """Fixture to track and cleanup files created during tests."""
    created_files = []

    def track_file(filepath: str):
        created_files.append(filepath)

    yield track_file

    # Cleanup
    import os

    for filepath in created_files:
        try:
            if os.path.exists(filepath):
                os.remove(filepath)
        except Exception as e:
            print(f"Warning: Failed to cleanup {filepath}: {e}")


@pytest.fixture
async def async_mock_with_cleanup():
    """Create async mocks that are properly cleaned up."""
    mocks = []

    def create_mock(**kwargs):
        mock = Mock(**kwargs)
        mocks.append(mock)
        return mock

    yield create_mock

    # Reset all mocks
    for mock in mocks:
        mock.reset_mock()


class TestDatabaseIsolation:
    """Example of proper test isolation with database operations."""

    @pytest.fixture(autouse=True)
    async def setup_and_teardown(self, test_db_connection):
        """Setup and teardown for each test method."""
        # Setup: Clear test data
        await test_db_connection.user.delete_many(
            where={"email": {"contains": "@test.example"}}
        )

        yield

        # Teardown: Clear test data again
        await test_db_connection.user.delete_many(
            where={"email": {"contains": "@test.example"}}
        )

    async def test_create_user(self, test_db_connection):
        """Test that demonstrates proper isolation."""
        # This test has access to a clean database
        user = await test_db_connection.user.create(
            data={"email": "test@test.example", "name": "Test User"}
        )
        assert user.email == "test@test.example"
        # User will be cleaned up automatically


@pytest.fixture(scope="function")  # Explicitly use function scope
def reset_singleton_state():
    """Reset singleton state between tests."""
    # Example: Reset a singleton instance
    # from backend.data.some_singleton import SingletonClass

    # # Save original state
    # original_instance = getattr(SingletonClass, "_instance", None)

    # try:
    #     # Clear singleton
    #     SingletonClass._instance = None
    #     yield
    # finally:
    #     # Restore original state
    #     SingletonClass._instance = original_instance

    # For now, just yield None as this is an example
    yield None
109 autogpt_platform/backend/backend/server/test_helpers.py Normal file
@@ -0,0 +1,109 @@
"""Helper functions for improved test assertions and error handling."""

import json
from typing import Any, Dict, Optional


def assert_response_status(
    response: Any, expected_status: int = 200, error_context: Optional[str] = None
) -> None:
    """Assert response status with helpful error message.

    Args:
        response: The HTTP response object
        expected_status: Expected status code
        error_context: Optional context to include in error message
    """
    if response.status_code != expected_status:
        error_msg = f"Expected status {expected_status}, got {response.status_code}"
        if error_context:
            error_msg = f"{error_context}: {error_msg}"

        # Try to include response body in error
        try:
            body = response.json()
            error_msg += f"\nResponse body: {json.dumps(body, indent=2)}"
        except Exception:
            error_msg += f"\nResponse text: {response.text}"

        raise AssertionError(error_msg)


def safe_parse_json(
    response: Any, error_context: Optional[str] = None
) -> Dict[str, Any]:
    """Safely parse JSON response with error handling.

    Args:
        response: The HTTP response object
        error_context: Optional context for error messages

    Returns:
        Parsed JSON data

    Raises:
        AssertionError: If JSON parsing fails
    """
    try:
        return response.json()
    except Exception as e:
        error_msg = f"Failed to parse JSON response: {e}"
        if error_context:
            error_msg = f"{error_context}: {error_msg}"
        error_msg += f"\nResponse text: {response.text[:500]}"
        raise AssertionError(error_msg)


def assert_error_response_structure(
    response: Any,
    expected_status: int = 422,
    expected_error_fields: Optional[list[str]] = None,
) -> Dict[str, Any]:
    """Assert error response has expected structure.

    Args:
        response: The HTTP response object
        expected_status: Expected error status code
        expected_error_fields: List of expected fields in error detail

    Returns:
        Parsed error response
    """
    assert_response_status(response, expected_status, "Error response check")

    error_data = safe_parse_json(response, "Error response parsing")

    # Check basic error structure
    assert "detail" in error_data, f"Missing 'detail' in error response: {error_data}"

    # Check specific error fields if provided
    if expected_error_fields:
        detail = error_data["detail"]
        if isinstance(detail, list):
            # FastAPI validation errors
            for error in detail:
                assert "loc" in error, f"Missing 'loc' in error: {error}"
                assert "msg" in error, f"Missing 'msg' in error: {error}"
                assert "type" in error, f"Missing 'type' in error: {error}"

    return error_data


def assert_mock_called_with_partial(mock_obj: Any, **expected_kwargs: Any) -> None:
    """Assert mock was called with expected kwargs (partial match).

    Args:
        mock_obj: The mock object to check
        **expected_kwargs: Expected keyword arguments
    """
    assert mock_obj.called, f"Mock {mock_obj} was not called"

    actual_kwargs = mock_obj.call_args.kwargs if mock_obj.call_args else {}

    for key, expected_value in expected_kwargs.items():
        assert (
            key in actual_kwargs
        ), f"Missing key '{key}' in mock call. Actual keys: {list(actual_kwargs.keys())}"
        assert (
            actual_kwargs[key] == expected_value
        ), f"Mock called with {key}={actual_kwargs[key]}, expected {expected_value}"
74 autogpt_platform/backend/backend/server/test_utils.py Normal file
@@ -0,0 +1,74 @@
"""Common test utilities and constants for server tests."""

from typing import Any, Dict
from unittest.mock import Mock

import pytest

# Test ID constants
TEST_USER_ID = "test-user-id"
ADMIN_USER_ID = "admin-user-id"
TARGET_USER_ID = "target-user-id"

# Common test data constants
FIXED_TIMESTAMP = "2024-01-01T00:00:00Z"
TRANSACTION_UUID = "transaction-123-uuid"
METRIC_UUID = "metric-123-uuid"
ANALYTICS_UUID = "analytics-123-uuid"


def create_mock_with_id(mock_id: str) -> Mock:
    """Create a mock object with an id attribute.

    Args:
        mock_id: The ID value to set on the mock

    Returns:
        Mock object with id attribute set
    """
    return Mock(id=mock_id)


def assert_status_and_parse_json(
    response: Any, expected_status: int = 200
) -> Dict[str, Any]:
    """Assert response status and return parsed JSON.

    Args:
        response: The HTTP response object
        expected_status: Expected status code (default: 200)

    Returns:
        Parsed JSON response data

    Raises:
        AssertionError: If status code doesn't match expected
    """
    assert (
        response.status_code == expected_status
    ), f"Expected status {expected_status}, got {response.status_code}: {response.text}"
    return response.json()


def parametrized_metric_values_decorator(func):
    """Decorator that parametrizes a test over common metric values."""
    return pytest.mark.parametrize(
        "metric_value,metric_name,data_string",
        [
            (100, "api_calls_count", "external_api"),
            (0, "error_count", "no_errors"),
            (-5.2, "temperature_delta", "cooling"),
            (1.23456789, "precision_test", "float_precision"),
            (999999999, "large_number", "max_value"),
        ],
    )(func)
@@ -0,0 +1,331 @@
import json
from unittest.mock import AsyncMock

import autogpt_libs.auth
import autogpt_libs.auth.depends
import fastapi
import fastapi.testclient
import prisma.enums
import pytest_mock
from prisma import Json
from pytest_snapshot.plugin import Snapshot

import backend.server.v2.admin.credit_admin_routes as credit_admin_routes
import backend.server.v2.admin.model as admin_model
from backend.data.model import UserTransaction
from backend.server.conftest import ADMIN_USER_ID, TARGET_USER_ID
from backend.server.model import Pagination

app = fastapi.FastAPI()
app.include_router(credit_admin_routes.router)

client = fastapi.testclient.TestClient(app)


def override_requires_admin_user() -> dict[str, str]:
    """Override admin user check for testing"""
    return {"sub": ADMIN_USER_ID, "role": "admin"}


def override_get_user_id() -> str:
    """Override get_user_id for testing"""
    return ADMIN_USER_ID


app.dependency_overrides[autogpt_libs.auth.requires_admin_user] = (
    override_requires_admin_user
)
app.dependency_overrides[autogpt_libs.auth.depends.get_user_id] = override_get_user_id


def test_add_user_credits_success(
    mocker: pytest_mock.MockFixture,
    configured_snapshot: Snapshot,
) -> None:
    """Test successful credit addition by admin"""
    # Mock the credit model
    mock_credit_model = mocker.patch(
        "backend.server.v2.admin.credit_admin_routes._user_credit_model"
    )
    mock_credit_model._add_transaction = AsyncMock(
        return_value=(1500, "transaction-123-uuid")
    )

    request_data = {
        "user_id": TARGET_USER_ID,
        "amount": 500,
        "comments": "Test credit grant for debugging",
    }

    response = client.post("/admin/add_credits", json=request_data)

    assert response.status_code == 200
    response_data = response.json()
    assert response_data["new_balance"] == 1500
    assert response_data["transaction_key"] == "transaction-123-uuid"

    # Verify the function was called with correct parameters
    mock_credit_model._add_transaction.assert_called_once()
    call_args = mock_credit_model._add_transaction.call_args
    assert call_args[0] == (TARGET_USER_ID, 500)
    assert call_args[1]["transaction_type"] == prisma.enums.CreditTransactionType.GRANT
    # Check that metadata is a Json object with the expected content
    assert isinstance(call_args[1]["metadata"], Json)
    assert call_args[1]["metadata"] == Json(
        {"admin_id": ADMIN_USER_ID, "reason": "Test credit grant for debugging"}
    )

    # Snapshot test the response
    configured_snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "admin_add_credits_success",
    )


def test_add_user_credits_negative_amount(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test credit deduction by admin (negative amount)"""
    # Mock the credit model
    mock_credit_model = mocker.patch(
        "backend.server.v2.admin.credit_admin_routes._user_credit_model"
    )
    mock_credit_model._add_transaction = AsyncMock(
        return_value=(200, "transaction-456-uuid")
    )

    request_data = {
        "user_id": "target-user-id",
        "amount": -100,
        "comments": "Refund adjustment",
    }

    response = client.post("/admin/add_credits", json=request_data)

    assert response.status_code == 200
    response_data = response.json()
    assert response_data["new_balance"] == 200

    # Snapshot test the response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "adm_add_cred_neg",
    )


def test_get_user_history_success(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test successful retrieval of user credit history"""
    # Mock the admin_get_user_history function
    mock_history_response = admin_model.UserHistoryResponse(
        history=[
            UserTransaction(
                user_id="user-1",
                user_email="user1@example.com",
                amount=1000,
                reason="Initial grant",
                transaction_type=prisma.enums.CreditTransactionType.GRANT,
            ),
            UserTransaction(
                user_id="user-2",
                user_email="user2@example.com",
                amount=-50,
                reason="Usage",
                transaction_type=prisma.enums.CreditTransactionType.USAGE,
            ),
        ],
        pagination=Pagination(
            total_items=2,
            total_pages=1,
            current_page=1,
            page_size=20,
        ),
    )

    mocker.patch(
        "backend.server.v2.admin.credit_admin_routes.admin_get_user_history",
        return_value=mock_history_response,
    )

    response = client.get("/admin/users_history")

    assert response.status_code == 200
    response_data = response.json()
    assert len(response_data["history"]) == 2
    assert response_data["pagination"]["total_items"] == 2

    # Snapshot test the response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "adm_usr_hist_ok",
    )


def test_get_user_history_with_filters(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test user credit history with search and filter parameters"""
    # Mock the admin_get_user_history function
    mock_history_response = admin_model.UserHistoryResponse(
        history=[
            UserTransaction(
                user_id="user-3",
                user_email="test@example.com",
                amount=500,
                reason="Top up",
                transaction_type=prisma.enums.CreditTransactionType.TOP_UP,
            ),
        ],
        pagination=Pagination(
            total_items=1,
            total_pages=1,
            current_page=1,
            page_size=10,
        ),
    )

    mock_get_history = mocker.patch(
        "backend.server.v2.admin.credit_admin_routes.admin_get_user_history",
        return_value=mock_history_response,
    )

    response = client.get(
        "/admin/users_history",
        params={
            "search": "test@example.com",
            "page": 1,
            "page_size": 10,
            "transaction_filter": "TOP_UP",
        },
    )

    assert response.status_code == 200
    response_data = response.json()
    assert len(response_data["history"]) == 1
    assert response_data["history"][0]["transaction_type"] == "TOP_UP"

    # Verify the function was called with correct parameters
    mock_get_history.assert_called_once_with(
        page=1,
        page_size=10,
        search="test@example.com",
        transaction_filter=prisma.enums.CreditTransactionType.TOP_UP,
    )

    # Snapshot test the response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "adm_usr_hist_filt",
    )


def test_get_user_history_empty_results(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test user credit history with no results"""
    # Mock empty history response
    mock_history_response = admin_model.UserHistoryResponse(
        history=[],
        pagination=Pagination(
            total_items=0,
            total_pages=0,
            current_page=1,
            page_size=20,
        ),
    )

    mocker.patch(
        "backend.server.v2.admin.credit_admin_routes.admin_get_user_history",
        return_value=mock_history_response,
    )

    response = client.get("/admin/users_history", params={"search": "nonexistent"})

    assert response.status_code == 200
    response_data = response.json()
    assert len(response_data["history"]) == 0
    assert response_data["pagination"]["total_items"] == 0

    # Snapshot test the response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "adm_usr_hist_empty",
    )


def test_add_credits_invalid_request() -> None:
    """Test credit addition with invalid request data"""
    # Missing required fields
    response = client.post("/admin/add_credits", json={})
    assert response.status_code == 422

    # Invalid amount type
    response = client.post(
        "/admin/add_credits",
        json={
            "user_id": "test",
            "amount": "not_a_number",
            "comments": "test",
        },
    )
    assert response.status_code == 422

    # Missing comments
    response = client.post(
        "/admin/add_credits",
        json={
            "user_id": "test",
            "amount": 100,
        },
    )
    assert response.status_code == 422


def test_admin_endpoints_require_admin_role(mocker: pytest_mock.MockFixture) -> None:
    """Test that admin endpoints require admin role"""
    # Clear the admin override to test authorization
    app.dependency_overrides.clear()

    # Mock requires_admin_user to raise an exception
    mocker.patch(
        "autogpt_libs.auth.requires_admin_user",
        side_effect=fastapi.HTTPException(
            status_code=403, detail="Admin access required"
        ),
    )

    # Test add_credits endpoint
    response = client.post(
        "/admin/add_credits",
        json={
            "user_id": "test",
            "amount": 100,
            "comments": "test",
        },
    )
    assert (
        response.status_code == 401
    )  # Auth middleware returns 401 when auth is disabled

    # Test users_history endpoint
    response = client.get("/admin/users_history")
    assert (
        response.status_code == 401
    )  # Auth middleware returns 401 when auth is disabled

    # Restore the override
    app.dependency_overrides[autogpt_libs.auth.requires_admin_user] = (
        override_requires_admin_user
    )
    app.dependency_overrides[autogpt_libs.auth.depends.get_user_id] = (
        override_get_user_id
    )
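The tests above mutate the module-level `app.dependency_overrides` and restore it by hand at the end of the authorization test. A sketch of an alternative that guarantees restoration even when an assertion fails midway (illustrative only; not what the file above does):

import pytest


@pytest.fixture
def preserved_overrides():
    # Snapshot the overrides, hand control to the test, then restore them
    # regardless of how the test exits.
    saved = dict(app.dependency_overrides)
    try:
        yield app.dependency_overrides
    finally:
        app.dependency_overrides.clear()
        app.dependency_overrides.update(saved)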
@@ -3,6 +3,7 @@ from datetime import datetime
 import prisma.enums
+import prisma.errors
 import prisma.models
 import prisma.types
 import pytest

 import backend.server.v2.library.db as db
@@ -84,6 +85,11 @@ async def test_get_library_agents(mocker):
 @pytest.mark.asyncio(loop_scope="session")
 async def test_add_agent_to_library(mocker):
     await connect()
+
+    # Mock the transaction context
+    mock_transaction = mocker.patch("backend.server.v2.library.db.locked_transaction")
+    mock_transaction.return_value.__aenter__ = mocker.AsyncMock(return_value=None)
+    mock_transaction.return_value.__aexit__ = mocker.AsyncMock(return_value=None)
     # Mock data
     mock_store_listing_data = prisma.models.StoreListingVersion(
         id="version123",
@@ -142,6 +148,10 @@ async def test_add_agent_to_library(mocker):
         return_value=mock_library_agent_data
     )

+    # Mock the model conversion
+    mock_from_db = mocker.patch("backend.server.v2.library.model.LibraryAgent.from_db")
+    mock_from_db.return_value = mocker.Mock()
+
     # Call function
     await db.add_store_agent_to_library("version123", "test-user")
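Why the hunk above patches `__aenter__` and `__aexit__` explicitly: `async with cm:` awaits both methods on the context manager, so they must be awaitable mocks. A minimal, self-contained illustration of the same pattern with plain unittest.mock (the names mirror the diff but the snippet is a sketch, not the PR's code):

import asyncio
from unittest.mock import AsyncMock, MagicMock

locked_transaction = MagicMock()
locked_transaction.return_value.__aenter__ = AsyncMock(return_value=None)
locked_transaction.return_value.__aexit__ = AsyncMock(return_value=None)


async def demo() -> None:
    async with locked_transaction("user-id"):
        pass  # body executes without any real database lock


asyncio.run(demo())
locked_transaction.return_value.__aenter__.assert_awaited_once()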
@@ -1,9 +1,11 @@
 import datetime
+import json

 import autogpt_libs.auth as autogpt_auth_lib
 import fastapi.testclient
 import pytest
 import pytest_mock
+from pytest_snapshot.plugin import Snapshot

 import backend.server.model as server_model
 import backend.server.v2.library.model as library_model
@@ -14,6 +16,8 @@ app.include_router(library_router)

 client = fastapi.testclient.TestClient(app)

+FIXED_NOW = datetime.datetime(2023, 1, 1, 0, 0, 0)
+

 def override_auth_middleware():
     """Override auth middleware for testing"""
@@ -30,7 +34,10 @@ app.dependency_overrides[autogpt_auth_lib.depends.get_user_id] = override_get_us


 @pytest.mark.asyncio
-async def test_get_library_agents_success(mocker: pytest_mock.MockFixture):
+async def test_get_library_agents_success(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = library_model.LibraryAgentResponse(
         agents=[
             library_model.LibraryAgent(
@@ -82,6 +89,10 @@ async def test_get_library_agents_success(mocker: pytest_mock.MockFixture):
     assert data.agents[0].can_access_graph is True
     assert data.agents[1].graph_id == "test-agent-2"
     assert data.agents[1].can_access_graph is False
+
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "lib_agts_search")
+
     mock_db_call.assert_called_once_with(
         user_id="test-user-id",
         search_term="test",
271
autogpt_platform/backend/backend/server/v2/otto/routes_test.py
Normal file
@@ -0,0 +1,271 @@
import json

import autogpt_libs.auth.depends
import autogpt_libs.auth.middleware
import fastapi
import fastapi.testclient
import pytest_mock
from pytest_snapshot.plugin import Snapshot

import backend.server.v2.otto.models as otto_models
import backend.server.v2.otto.routes as otto_routes
from backend.server.utils import get_user_id
from backend.server.v2.otto.service import OttoService

app = fastapi.FastAPI()
app.include_router(otto_routes.router)

client = fastapi.testclient.TestClient(app)


def override_auth_middleware():
    """Override auth middleware for testing"""
    return {"sub": "test-user-id"}


def override_get_user_id():
    """Override get_user_id for testing"""
    return "test-user-id"


app.dependency_overrides[autogpt_libs.auth.middleware.auth_middleware] = (
    override_auth_middleware
)
app.dependency_overrides[get_user_id] = override_get_user_id


def test_ask_otto_success(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test successful Otto API request"""
    # Mock the OttoService.ask method
    mock_response = otto_models.ApiResponse(
        answer="This is Otto's response to your query.",
        documents=[
            otto_models.Document(
                url="https://example.com/doc1",
                relevance_score=0.95,
            ),
            otto_models.Document(
                url="https://example.com/doc2",
                relevance_score=0.87,
            ),
        ],
        success=True,
    )

    mocker.patch.object(
        OttoService,
        "ask",
        return_value=mock_response,
    )

    request_data = {
        "query": "How do I create an agent?",
        "conversation_history": [
            {
                "query": "What is AutoGPT?",
                "response": "AutoGPT is an AI agent platform.",
            }
        ],
        "message_id": "msg_123",
        "include_graph_data": False,
    }

    response = client.post("/ask", json=request_data)

    assert response.status_code == 200
    response_data = response.json()
    assert response_data["success"] is True
    assert response_data["answer"] == "This is Otto's response to your query."
    assert len(response_data["documents"]) == 2

    # Snapshot test the response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "otto_ok",
    )


def test_ask_otto_with_graph_data(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test Otto API request with graph data included"""
    # Mock the OttoService.ask method
    mock_response = otto_models.ApiResponse(
        answer="Here's information about your graph.",
        documents=[
            otto_models.Document(
                url="https://example.com/graph-doc",
                relevance_score=0.92,
            ),
        ],
        success=True,
    )

    mocker.patch.object(
        OttoService,
        "ask",
        return_value=mock_response,
    )

    request_data = {
        "query": "Tell me about my graph",
        "conversation_history": [],
        "message_id": "msg_456",
        "include_graph_data": True,
        "graph_id": "graph_123",
    }

    response = client.post("/ask", json=request_data)

    assert response.status_code == 200
    response_data = response.json()
    assert response_data["success"] is True

    # Snapshot test the response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "otto_grph",
    )


def test_ask_otto_empty_conversation(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test Otto API request with empty conversation history"""
    # Mock the OttoService.ask method
    mock_response = otto_models.ApiResponse(
        answer="Welcome! How can I help you?",
        documents=[],
        success=True,
    )

    mocker.patch.object(
        OttoService,
        "ask",
        return_value=mock_response,
    )

    request_data = {
        "query": "Hello",
        "conversation_history": [],
        "message_id": "msg_789",
        "include_graph_data": False,
    }

    response = client.post("/ask", json=request_data)

    assert response.status_code == 200
    response_data = response.json()
    assert response_data["success"] is True
    assert len(response_data["documents"]) == 0

    # Snapshot test the response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "otto_empty",
    )


def test_ask_otto_service_error(
    mocker: pytest_mock.MockFixture,
    snapshot: Snapshot,
) -> None:
    """Test Otto API request when service returns error"""
    # Mock the OttoService.ask method to return failure
    mock_response = otto_models.ApiResponse(
        answer="An error occurred while processing your request.",
        documents=[],
        success=False,
    )

    mocker.patch.object(
        OttoService,
        "ask",
        return_value=mock_response,
    )

    request_data = {
        "query": "Test query",
        "conversation_history": [],
        "message_id": "msg_error",
        "include_graph_data": False,
    }

    response = client.post("/ask", json=request_data)

    assert response.status_code == 200
    response_data = response.json()
    assert response_data["success"] is False

    # Snapshot test the response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True),
        "otto_err",
    )


def test_ask_otto_invalid_request() -> None:
    """Test Otto API with invalid request data"""
    # Missing required fields
    response = client.post("/ask", json={})
    assert response.status_code == 422

    # Invalid conversation history format
    response = client.post(
        "/ask",
        json={
            "query": "Test",
            "conversation_history": "not a list",
            "message_id": "123",
        },
    )
    assert response.status_code == 422

    # Missing message_id
    response = client.post(
        "/ask",
        json={
            "query": "Test",
            "conversation_history": [],
        },
    )
    assert response.status_code == 422


def test_ask_otto_unauthenticated(mocker: pytest_mock.MockFixture) -> None:
    """Test Otto API request without authentication"""
    # Remove the auth override to test unauthenticated access
    app.dependency_overrides.clear()

    # Mock auth_middleware to raise an exception
    mocker.patch(
        "autogpt_libs.auth.middleware.auth_middleware",
        side_effect=fastapi.HTTPException(status_code=401, detail="Unauthorized"),
    )

    request_data = {
        "query": "Test",
        "conversation_history": [],
        "message_id": "123",
    }

    response = client.post("/ask", json=request_data)
    # When auth is disabled and Otto API URL is not configured, we get 503
    assert response.status_code == 503

    # Restore the override
    app.dependency_overrides[autogpt_libs.auth.middleware.auth_middleware] = (
        override_auth_middleware
    )
    app.dependency_overrides[autogpt_libs.auth.depends.get_user_id] = (
        override_get_user_id
    )
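A note on the `mocker.patch.object(OttoService, "ask", return_value=...)` calls above: since Python 3.8, patching an async function substitutes an AsyncMock, so `return_value` is what the awaited call resolves to. A minimal, self-contained sketch of that behaviour (illustrative names, not the PR's code):

import asyncio
from unittest.mock import patch


class Service:
    async def ask(self) -> str:
        return "real"


# patch.object detects that `ask` is an async function and installs an
# AsyncMock in its place, so the awaited call resolves to return_value.
with patch.object(Service, "ask", return_value="mocked") as mock_ask:
    result = asyncio.run(Service().ask())

assert result == "mocked"
mock_ask.assert_awaited_once()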
@@ -1,4 +1,5 @@
 import datetime
+import json

 import autogpt_libs.auth.depends
 import autogpt_libs.auth.middleware
@@ -6,22 +7,27 @@ import fastapi
 import fastapi.testclient
 import prisma.enums
 import pytest_mock
+from pytest_snapshot.plugin import Snapshot

 import backend.server.v2.store.model
 import backend.server.v2.store.routes

+# Using a fixed timestamp for reproducible tests
+# 2023 date is intentionally used to ensure tests work regardless of current year
+FIXED_NOW = datetime.datetime(2023, 1, 1, 0, 0, 0)
+
 app = fastapi.FastAPI()
 app.include_router(backend.server.v2.store.routes.router)

 client = fastapi.testclient.TestClient(app)


-def override_auth_middleware():
+def override_auth_middleware() -> dict[str, str]:
     """Override auth middleware for testing"""
     return {"sub": "test-user-id"}


-def override_get_user_id():
+def override_get_user_id() -> str:
     """Override get_user_id for testing"""
     return "test-user-id"

@@ -32,7 +38,10 @@ app.dependency_overrides[autogpt_libs.auth.middleware.auth_middleware] = (
 app.dependency_overrides[autogpt_libs.auth.depends.get_user_id] = override_get_user_id


-def test_get_agents_defaults(mocker: pytest_mock.MockFixture):
+def test_get_agents_defaults(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
         agents=[],
         pagination=backend.server.v2.store.model.Pagination(
@@ -52,6 +61,9 @@ def test_get_agents_defaults(mocker: pytest_mock.MockFixture):
     )
     assert data.pagination.total_pages == 0
     assert data.agents == []
+
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "def_agts")
     mock_db_call.assert_called_once_with(
         featured=False,
         creator=None,
@@ -63,7 +75,10 @@ def test_get_agents_defaults(mocker: pytest_mock.MockFixture):
     )


-def test_get_agents_featured(mocker: pytest_mock.MockFixture):
+def test_get_agents_featured(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
         agents=[
             backend.server.v2.store.model.StoreAgent(
@@ -94,6 +109,8 @@ def test_get_agents_featured(mocker: pytest_mock.MockFixture):
     )
     assert len(data.agents) == 1
     assert data.agents[0].slug == "featured-agent"
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "feat_agts")
     mock_db_call.assert_called_once_with(
         featured=True,
         creator=None,
@@ -105,7 +122,10 @@ def test_get_agents_featured(mocker: pytest_mock.MockFixture):
     )


-def test_get_agents_by_creator(mocker: pytest_mock.MockFixture):
+def test_get_agents_by_creator(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
         agents=[
             backend.server.v2.store.model.StoreAgent(
@@ -136,6 +156,8 @@ def test_get_agents_by_creator(mocker: pytest_mock.MockFixture):
     )
     assert len(data.agents) == 1
     assert data.agents[0].creator == "specific-creator"
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "agts_by_creator")
     mock_db_call.assert_called_once_with(
         featured=False,
         creator="specific-creator",
@@ -147,7 +169,10 @@ def test_get_agents_by_creator(mocker: pytest_mock.MockFixture):
     )


-def test_get_agents_sorted(mocker: pytest_mock.MockFixture):
+def test_get_agents_sorted(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
         agents=[
             backend.server.v2.store.model.StoreAgent(
@@ -178,6 +203,8 @@ def test_get_agents_sorted(mocker: pytest_mock.MockFixture):
     )
     assert len(data.agents) == 1
     assert data.agents[0].runs == 1000
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "agts_sorted")
     mock_db_call.assert_called_once_with(
         featured=False,
         creator=None,
@@ -189,7 +216,10 @@ def test_get_agents_sorted(mocker: pytest_mock.MockFixture):
     )


-def test_get_agents_search(mocker: pytest_mock.MockFixture):
+def test_get_agents_search(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
         agents=[
             backend.server.v2.store.model.StoreAgent(
@@ -220,6 +250,8 @@ def test_get_agents_search(mocker: pytest_mock.MockFixture):
     )
     assert len(data.agents) == 1
     assert "specific" in data.agents[0].description.lower()
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "agts_search")
     mock_db_call.assert_called_once_with(
         featured=False,
         creator=None,
@@ -231,7 +263,10 @@ def test_get_agents_search(mocker: pytest_mock.MockFixture):
     )


-def test_get_agents_category(mocker: pytest_mock.MockFixture):
+def test_get_agents_category(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
         agents=[
             backend.server.v2.store.model.StoreAgent(
@@ -261,6 +296,8 @@ def test_get_agents_category(mocker: pytest_mock.MockFixture):
         response.json()
     )
     assert len(data.agents) == 1
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "agts_category")
     mock_db_call.assert_called_once_with(
         featured=False,
         creator=None,
@@ -272,7 +309,10 @@ def test_get_agents_category(mocker: pytest_mock.MockFixture):
     )


-def test_get_agents_pagination(mocker: pytest_mock.MockFixture):
+def test_get_agents_pagination(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.StoreAgentsResponse(
         agents=[
             backend.server.v2.store.model.StoreAgent(
@@ -305,6 +345,8 @@ def test_get_agents_pagination(mocker: pytest_mock.MockFixture):
     assert len(data.agents) == 5
     assert data.pagination.current_page == 2
     assert data.pagination.page_size == 5
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "agts_pagination")
     mock_db_call.assert_called_once_with(
         featured=False,
         creator=None,
@@ -334,7 +376,10 @@ def test_get_agents_malformed_request(mocker: pytest_mock.MockFixture):
     mock_db_call.assert_not_called()


-def test_get_agent_details(mocker: pytest_mock.MockFixture):
+def test_get_agent_details(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.StoreAgentDetails(
         store_listing_version_id="test-version-id",
         slug="test-agent",
@@ -349,7 +394,7 @@ def test_get_agent_details(mocker: pytest_mock.MockFixture):
         runs=100,
         rating=4.5,
         versions=["1.0.0", "1.1.0"],
-        last_updated=datetime.datetime.now(),
+        last_updated=FIXED_NOW,
     )
     mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agent_details")
     mock_db_call.return_value = mocked_value
@@ -362,10 +407,15 @@ def test_get_agent_details(mocker: pytest_mock.MockFixture):
     )
     assert data.agent_name == "Test Agent"
     assert data.creator == "creator1"
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "agt_details")
     mock_db_call.assert_called_once_with(username="creator1", agent_name="test-agent")


-def test_get_creators_defaults(mocker: pytest_mock.MockFixture):
+def test_get_creators_defaults(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.CreatorsResponse(
         creators=[],
         pagination=backend.server.v2.store.model.Pagination(
@@ -386,12 +436,17 @@ def test_get_creators_defaults(mocker: pytest_mock.MockFixture):
     )
     assert data.pagination.total_pages == 0
     assert data.creators == []
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "def_creators")
     mock_db_call.assert_called_once_with(
         featured=False, search_query=None, sorted_by=None, page=1, page_size=20
     )


-def test_get_creators_pagination(mocker: pytest_mock.MockFixture):
+def test_get_creators_pagination(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.CreatorsResponse(
         creators=[
             backend.server.v2.store.model.Creator(
@@ -425,6 +480,8 @@ def test_get_creators_pagination(mocker: pytest_mock.MockFixture):
     assert len(data.creators) == 5
     assert data.pagination.current_page == 2
     assert data.pagination.page_size == 5
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "creators_pagination")
     mock_db_call.assert_called_once_with(
         featured=False, search_query=None, sorted_by=None, page=2, page_size=5
     )
@@ -448,7 +505,10 @@ def test_get_creators_malformed_request(mocker: pytest_mock.MockFixture):
     mock_db_call.assert_not_called()


-def test_get_creator_details(mocker: pytest_mock.MockFixture):
+def test_get_creator_details(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.CreatorDetails(
         name="Test User",
         username="creator1",
@@ -468,17 +528,22 @@ def test_get_creator_details(mocker: pytest_mock.MockFixture):
     data = backend.server.v2.store.model.CreatorDetails.model_validate(response.json())
     assert data.username == "creator1"
     assert data.name == "Test User"
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "creator_details")
     mock_db_call.assert_called_once_with(username="creator1")


-def test_get_submissions_success(mocker: pytest_mock.MockFixture):
+def test_get_submissions_success(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.StoreSubmissionsResponse(
         submissions=[
             backend.server.v2.store.model.StoreSubmission(
                 name="Test Agent",
                 description="Test agent description",
                 image_urls=["test.jpg"],
-                date_submitted=datetime.datetime.now(),
+                date_submitted=FIXED_NOW,
                 status=prisma.enums.SubmissionStatus.APPROVED,
                 runs=50,
                 rating=4.2,
@@ -507,10 +572,15 @@ def test_get_submissions_success(mocker: pytest_mock.MockFixture):
     assert len(data.submissions) == 1
     assert data.submissions[0].name == "Test Agent"
     assert data.pagination.current_page == 1
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "sub_success")
     mock_db_call.assert_called_once_with(user_id="test-user-id", page=1, page_size=20)


-def test_get_submissions_pagination(mocker: pytest_mock.MockFixture):
+def test_get_submissions_pagination(
+    mocker: pytest_mock.MockFixture,
+    snapshot: Snapshot,
+) -> None:
     mocked_value = backend.server.v2.store.model.StoreSubmissionsResponse(
         submissions=[],
         pagination=backend.server.v2.store.model.Pagination(
@@ -531,6 +601,8 @@ def test_get_submissions_pagination(mocker: pytest_mock.MockFixture):
     )
     assert data.pagination.current_page == 2
     assert data.pagination.page_size == 5
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(response.json(), indent=2), "sub_pagination")
     mock_db_call.assert_called_once_with(user_id="test-user-id", page=2, page_size=5)
@@ -0,0 +1,32 @@
import fastapi
import fastapi.testclient
import pytest_mock

import backend.server.v2.turnstile.routes as turnstile_routes

app = fastapi.FastAPI()
app.include_router(turnstile_routes.router)

client = fastapi.testclient.TestClient(app)


def test_verify_turnstile_token_no_secret_key(mocker: pytest_mock.MockFixture) -> None:
    """Test token verification without secret key configured"""
    # Mock the settings with no secret key
    mock_settings = mocker.patch("backend.server.v2.turnstile.routes.settings")
    mock_settings.secrets.turnstile_secret_key = None

    request_data = {"token": "test_token", "action": "login"}
    response = client.post("/verify", json=request_data)

    assert response.status_code == 200
    response_data = response.json()
    assert response_data["success"] is False
    assert response_data["error"] == "CONFIGURATION_ERROR"


def test_verify_turnstile_token_invalid_request() -> None:
    """Test token verification with invalid request data"""
    # Missing token
    response = client.post("/verify", json={"action": "login"})
    assert response.status_code == 422
@@ -2,16 +2,17 @@ services:
   postgres-test:
     image: ankane/pgvector:latest
     environment:
-      - POSTGRES_USER=${DB_USER}
-      - POSTGRES_PASSWORD=${DB_PASS}
-      - POSTGRES_DB=${DB_NAME}
+      - POSTGRES_USER=${DB_USER:-postgres}
+      - POSTGRES_PASSWORD=${DB_PASS:-postgres}
+      - POSTGRES_DB=${DB_NAME:-postgres}
+      - POSTGRES_PORT=${DB_PORT:-5432}
     healthcheck:
       test: pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB
       interval: 10s
       timeout: 5s
       retries: 5
     ports:
-      - "${DB_PORT}:5432"
+      - "${DB_PORT:-5432}:5432"
     networks:
       - app-network-test
   redis-test:
3358
autogpt_platform/backend/poetry.lock
generated
File diff suppressed because it is too large
@@ -65,6 +65,7 @@ websockets = "^14.2"
 youtube-transcript-api = "^0.6.2"
 zerobouncesdk = "^1.1.1"
 # NOTE: please insert new dependencies in their alphabetical location
+pytest-snapshot = "^0.9.0"

 [tool.poetry.group.dev.dependencies]
 aiohappyeyeballs = "^2.6.1"
@@ -1,3 +1,4 @@
+import os
 import subprocess
 import sys
 import time
@@ -59,11 +60,52 @@ def test():
         run_command(["docker", "compose", "-f", "docker-compose.test.yaml", "down"])
         sys.exit(1)

-    # Run Prisma migrations
-    run_command(["prisma", "migrate", "dev"])
+    # IMPORTANT: Set test database environment variables to prevent accidentally
+    # resetting the developer's local database.
+    #
+    # This script spins up a separate test database container (postgres-test) using
+    # docker-compose.test.yaml. We explicitly set DATABASE_URL and DIRECT_URL to point
+    # to this test database to ensure that:
+    # 1. The prisma migrate reset command only affects the test database
+    # 2. Tests run against the test database, not the developer's local database
+    # 3. Any database operations during testing are isolated from development data
+    #
+    # Without this, if a developer has DATABASE_URL set in their environment pointing
+    # to their development database, running tests would wipe their local data!
+    test_env = os.environ.copy()

-    # Run the tests
-    result = subprocess.run(["pytest"] + sys.argv[1:], check=False)
+    # Use environment variables if set, otherwise use defaults that match docker-compose.test.yaml
+    db_user = os.getenv("DB_USER", "postgres")
+    db_pass = os.getenv("DB_PASS", "postgres")
+    db_name = os.getenv("DB_NAME", "postgres")
+    db_port = os.getenv("DB_PORT", "5432")
+
+    # Construct the test database URL - this ensures we're always pointing to the test container
+    test_env["DATABASE_URL"] = (
+        f"postgresql://{db_user}:{db_pass}@localhost:{db_port}/{db_name}"
+    )
+    test_env["DIRECT_URL"] = test_env["DATABASE_URL"]
+
+    test_env["DB_PORT"] = db_port
+    test_env["DB_NAME"] = db_name
+    test_env["DB_PASS"] = db_pass
+    test_env["DB_USER"] = db_user
+
+    # Run Prisma migrations with test database
+    # First, reset the database to ensure clean state for tests
+    # This is safe because we've explicitly set DATABASE_URL to the test database above
+    subprocess.run(
+        ["prisma", "migrate", "reset", "--force", "--skip-seed"],
+        env=test_env,
+        check=False,
+    )
+    # Then apply migrations to get the test database schema up to date
+    subprocess.run(["prisma", "migrate", "deploy"], env=test_env, check=True)
+
+    # Run the tests with test database environment
+    # This ensures all database connections in the tests use the test database,
+    # not any database that might be configured in the developer's environment
+    result = subprocess.run(["pytest"] + sys.argv[1:], env=test_env, check=False)

     run_command(["docker", "compose", "-f", "docker-compose.test.yaml", "down"])
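As a concrete illustration of the fallback logic above: with none of the DB_* variables set, the constructed URL collapses to the docker-compose.test.yaml defaults.

# Illustration only: the URL produced when none of the DB_* variables are set.
db_user = db_pass = db_name = "postgres"
db_port = "5432"
url = f"postgresql://{db_user}:{db_pass}@localhost:{db_port}/{db_name}"
assert url == "postgresql://postgres:postgres@localhost:5432/postgres"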
4
autogpt_platform/backend/snapshots/adm_add_cred_neg
Normal file
@@ -0,0 +1,4 @@
{
  "new_balance": 200,
  "transaction_key": "transaction-456-uuid"
}
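The files below are plain-text snapshots that pytest-snapshot compares verbatim against the string passed to `assert_match`; running pytest with `--snapshot-update` rewrites the stored files instead of asserting. A simplified sketch of that behaviour (an illustration, not the library's actual implementation):

from pathlib import Path


def assert_match_sketch(
    value: str, name: str, snapshot_dir: str = "snapshots", update: bool = False
) -> None:
    """Simplified model of pytest-snapshot's Snapshot.assert_match."""
    path = Path(snapshot_dir) / name
    if update:  # corresponds to running pytest with --snapshot-update
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(value)
    else:
        assert path.read_text() == value, f"value does not match snapshot {path}"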
4
autogpt_platform/backend/snapshots/adm_add_cred_ok
Normal file
4
autogpt_platform/backend/snapshots/adm_add_cred_ok
Normal file
@@ -0,0 +1,4 @@
|
||||
{
|
||||
"new_balance": 1500,
|
||||
"transaction_key": "transaction-123-uuid"
|
||||
}
|
||||
9
autogpt_platform/backend/snapshots/adm_usr_hist_empty
Normal file
9
autogpt_platform/backend/snapshots/adm_usr_hist_empty
Normal file
@@ -0,0 +1,9 @@
|
||||
{
|
||||
"history": [],
|
||||
"pagination": {
|
||||
"current_page": 1,
|
||||
"page_size": 20,
|
||||
"total_items": 0,
|
||||
"total_pages": 0
|
||||
}
|
||||
}
|
||||
28
autogpt_platform/backend/snapshots/adm_usr_hist_filt
Normal file
28
autogpt_platform/backend/snapshots/adm_usr_hist_filt
Normal file
@@ -0,0 +1,28 @@
|
||||
{
|
||||
"history": [
|
||||
{
|
||||
"admin_email": null,
|
||||
"amount": 500,
|
||||
"current_balance": 0,
|
||||
"description": null,
|
||||
"extra_data": null,
|
||||
"reason": "Top up",
|
||||
"running_balance": 0,
|
||||
"transaction_key": "",
|
||||
"transaction_time": "0001-01-01T00:00:00Z",
|
||||
"transaction_type": "TOP_UP",
|
||||
"usage_execution_id": null,
|
||||
"usage_graph_id": null,
|
||||
"usage_node_count": 0,
|
||||
"usage_start_time": "9999-12-31T23:59:59.999999Z",
|
||||
"user_email": "test@example.com",
|
||||
"user_id": "user-3"
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"current_page": 1,
|
||||
"page_size": 10,
|
||||
"total_items": 1,
|
||||
"total_pages": 1
|
||||
}
|
||||
}
|
||||
46
autogpt_platform/backend/snapshots/adm_usr_hist_ok
Normal file
46
autogpt_platform/backend/snapshots/adm_usr_hist_ok
Normal file
@@ -0,0 +1,46 @@
|
||||
{
|
||||
"history": [
|
||||
{
|
||||
"admin_email": null,
|
||||
"amount": 1000,
|
||||
"current_balance": 0,
|
||||
"description": null,
|
||||
"extra_data": null,
|
||||
"reason": "Initial grant",
|
||||
"running_balance": 0,
|
||||
"transaction_key": "",
|
||||
"transaction_time": "0001-01-01T00:00:00Z",
|
||||
"transaction_type": "GRANT",
|
||||
"usage_execution_id": null,
|
||||
"usage_graph_id": null,
|
||||
"usage_node_count": 0,
|
||||
"usage_start_time": "9999-12-31T23:59:59.999999Z",
|
||||
"user_email": "user1@example.com",
|
||||
"user_id": "user-1"
|
||||
},
|
||||
{
|
||||
"admin_email": null,
|
||||
"amount": -50,
|
||||
"current_balance": 0,
|
||||
"description": null,
|
||||
"extra_data": null,
|
||||
"reason": "Usage",
|
||||
"running_balance": 0,
|
||||
"transaction_key": "",
|
||||
"transaction_time": "0001-01-01T00:00:00Z",
|
||||
"transaction_type": "USAGE",
|
||||
"usage_execution_id": null,
|
||||
"usage_graph_id": null,
|
||||
"usage_node_count": 0,
|
||||
"usage_start_time": "9999-12-31T23:59:59.999999Z",
|
||||
"user_email": "user2@example.com",
|
||||
"user_id": "user-2"
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"current_page": 1,
|
||||
"page_size": 20,
|
||||
"total_items": 2,
|
||||
"total_pages": 1
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,4 @@
|
||||
{
|
||||
"new_balance": 1500,
|
||||
"transaction_key": "transaction-123-uuid"
|
||||
}
|
||||
27
autogpt_platform/backend/snapshots/agt_details
Normal file
27
autogpt_platform/backend/snapshots/agt_details
Normal file
@@ -0,0 +1,27 @@
|
||||
{
|
||||
"store_listing_version_id": "test-version-id",
|
||||
"slug": "test-agent",
|
||||
"agent_name": "Test Agent",
|
||||
"agent_video": "video.mp4",
|
||||
"agent_image": [
|
||||
"image1.jpg",
|
||||
"image2.jpg"
|
||||
],
|
||||
"creator": "creator1",
|
||||
"creator_avatar": "avatar1.jpg",
|
||||
"sub_heading": "Test agent subheading",
|
||||
"description": "Test agent description",
|
||||
"categories": [
|
||||
"category1",
|
||||
"category2"
|
||||
],
|
||||
"runs": 100,
|
||||
"rating": 4.5,
|
||||
"versions": [
|
||||
"1.0.0",
|
||||
"1.1.0"
|
||||
],
|
||||
"last_updated": "2023-01-01T00:00:00",
|
||||
"active_version_id": null,
|
||||
"has_approved_version": false
|
||||
}
|
||||
21
autogpt_platform/backend/snapshots/agts_by_creator
Normal file
21
autogpt_platform/backend/snapshots/agts_by_creator
Normal file
@@ -0,0 +1,21 @@
|
||||
{
|
||||
"agents": [
|
||||
{
|
||||
"slug": "creator-agent",
|
||||
"agent_name": "Creator Agent",
|
||||
"agent_image": "agent.jpg",
|
||||
"creator": "specific-creator",
|
||||
"creator_avatar": "avatar.jpg",
|
||||
"sub_heading": "Creator agent subheading",
|
||||
"description": "Creator agent description",
|
||||
"runs": 50,
|
||||
"rating": 4.0
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"total_items": 1,
|
||||
"total_pages": 1,
|
||||
"current_page": 1,
|
||||
"page_size": 20
|
||||
}
|
||||
}
|
||||
21
autogpt_platform/backend/snapshots/agts_category
Normal file
21
autogpt_platform/backend/snapshots/agts_category
Normal file
@@ -0,0 +1,21 @@
|
||||
{
|
||||
"agents": [
|
||||
{
|
||||
"slug": "category-agent",
|
||||
"agent_name": "Category Agent",
|
||||
"agent_image": "category.jpg",
|
||||
"creator": "creator1",
|
||||
"creator_avatar": "avatar1.jpg",
|
||||
"sub_heading": "Category agent subheading",
|
||||
"description": "Category agent description",
|
||||
"runs": 60,
|
||||
"rating": 4.1
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"total_items": 1,
|
||||
"total_pages": 1,
|
||||
"current_page": 1,
|
||||
"page_size": 20
|
||||
}
|
||||
}
|
||||
65
autogpt_platform/backend/snapshots/agts_pagination
Normal file
65
autogpt_platform/backend/snapshots/agts_pagination
Normal file
@@ -0,0 +1,65 @@
|
||||
{
|
||||
"agents": [
|
||||
{
|
||||
"slug": "agent-0",
|
||||
"agent_name": "Agent 0",
|
||||
"agent_image": "agent0.jpg",
|
||||
"creator": "creator1",
|
||||
"creator_avatar": "avatar1.jpg",
|
||||
"sub_heading": "Agent 0 subheading",
|
||||
"description": "Agent 0 description",
|
||||
"runs": 0,
|
||||
"rating": 4.0
|
||||
},
|
||||
{
|
||||
"slug": "agent-1",
|
||||
"agent_name": "Agent 1",
|
||||
"agent_image": "agent1.jpg",
|
||||
"creator": "creator1",
|
||||
"creator_avatar": "avatar1.jpg",
|
||||
"sub_heading": "Agent 1 subheading",
|
||||
"description": "Agent 1 description",
|
||||
"runs": 10,
|
||||
"rating": 4.0
|
||||
},
|
||||
{
|
||||
"slug": "agent-2",
|
||||
"agent_name": "Agent 2",
|
||||
"agent_image": "agent2.jpg",
|
||||
"creator": "creator1",
|
||||
"creator_avatar": "avatar1.jpg",
|
||||
"sub_heading": "Agent 2 subheading",
|
||||
"description": "Agent 2 description",
|
||||
"runs": 20,
|
||||
"rating": 4.0
|
||||
},
|
||||
{
|
||||
"slug": "agent-3",
|
||||
"agent_name": "Agent 3",
|
||||
"agent_image": "agent3.jpg",
|
||||
"creator": "creator1",
|
||||
"creator_avatar": "avatar1.jpg",
|
||||
"sub_heading": "Agent 3 subheading",
|
||||
"description": "Agent 3 description",
|
||||
"runs": 30,
|
||||
"rating": 4.0
|
||||
},
|
||||
{
|
||||
"slug": "agent-4",
|
||||
"agent_name": "Agent 4",
|
||||
"agent_image": "agent4.jpg",
|
||||
"creator": "creator1",
|
||||
"creator_avatar": "avatar1.jpg",
|
||||
"sub_heading": "Agent 4 subheading",
|
||||
"description": "Agent 4 description",
|
||||
"runs": 40,
|
||||
"rating": 4.0
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"total_items": 15,
|
||||
"total_pages": 3,
|
||||
"current_page": 2,
|
||||
"page_size": 5
|
||||
}
|
||||
}
|
||||
21
autogpt_platform/backend/snapshots/agts_search
Normal file
21
autogpt_platform/backend/snapshots/agts_search
Normal file
@@ -0,0 +1,21 @@
|
||||
{
|
||||
"agents": [
|
||||
{
|
||||
"slug": "search-agent",
|
||||
"agent_name": "Search Agent",
|
||||
"agent_image": "search.jpg",
|
||||
"creator": "creator1",
|
||||
"creator_avatar": "avatar1.jpg",
|
||||
"sub_heading": "Search agent subheading",
|
||||
"description": "Specific search term description",
|
||||
"runs": 75,
|
||||
"rating": 4.2
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"total_items": 1,
|
||||
"total_pages": 1,
|
||||
"current_page": 1,
|
||||
"page_size": 20
|
||||
}
|
||||
}
|
||||
21
autogpt_platform/backend/snapshots/agts_sorted
Normal file
21
autogpt_platform/backend/snapshots/agts_sorted
Normal file
@@ -0,0 +1,21 @@
|
||||
{
|
||||
"agents": [
|
||||
{
|
||||
"slug": "top-agent",
|
||||
"agent_name": "Top Agent",
|
||||
"agent_image": "top.jpg",
|
||||
"creator": "creator1",
|
||||
"creator_avatar": "avatar1.jpg",
|
||||
"sub_heading": "Top agent subheading",
|
||||
"description": "Top agent description",
|
||||
"runs": 1000,
|
||||
"rating": 5.0
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"total_items": 1,
|
||||
"total_pages": 1,
|
||||
"current_page": 1,
|
||||
"page_size": 20
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,30 @@
|
||||
{
|
||||
"analytics_id": "analytics-complex-uuid",
|
||||
"logged_data": {
|
||||
"agent_id": "agent_123",
|
||||
"blocks_used": [
|
||||
{
|
||||
"block_id": "llm_block",
|
||||
"count": 3
|
||||
},
|
||||
{
|
||||
"block_id": "http_block",
|
||||
"count": 5
|
||||
},
|
||||
{
|
||||
"block_id": "code_block",
|
||||
"count": 2
|
||||
}
|
||||
],
|
||||
"duration_ms": 3500,
|
||||
"errors": [],
|
||||
"execution_id": "exec_456",
|
||||
"metadata": {
|
||||
"environment": "production",
|
||||
"trigger": "manual",
|
||||
"user_tier": "premium"
|
||||
},
|
||||
"nodes_executed": 15,
|
||||
"status": "completed"
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,3 @@
|
||||
{
|
||||
"analytics_id": "analytics-789-uuid"
|
||||
}
|
||||
@@ -0,0 +1,3 @@
|
||||
{
|
||||
"metric_id": "metric-123-uuid"
|
||||
}
|
||||
@@ -0,0 +1,3 @@
|
||||
{
|
||||
"metric_id": "metric-456-uuid"
|
||||
}
|
||||
3
autogpt_platform/backend/snapshots/auth_email
Normal file
3
autogpt_platform/backend/snapshots/auth_email
Normal file
@@ -0,0 +1,3 @@
|
||||
{
|
||||
"email": "newemail@example.com"
|
||||
}
|
||||
5
autogpt_platform/backend/snapshots/auth_user
Normal file
5
autogpt_platform/backend/snapshots/auth_user
Normal file
@@ -0,0 +1,5 @@
|
||||
{
|
||||
"email": "test@example.com",
|
||||
"id": "test-user-id",
|
||||
"name": "Test User"
|
||||
}
|
||||
14
autogpt_platform/backend/snapshots/blks_all
Normal file
14
autogpt_platform/backend/snapshots/blks_all
Normal file
@@ -0,0 +1,14 @@
|
||||
[
|
||||
{
|
||||
"costs": [
|
||||
{
|
||||
"cost": 10,
|
||||
"type": "credit"
|
||||
}
|
||||
],
|
||||
"description": "A test block",
|
||||
"disabled": false,
|
||||
"id": "test-block",
|
||||
"name": "Test Block"
|
||||
}
|
||||
]
|
||||
12
autogpt_platform/backend/snapshots/blks_exec
Normal file
12
autogpt_platform/backend/snapshots/blks_exec
Normal file
@@ -0,0 +1,12 @@
|
||||
{
|
||||
"output1": [
|
||||
{
|
||||
"data": "result1"
|
||||
}
|
||||
],
|
||||
"output2": [
|
||||
{
|
||||
"data": "result2"
|
||||
}
|
||||
]
|
||||
}
|
||||
16
autogpt_platform/backend/snapshots/creator_details
Normal file
16
autogpt_platform/backend/snapshots/creator_details
Normal file
@@ -0,0 +1,16 @@
|
||||
{
|
||||
"name": "Test User",
|
||||
"username": "creator1",
|
||||
"description": "Test creator description",
|
||||
"links": [
|
||||
"link1.com",
|
||||
"link2.com"
|
||||
],
|
||||
"avatar_url": "avatar.jpg",
|
||||
"agent_rating": 4.8,
|
||||
"agent_runs": 1000,
|
||||
"top_categories": [
|
||||
"category1",
|
||||
"category2"
|
||||
]
|
||||
}
|
||||
60
autogpt_platform/backend/snapshots/creators_pagination
Normal file
60
autogpt_platform/backend/snapshots/creators_pagination
Normal file
@@ -0,0 +1,60 @@
|
||||
{
|
||||
"creators": [
|
||||
{
|
||||
"name": "Creator 0",
|
||||
"username": "creator0",
|
||||
"description": "Creator 0 description",
|
||||
"avatar_url": "avatar0.jpg",
|
||||
"num_agents": 1,
|
||||
"agent_rating": 4.5,
|
||||
"agent_runs": 100,
|
||||
"is_featured": false
|
||||
},
|
||||
{
|
||||
"name": "Creator 1",
|
||||
"username": "creator1",
|
||||
"description": "Creator 1 description",
|
||||
"avatar_url": "avatar1.jpg",
|
||||
"num_agents": 1,
|
||||
"agent_rating": 4.5,
|
||||
"agent_runs": 100,
|
||||
"is_featured": false
|
||||
},
|
||||
{
|
||||
"name": "Creator 2",
|
||||
"username": "creator2",
|
||||
"description": "Creator 2 description",
|
||||
"avatar_url": "avatar2.jpg",
|
||||
"num_agents": 1,
|
||||
"agent_rating": 4.5,
|
||||
"agent_runs": 100,
|
||||
"is_featured": false
|
||||
},
|
||||
{
|
||||
"name": "Creator 3",
|
||||
"username": "creator3",
|
||||
"description": "Creator 3 description",
|
||||
"avatar_url": "avatar3.jpg",
|
||||
"num_agents": 1,
|
||||
"agent_rating": 4.5,
|
||||
"agent_runs": 100,
|
||||
"is_featured": false
|
||||
},
|
||||
{
|
||||
"name": "Creator 4",
|
||||
"username": "creator4",
|
||||
"description": "Creator 4 description",
|
||||
"avatar_url": "avatar4.jpg",
|
||||
"num_agents": 1,
|
||||
"agent_rating": 4.5,
|
||||
"agent_runs": 100,
|
||||
"is_featured": false
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"total_items": 15,
|
||||
"total_pages": 3,
|
||||
"current_page": 2,
|
||||
"page_size": 5
|
||||
}
|
||||
}
|
||||
3
autogpt_platform/backend/snapshots/cred_bal
Normal file
3
autogpt_platform/backend/snapshots/cred_bal
Normal file
@@ -0,0 +1,3 @@
|
||||
{
|
||||
"credits": 1000
|
||||
}
|
||||
4
autogpt_platform/backend/snapshots/cred_topup_cfg
Normal file
4
autogpt_platform/backend/snapshots/cred_topup_cfg
Normal file
@@ -0,0 +1,4 @@
|
||||
{
|
||||
"amount": 500,
|
||||
"threshold": 100
|
||||
}
|
||||
3
autogpt_platform/backend/snapshots/cred_topup_req
Normal file
3
autogpt_platform/backend/snapshots/cred_topup_req
Normal file
@@ -0,0 +1,3 @@
|
||||
{
|
||||
"checkout_url": "https://checkout.example.com/session123"
|
||||
}
|
||||
9
autogpt_platform/backend/snapshots/def_agts
Normal file
9
autogpt_platform/backend/snapshots/def_agts
Normal file
@@ -0,0 +1,9 @@
|
||||
{
|
||||
"agents": [],
|
||||
"pagination": {
|
||||
"total_items": 0,
|
||||
"total_pages": 0,
|
||||
"current_page": 0,
|
||||
"page_size": 10
|
||||
}
|
||||
}
|
||||
9
autogpt_platform/backend/snapshots/def_creators
Normal file
9
autogpt_platform/backend/snapshots/def_creators
Normal file
@@ -0,0 +1,9 @@
|
||||
{
|
||||
"creators": [],
|
||||
"pagination": {
|
||||
"total_items": 0,
|
||||
"total_pages": 0,
|
||||
"current_page": 0,
|
||||
"page_size": 10
|
||||
}
|
||||
}
|
||||
21
autogpt_platform/backend/snapshots/feat_agts
Normal file
21
autogpt_platform/backend/snapshots/feat_agts
Normal file
@@ -0,0 +1,21 @@
|
||||
{
|
||||
"agents": [
|
||||
{
|
||||
"slug": "featured-agent",
|
||||
"agent_name": "Featured Agent",
|
||||
"agent_image": "featured.jpg",
|
||||
"creator": "creator1",
|
||||
"creator_avatar": "avatar1.jpg",
|
||||
"sub_heading": "Featured agent subheading",
|
||||
"description": "Featured agent description",
|
||||
"runs": 100,
|
||||
"rating": 4.5
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"total_items": 1,
|
||||
"total_pages": 1,
|
||||
"current_page": 1,
|
||||
"page_size": 20
|
||||
}
|
||||
}
|
||||
20
autogpt_platform/backend/snapshots/grph_in_schm
Normal file
20
autogpt_platform/backend/snapshots/grph_in_schm
Normal file
@@ -0,0 +1,20 @@
|
||||
{
|
||||
"properties": {
|
||||
"in_key_a": {
|
||||
"advanced": true,
|
||||
"default": "A",
|
||||
"secret": false,
|
||||
"title": "Key A"
|
||||
},
|
||||
"in_key_b": {
|
||||
"advanced": false,
|
||||
"secret": false,
|
||||
"title": "in_key_b"
|
||||
}
|
||||
},
|
||||
"required": [
|
||||
"in_key_b"
|
||||
],
|
||||
"title": "ExpectedInputSchema",
|
||||
"type": "object"
|
||||
}
|
||||
15
autogpt_platform/backend/snapshots/grph_out_schm
Normal file
15
autogpt_platform/backend/snapshots/grph_out_schm
Normal file
@@ -0,0 +1,15 @@
|
||||
{
|
||||
"properties": {
|
||||
"out_key": {
|
||||
"advanced": false,
|
||||
"description": "This is an output key",
|
||||
"secret": false,
|
||||
"title": "out_key"
|
||||
}
|
||||
},
|
||||
"required": [
|
||||
"out_key"
|
||||
],
|
||||
"title": "ExpectedOutputSchema",
|
||||
"type": "object"
|
||||
}
|
||||
29
autogpt_platform/backend/snapshots/grph_single
Normal file
29
autogpt_platform/backend/snapshots/grph_single
Normal file
@@ -0,0 +1,29 @@
|
||||
{
|
||||
"credentials_input_schema": {
|
||||
"properties": {},
|
||||
"title": "TestGraphCredentialsInputSchema",
|
||||
"type": "object"
|
||||
},
|
||||
"description": "A test graph",
|
||||
"forked_from_id": null,
|
||||
"forked_from_version": null,
|
||||
"has_webhook_trigger": false,
|
||||
"id": "graph-123",
|
||||
"input_schema": {
|
||||
"properties": {},
|
||||
"required": [],
|
||||
"type": "object"
|
||||
},
|
||||
"is_active": true,
|
||||
"links": [],
|
||||
"name": "Test Graph",
|
||||
"nodes": [],
|
||||
"output_schema": {
|
||||
"properties": {},
|
||||
"required": [],
|
||||
"type": "object"
|
||||
},
|
||||
"sub_graphs": [],
|
||||
"user_id": "test-user-id",
|
||||
"version": 1
|
||||
}
|
||||
17
autogpt_platform/backend/snapshots/grph_struct
Normal file
17
autogpt_platform/backend/snapshots/grph_struct
Normal file
@@ -0,0 +1,17 @@
|
||||
{
|
||||
"description": "Test graph",
|
||||
"link_structure": [
|
||||
{
|
||||
"sink_name": "name",
|
||||
"source_name": "output"
|
||||
}
|
||||
],
|
||||
"links_count": 1,
|
||||
"name": "TestGraph",
|
||||
"node_blocks": [
|
||||
"1ff065e9-88e8-4358-9d82-8dc91f622ba9",
|
||||
"c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
|
||||
"1ff065e9-88e8-4358-9d82-8dc91f622ba9"
|
||||
],
|
||||
"nodes_count": 3
|
||||
}
|
||||
31
autogpt_platform/backend/snapshots/grphs_all
Normal file
@@ -0,0 +1,31 @@
+[
+  {
+    "credentials_input_schema": {
+      "properties": {},
+      "title": "TestGraphCredentialsInputSchema",
+      "type": "object"
+    },
+    "description": "A test graph",
+    "forked_from_id": null,
+    "forked_from_version": null,
+    "has_webhook_trigger": false,
+    "id": "graph-123",
+    "input_schema": {
+      "properties": {},
+      "required": [],
+      "type": "object"
+    },
+    "is_active": true,
+    "links": [],
+    "name": "Test Graph",
+    "nodes": [],
+    "output_schema": {
+      "properties": {},
+      "required": [],
+      "type": "object"
+    },
+    "sub_graphs": [],
+    "user_id": "test-user-id",
+    "version": 1
+  }
+]
3
autogpt_platform/backend/snapshots/grphs_del
Normal file
@@ -0,0 +1,3 @@
+{
+  "version_counts": 3
+}
48
autogpt_platform/backend/snapshots/lib_agts_search
Normal file
@@ -0,0 +1,48 @@
+{
+  "agents": [
+    {
+      "id": "test-agent-1",
+      "graph_id": "test-agent-1",
+      "graph_version": 1,
+      "image_url": null,
+      "creator_name": "Test Creator",
+      "creator_image_url": "",
+      "status": "COMPLETED",
+      "updated_at": "2023-01-01T00:00:00",
+      "name": "Test Agent 1",
+      "description": "Test Description 1",
+      "input_schema": {
+        "type": "object",
+        "properties": {}
+      },
+      "new_output": false,
+      "can_access_graph": true,
+      "is_latest_version": true
+    },
+    {
+      "id": "test-agent-2",
+      "graph_id": "test-agent-2",
+      "graph_version": 1,
+      "image_url": null,
+      "creator_name": "Test Creator",
+      "creator_image_url": "",
+      "status": "COMPLETED",
+      "updated_at": "2023-01-01T00:00:00",
+      "name": "Test Agent 2",
+      "description": "Test Description 2",
+      "input_schema": {
+        "type": "object",
+        "properties": {}
+      },
+      "new_output": false,
+      "can_access_graph": false,
+      "is_latest_version": true
+    }
+  ],
+  "pagination": {
+    "total_items": 2,
+    "total_pages": 1,
+    "current_page": 1,
+    "page_size": 50
+  }
+}
30
autogpt_platform/backend/snapshots/log_anlyt_cplx
Normal file
@@ -0,0 +1,30 @@
+{
+  "analytics_id": "analytics-complex-uuid",
+  "logged_data": {
+    "agent_id": "agent_123",
+    "blocks_used": [
+      {
+        "block_id": "llm_block",
+        "count": 3
+      },
+      {
+        "block_id": "http_block",
+        "count": 5
+      },
+      {
+        "block_id": "code_block",
+        "count": 2
+      }
+    ],
+    "duration_ms": 3500,
+    "errors": [],
+    "execution_id": "exec_456",
+    "metadata": {
+      "environment": "production",
+      "trigger": "manual",
+      "user_tier": "premium"
+    },
+    "nodes_executed": 15,
+    "status": "completed"
+  }
+}
3
autogpt_platform/backend/snapshots/log_anlyt_ok
Normal file
@@ -0,0 +1,3 @@
+{
+  "analytics_id": "analytics-789-uuid"
+}
3
autogpt_platform/backend/snapshots/log_metric_ok
Normal file
@@ -0,0 +1,3 @@
+{
+  "metric_id": "metric-123-uuid"
+}
3
autogpt_platform/backend/snapshots/log_metric_vals
Normal file
@@ -0,0 +1,3 @@
+{
+  "metric_id": "metric-456-uuid"
+}
5
autogpt_platform/backend/snapshots/otto_empty
Normal file
@@ -0,0 +1,5 @@
+{
+  "answer": "Welcome! How can I help you?",
+  "documents": [],
+  "success": true
+}
5
autogpt_platform/backend/snapshots/otto_err
Normal file
@@ -0,0 +1,5 @@
+{
+  "answer": "An error occurred while processing your request.",
+  "documents": [],
+  "success": false
+}
10
autogpt_platform/backend/snapshots/otto_grph
Normal file
@@ -0,0 +1,10 @@
+{
+  "answer": "Here's information about your graph.",
+  "documents": [
+    {
+      "relevance_score": 0.92,
+      "url": "https://example.com/graph-doc"
+    }
+  ],
+  "success": true
+}
14
autogpt_platform/backend/snapshots/otto_ok
Normal file
@@ -0,0 +1,14 @@
+{
+  "answer": "This is Otto's response to your query.",
+  "documents": [
+    {
+      "relevance_score": 0.95,
+      "url": "https://example.com/doc1"
+    },
+    {
+      "relevance_score": 0.87,
+      "url": "https://example.com/doc2"
+    }
+  ],
+  "success": true
+}
7
autogpt_platform/backend/snapshots/sub
Normal file
@@ -0,0 +1,7 @@
+{
+  "channel": "3e53486c-cf57-477e-ba2a-cb02dc828e1a|graph_exec#test-graph-exec-1",
+  "data": null,
+  "error": null,
+  "method": "subscribe_graph_execution",
+  "success": true
+}
9
autogpt_platform/backend/snapshots/sub_pagination
Normal file
@@ -0,0 +1,9 @@
+{
+  "submissions": [],
+  "pagination": {
+    "total_items": 10,
+    "total_pages": 2,
+    "current_page": 2,
+    "page_size": 5
+  }
+}
32
autogpt_platform/backend/snapshots/sub_success
Normal file
@@ -0,0 +1,32 @@
+{
+  "submissions": [
+    {
+      "agent_id": "test-agent-id",
+      "agent_version": 1,
+      "name": "Test Agent",
+      "sub_heading": "Test agent subheading",
+      "slug": "test-agent",
+      "description": "Test agent description",
+      "image_urls": [
+        "test.jpg"
+      ],
+      "date_submitted": "2023-01-01T00:00:00",
+      "status": "APPROVED",
+      "runs": 50,
+      "rating": 4.2,
+      "store_listing_version_id": null,
+      "version": null,
+      "reviewer_id": null,
+      "review_comments": null,
+      "internal_comments": null,
+      "reviewed_at": null,
+      "changes_summary": null
+    }
+  ],
+  "pagination": {
+    "total_items": 1,
+    "total_pages": 1,
+    "current_page": 1,
+    "page_size": 20
+  }
+}
7
autogpt_platform/backend/snapshots/unsub
Normal file
@@ -0,0 +1,7 @@
+{
+  "channel": "3e53486c-cf57-477e-ba2a-cb02dc828e1a|graph_exec#test-graph-exec-1",
+  "data": null,
+  "error": null,
+  "method": "unsubscribe",
+  "success": true
+}
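Before the test diffs below, a note on mechanics: each snapshot file above is written and compared by the pytest-snapshot plugin. A minimal sketch of the pattern the tests use (the test name and response body here are hypothetical; `snapshot_dir` and `assert_match` are the plugin calls visible in the diffs that follow):

```python
import json

from pytest_snapshot.plugin import Snapshot


def test_log_metric_response(snapshot: Snapshot):
    # Hypothetical response body; the real tests capture this from the API client.
    response_data = {"metric_id": "metric-123-uuid"}

    # Serialize deterministically (sort_keys + indent) so the snapshot
    # is byte-stable across runs, then compare against snapshots/log_metric_ok.
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response_data, indent=2, sort_keys=True), "log_metric_ok"
    )
```

Running with `--snapshot-update` (re)writes `snapshots/log_metric_ok`; without it, the test fails if the serialized response differs from the stored file.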
@@ -1,9 +1,11 @@
+import json
 from typing import Any
 from uuid import UUID

 import autogpt_libs.auth.models
 import fastapi.exceptions
 import pytest
+from pytest_snapshot.plugin import Snapshot

 import backend.server.v2.store.model as store
 from backend.blocks.basic import StoreValueBlock
@@ -18,7 +20,7 @@ from backend.util.test import SpinTestServer


 @pytest.mark.asyncio(loop_scope="session")
-async def test_graph_creation(server: SpinTestServer):
+async def test_graph_creation(server: SpinTestServer, snapshot: Snapshot):
     """
     Test the creation of a graph with nodes and links.

@@ -70,9 +72,27 @@
     assert links[0].source_id in {nodes[0].id, nodes[1].id, nodes[2].id}
     assert links[0].sink_id in {nodes[0].id, nodes[1].id, nodes[2].id}

+    # Create a serializable version of the graph for snapshot testing
+    # Remove dynamic IDs to make snapshots reproducible
+    graph_data = {
+        "name": created_graph.name,
+        "description": created_graph.description,
+        "nodes_count": len(created_graph.nodes),
+        "links_count": len(created_graph.links),
+        "node_blocks": [node.block_id for node in created_graph.nodes],
+        "link_structure": [
+            {"source_name": link.source_name, "sink_name": link.sink_name}
+            for link in created_graph.links
+        ],
+    }
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(
+        json.dumps(graph_data, indent=2, sort_keys=True), "grph_struct"
+    )
+

 @pytest.mark.asyncio(loop_scope="session")
-async def test_get_input_schema(server: SpinTestServer):
+async def test_get_input_schema(server: SpinTestServer, snapshot: Snapshot):
     """
     Test the get_input_schema method of a created graph.

@@ -162,10 +182,22 @@
     input_schema["title"] = "ExpectedInputSchema"
     assert input_schema == ExpectedInputSchema.jsonschema()

+    # Add snapshot testing for the schemas
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(
+        json.dumps(input_schema, indent=2, sort_keys=True), "grph_in_schm"
+    )
+
     output_schema = created_graph.output_schema
     output_schema["title"] = "ExpectedOutputSchema"
     assert output_schema == ExpectedOutputSchema.jsonschema()
+
+    # Add snapshot testing for the output schema
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(
+        json.dumps(output_schema, indent=2, sort_keys=True), "grph_out_schm"
+    )


 @pytest.mark.asyncio(loop_scope="session")
 async def test_clean_graph(server: SpinTestServer):
@@ -1,8 +1,10 @@
+import json
 from typing import cast
 from unittest.mock import AsyncMock

 import pytest
 from fastapi import WebSocket, WebSocketDisconnect
+from pytest_snapshot.plugin import Snapshot

 from backend.data.user import DEFAULT_USER_ID
 from backend.server.conn_manager import ConnectionManager
@@ -27,7 +29,7 @@ def mock_manager() -> AsyncMock:

 @pytest.mark.asyncio
 async def test_websocket_router_subscribe(
-    mock_websocket: AsyncMock, mock_manager: AsyncMock
+    mock_websocket: AsyncMock, mock_manager: AsyncMock, snapshot: Snapshot
 ) -> None:
     mock_websocket.receive_text.side_effect = [
         WSMessage(
@@ -56,12 +58,19 @@
         in mock_websocket.send_text.call_args[0][0]
     )
     assert '"success":true' in mock_websocket.send_text.call_args[0][0]
+
+    # Capture and snapshot the WebSocket response message
+    sent_message = mock_websocket.send_text.call_args[0][0]
+    parsed_message = json.loads(sent_message)
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(parsed_message, indent=2, sort_keys=True), "sub")
+
     mock_manager.disconnect_socket.assert_called_once_with(mock_websocket)


 @pytest.mark.asyncio
 async def test_websocket_router_unsubscribe(
-    mock_websocket: AsyncMock, mock_manager: AsyncMock
+    mock_websocket: AsyncMock, mock_manager: AsyncMock, snapshot: Snapshot
 ) -> None:
     mock_websocket.receive_text.side_effect = [
         WSMessage(
@@ -87,6 +96,13 @@
     mock_websocket.send_text.assert_called_once()
     assert '"method":"unsubscribe"' in mock_websocket.send_text.call_args[0][0]
     assert '"success":true' in mock_websocket.send_text.call_args[0][0]
+
+    # Capture and snapshot the WebSocket response message
+    sent_message = mock_websocket.send_text.call_args[0][0]
+    parsed_message = json.loads(sent_message)
+    snapshot.snapshot_dir = "snapshots"
+    snapshot.assert_match(json.dumps(parsed_message, indent=2, sort_keys=True), "unsub")
+
     mock_manager.disconnect_socket.assert_called_once_with(mock_websocket)
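An aside on the `sub`/`unsub` snapshots above: the `channel` value appears to be composed from the user ID and the graph-execution ID. A hedged reconstruction, inferred purely from the snapshot data rather than from the ConnectionManager source:

```python
def graph_exec_channel(user_id: str, graph_exec_id: str) -> str:
    # Shape observed in the snapshots: "<user_id>|graph_exec#<graph_exec_id>"
    return f"{user_id}|graph_exec#{graph_exec_id}"


assert (
    graph_exec_channel("3e53486c-cf57-477e-ba2a-cb02dc828e1a", "test-graph-exec-1")
    == "3e53486c-cf57-477e-ba2a-cb02dc828e1a|graph_exec#test-graph-exec-1"
)
```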
@@ -342,6 +342,12 @@ To run the tests:
 poetry run test
 ```

+To update stored snapshots after intentional API changes:
+
+```sh
+pytest --snapshot-update
+```
+
 ## Project Outline

 The current project has the following main modules:
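Worth noting alongside the doc hunk above: `--snapshot-update` rewrites every snapshot collected in the run, so it is safest to narrow the run to the tests you actually changed first, for example `poetry run pytest analytics_test.py --snapshot-update` (the test path here is assumed for illustration, not taken from the diff).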
@@ -54,7 +54,9 @@ nav:
   - Frontend:
       - Readme: https://github.com/Significant-Gravitas/AutoGPT/blob/master/classic/frontend/README.md

-  - Contribute: contribute/index.md
+  - Contribute:
+      - Introduction: contribute/index.md
+      - Testing: autogpt_platform/backend/TESTING.md

 # - Challenges:
 #   - Introduction: challenges/introduction.md