# Backend Testing Guide

This guide covers testing practices for the AutoGPT Platform backend, with a focus on snapshot testing for API endpoints.

## Table of Contents

- [Overview](#overview)
- [Running Tests](#running-tests)
- [Snapshot Testing](#snapshot-testing)
- [Writing Tests for API Routes](#writing-tests-for-api-routes)
- [Best Practices](#best-practices)
- [CI/CD Integration](#cicd-integration)
- [Troubleshooting](#troubleshooting)

## Overview

The backend uses pytest for testing with the following key libraries:

- `pytest` - Test framework
- `pytest-asyncio` - Async test support
- `pytest-mock` - Mocking support
- `pytest-snapshot` - Snapshot testing for API responses

## Running Tests

### Run all tests

```bash
poetry run test
```

### Run a specific test file

```bash
poetry run pytest path/to/test_file.py
```

### Run with verbose output

```bash
poetry run pytest -v
```

### Run with coverage

```bash
poetry run pytest --cov=backend
```

## Snapshot Testing

Snapshot testing captures the output of your code and compares it against previously saved snapshots. This is particularly useful for testing API responses.

### How Snapshot Testing Works

1. First run: creates snapshot files in `snapshots/` directories (via `--snapshot-update`; see the example below)
2. Subsequent runs: compares output against the saved snapshots
3. Changes detected: the test fails if the output differs from its snapshot
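
A stored snapshot is just a plain-text file named after the snapshot, inside `snapshot_dir`. For the `endpoint_response` example later in this guide, `assert_match` writes `snapshots/endpoint_response` containing exactly the formatted JSON string it was given (values here are illustrative):

```json
{
  "items": [],
  "status": "success"
}
```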

### Creating/Updating Snapshots

When you first write a test or when the expected output changes:

```bash
poetry run pytest path/to/test.py --snapshot-update
```

⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
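
For example, scope the diff to your snapshot directories (the path is illustrative):

```bash
git diff -- path/to/snapshots/
```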

### Snapshot Test Example

```python
import json

from pytest_snapshot.plugin import Snapshot


def test_api_endpoint(snapshot: Snapshot):
    # `client` is a FastAPI TestClient; see "Writing Tests for API Routes" below
    response = client.get("/api/endpoint")

    # Snapshot the response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response.json(), indent=2, sort_keys=True),
        "endpoint_response"
    )
```

### Best Practices for Snapshots

1. **Use descriptive names**: `"user_list_response"`, not `"response1"`
2. **Sort JSON keys**: Ensures consistent snapshots
3. **Format JSON**: Use `indent=2` for readable diffs
4. **Exclude dynamic data**: Remove timestamps, IDs, and other values that change between runs

Example of excluding dynamic data:

```python
response_data = response.json()

# Remove dynamic fields for snapshot
response_data.pop("created_at", None)
response_data.pop("id", None)

snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(
    json.dumps(response_data, indent=2, sort_keys=True),
    "static_response_data"
)
```

## Writing Tests for API Routes

### Basic Structure

```python
import json

import fastapi
import fastapi.testclient
import pytest
from pytest_snapshot.plugin import Snapshot

from backend.server.v2.myroute import router

app = fastapi.FastAPI()
app.include_router(router)
client = fastapi.testclient.TestClient(app)


def test_endpoint_success(snapshot: Snapshot):
    response = client.get("/endpoint")
    assert response.status_code == 200

    # Test specific fields
    data = response.json()
    assert data["status"] == "success"

    # Snapshot the full response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(data, indent=2, sort_keys=True),
        "endpoint_success_response"
    )
```

### Testing with Authentication

```python
# `auth_middleware` and `get_user_id` must be the same dependency objects
# that the router under test actually uses.

def override_auth_middleware():
    return {"sub": "test-user-id"}


def override_get_user_id():
    return "test-user-id"


app.dependency_overrides[auth_middleware] = override_auth_middleware
app.dependency_overrides[get_user_id] = override_get_user_id
```
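
With the overrides in place, handlers resolve the stubbed user. A minimal sketch reusing `client` and the imports from the Basic Structure example (the endpoint path and response field are illustrative):

```python
def test_endpoint_as_authenticated_user(snapshot: Snapshot):
    response = client.get("/my-profile")
    assert response.status_code == 200

    # get_user_id resolved to the overridden value
    assert response.json()["user_id"] == "test-user-id"

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response.json(), indent=2, sort_keys=True),
        "profile_authenticated_response"
    )
```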

### Mocking External Services

```python
def test_external_api_call(mocker, snapshot):
    # Mock external service
    mock_response = {"external": "data"}
    mocker.patch(
        "backend.services.external_api.call",
        return_value=mock_response
    )

    response = client.post("/api/process")
    assert response.status_code == 200

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response.json(), indent=2, sort_keys=True),
        "process_with_external_response"
    )
```

## Best Practices

### 1. Test Organization

- Place tests next to the code: `routes.py` → `routes_test.py`
- Use descriptive test names: `test_create_user_with_invalid_email`
- Group related tests in classes when appropriate

### 2. Test Coverage

- Test the happy path and error cases
- Test edge cases (empty data, invalid formats)
- Test authentication and authorization

### 3. Snapshot Testing Guidelines

- Review all snapshot changes carefully
- Don't snapshot sensitive data
- Keep snapshots focused and minimal
- Update snapshots intentionally, not accidentally

### 4. Async Testing

- Use regular `def` for FastAPI TestClient tests
- Use `async def` with `@pytest.mark.asyncio` for testing async functions directly (see the sketch below)
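
A minimal, self-contained sketch of the direct-async style (the `double` coroutine stands in for real backend code):

```python
import pytest


async def double(x: int) -> int:
    # Stand-in for a real async function from the backend
    return x * 2


@pytest.mark.asyncio
async def test_double_directly():
    # Await the coroutine directly instead of going through TestClient
    assert await double(21) == 42
```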

### 5. Fixtures

Create reusable fixtures for common test data:

```python
@pytest.fixture
def sample_user():
    return {
        "email": "test@example.com",
        "name": "Test User"
    }


def test_create_user(sample_user, snapshot):
    response = client.post("/users", json=sample_user)
    # ... test implementation
```

## CI/CD Integration

The GitHub Actions workflow automatically runs tests on:

- Pull requests
- Pushes to the main branch

Snapshot tests work in CI because:

1. Snapshot files are committed to the repository
2. CI compares test output against the committed snapshots
3. The run fails if snapshots don't match
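
For example, after intentionally changing a response shape (the paths are illustrative):

```bash
poetry run pytest path/to/test.py --snapshot-update
git add path/to/snapshots/
git commit -m "Update snapshots for new response shape"
```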

## Troubleshooting

### Snapshot Mismatches

- Review the diff carefully
- If the changes are expected: `poetry run pytest --snapshot-update`
- If the changes are unexpected: fix the code causing the difference

### Async Test Issues

- Ensure async test functions use `@pytest.mark.asyncio`
- Use `AsyncMock` for mocking async functions (see the sketch below)
- FastAPI TestClient handles async routes automatically
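
A quick sketch of the `AsyncMock` pattern with pytest-mock (the patched path is hypothetical):

```python
from unittest.mock import AsyncMock


def test_process_with_async_dependency(mocker):
    # Awaiting the patched function now returns the stub value
    mocker.patch(
        "backend.services.external_api.call_async",
        new_callable=AsyncMock,
        return_value={"external": "data"},
    )

    response = client.post("/api/process")
    assert response.status_code == 200
```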

### Import Errors

- Check that all dependencies are in `pyproject.toml`
- Run `poetry install` to ensure dependencies are installed
- Verify import paths are correct

## Summary

Snapshot testing provides a powerful way to ensure API responses remain consistent. Combined with traditional assertions, it creates a robust test suite that catches regressions while remaining maintainable.

Remember: Good tests are as important as good code!