# Backend Testing Guide
This guide covers testing practices for the AutoGPT Platform backend, with a focus on snapshot testing for API endpoints.
## Overview
The backend uses pytest for testing with the following key libraries:
- `pytest` - Test framework
- `pytest-asyncio` - Async test support
- `pytest-mock` - Mocking support
- `pytest-snapshot` - Snapshot testing for API responses
## Running Tests
```bash
# Run all tests
poetry run test

# Run specific test file
poetry run pytest path/to/test_file.py

# Run with verbose output
poetry run pytest -v

# Run with coverage
poetry run pytest --cov=backend
```
## Snapshot Testing
Snapshot testing captures the output of your code and compares it against previously saved snapshots. This is particularly useful for testing API responses.
### How Snapshot Testing Works
- **First run**: Creates snapshot files in `snapshots/` directories
- **Subsequent runs**: Compares output against saved snapshots
- **Changes detected**: Test fails if output differs from snapshot
### Creating/Updating Snapshots
When you first write a test or when the expected output changes:
```bash
poetry run pytest path/to/test.py --snapshot-update
```
⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
### Snapshot Test Example
```python
import json

from pytest_snapshot.plugin import Snapshot


def test_api_endpoint(snapshot: Snapshot):
    response = client.get("/api/endpoint")

    # Snapshot the response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response.json(), indent=2, sort_keys=True),
        "endpoint_response",
    )
```
### Best Practices for Snapshots
- **Use descriptive names**: `"user_list_response"`, not `"response1"`
- **Sort JSON keys**: Ensures consistent snapshots
- **Format JSON**: Use `indent=2` for readable diffs
- **Exclude dynamic data**: Remove timestamps, IDs, etc. that change between runs
Example of excluding dynamic data:
```python
response_data = response.json()

# Remove dynamic fields for snapshot
response_data.pop("created_at", None)
response_data.pop("id", None)

snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(
    json.dumps(response_data, indent=2, sort_keys=True),
    "static_response_data",
)
```
## Writing Tests for API Routes
### Basic Structure
```python
import json

import fastapi
import fastapi.testclient
import pytest
from pytest_snapshot.plugin import Snapshot

from backend.server.v2.myroute import router

app = fastapi.FastAPI()
app.include_router(router)
client = fastapi.testclient.TestClient(app)


def test_endpoint_success(snapshot: Snapshot):
    response = client.get("/endpoint")
    assert response.status_code == 200

    # Test specific fields
    data = response.json()
    assert data["status"] == "success"

    # Snapshot the full response
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(data, indent=2, sort_keys=True),
        "endpoint_success_response",
    )
```
### Testing with Authentication
For the main API routes, which use JWT authentication, auth is provided by the `autogpt_libs.auth` module. If a test actually uses the `user_id`, the recommended approach is to mock the `get_jwt_payload` function, which underpins all of the higher-level auth dependencies used in the API (`requires_user`, `requires_admin_user`, `get_user_id`).

If the test doesn't need the `user_id` specifically, no mocking is necessary: auth is disabled during tests anyway (see `conftest.py`).
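If only a single test needs an authenticated user, you can also override the dependency inline rather than module-wide. A minimal sketch, reusing the `app`/`client` from the basic structure above; the payload shape (with `"sub"` carrying the user ID) is an assumption, so check `jwt_utils` and `conftest.py` for the real claims:

```python
from autogpt_libs.auth.jwt_utils import get_jwt_payload


def test_single_authenticated_call():
    # Assumed payload shape: "sub" carries the user ID
    app.dependency_overrides[get_jwt_payload] = lambda: {"sub": "test-user-id"}
    try:
        response = client.get("/endpoint")
        assert response.status_code == 200
    finally:
        # Always clean up so other tests aren't affected
        app.dependency_overrides.clear()
```

When every test in a module needs auth, prefer the global fixtures described next.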
#### Using Global Auth Fixtures
Two global auth fixtures are provided by `backend/server/conftest.py`:
- `mock_jwt_user` - Regular user with `test_user_id` (`"test-user-id"`)
- `mock_jwt_admin` - Admin user with `admin_user_id` (`"admin-user-id"`)
These provide the easiest way to set up authentication mocking in test modules:
```python
import fastapi
import fastapi.testclient
import pytest

from backend.server.v2.myroute import router

app = fastapi.FastAPI()
app.include_router(router)
client = fastapi.testclient.TestClient(app)


@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_user):
    """Setup auth overrides for all tests in this module"""
    from autogpt_libs.auth.jwt_utils import get_jwt_payload

    app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"]
    yield
    app.dependency_overrides.clear()
```
For admin-only endpoints, use `mock_jwt_admin` instead:
```python
@pytest.fixture(autouse=True)
def setup_app_auth(mock_jwt_admin):
    """Setup auth overrides for admin tests"""
    from autogpt_libs.auth.jwt_utils import get_jwt_payload

    app.dependency_overrides[get_jwt_payload] = mock_jwt_admin["get_jwt_payload"]
    yield
    app.dependency_overrides.clear()
```
The IDs are also available separately as fixtures (see the sketch after this list):
- `test_user_id`
- `admin_user_id`
- `target_user_id` (for admin <-> user operations)
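For example, a test can request `test_user_id` directly to assert on ownership. A sketch only: the `/my-data` route and its response shape are hypothetical, and it assumes the `setup_app_auth` fixture above is active:

```python
def test_returns_own_data(test_user_id: str):
    # With setup_app_auth active, this request is authenticated
    # as test_user_id
    response = client.get("/my-data")
    assert response.status_code == 200
    assert response.json()["user_id"] == test_user_id  # hypothetical field
```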
### Mocking External Services
```python
import json


def test_external_api_call(mocker, snapshot):
    # Mock external service
    mock_response = {"external": "data"}
    mocker.patch(
        "backend.services.external_api.call",
        return_value=mock_response,
    )

    response = client.post("/api/process")
    assert response.status_code == 200

    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response.json(), indent=2, sort_keys=True),
        "process_with_external_response",
    )
```
## Best Practices
### 1. Test Organization
- Place tests next to the code: `routes.py` → `routes_test.py`
- Use descriptive test names: `test_create_user_with_invalid_email`
- Group related tests in classes when appropriate
### 2. Test Coverage
- Test happy path and error cases (see the sketch after this list)
- Test edge cases (empty data, invalid formats)
- Test authentication and authorization
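An error-case sketch, reusing the `test_create_user_with_invalid_email` name from above; the `/users` route and the 422 status are assumptions about an endpoint that validates its payload with pydantic:

```python
def test_create_user_with_invalid_email():
    response = client.post("/users", json={"email": "not-an-email"})
    # FastAPI returns 422 when request validation fails
    assert response.status_code == 422
```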
### 3. Snapshot Testing Guidelines
- Review all snapshot changes carefully
- Don't snapshot sensitive data
- Keep snapshots focused and minimal
- Update snapshots intentionally, not accidentally
### 4. Async Testing
- Use regular `def` for FastAPI TestClient tests
- Use `async def` with `@pytest.mark.asyncio` for testing async functions directly (see the sketch below)
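A minimal sketch of testing an async function directly; the `double` helper is a stand-in for your own async code:

```python
import pytest


@pytest.mark.asyncio
async def test_async_function_directly():
    async def double(x: int) -> int:  # stand-in for a real async helper
        return x * 2

    assert await double(21) == 42
```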
### 5. Fixtures
#### Global Fixtures (`conftest.py`)
These fixtures are available globally from `conftest.py`:
- `mock_jwt_user` - Standard user authentication
- `mock_jwt_admin` - Admin user authentication
- `configured_snapshot` - Pre-configured snapshot fixture (see the sketch below)
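For example, `configured_snapshot` removes the need to set `snapshot_dir` by hand. A sketch under that assumption; the exact behavior is defined in `conftest.py`:

```python
import json


def test_endpoint_snapshot(configured_snapshot):
    response = client.get("/endpoint")
    # snapshot_dir is assumed to be preset by the fixture
    configured_snapshot.assert_match(
        json.dumps(response.json(), indent=2, sort_keys=True),
        "endpoint_response",
    )
```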
#### Custom Fixtures
Create reusable fixtures for common test data:
```python
@pytest.fixture
def sample_user():
    return {
        "email": "test@example.com",
        "name": "Test User",
    }


def test_create_user(sample_user, snapshot):
    response = client.post("/users", json=sample_user)
    # ... test implementation
```
## Test Isolation
All tests must use fixtures that ensure proper isolation:
- Authentication overrides are automatically cleaned up after each test
- Database connections are properly managed with cleanup
- Mock objects are reset between tests
## CI/CD Integration
The GitHub Actions workflow automatically runs tests on:
- Pull requests
- Pushes to main branch
Snapshot tests work in CI because:
- Snapshot files are committed to the repository
- CI compares test output against the committed snapshots
- The job fails if snapshots don't match
## Troubleshooting
### Snapshot Mismatches
- Review the diff carefully
- If the changes are expected: run `poetry run pytest --snapshot-update`
- If the changes are unexpected: fix the code causing the difference
### Async Test Issues
- Ensure async test functions use `@pytest.mark.asyncio`
- Use `AsyncMock` for mocking async functions
- The FastAPI TestClient handles async routes automatically
### Import Errors
- Check that all dependencies are in `pyproject.toml`
- Run `poetry install` to ensure dependencies are installed
- Verify import paths are correct
## Summary
Snapshot testing provides a powerful way to ensure API responses remain consistent. Combined with traditional assertions, it creates a robust test suite that catches regressions while remaining maintainable.
Remember: Good tests are as important as good code!