AutoGPT/autogpt_platform/backend/run_tests.py
Swifty a3d082a5fa feat(backend): snapshot test responses (#10039)
This pull request introduces a comprehensive backend testing guide and
adds new tests for analytics logging and various API endpoints, focusing
on snapshot testing. It also includes corresponding snapshot files for
these tests. Below are the most significant changes:

### Documentation Updates:
* Added a detailed `TESTING.md` file to the backend, providing a guide
for running tests, snapshot testing, writing API route tests, and best
practices. It includes examples for mocking, fixtures, and CI/CD
integration.
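
For illustration, a route test in the style the guide describes might look like the sketch below. It assumes a FastAPI app exercised through `fastapi.testclient.TestClient` and the `snapshot` fixture provided by the `pytest-snapshot` plugin; the module path, the `get_user_id` dependency, the endpoint path, and the snapshot file name are placeholders, not the project's actual identifiers.

```python
# Hypothetical sketch of an API route snapshot test. Only the pytest-snapshot
# `snapshot` fixture and its assert_match call come from the plugin itself;
# everything imported from the app is an assumed name.
import json

import pytest
from fastapi.testclient import TestClient


@pytest.fixture
def client():
    # Assumed module path and auth dependency, for illustration only.
    from backend.server.rest_api import app, get_user_id

    # Mock authentication so the route handler always sees a fixed test user.
    app.dependency_overrides[get_user_id] = lambda: "test-user-id"
    yield TestClient(app)
    app.dependency_overrides.clear()


def test_get_all_blocks_snapshot(client, snapshot):
    response = client.get("/api/blocks")
    assert response.status_code == 200

    # Serialize deterministically so the snapshot comparison is stable.
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response.json(), indent=2, sort_keys=True),
        "blocks_get_all_response.json",
    )
```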

### Analytics Logging Tests:
* Implemented tests for logging raw metrics and analytics in
`analytics_test.py`, covering success scenarios, various input values,
invalid requests, and complex nested data. These tests use snapshot
testing to validate responses (a sketch of the pattern follows the list below).
* Added snapshot files for analytics logging tests, including responses
for success cases, various metric values, and complex analytics data.
[[1]](diffhunk://#diff-654bc5aa1951008ec5c110a702279ef58709ee455ba049b9fa825fa60f7e3869R1-R3)
[[2]](diffhunk://#diff-e0a434b107abc71aeffb7d7989dbfd8f466b5e53f8dea25a87937ec1b885b122R1-R3)
[[3]](diffhunk://#diff-dd0bc0b72264de1a0c0d3bd0c54ad656061317f425e4de461018ca51a19171a0R1-R3)
[[4]](diffhunk://#diff-63af007073db553d04988544af46930458a768544cabd08412265e0818320d11R1-R30)
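
A parametrized analytics test covering several metric values might look roughly like this, reusing a `client` fixture such as the one sketched earlier. The endpoint path, payload fields, and snapshot names are illustrative assumptions rather than the actual API contract.

```python
# Hypothetical sketch of a parametrized analytics-logging snapshot test.
import json

import pytest


@pytest.mark.parametrize("value", [0, 1, -5, 1.5])
def test_log_raw_metric_snapshot(client, snapshot, value):
    # Assumed endpoint and payload shape, for illustration only.
    payload = {
        "metric_name": "example_metric",
        "metric_value": value,
        "data_string": "test",
    }
    response = client.post("/api/analytics/log_raw_metric", json=payload)
    assert response.status_code == 200

    # One stored snapshot per parametrized value.
    snapshot.snapshot_dir = "snapshots"
    snapshot.assert_match(
        json.dumps(response.json(), indent=2, sort_keys=True),
        f"analytics_log_raw_metric_{value}_response.json",
    )
```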

### Snapshot Files for API Endpoints:
* Added snapshot files for various API endpoint tests, such as:
- Graph-related operations (`graphs_get_single_response`,
`graphs_get_all_response`, `blocks_get_all_response`).
[[1]](diffhunk://#diff-b25dba271606530cfa428c00073d7e016184a7bb22166148ab1726b3e113dda8R1-R29)
[[2]](diffhunk://#diff-1054e58ec3094715660f55bfba1676d65b6833a81a91a08e90ad57922444d056R1-R31)
[[3]](diffhunk://#diff-cfd403ab6f3efc89188acaf993d85e6f792108d1740c7e7149eb05efb73d918dR1-R14)
- User-related operations (`auth_get_or_create_user_response`,
`auth_update_email_response`).
[[1]](diffhunk://#diff-49e65ab1eb6af4d0163a6c54ed10be621ce7336b2ab5d47d47679bfaefdb7059R1-R5)
[[2]](diffhunk://#diff-ac1216f96878bd4356454c317473654d5d5c7c180125663b80b0b45aa5ab52cbR1-R3)
- Credit-related operations (`credits_get_balance_response`,
`credits_get_auto_top_up_response`, `credits_top_up_request_response`).
[[1]](diffhunk://#diff-189488f8da5be74d80ac3fb7f84f1039a408573184293e9ba2e321d535c57cddR1-R3)
[[2]](diffhunk://#diff-ba3c4a6853793cbed24030cdccedf966d71913451ef8eb4b2c4f426ef18ed87aR1-R4)
[[3]](diffhunk://#diff-43d7daa0c82070a9b6aee88a774add8e87533e630bbccbac5a838b7a7ae56a75R1-R3)
- Block execution and graph deletion (`blocks_execute_response`,
`graphs_delete_response`).
[[1]](diffhunk://#diff-a2ade7d646ad85a2801e7ff39799a925a612548a1cdd0ed99b44dd870d1465b5R1-R12)
[[2]](diffhunk://#diff-c0d1cd0a8499ee175ce3007c3a87ba5f3235ce02d38ce837560b36a44fdc4a22R1-R3)

## Summary
- add pytest-snapshot to backend dev requirements
- snapshot server route response JSONs
- mention how to update stored snapshots
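
For reference, `pytest-snapshot` compares responses against the stored files by default and rewrites them when pytest is invoked with its `--snapshot-update` flag. Assuming `poetry run test` maps to the `run_tests.py` script shown below (which forwards its arguments to pytest), refreshing the stored snapshots would look roughly like `poetry run test --snapshot-update`.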

## Testing
- `poetry run format`
- `poetry run test` 

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run `poetry run test`

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-06-06 20:36:00 +00:00


import os
import subprocess
import sys
import time


def wait_for_postgres(max_retries=5, delay=5):
    """Poll the postgres-test container until it reports it accepts connections."""
    for _ in range(max_retries):
        try:
            result = subprocess.run(
                [
                    "docker",
                    "compose",
                    "-f",
                    "docker-compose.test.yaml",
                    "exec",
                    "postgres-test",
                    "pg_isready",
                    "-U",
                    "postgres",
                    "-d",
                    "postgres",
                ],
                check=True,
                capture_output=True,
                text=True,
            )
            if "accepting connections" in result.stdout:
                print("PostgreSQL is ready.")
                return True
        except subprocess.CalledProcessError:
            print(f"PostgreSQL is not ready yet. Retrying in {delay} seconds...")
            time.sleep(delay)
    print("Failed to connect to PostgreSQL.")
    return False


def run_command(command, check=True):
    """Run a command, exiting the script if it fails and check is True."""
    try:
        subprocess.run(command, check=check)
    except subprocess.CalledProcessError as e:
        print(f"Command failed: {e}")
        sys.exit(1)


def test():
    # Start PostgreSQL with Docker Compose
    run_command(
        [
            "docker",
            "compose",
            "-f",
            "docker-compose.test.yaml",
            "up",
            "-d",
        ]
    )

    if not wait_for_postgres():
        run_command(["docker", "compose", "-f", "docker-compose.test.yaml", "down"])
        sys.exit(1)

    # IMPORTANT: Set test database environment variables to prevent accidentally
    # resetting the developer's local database.
    #
    # This script spins up a separate test database container (postgres-test) using
    # docker-compose.test.yaml. We explicitly set DATABASE_URL and DIRECT_URL to point
    # to this test database to ensure that:
    # 1. The prisma migrate reset command only affects the test database
    # 2. Tests run against the test database, not the developer's local database
    # 3. Any database operations during testing are isolated from development data
    #
    # Without this, if a developer has DATABASE_URL set in their environment pointing
    # to their development database, running tests would wipe their local data!
    test_env = os.environ.copy()

    # Use environment variables if set, otherwise use defaults that match docker-compose.test.yaml
    db_user = os.getenv("DB_USER", "postgres")
    db_pass = os.getenv("DB_PASS", "postgres")
    db_name = os.getenv("DB_NAME", "postgres")
    db_port = os.getenv("DB_PORT", "5432")

    # Construct the test database URL - this ensures we're always pointing to the test container
    test_env["DATABASE_URL"] = (
        f"postgresql://{db_user}:{db_pass}@localhost:{db_port}/{db_name}"
    )
    test_env["DIRECT_URL"] = test_env["DATABASE_URL"]
    test_env["DB_PORT"] = db_port
    test_env["DB_NAME"] = db_name
    test_env["DB_PASS"] = db_pass
    test_env["DB_USER"] = db_user

    # Run Prisma migrations with test database
    # First, reset the database to ensure clean state for tests
    # This is safe because we've explicitly set DATABASE_URL to the test database above
    subprocess.run(
        ["prisma", "migrate", "reset", "--force", "--skip-seed"],
        env=test_env,
        check=False,
    )

    # Then apply migrations to get the test database schema up to date
    subprocess.run(["prisma", "migrate", "deploy"], env=test_env, check=True)

    # Run the tests with test database environment
    # This ensures all database connections in the tests use the test database,
    # not any database that might be configured in the developer's environment
    result = subprocess.run(["pytest"] + sys.argv[1:], env=test_env, check=False)

    run_command(["docker", "compose", "-f", "docker-compose.test.yaml", "down"])
    sys.exit(result.returncode)