### Problem

When running multiple backend pods in production, requests can be routed to different pods, causing inconsistent cache states. Additionally, the current cache implementation in `autogpt_libs` doesn't support shared caching across processes, leading to data inconsistency and redundant cache misses.

### Changes 🏗️

- **Moved cache implementation from autogpt_libs to backend** (`/backend/backend/util/cache.py`)
  - Removed `/autogpt_libs/autogpt_libs/utils/cache.py`
  - Centralized cache utilities within the backend module
  - Updated all import statements across the codebase
- **Implemented Redis-based shared caching** (see the sketches after the checklist)
  - Added a `shared_cache` parameter to the `@cached` decorator for cross-process caching
  - Implemented Redis connection pooling for efficient cache operations
  - Added support for cache-key pattern matching and bulk deletion
  - Added TTL refresh on cache access with the `refresh_ttl_on_get` option
- **Enhanced cache functionality**
  - Added thundering herd protection with double-checked locking
  - Implemented thread-local caching with the `@thread_cached` decorator (sketched below)
  - Added cache management methods: `cache_clear()`, `cache_info()`, `cache_delete()`
  - Added support for both sync and async functions
- **Updated store caching** (`/backend/server/v2/store/cache.py`)
  - Enabled shared caching for all store-related cache functions
  - Set appropriate TTL values (5-15 minutes) for different cache types
  - Added a `clear_all_caches()` function for cache invalidation (sketched below)
- **Added Redis configuration**
  - Added Redis connection settings to the backend settings
  - Configured a dedicated connection pool for cache operations
  - Set up binary mode for pickle serialization

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify Redis connection and cache operations work correctly
  - [x] Test shared cache across multiple backend instances
  - [x] Verify cache invalidation with `clear_all_caches()`
  - [x] Run cache tests: `poetry run pytest backend/backend/util/cache_test.py`
  - [x] Test thundering herd protection under concurrent load
  - [x] Verify TTL refresh functionality with `refresh_ttl_on_get=True`
  - [x] Test thread-local caching for request-scoped data
  - [x] Ensure no performance regression vs. the in-memory cache

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes (Redis already configured)
- [x] I have included a list of my configuration changes in the PR description (under **Changes**)
  - Redis cache configuration uses the existing Redis service settings (`REDIS_HOST`, `REDIS_PORT`, `REDIS_PASSWORD`)
  - No new environment variables required
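Below is a minimal sketch of how a `shared_cache`-aware `@cached` decorator can work, combining double-checked locking for thundering herd protection with a sliding TTL via `refresh_ttl_on_get`. This is not the backend's actual implementation: the `ttl_seconds` parameter name, the cache-key format, the connection settings, and the module-level pool are assumptions for illustration, and only the sync path is shown (the real decorator also supports async functions).

```python
import functools
import pickle
import threading

import redis

# Dedicated pool in binary mode (decode_responses=False) so pickled
# values round-trip unchanged. Host/port are placeholders; the real
# settings come from REDIS_HOST / REDIS_PORT / REDIS_PASSWORD.
_pool = redis.ConnectionPool(host="localhost", port=6379, decode_responses=False)
_redis = redis.Redis(connection_pool=_pool)


def cached(ttl_seconds: int = 300, shared_cache: bool = False,
           refresh_ttl_on_get: bool = False):
    """Cache results in-process by default, or in Redis when shared_cache=True."""
    def decorator(func):
        local: dict = {}
        lock = threading.Lock()  # per-process; a cross-process guard would need a Redis lock

        def make_key(args, kwargs) -> str:
            # Illustrative key format; assumes repr() of the arguments is stable.
            return (f"cache:{func.__module__}.{func.__qualname__}:"
                    f"{args!r}:{sorted(kwargs.items())!r}")

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = make_key(args, kwargs)

            # Fast path: check the cache without holding the lock.
            if shared_cache:
                raw = _redis.get(key)
                if raw is not None:
                    if refresh_ttl_on_get:
                        _redis.expire(key, ttl_seconds)  # sliding TTL on access
                    return pickle.loads(raw)
            elif key in local:
                return local[key]

            # Thundering herd protection: one thread computes the value,
            # the rest re-check the cache after acquiring the lock.
            with lock:
                if shared_cache:
                    raw = _redis.get(key)  # double-check under the lock
                    if raw is not None:
                        return pickle.loads(raw)
                    value = func(*args, **kwargs)
                    _redis.set(key, pickle.dumps(value), ex=ttl_seconds)
                else:
                    if key in local:
                        return local[key]
                    value = local[key] = func(*args, **kwargs)
                return value

        wrapper.cache_clear = local.clear  # clears only in-process entries in this sketch
        return wrapper
    return decorator


# Usage: share results across pods, refreshing the TTL on every hit.
@cached(ttl_seconds=300, shared_cache=True, refresh_ttl_on_get=True)
def get_store_listing(listing_id: str) -> dict:
    ...
```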
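The `@thread_cached` decorator can be sketched the same way. Each thread gets its own cache dict, which suits request-scoped data that must not leak across worker threads. The key construction here is an assumption and requires hashable arguments.

```python
import functools
import threading


def thread_cached(func):
    """Memoize per thread: each thread sees its own cache dict."""
    local = threading.local()

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        cache = getattr(local, "cache", None)
        if cache is None:
            cache = local.cache = {}
        key = (args, tuple(sorted(kwargs.items())))  # assumes hashable arguments
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return wrapper
```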
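Finally, pattern matching and bulk deletion, as used by `clear_all_caches()`, can be done with a non-blocking SCAN over the shared key prefix. The `cache:*` pattern matches the illustrative key format from the first sketch, not necessarily the real one, and `_redis` reuses the pool defined there.

```python
def clear_all_caches() -> int:
    """Invalidate every shared cache entry; returns the number of keys deleted."""
    deleted = 0
    # scan_iter avoids blocking Redis the way KEYS would on a large keyspace.
    for key in _redis.scan_iter(match="cache:*", count=500):
        deleted += _redis.delete(key)
    return deleted
```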
### AutoGPT Libs

This is a new project to store shared functionality across different services in the AutoGPT Platform (e.g. authentication).