## Summary

Enable LaunchDarkly feature flags to use rich user context and metadata for advanced targeting, including user segments, account age, email domains, and custom attributes. This unlocks LaunchDarkly's powerful targeting capabilities beyond simple user ID checks.

## Problem

LaunchDarkly feature flags were only receiving basic user IDs, preventing the use of:

- **Segment-based targeting** (e.g., "employees", "beta users", "new accounts")
- **Contextual rules** (e.g., account age, email domain, custom metadata)
- **Advanced LaunchDarkly features** like percentage rollouts by user attributes

This limited feature flag flexibility and required manual user ID management for targeting.

## Solution

### 🎯 **LaunchDarkly Context Enhancement**

- **Rich user context**: Send user metadata, segments, account age, and email domain to LaunchDarkly
- **Automatic segmentation**: Users are automatically categorized as "employee", "new_user", "established_user", etc.
- **Custom metadata support**: Any user metadata becomes available for LaunchDarkly targeting
- **24-hour caching**: Efficient user context retrieval with a TTL cache to reduce database calls

### 📊 **User Context Data**

```python
# Before: Only user ID
context = Context.builder("user-123").build()

# After: Full context with targeting data
context = {
    "email": "user@agpt.co",
    "created_at": "2023-01-15T10:00:00Z",
    "segments": ["employee", "established_user"],
    "email_domain": "agpt.co",
    "account_age_days": 365,
    "custom_role": "admin",
}
```

### 🏗️ **Required Infrastructure Changes**

To support proper LaunchDarkly serialization, we needed to implement clean application models:

#### **Application-Layer User Model**

- Created snake_case User model (`created_at`, `email_verified`) for proper JSON serialization
- LaunchDarkly expects consistent field naming - camelCase Prisma objects caused validation errors
- Added `User.from_db()` converter to safely transform database objects

#### **HTTP Client Reliability**

- Fixed HTTP 4xx retry issue that was causing unnecessary load
- Added layer validation to prevent database objects leaking to external services

#### **Type Safety**

- Eliminated `Any` types and defensive coding patterns
- Proper typing enables better IDE support and catches errors early

## Technical Implementation

### **Core LaunchDarkly Enhancement**

```python
# autogpt_libs/feature_flag/client.py
@async_ttl_cache(maxsize=1000, ttl_seconds=86400)  # 24h cache
async def _fetch_user_context_data(user_id: str) -> dict[str, Any]:
    user = await get_user_by_id(user_id)
    return _build_launchdarkly_context(user)

def _build_launchdarkly_context(user: User) -> dict[str, Any]:
    return {
        "email": user.email,
        "created_at": user.created_at.isoformat(),  # snake_case for serialization
        "segments": determine_user_segments(user),
        "account_age_days": calculate_account_age(user),
        # ... more context data
    }
```

### **User Segmentation Logic**

- **Role-based**: `admin`, `user`, `system` segments
- **Domain-based**: `employee` for @agpt.co emails
- **Account age**: `new_user` (<7 days), `recent_user` (7-30 days), `established_user` (>30 days)
- **Custom metadata**: Any user metadata becomes available for targeting (a sketch of these rules follows below)
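A minimal sketch of how these segmentation rules could be implemented. The function name matches `determine_user_segments` from the snippet above, but the body, the `role` attribute, and the `EMPLOYEE_DOMAIN` constant are illustrative assumptions, not the exact implementation:

```python
from datetime import datetime, timezone

EMPLOYEE_DOMAIN = "agpt.co"  # assumption: the "employee" segment keys off this domain

def determine_user_segments(user) -> list[str]:
    """Bucket a user into LaunchDarkly targeting segments."""
    segments: list[str] = []

    # Role-based: mirror the user's role as a segment (e.g. "admin", "user", "system")
    role = getattr(user, "role", None)
    if role:
        segments.append(role)

    # Domain-based: employees are identified by their email domain
    if user.email.endswith(f"@{EMPLOYEE_DOMAIN}"):
        segments.append("employee")

    # Account age: new (<7 days), recent (7-30 days), established (>30 days)
    age_days = (datetime.now(timezone.utc) - user.created_at).days
    if age_days < 7:
        segments.append("new_user")
    elif age_days <= 30:
        segments.append("recent_user")
    else:
        segments.append("established_user")

    return segments
```

Similarly, the `@async_ttl_cache` decorator used in the implementation snippet above is a project-local utility in `autogpt_libs`, not a standard-library one. A simplified sketch of the idea (FIFO eviction rather than true LRU, and no locking):

```python
import time
from functools import wraps

def async_ttl_cache(maxsize: int = 1000, ttl_seconds: int = 86400):
    """Cache coroutine results by call arguments, expiring entries after ttl_seconds."""
    def decorator(fn):
        cache: dict = {}  # key -> (inserted_at, value)

        @wraps(fn)
        async def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            entry = cache.get(key)
            if entry is not None and now - entry[0] < ttl_seconds:
                return entry[1]  # fresh cache hit
            value = await fn(*args, **kwargs)
            if len(cache) >= maxsize:
                cache.pop(next(iter(cache)))  # evict the oldest insertion
            cache[key] = (now, value)
            return value

        return wrapper
    return decorator
```

A cache like this sits in front of `get_user_by_id`, so repeated flag evaluations for the same user hit the database at most once per TTL window.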
### **Infrastructure Updates**

- `backend/data/model.py`: Application User model with proper serialization
- `backend/util/service.py`: HTTP client improvements and layer validation
- Multiple files: Migration to use application models for consistency

## LaunchDarkly Usage Examples

With this enhancement, you can now create LaunchDarkly rules like:

```yaml
# Target employees only
- variation: true
  targets:
    - values: ["employee"]
      contextKind: "user"
      attribute: "segments"

# Target new users for gradual rollout
- variation: true
  rollout:
    variations:
      - variation: true
        weight: 25000  # 25% of new users
    contextKind: "user"
    bucketBy: "segments"
  filters:
    - attribute: "segments"
      op: "contains"
      values: ["new_user"]
```

## Performance & Caching

- **24-hour TTL cache**: Dramatically reduces database calls for user context
- **Graceful fallbacks**: Simple user ID context if the database is unavailable
- **Efficient caching**: 1000-entry LRU cache with automatic TTL expiration

## Testing

- [x] LaunchDarkly context includes all expected user attributes
- [x] Segmentation logic correctly categorizes users
- [x] 24-hour cache reduces database load
- [x] Fallback to simple context works when database unavailable
- [x] All existing feature flag functionality preserved
- [x] HTTP retry improvements work correctly

## Breaking Changes

✅ **No external API changes** - all existing feature flag usage continues to work

⚠️ **Internal changes only**:

- `get_user_by_id()` returns the application User model instead of the Prisma model
- Test utilities need to import User from `backend.data.model`

## Impact

🎯 **Product Impact**:

- **Advanced targeting**: Product teams can now use sophisticated LaunchDarkly rules
- **Better user experience**: Gradual rollouts, A/B testing, and segment-based features
- **Operational efficiency**: Reduced need for manual user ID management

🚀 **Performance Impact**:

- **Reduced database load**: 24-hour caching minimizes repeated user context queries
- **Improved reliability**: Fixed HTTP retry inefficiencies
- **Better monitoring**: Cleaner logs without 4xx retry noise

---

**Primary goal**: Enable rich LaunchDarkly targeting with user context and segments
**Infrastructure changes**: Required for proper serialization and reliability

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
# AutoGPT Platform
Welcome to the AutoGPT Platform - a powerful system for creating and running AI agents to solve business problems. This platform enables you to harness the power of artificial intelligence to automate tasks, analyze data, and generate insights for your organization.
## Getting Started

### Prerequisites
- Docker
- Docker Compose V2 (comes with Docker Desktop, or can be installed separately)
- Node.js & NPM (for running the frontend application)
### Running the System
To run the AutoGPT Platform, follow these steps:
1. Clone this repository to your local machine and navigate to the `autogpt_platform` directory within the repository:

   ```shell
   git clone https://github.com/Significant-Gravitas/AutoGPT.git
   # or: git clone git@github.com:Significant-Gravitas/AutoGPT.git
   cd AutoGPT/autogpt_platform
   ```

2. Run the following command:

   ```shell
   cp .env.example .env
   ```

   This command will copy the `.env.example` file to `.env`. You can modify the `.env` file to add your own environment variables.

3. Run the following command:

   ```shell
   docker compose up -d
   ```

   This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.

4. Navigate to `frontend` within the `autogpt_platform` directory:

   ```shell
   cd frontend
   ```

   You will need to run your frontend application separately on your local machine.

5. Run the following command:

   ```shell
   cp .env.example .env.local
   ```

   This command will copy the `.env.example` file to `.env.local` in the `frontend` directory. You can modify the `.env.local` within this folder to add your own environment variables for the frontend application.

6. Enable corepack and install dependencies by running:

   ```shell
   corepack enable
   pnpm i
   ```

   Generate the API client (this step is required before running the frontend):

   ```shell
   pnpm generate:api-client
   ```

   Then start the frontend application in development mode:

   ```shell
   pnpm dev
   ```

7. Open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
## Docker Compose Commands

Here are some useful Docker Compose commands for managing your AutoGPT Platform:

- `docker compose up -d`: Start the services in detached mode.
- `docker compose stop`: Stop the running services without removing them.
- `docker compose rm`: Remove stopped service containers.
- `docker compose build`: Build or rebuild services.
- `docker compose down`: Stop and remove containers, networks, and volumes.
- `docker compose watch`: Watch for changes in your services and automatically update them.
### Sample Scenarios
Here are some common scenarios where you might use multiple Docker Compose commands:
1. Updating and restarting a specific service:

   ```shell
   docker compose build api_srv
   docker compose up -d --no-deps api_srv
   ```

   This rebuilds the `api_srv` service and restarts it without affecting other services.

2. Viewing logs for troubleshooting:

   ```shell
   docker compose logs -f api_srv ws_srv
   ```

   This shows and follows the logs for both the `api_srv` and `ws_srv` services.

3. Scaling a service for increased load:

   ```shell
   docker compose up -d --scale executor=3
   ```

   This scales the `executor` service to 3 instances to handle increased load.

4. Stopping the entire system for maintenance:

   ```shell
   docker compose stop
   docker compose rm -f
   docker compose pull
   docker compose up -d
   ```

   This stops all services, removes containers, pulls the latest images, and restarts the system.

5. Developing with live updates:

   ```shell
   docker compose watch
   ```

   This watches for changes in your code and automatically updates the relevant services.

6. Checking the status of services:

   ```shell
   docker compose ps
   ```

   This shows the current status of all services defined in your `docker-compose.yml` file.
These scenarios demonstrate how to use Docker Compose commands in combination to manage your AutoGPT Platform effectively.
## Persisting Data

To persist data for PostgreSQL and Redis, you can modify the `docker-compose.yml` file to add volumes. Here's how:
1. Open the `docker-compose.yml` file in a text editor.

2. Add volume configurations for the PostgreSQL and Redis services:

   ```yaml
   services:
     postgres:
       # ... other configurations ...
       volumes:
         - postgres_data:/var/lib/postgresql/data

     redis:
       # ... other configurations ...
       volumes:
         - redis_data:/data

   volumes:
     postgres_data:
     redis_data:
   ```

3. Save the file and run `docker compose up -d` to apply the changes.
This configuration will create named volumes for PostgreSQL and Redis, ensuring that your data persists across container restarts.
## API Client Generation
The platform includes scripts for generating and managing the API client:
- `pnpm fetch:openapi`: Fetches the OpenAPI specification from the backend service (requires the backend to be running on port 8006)
- `pnpm generate:api-client`: Generates the TypeScript API client from the OpenAPI specification using Orval
- `pnpm generate:api-all`: Runs both fetch and generate commands in sequence
### Manual API Client Updates
If you need to update the API client after making changes to the backend API:
1. Ensure the backend services are running:

   ```shell
   docker compose up -d
   ```

2. Generate the updated API client:

   ```shell
   pnpm generate:api-all
   ```

This will fetch the latest OpenAPI specification and regenerate the TypeScript client code.