Mirror of https://github.com/Significant-Gravitas/AutoGPT.git, synced 2026-04-08 03:00:28 -04:00
### Changes 🏗️

Adds `autogpt_platform/analytics/` — 14 SQL view definitions that expose production data safely through a locked-down `analytics` schema.

**Security model:**

- Views use `security_invoker = false` (PostgreSQL 15+), so they execute as their owner (`postgres`), not the caller
- `analytics_readonly` role only has access to `analytics.*` — cannot touch `platform` or `auth` tables directly

**Files:**

- `backend/generate_views.py` — does everything; auto-reads credentials from `backend/.env`
- `analytics/queries/*.sql` — 14 documented view definitions (auth, user activity, executions, onboarding funnel, cohort retention)

---

### Running locally (dev)

```bash
cd autogpt_platform/backend

# First time only — creates analytics schema, role, grants
poetry run analytics-setup

# Create / refresh views (auto-reads backend/.env)
poetry run analytics-views
```

### Running in production (Supabase)

```bash
cd autogpt_platform/backend

# Step 1 — first time only (run in Supabase SQL Editor as postgres superuser)
poetry run analytics-setup --dry-run
# Paste the output into Supabase SQL Editor and run

# Step 2 — apply views (use direct connection host, not pooler)
poetry run analytics-views --db-url "postgresql://postgres:PASSWORD@db.<ref>.supabase.co:5432/postgres"

# Step 3 — set password for analytics_readonly so external tools can connect
# Run in Supabase SQL Editor:
#   ALTER ROLE analytics_readonly WITH PASSWORD 'your-password';
```

---

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Setup + views applied cleanly on local Postgres 15
  - [x] `analytics_readonly` can `SELECT` from all 14 `analytics.*` views
  - [x] `analytics_readonly` gets `permission denied` on `platform.*` and `auth.*` directly

---------

Co-authored-by: Otto (AGPT) <otto@agpt.co>
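The security model described above amounts to standard Postgres definer-rights views plus schema-scoped grants. As a rough illustration, the setup step presumably emits DDL along these lines — object names are taken from this PR, but the exact statements are an assumption (see `generate_views.py` for the real output):

```sql
-- Sketch only: assumed shape of the setup DDL, not the generator's exact output.
CREATE SCHEMA IF NOT EXISTS analytics;
CREATE ROLE analytics_readonly NOLOGIN;  -- password set later via ALTER ROLE

-- Views owned by postgres run with the owner's privileges because
-- security_invoker defaults to false (the option itself is PG 15+).
CREATE VIEW analytics.example_view
  WITH (security_invoker = false) AS
  SELECT 1 AS ok;  -- placeholder body; real views live in analytics/queries/*.sql

-- analytics_readonly can only reach the analytics schema:
GRANT USAGE ON SCHEMA analytics TO analytics_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics TO analytics_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA analytics
  GRANT SELECT ON TABLES TO analytics_readonly;
-- No grants on platform.* or auth.* — direct access stays denied.
```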
72 lines · 3.8 KiB · SQL
-- =============================================================
-- View: analytics.user_block_spending
-- Looker source alias: ds6 | Charts: 5
-- =============================================================
-- DESCRIPTION
--   One row per credit transaction (last 90 days).
--   Shows how users spend credits broken down by block type,
--   LLM provider and model. Joins node execution stats for
--   token-level detail.
--
-- SOURCE TABLES
--   platform.CreditTransaction   — Credit debit/credit records
--   platform.AgentNodeExecution  — Node execution stats (for token counts)
--
-- OUTPUT COLUMNS
--   transactionKey          TEXT         Unique transaction identifier
--   userId                  TEXT         User who was charged
--   amount                  DECIMAL      Credit amount (positive = credit, negative = debit)
--   negativeAmount          DECIMAL      amount * -1 (convenience for spend charts)
--   transactionType         TEXT         Transaction type (e.g. 'USAGE', 'REFUND', 'TOP_UP')
--   transactionTime         TIMESTAMPTZ  When the transaction was recorded
--   blockId                 TEXT         Block UUID that triggered the spend
--   blockName               TEXT         Human-readable block name
--   llm_provider            TEXT         LLM provider (e.g. 'openai', 'anthropic')
--   llm_model               TEXT         Model name (e.g. 'gpt-4o', 'claude-3-5-sonnet')
--   node_exec_id            TEXT         Linked node execution UUID
--   llm_call_count          INT          LLM API calls made in that execution
--   llm_retry_count         INT          LLM retries in that execution
--   llm_input_token_count   INT          Input tokens consumed
--   llm_output_token_count  INT          Output tokens produced
--
-- WINDOW
--   Rolling 90 days (createdAt > CURRENT_DATE - 90 days)
--
-- EXAMPLE QUERIES
--   -- Total spend per user (last 90 days)
--   SELECT "userId", SUM("negativeAmount") AS total_spent
--   FROM analytics.user_block_spending
--   WHERE "transactionType" = 'USAGE'
--   GROUP BY 1 ORDER BY total_spent DESC;
--
--   -- Spend by LLM provider + model
--   SELECT "llm_provider", "llm_model",
--          SUM("negativeAmount") AS total_cost,
--          SUM("llm_input_token_count") AS input_tokens,
--          SUM("llm_output_token_count") AS output_tokens
--   FROM analytics.user_block_spending
--   WHERE "llm_provider" IS NOT NULL
--   GROUP BY 1, 2 ORDER BY total_cost DESC;
-- =============================================================

SELECT
    -- CamelCase output columns must be quoted in the alias, otherwise
    -- Postgres folds them to lowercase and the documented example
    -- queries ("userId", "negativeAmount", ...) would fail.
    c."transactionKey"                              AS "transactionKey",
    c."userId"                                      AS "userId",
    c."amount"                                      AS "amount",
    c."amount" * -1                                 AS "negativeAmount",
    c."type"                                        AS "transactionType",
    c."createdAt"                                   AS "transactionTime",
    c.metadata->>'block_id'                         AS "blockId",
    c.metadata->>'block'                            AS "blockName",
    c.metadata->'input'->'credentials'->>'provider' AS llm_provider,
    c.metadata->'input'->>'model'                   AS llm_model,
    c.metadata->>'node_exec_id'                     AS node_exec_id,
    (ne."stats"->>'llm_call_count')::int            AS llm_call_count,
    (ne."stats"->>'llm_retry_count')::int           AS llm_retry_count,
    (ne."stats"->>'input_token_count')::int         AS llm_input_token_count,
    (ne."stats"->>'output_token_count')::int        AS llm_output_token_count
FROM platform."CreditTransaction" c
LEFT JOIN platform."AgentNodeExecution" ne
    ON (c.metadata->>'node_exec_id') = ne."id"::text
WHERE c."createdAt" > CURRENT_DATE - INTERVAL '90 days';