## Summary
This PR adds proper execution end time tracking and fixes timestamp
handling throughout the execution analytics system.
### Key Changes
1. **Added `endedAt` field to database schema** - Executions now have a
dedicated field for tracking when they finish
2. **Fixed timestamp nullable handling** - `started_at` and `ended_at`
are now properly nullable in types
3. **Fixed chart aggregation** - Reduced threshold from ≥3 to ≥1
executions per day
4. **Improved timestamp display** - Moved timestamps to expandable
details section in analytics table
5. **Fixed nullable timestamp bugs** - Updated all frontend code to
handle null timestamps correctly
## Problem Statement
### Issue 1: Missing Execution End Times
Previously, executions used `updatedAt` (the last DB update) as a proxy for
"end time". This broke when correctness scores were added retroactively:
the apparent end time shifted to whenever the score was written, not to
when the execution actually finished.
### Issue 2: Chart Shows Only One Data Point
The accuracy trends chart showed only one data point despite having
executions across multiple days. Root cause: aggregation required ≥3
executions per day.
### Issue 3: Incorrect Type Definitions
Manually maintained types defined `started_at` and `ended_at` as
non-nullable `Date`, contradicting reality where QUEUED executions
haven't started yet.
## Solution
### Database Schema (`schema.prisma`)
```prisma
model AgentGraphExecution {
  // ...
  startedAt DateTime?
  endedAt   DateTime? // NEW FIELD
  // ...
}
```
### Execution Lifecycle
- **QUEUED**: `startedAt = null`, `endedAt = null` (not started)
- **RUNNING**: `startedAt = set`, `endedAt = null` (in progress)
- **COMPLETED/FAILED/TERMINATED**: `startedAt = set`, `endedAt = set`
(finished)
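
Because each phase is fully determined by which of the two timestamps are set, consumers can derive it without consulting the status enum. A minimal sketch (illustrative TypeScript, using the schema's camelCase field names):

```ts
// Illustrative only: derive the lifecycle phase from the two nullable timestamps.
type ExecutionTimestamps = { startedAt: Date | null; endedAt: Date | null };

function lifecyclePhase({ startedAt, endedAt }: ExecutionTimestamps): string {
  if (startedAt === null) return "queued"; // not started
  if (endedAt === null) return "running"; // in progress
  return "finished"; // COMPLETED / FAILED / TERMINATED
}
```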
### Migration Strategy
```sql
-- Add endedAt column
ALTER TABLE "AgentGraphExecution" ADD COLUMN "endedAt" TIMESTAMP(3);
-- Backfill ONLY terminal executions (prevents marking RUNNING executions as ended)
UPDATE "AgentGraphExecution"
SET "endedAt" = "updatedAt"
WHERE "endedAt" IS NULL
AND "executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED');
```
## Changes by Component
### Backend
**`schema.prisma`**
- Added `endedAt` field to `AgentGraphExecution`
**`execution.py`**
- Made `started_at` and `ended_at` optional with Field descriptions
- Updated `from_db()` to use `endedAt` instead of `updatedAt`
- `update_graph_execution_stats()` sets `endedAt` when status becomes
terminal
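
The key property is that `endedAt` is written once, at the terminal transition, and never moved by later updates such as retroactive scoring. A rough sketch of that invariant (illustrative TypeScript; the actual change lives in the Python backend):

```ts
// Illustrative invariant, not the backend implementation.
const TERMINAL_STATUSES = new Set(["COMPLETED", "FAILED", "TERMINATED"]);

function recordStatus(exec: { endedAt: Date | null }, status: string): void {
  if (TERMINAL_STATUSES.has(status) && exec.endedAt === null) {
    // Frozen here; later writes (e.g. adding a correctness score) leave it alone.
    exec.endedAt = new Date();
  }
}
```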
**`execution_analytics_routes.py`**
- Removed `created_at`/`updated_at` from `ExecutionAnalyticsResult` (DB
metadata, not execution data)
- Kept only `started_at`/`ended_at` (actual execution runtime)
- Made the `Settings` instance module-global, so it is created once rather
than on every call
- Moved OpenAI key validation to `_process_batch` (only check when LLM
actually runs)
**`analytics.py`**
- Fixed aggregation: the daily threshold is now `COUNT(*) >= 1` (was `>= 3`),
so every day with at least one execution is included (rule sketched below)
- Uses `createdAt` for chart grouping (when execution was queued)
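
The grouping rule itself is simple; this sketch (illustrative TypeScript, not the backend query) shows the behavior change, assuming `createdAt` marks queueing time as noted above:

```ts
// Illustrative only: bucket executions by calendar day of createdAt,
// then keep every day meeting the threshold.
function chartDays(execs: { createdAt: Date }[], minPerDay = 1): string[] {
  const counts = new Map<string, number>();
  for (const e of execs) {
    const day = e.createdAt.toISOString().slice(0, 10); // "YYYY-MM-DD"
    counts.set(day, (counts.get(day) ?? 0) + 1);
  }
  // Before this PR the effective minPerDay was 3, which hid sparse days.
  return [...counts.entries()]
    .filter(([, count]) => count >= minPerDay)
    .map(([day]) => day);
}
```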
**`late_execution_monitor.py`**
- Handle optional `started_at` with fallback to `datetime.min` for
sorting
- Display "Not started" when `started_at` is null
### Frontend
**Type Definitions**
- Fixed manually maintained `types.ts`: `started_at: Date | null` (was
non-nullable)
- Generated types were already correct
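
Abbreviated to just the fields this PR touches (the surrounding type name here is assumed, mirroring the backend's `GraphExecutionMeta`):

```ts
// types.ts (abbreviated sketch): both timestamps are honestly nullable now.
type GraphExecutionMeta = {
  started_at: Date | null; // null while the execution is still QUEUED
  ended_at: Date | null; // null until a terminal status is reached
};
```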
**Analytics Components**
- `AnalyticsResultsTable.tsx`: Show only `started_at`/`ended_at` in
2-column expandable grid
- `ExecutionAnalyticsForm.tsx`: Added filter explanation UI
**Monitoring Components** - Fixed null-handling bugs (the shared pattern is
sketched after this list):
- `OldAgentLibraryView.tsx`: Handle null in reduce function
- `agent-runs-selector-list.tsx`: Safe sorting with `?.getTime() ?? 0`
- `AgentFlowList.tsx`: Filter/sort with null checks
- `FlowRunsStatus.tsx`: Filter null timestamps
- `FlowRunsTimeline.tsx`: Filter executions with null timestamps before
rendering
- `monitoring/page.tsx`: Safe sorting
- `ActivityItem.tsx`: Fallback to "recently" for null timestamps
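
All of these fixes reduce to a few idioms for a `Date | null` timestamp: fall back to `0` when sorting, filter out nulls before time-based rendering, and substitute a friendly label for display. An illustrative distillation:

```ts
type Run = { started_at: Date | null };

// Sorting: a null timestamp falls back to epoch 0 and sinks to the end.
function sortNewestFirst(runs: Run[]): Run[] {
  return [...runs].sort(
    (a, b) => (b.started_at?.getTime() ?? 0) - (a.started_at?.getTime() ?? 0),
  );
}

// Timelines/charts: drop runs that never started before rendering.
function renderable(runs: Run[]): Run[] {
  return runs.filter((r) => r.started_at !== null);
}

// Display: fall back to a friendly label, as ActivityItem does.
function startedLabel(run: Run): string {
  return run.started_at ? run.started_at.toLocaleString() : "recently";
}
```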
## Benefits
✅ **Accurate End Times**: `endedAt` is frozen when execution finishes,
not updated later
✅ **Type Safety**: Nullable types match reality, exposing real bugs
✅ **Better UX**: Chart shows all days with data (not just days with ≥3
executions)
✅ **Bug Fixes**: 7+ frontend components now handle null timestamps
correctly
✅ **Documentation**: Field descriptions explain when timestamps are null
## Testing
### Backend
```bash
cd autogpt_platform/backend
poetry run format # ✅ All checks passed
poetry run lint # ✅ All checks passed
```
### Frontend
```bash
cd autogpt_platform/frontend
pnpm format # ✅ All checks passed
pnpm lint # ✅ All checks passed
pnpm types # ✅ All type errors fixed
```
### Test Data Generation
Created script to generate 35 test executions across 7 days with
correctness scores:
```bash
poetry run python scripts/generate_test_analytics_data.py
```
## Migration Notes
⚠️ **Important**: The migration only backfills `endedAt` for executions
with terminal status (COMPLETED, FAILED, TERMINATED). Active executions
(QUEUED, RUNNING) correctly keep `endedAt = null`.
## Breaking Changes
None - this is backward compatible:
- `endedAt` is nullable, existing code that doesn't use it is unaffected
- Frontend already used generated types which were correct
- Migration safely backfills historical data
---
> [!NOTE]
> Introduces explicit execution end-time tracking and normalizes timestamp handling across backend and frontend.
>
> - Adds `endedAt` to `AgentGraphExecution` (schema + migration); backfills terminal executions; sets `endedAt` on terminal status updates
> - Makes `GraphExecutionMeta.started_at`/`ended_at` optional; updates `from_db()` to use DB `endedAt`; exposes timestamps in `ExecutionAnalyticsResult`
> - Moves OpenAI key validation into batch processing; instantiates `Settings` once
> - Accuracy trends: reduces the daily aggregation threshold to `>= 1`; optional historical series
> - Monitoring/analytics UI: results table shows/exports `started_at`/`ended_at`; adds chart filter explainer
> - Frontend null safety: updates types (`Date | null`) and fixes sorting/filtering/rendering for nullable timestamps across monitoring and library views
> - Late execution monitor: safe sorting/display when `started_at` is null
> - OpenAPI specs updated for new/nullable fields
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
---

This is the frontend for AutoGPT's next generation.
## 🧢 Getting Started
This project uses pnpm as the package manager via corepack. Corepack is a Node.js tool that automatically manages package managers without requiring global installations.
For architecture, conventions, data fetching, feature flags, design system usage, state management, and PR process, see CONTRIBUTING.md. For Playwright and Storybook testing setup, see TESTING.md.
### Prerequisites
Make sure you have Node.js 16.10+ installed. Corepack is included with Node.js by default.
### Setup

1. **Enable corepack** (run this once on your system):

   ```bash
   corepack enable
   ```

   This enables corepack to automatically manage pnpm based on the `packageManager` field in `package.json`.

2. **Install dependencies:**

   ```bash
   pnpm i
   ```
3. **Start the development server:**

#### Running the Front-end & Back-end separately

We recommend this approach if you are doing active development on the project. First spin up the Back-end:

```bash
# on `autogpt_platform`
docker compose --profile local up deps_backend -d

# on `autogpt_platform/backend`
poetry run app
```

Then start the Front-end:

```bash
# on `autogpt_platform/frontend`
pnpm dev
```
Open http://localhost:3000 in your browser to see the result. If the server starts on http://localhost:3001 instead, the Front-end is already running via Docker; kill that container or run `docker compose down`.
You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
#### Running both the Front-end and Back-end via Docker

If you run:

```bash
# on `autogpt_platform`
docker compose up -d
```

It will spin up the Back-end and Front-end via Docker. The Front-end will start on port 3000. This might not be what you want when actively contributing to the Front-end, as you won't have direct/easy access to the Next.js dev server.
#### Subsequent Runs

For subsequent development sessions, you only need to run:

```bash
pnpm dev
```

Every time a new Front-end dependency is added by you or others, you will need to run `pnpm i` to install the new dependencies.
### Available Scripts

- `pnpm dev` - Start development server
- `pnpm build` - Build for production
- `pnpm start` - Start production server
- `pnpm lint` - Run ESLint and Prettier checks
- `pnpm format` - Format code with Prettier
- `pnpm types` - Run TypeScript type checking
- `pnpm test` - Run Playwright tests
- `pnpm test-ui` - Run Playwright tests with UI
- `pnpm fetch:openapi` - Fetch OpenAPI spec from backend
- `pnpm generate:api-client` - Generate API client from OpenAPI spec
- `pnpm generate:api` - Fetch OpenAPI spec and generate API client
This project uses `next/font` to automatically optimize and load Inter, a custom Google Font.
## 🔄 Data Fetching
See CONTRIBUTING.md for guidance on generated API hooks, SSR + hydration patterns, and usage examples. You generally do not need to run OpenAPI commands unless adding/modifying backend endpoints.
## 🚩 Feature Flags
See CONTRIBUTING.md for feature flag usage patterns, local development with mocks, and how to add new flags.
## 🚚 Deploy
TODO
## 📙 Storybook
Storybook is a powerful development environment for UI components. It allows you to build UI components in isolation, making it easier to develop, test, and document your components independently from your main application.
### Purpose in the Development Process
- Component Development: Develop and test UI components in isolation.
- Visual Testing: Easily spot visual regressions.
- Documentation: Automatically document components and their props.
- Collaboration: Share components with your team or stakeholders for feedback.
### How to Use Storybook

1. **Start Storybook**: Run the following command to start the Storybook development server:

   ```bash
   pnpm storybook
   ```

   This will start Storybook on port 6006. Open http://localhost:6006 in your browser to view your component library.

2. **Build Storybook**: To build a static version of Storybook for deployment, use:

   ```bash
   pnpm build-storybook
   ```

3. **Running Storybook Tests**: Storybook tests can be run using:

   ```bash
   pnpm test-storybook
   ```

4. **Writing Stories**: Create `.stories.tsx` files alongside your components to define different states and variations of your components (a minimal example follows below).
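
As a sketch of the shape such a file takes (the component and file names here are hypothetical, not from this repo):

```tsx
// Button.stories.tsx (hypothetical example)
import type { Meta, StoryObj } from "@storybook/react";
import { Button } from "./Button";

const meta: Meta<typeof Button> = {
  title: "UI/Button",
  component: Button,
};
export default meta;

type Story = StoryObj<typeof Button>;

// One named export per state or variation you want to showcase.
export const Primary: Story = {
  args: { children: "Click me" },
};
```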
By integrating Storybook into our development workflow, we can streamline UI development, improve component reusability, and maintain a consistent design system across the project.
## 🔭 Tech Stack
### Core Framework & Language
- Next.js - React framework with App Router
- React - UI library for building user interfaces
- TypeScript - Typed JavaScript for better developer experience
### Styling & UI Components
- Tailwind CSS - Utility-first CSS framework
- shadcn/ui - Re-usable components built with Radix UI and Tailwind CSS
- Radix UI - Headless UI components for accessibility
- Phosphor Icons - Icon set used across the app
- Framer Motion - Animation library for React
### Development & Testing
- Storybook - Component development environment
- Playwright - End-to-end testing framework
- ESLint - JavaScript/TypeScript linting
- Prettier - Code formatting
### Backend & Services
- Supabase - Backend-as-a-Service (database, auth, storage)
- Sentry - Error monitoring and performance tracking
### Package Management

- pnpm - Package manager, managed automatically via corepack
### Additional Libraries
- React Hook Form - Forms with easy validation
- Zod - TypeScript-first schema validation
- React Table - Headless table library
- React Flow - Interactive node-based diagrams
- React Query - Data fetching and caching
- React Query DevTools - Debugging tool for React Query
### Development Tools

- `NEXT_PUBLIC_REACT_QUERY_DEVTOOL` - Enable React Query DevTools. Set to `true` to enable.