chore(frontend): move from yarn1 to pnpm (#10072)

## 🧢 Overview
This PR migrates the AutoGPT Platform frontend from [yarn
1](https://classic.yarnpkg.com/lang/en/) to [pnpm](https://pnpm.io/)
using **corepack** for automatic package manager management.

**yarn1** is no longer maintained and showing its age. Moving to **pnpm** gets us:
- ⚡ Significantly faster install times
- 💾 Better disk space efficiency
- 🛠️ Better community support and maintenance
- 💆🏽‍♂️ A very easy config swap

## 🏗️ Changes

### Package Management Migration

- Configured [corepack](https://github.com/nodejs/corepack) to use [pnpm](https://pnpm.io/)
- Deleted `yarn.lock` and generated a new `pnpm-lock.yaml`
- Updated `.gitignore`

### Documentation Updates

- `frontend/README.md`:
  - Added a comprehensive tech stack overview with links
  - Updated all commands to use pnpm
  - Added corepack setup instructions
  - Included a migration disclaimer for yarn users
- `backend/README.md`:
  - Updated installation instructions to use pnpm with corepack
- `AGENTS.md`:
  - Updated testing commands from yarn to pnpm
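
Corepack decides which package manager to activate by reading the `packageManager` field in `package.json`. A minimal sketch of that lookup (the manifest and helper below are hypothetical illustrations, not corepack's actual code):

```typescript
// Sketch of corepack-style resolution of the "packageManager" field.
// Everything here is a stand-in for illustration only.
interface Manifest {
  name: string;
  packageManager?: string; // e.g. "pnpm@9.1.0"
}

function resolvePackageManager(
  pkg: Manifest,
): { name: string; version: string } | null {
  if (!pkg.packageManager) return null;
  // Split on the last "@" so scoped-looking names don't break parsing
  const at = pkg.packageManager.lastIndexOf("@");
  if (at <= 0) return null;
  return {
    name: pkg.packageManager.slice(0, at),
    version: pkg.packageManager.slice(at + 1),
  };
}

const manifest: Manifest = { name: "frontend", packageManager: "pnpm@9.1.0" };
const parsed = resolvePackageManager(manifest);
console.log(parsed ? `${parsed.name} ${parsed.version}` : "no packageManager field"); // pnpm 9.1.0
```

Because the field is committed, every contributor and CI job gets the same pnpm version after a single `corepack enable`.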

### CI/CD & Infrastructure

- **GitHub Workflows**:
  - Updated all jobs to use pnpm with `corepack enable`
  - Cleaned up the FE Playwright test workflow to avoid Sentry noise
- **Dockerfile**:
  - Updated to use pnpm with corepack, changed the lock file reference, and updated the cache mount path
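
All of the updated CI and Docker installs pass `--frozen-lockfile`, which makes the build fail fast instead of silently regenerating the lockfile when it drifts from `package.json`. A conceptual sketch of that contract (the hash comparison and messages are stand-ins, not pnpm internals):

```typescript
// Conceptual sketch of --frozen-lockfile behavior: install succeeds only when
// the lockfile still matches the manifest, and never mutates the lockfile.
function installFrozen(manifestHash: string, lockfileManifestHash: string): string {
  if (manifestHash !== lockfileManifestHash) {
    // pnpm reports an outdated-lockfile error in this situation
    throw new Error(
      "lockfile is out of sync with package.json; run a plain install locally and commit the result",
    );
  }
  return "installed from lockfile";
}

console.log(installFrozen("abc123", "abc123")); // installed from lockfile
```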

### 📋 Checklist

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  
  **Test Plan:**
  > Assuming you are in the `frontend` folder
  - [x] Clean installation works: `rm -rf node_modules && corepack enable && pnpm install`
  - [x] Development server starts correctly: `pnpm dev`
  - [x] Build process works: `pnpm build`
  - [x] Linting and formatting work: `pnpm lint` and `pnpm format`
  - [x] Type checking works: `pnpm type-check`
  - [x] Tests run successfully: `pnpm test`
  - [x] Storybook starts correctly: `pnpm storybook`
  - [x] Docker build succeeds with new pnpm configuration
  - [x] GitHub Actions workflow passes with pnpm commands

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
---
Authored by Ubbe on 2025-06-04 17:07:29 +04:00; committed by GitHub.
Parent `c8f2c7bc88` · commit `73a3d980ca`
27 changed files with 16596 additions and 12556 deletions

```diff
@@ -27,7 +27,7 @@
 !autogpt_platform/frontend/src/
 !autogpt_platform/frontend/public/
 !autogpt_platform/frontend/package.json
-!autogpt_platform/frontend/yarn.lock
+!autogpt_platform/frontend/pnpm-lock.yaml
 !autogpt_platform/frontend/tsconfig.json
 !autogpt_platform/frontend/README.md

 ## config
```

```diff
@@ -14,13 +14,13 @@ updates:
     production-dependencies:
       dependency-type: "production"
       update-types:
         - "minor"
         - "patch"
     development-dependencies:
       dependency-type: "development"
       update-types:
         - "minor"
         - "patch"

   # backend (Poetry project)
   - package-ecosystem: "pip"
@@ -36,16 +36,16 @@ updates:
     production-dependencies:
       dependency-type: "production"
       update-types:
         - "minor"
         - "patch"
     development-dependencies:
       dependency-type: "development"
       update-types:
         - "minor"
         - "patch"

   # frontend (Next.js project)
-  - package-ecosystem: "npm"
+  - package-ecosystem: "pnpm"
     directory: "autogpt_platform/frontend"
     schedule:
       interval: "weekly"
@@ -58,13 +58,13 @@ updates:
     production-dependencies:
       dependency-type: "production"
       update-types:
         - "minor"
         - "patch"
     development-dependencies:
       dependency-type: "development"
       update-types:
         - "minor"
         - "patch"

   # infra (Terraform)
   - package-ecosystem: "terraform"
@@ -81,14 +81,13 @@ updates:
     production-dependencies:
       dependency-type: "production"
       update-types:
         - "minor"
         - "patch"
     development-dependencies:
       dependency-type: "development"
       update-types:
         - "minor"
         - "patch"

   # GitHub Actions
   - package-ecosystem: "github-actions"
@@ -101,14 +100,13 @@ updates:
     production-dependencies:
       dependency-type: "production"
       update-types:
         - "minor"
         - "patch"
     development-dependencies:
       dependency-type: "development"
       update-types:
         - "minor"
         - "patch"

   # Docker
   - package-ecosystem: "docker"
@@ -121,16 +119,16 @@ updates:
     production-dependencies:
       dependency-type: "production"
       update-types:
         - "minor"
         - "patch"
     development-dependencies:
       dependency-type: "development"
       update-types:
         - "minor"
         - "patch"

   # Docs
-  - package-ecosystem: 'pip'
+  - package-ecosystem: "pip"
     directory: "docs/"
     schedule:
       interval: "weekly"
@@ -142,10 +140,10 @@ updates:
     production-dependencies:
       dependency-type: "production"
       update-types:
         - "minor"
         - "patch"
     development-dependencies:
       dependency-type: "development"
       update-types:
         - "minor"
         - "patch"
```

```diff
@@ -29,13 +29,14 @@ jobs:
         with:
           node-version: "21"
+      - name: Enable corepack
+        run: corepack enable
       - name: Install dependencies
-        run: |
-          yarn install --frozen-lockfile
+        run: pnpm install --frozen-lockfile
       - name: Run lint
-        run: |
-          yarn lint
+        run: pnpm lint

   type-check:
     runs-on: ubuntu-latest
@@ -48,13 +49,14 @@ jobs:
         with:
           node-version: "21"
+      - name: Enable corepack
+        run: corepack enable
       - name: Install dependencies
-        run: |
-          yarn install --frozen-lockfile
+        run: pnpm install --frozen-lockfile
       - name: Run tsc check
-        run: |
-          yarn type-check
+        run: pnpm type-check

   test:
     runs-on: ubuntu-latest
@@ -74,6 +76,9 @@ jobs:
         with:
           node-version: "21"
+      - name: Enable corepack
+        run: corepack enable
       - name: Free Disk Space (Ubuntu)
         uses: jlumbroso/free-disk-space@main
         with:
@@ -93,25 +98,20 @@ jobs:
           docker compose -f ../docker-compose.yml up -d
       - name: Install dependencies
-        run: |
-          yarn install --frozen-lockfile
+        run: pnpm install --frozen-lockfile
-      - name: Setup Builder .env
-        run: |
-          cp .env.example .env
+      - name: Setup .env
+        run: cp .env.example .env
       - name: Install Browser '${{ matrix.browser }}'
-        run: yarn playwright install --with-deps ${{ matrix.browser }}
+        run: pnpm playwright install --with-deps ${{ matrix.browser }}
-      - name: Run tests
-        timeout-minutes: 20
-        run: |
-          yarn test --project=${{ matrix.browser }}
+      - name: Run Playwright tests
+        run: pnpm test --project=${{ matrix.browser }}
       - name: Print Final Docker Compose logs
         if: always()
-        run: |
-          docker compose -f ../docker-compose.yml logs
+        run: docker compose -f ../docker-compose.yml logs
       - uses: actions/upload-artifact@v4
         if: ${{ !cancelled() }}
```

```diff
@@ -235,7 +235,7 @@ repos:
     hooks:
       - id: tsc
         name: Typecheck - AutoGPT Platform - Frontend
-        entry: bash -c 'cd autogpt_platform/frontend && npm run type-check'
+        entry: bash -c 'cd autogpt_platform/frontend && pnpm type-check'
         files: ^autogpt_platform/frontend/
         types: [file]
         language: system
```

```diff
@@ -3,6 +3,7 @@
 This guide provides context for Codex when updating the **autogpt_platform** folder.

 ## Directory overview
 - `autogpt_platform/backend` FastAPI based backend service.
 - `autogpt_platform/autogpt_libs` Shared Python libraries.
 - `autogpt_platform/frontend` Next.js + Typescript frontend.
@@ -11,12 +12,14 @@ This guide provides context for Codex when updating the **autogpt_platform** fol
 See `docs/content/platform/getting-started.md` for setup instructions.

 ## Code style
 - Format Python code with `poetry run format`.
-- Format frontend code using `yarn format`.
+- Format frontend code using `pnpm format`.

 ## Testing
 - Backend: `poetry run test` (runs pytest with a docker based postgres + prisma).
-- Frontend: `yarn test` or `yarn test-ui` for Playwright tests. See `docs/content/platform/contributing/tests.md` for tips.
+- Frontend: `pnpm test` or `pnpm test-ui` for Playwright tests. See `docs/content/platform/contributing/tests.md` for tips.

 Always run the relevant linters and tests before committing.
 Use conventional commit messages for all commits (e.g. `feat(backend): add API`).
@@ -38,6 +41,7 @@ Use conventional commit messages for all commits (e.g. `feat(backend): add API`)
 - blocks

 ## Pull requests
 - Use the template in `.github/PULL_REQUEST_TEMPLATE.md`.
 - Rely on the pre-commit checks for linting and formatting
 - Fill out the **Changes** section and the checklist.
@@ -47,4 +51,3 @@ Use conventional commit messages for all commits (e.g. `feat(backend): add API`)
 - For changes touching `data/*.py`, validate user ID checks or explain why not needed.
 - If adding protected frontend routes, update `frontend/lib/supabase/middleware.ts`.
-- Use the linear ticket branch structure if given codex/open-1668-resume-dropped-runs
```

````diff
@@ -15,44 +15,57 @@ Welcome to the AutoGPT Platform - a powerful system for creating and running AI

 To run the AutoGPT Platform, follow these steps:

 1. Clone this repository to your local machine and navigate to the `autogpt_platform` directory within the repository:
    ```
    git clone <https://github.com/Significant-Gravitas/AutoGPT.git | git@github.com:Significant-Gravitas/AutoGPT.git>
    cd AutoGPT/autogpt_platform
    ```
 2. Run the following command:
    ```
    cp .env.example .env
    ```
    This command will copy the `.env.example` file to `.env`. You can modify the `.env` file to add your own environment variables.
 3. Run the following command:
    ```
    docker compose up -d
    ```
    This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.
 4. Navigate to `frontend` within the `autogpt_platform` directory:
    ```
    cd frontend
    ```
    You will need to run your frontend application separately on your local machine.
 5. Run the following command:
    ```
    cp .env.example .env.local
    ```
    This command will copy the `.env.example` file to `.env.local` in the `frontend` directory. You can modify the `.env.local` within this folder to add your own environment variables for the frontend application.
 6. Run the following command:
+   Enable corepack and install dependencies by running:
    ```
-   npm install
-   npm run dev
+   corepack enable
+   pnpm i
    ```
-   This command will install the necessary dependencies and start the frontend application in development mode.
-   If you are using Yarn, you can run the following commands instead:
+   Then start the frontend application in development mode:
    ```
-   yarn install && yarn dev
+   pnpm dev
    ```
 7. Open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
@@ -68,43 +81,52 @@ Here are some useful Docker Compose commands for managing your AutoGPT Platform:
 - `docker compose down`: Stop and remove containers, networks, and volumes.
 - `docker compose watch`: Watch for changes in your services and automatically update them.

 ### Sample Scenarios

 Here are some common scenarios where you might use multiple Docker Compose commands:

 1. Updating and restarting a specific service:
    ```
    docker compose build api_srv
    docker compose up -d --no-deps api_srv
    ```
    This rebuilds the `api_srv` service and restarts it without affecting other services.
 2. Viewing logs for troubleshooting:
    ```
    docker compose logs -f api_srv ws_srv
    ```
    This shows and follows the logs for both `api_srv` and `ws_srv` services.
 3. Scaling a service for increased load:
    ```
    docker compose up -d --scale executor=3
    ```
    This scales the `executor` service to 3 instances to handle increased load.
 4. Stopping the entire system for maintenance:
    ```
    docker compose stop
    docker compose rm -f
    docker compose pull
    docker compose up -d
    ```
    This stops all services, removes containers, pulls the latest images, and restarts the system.
 5. Developing with live updates:
    ```
    docker compose watch
    ```
    This watches for changes in your code and automatically updates the relevant services.
 6. Checking the status of services:
@@ -115,7 +137,6 @@ Here are some common scenarios where you might use multiple Docker Compose comma
 These scenarios demonstrate how to use Docker Compose commands in combination to manage your AutoGPT Platform effectively.

 ### Persisting Data

 To persist data for PostgreSQL and Redis, you can modify the `docker-compose.yml` file to add volumes. Here's how:
````

```diff
@@ -22,9 +22,14 @@
 # debug
 npm-debug.log*
+pnpm-debug.log*
 yarn-debug.log*
 yarn-error.log*

+# lock files (from yarn1 or npm)
+yarn.lock
+package-lock.json
+
 # local env files
 .env*.local
```

```diff
@@ -1,4 +1,5 @@
 node_modules
+pnpm-lock.yaml
 .next
 build
 public
```

```diff
@@ -1,8 +1,9 @@
 # Base stage for both dev and prod
 FROM node:21-alpine AS base
 WORKDIR /app
-COPY autogpt_platform/frontend/package.json autogpt_platform/frontend/yarn.lock ./
-RUN --mount=type=cache,target=/usr/local/share/.cache yarn install --frozen-lockfile
+RUN corepack enable
+COPY autogpt_platform/frontend/package.json autogpt_platform/frontend/pnpm-lock.yaml ./
+RUN --mount=type=cache,target=/root/.local/share/pnpm pnpm install --frozen-lockfile

 # Dev stage
 FROM base AS dev
@@ -10,13 +11,13 @@ ENV NODE_ENV=development
 ENV HOSTNAME=0.0.0.0
 COPY autogpt_platform/frontend/ .
 EXPOSE 3000
-CMD ["yarn", "run", "dev", "--hostname", "0.0.0.0"]
+CMD ["pnpm", "run", "dev", "--hostname", "0.0.0.0"]

 # Build stage for prod
 FROM base AS build
 COPY autogpt_platform/frontend/ .
 ENV SKIP_STORYBOOK_TESTS=true
-RUN yarn build
+RUN pnpm build

 # Prod stage - based on NextJS reference Dockerfile https://github.com/vercel/next.js/blob/64271354533ed16da51be5dce85f0dbd15f17517/examples/with-docker/Dockerfile
 FROM node:21-alpine AS prod
```

````diff
@@ -1,46 +1,76 @@
 This is the frontend for AutoGPT's next generation

-## Getting Started
+## 🧢 Getting Started

-Run the following installation once.
+This project uses [**pnpm**](https://pnpm.io/) as the package manager via **corepack**. [Corepack](https://github.com/nodejs/corepack) is a Node.js tool that automatically manages package managers without requiring global installations.

-```bash
-npm install
-# or
-yarn install
-# or
-pnpm install
-# or
-bun install
-```
+### Prerequisites

-Next, run the development server:
+Make sure you have Node.js 16.10+ installed. Corepack is included with Node.js by default.

-```bash
-npm run dev
-# or
-yarn dev
-# or
-pnpm dev
-# or
-bun dev
-```
+### ⚠️ Migrating from yarn
+
+> This project was previously using yarn1, make sure to clean up the old files if you set it up previously with yarn:
+>
+> ```bash
+> rm -f yarn.lock && rm -rf node_modules
+> ```
+>
+> Then follow the setup steps below.
+
+### Setup
+
+1. **Enable corepack** (run this once on your system):
+
+   ```bash
+   corepack enable
+   ```
+
+   This enables corepack to automatically manage pnpm based on the `packageManager` field in `package.json`.
+
+2. **Install dependencies**:
+
+   ```bash
+   pnpm i
+   ```
+
+3. **Start the development server**:
+
+   ```bash
+   pnpm dev
+   ```

 Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

 You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.

-For subsequent runs, you do not have to `npm install` again. Simply do `npm run dev`.
-If the project is updated via git, you will need to `npm install` after each update.
+### Subsequent Runs
+
+For subsequent development sessions, you only need to run:
+
+```bash
+pnpm dev
+```
+
+Every time a new Front-end dependency is added by you or others, you will need to run `pnpm i` to install the new dependencies.
+
+### Available Scripts
+
+- `pnpm dev` - Start development server
+- `pnpm build` - Build for production
+- `pnpm start` - Start production server
+- `pnpm lint` - Run ESLint and Prettier checks
+- `pnpm format` - Format code with Prettier
+- `pnpm type-check` - Run TypeScript type checking
+- `pnpm test` - Run Playwright tests
+- `pnpm test-ui` - Run Playwright tests with UI

 This project uses [`next/font`](https://nextjs.org/docs/basic-features/font-optimization) to automatically optimize and load Inter, a custom Google Font.

-## Deploy
+## 🚚 Deploy

 TODO

-## Storybook
+## 📙 Storybook

 Storybook is a powerful development environment for UI components. It allows you to build UI components in isolation, making it easier to develop, test, and document your components independently from your main application.
@@ -57,7 +87,7 @@ Storybook is a powerful development environment for UI components. It allows you
 Run the following command to start the Storybook development server:

 ```bash
-npm run storybook
+pnpm storybook
 ```

 This will start Storybook on port 6006. Open [http://localhost:6006](http://localhost:6006) in your browser to view your component library.
@@ -66,23 +96,63 @@ Storybook is a powerful development environment for UI components. It allows you
 To build a static version of Storybook for deployment, use:

 ```bash
-npm run build-storybook
+pnpm build-storybook
 ```

 3. **Running Storybook Tests**:
    Storybook tests can be run using:
    ```bash
-   npm run test-storybook
+   pnpm test-storybook
    ```
    For CI environments, use:
    ```bash
-   npm run test-storybook:ci
+   pnpm test-storybook:ci
    ```
 4. **Writing Stories**:
    Create `.stories.tsx` files alongside your components to define different states and variations of your components.

 By integrating Storybook into our development workflow, we can streamline UI development, improve component reusability, and maintain a consistent design system across the project.

+## 🔭 Tech Stack
+
+### Core Framework & Language
+- [**Next.js**](https://nextjs.org/) - React framework with App Router
+- [**React**](https://react.dev/) - UI library for building user interfaces
+- [**TypeScript**](https://www.typescriptlang.org/) - Typed JavaScript for better developer experience
+
+### Styling & UI Components
+- [**Tailwind CSS**](https://tailwindcss.com/) - Utility-first CSS framework
+- [**shadcn/ui**](https://ui.shadcn.com/) - Re-usable components built with Radix UI and Tailwind CSS
+- [**Radix UI**](https://www.radix-ui.com/) - Headless UI components for accessibility
+- [**Lucide React**](https://lucide.dev/guide/packages/lucide-react) - Beautiful & consistent icons
+- [**Framer Motion**](https://motion.dev/) - Animation library for React
+
+### Development & Testing
+- [**Storybook**](https://storybook.js.org/) - Component development environment
+- [**Playwright**](https://playwright.dev/) - End-to-end testing framework
+- [**ESLint**](https://eslint.org/) - JavaScript/TypeScript linting
+- [**Prettier**](https://prettier.io/) - Code formatting
+
+### Backend & Services
+- [**Supabase**](https://supabase.com/) - Backend-as-a-Service (database, auth, storage)
+- [**Sentry**](https://sentry.io/) - Error monitoring and performance tracking
+
+### Package Management
+- [**pnpm**](https://pnpm.io/) - Fast, disk space efficient package manager
+- [**Corepack**](https://github.com/nodejs/corepack) - Node.js package manager management
+
+### Additional Libraries
+- [**React Hook Form**](https://react-hook-form.com/) - Forms with easy validation
+- [**Zod**](https://zod.dev/) - TypeScript-first schema validation
+- [**React Table**](https://tanstack.com/table) - Headless table library
+- [**React Flow**](https://reactflow.dev/) - Interactive node-based diagrams
````

```diff
@@ -5,55 +5,59 @@
 import { getEnvironmentStr } from "@/lib/utils";
 import * as Sentry from "@sentry/nextjs";

-Sentry.init({
-  dsn: "https://fe4e4aa4a283391808a5da396da20159@o4505260022104064.ingest.us.sentry.io/4507946746380288",
-  enabled: process.env.DISABLE_SENTRY !== "true",
-  environment: getEnvironmentStr(),
-  // Add optional integrations for additional features
-  integrations: [
-    Sentry.replayIntegration(),
-    Sentry.httpClientIntegration(),
-    Sentry.replayCanvasIntegration(),
-    Sentry.reportingObserverIntegration(),
-    Sentry.browserProfilingIntegration(),
-    // Sentry.feedbackIntegration({
-    //   // Additional SDK configuration goes in here, for example:
-    //   colorScheme: "system",
-    // }),
-  ],
-  // Define how likely traces are sampled. Adjust this value in production, or use tracesSampler for greater control.
-  tracesSampleRate: 1,
-  // Set `tracePropagationTargets` to control for which URLs trace propagation should be enabled
-  tracePropagationTargets: [
-    "localhost",
-    "localhost:8006",
-    /^https:\/\/dev\-builder\.agpt\.co\/api/,
-    /^https:\/\/.*\.agpt\.co\/api/,
-  ],
-  // Define how likely Replay events are sampled.
-  // This sets the sample rate to be 10%. You may want this to be 100% while
-  // in development and sample at a lower rate in production
-  replaysSessionSampleRate: 0.1,
-  // Define how likely Replay events are sampled when an error occurs.
-  replaysOnErrorSampleRate: 1.0,
-  // Setting this option to true will print useful information to the console while you're setting up Sentry.
-  debug: false,
-  // Set profilesSampleRate to 1.0 to profile every transaction.
-  // Since profilesSampleRate is relative to tracesSampleRate,
-  // the final profiling rate can be computed as tracesSampleRate * profilesSampleRate
-  // For example, a tracesSampleRate of 0.5 and profilesSampleRate of 0.5 would
-  // result in 25% of transactions being profiled (0.5*0.5=0.25)
-  profilesSampleRate: 1.0,
-  _experiments: {
-    // Enable logs to be sent to Sentry.
-    enableLogs: true,
-  },
-});
+if (process.env.NODE_ENV === "production") {
+  Sentry.init({
+    dsn: "https://fe4e4aa4a283391808a5da396da20159@o4505260022104064.ingest.us.sentry.io/4507946746380288",
+    environment: getEnvironmentStr(),
+    // Add optional integrations for additional features
+    integrations: [
+      Sentry.replayIntegration(),
+      Sentry.httpClientIntegration(),
+      Sentry.replayCanvasIntegration(),
+      Sentry.reportingObserverIntegration(),
+      Sentry.browserProfilingIntegration(),
+      // Sentry.feedbackIntegration({
+      //   // Additional SDK configuration goes in here, for example:
+      //   colorScheme: "system",
+      // }),
+    ],
+    // Define how likely traces are sampled. Adjust this value in production, or use tracesSampler for greater control.
+    tracesSampleRate: 1,
+    // Set `tracePropagationTargets` to control for which URLs trace propagation should be enabled
+    tracePropagationTargets: [
+      "localhost",
+      "localhost:8006",
+      /^https:\/\/dev\-builder\.agpt\.co\/api/,
+      /^https:\/\/.*\.agpt\.co\/api/,
+    ],
+    // Define how likely Replay events are sampled.
+    // This sets the sample rate to be 10%. You may want this to be 100% while
+    // in development and sample at a lower rate in production
+    replaysSessionSampleRate: 0.1,
+    // Define how likely Replay events are sampled when an error occurs.
+    replaysOnErrorSampleRate: 1.0,
+    // Setting this option to true will print useful information to the console while you're setting up Sentry.
+    debug: false,
+    // Set profilesSampleRate to 1.0 to profile every transaction.
+    // Since profilesSampleRate is relative to tracesSampleRate,
+    // the final profiling rate can be computed as tracesSampleRate * profilesSampleRate
+    // For example, a tracesSampleRate of 0.5 and profilesSampleRate of 0.5 would
+    // result in 25% of transactions being profiled (0.5*0.5=0.25)
+    profilesSampleRate: 1.0,
+    _experiments: {
+      // Enable logs to be sent to Sentry.
+      enableLogs: true,
+    },
+  });
+}
+
+// Export the required hook for navigation instrumentation
+export const onRouterTransitionStart = Sentry.captureRouterTransitionStart;
```
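
The Sentry change above boils down to gating initialization on the build environment. The same pattern in isolation (the `init` callback and DSN below are placeholders, not the real SDK):

```typescript
// Stand-in for the production-only initialization pattern:
// telemetry setup runs only when the environment is "production".
type InitFn = (options: { dsn: string }) => void;

function maybeInit(nodeEnv: string | undefined, init: InitFn): boolean {
  if (nodeEnv !== "production") return false; // dev builds skip telemetry entirely
  init({ dsn: "https://public-key@example.invalid/1" }); // placeholder DSN
  return true;
}

console.log(maybeInit("development", () => {})); // false
console.log(maybeInit("production", () => {})); // true
```

Skipping init entirely in development is what removes the Sentry noise from local runs and Playwright CI.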

View File

@@ -23,56 +23,60 @@ const nextConfig = {
   transpilePackages: ["geist"],
 };
 
-export default withSentryConfig(nextConfig, {
-  // For all available options, see:
-  // https://github.com/getsentry/sentry-webpack-plugin#options
-  org: "significant-gravitas",
-  project: "builder",
-
-  // Only print logs for uploading source maps in CI
-  silent: !process.env.CI,
-
-  // For all available options, see:
-  // https://docs.sentry.io/platforms/javascript/guides/nextjs/manual-setup/
-
-  // Upload a larger set of source maps for prettier stack traces (increases build time)
-  widenClientFileUpload: true,
-
-  // Automatically annotate React components to show their full name in breadcrumbs and session replay
-  reactComponentAnnotation: {
-    enabled: true,
-  },
-
-  // Route browser requests to Sentry through a Next.js rewrite to circumvent ad-blockers.
-  // This can increase your server load as well as your hosting bill.
-  // Note: Check that the configured route will not match with your Next.js middleware, otherwise reporting of client-
-  // side errors will fail.
-  tunnelRoute: "/store",
-
-  // Hides source maps from generated client bundles
-  hideSourceMaps: true,
-
-  // Automatically tree-shake Sentry logger statements to reduce bundle size
-  disableLogger: true,
-
-  // Enables automatic instrumentation of Vercel Cron Monitors. (Does not yet work with App Router route handlers.)
-  // See the following for more information:
-  // https://docs.sentry.io/product/crons/
-  // https://vercel.com/docs/cron-jobs
-  automaticVercelMonitors: true,
-
-  async headers() {
-    return [
-      {
-        source: "/:path*",
-        headers: [
-          {
-            key: "Document-Policy",
-            value: "js-profiling",
-          },
-        ],
-      },
-    ];
-  },
-});
+const isDevelopmentBuild = process.env.NODE_ENV !== "production";
+
+export default isDevelopmentBuild
+  ? nextConfig
+  : withSentryConfig(nextConfig, {
+      // For all available options, see:
+      // https://github.com/getsentry/sentry-webpack-plugin#options
+      org: "significant-gravitas",
+      project: "builder",
+
+      // Only print logs for uploading source maps in CI
+      silent: !process.env.CI,
+
+      // For all available options, see:
+      // https://docs.sentry.io/platforms/javascript/guides/nextjs/manual-setup/
+
+      // Upload a larger set of source maps for prettier stack traces (increases build time)
+      widenClientFileUpload: true,
+
+      // Automatically annotate React components to show their full name in breadcrumbs and session replay
+      reactComponentAnnotation: {
+        enabled: true,
+      },
+
+      // Route browser requests to Sentry through a Next.js rewrite to circumvent ad-blockers.
+      // This can increase your server load as well as your hosting bill.
+      // Note: Check that the configured route will not match with your Next.js middleware, otherwise reporting of client-
+      // side errors will fail.
+      tunnelRoute: "/store",
+
+      // Hides source maps from generated client bundles
+      hideSourceMaps: true,
+
+      // Automatically tree-shake Sentry logger statements to reduce bundle size
+      disableLogger: true,
+
+      // Enables automatic instrumentation of Vercel Cron Monitors. (Does not yet work with App Router route handlers.)
+      // See the following for more information:
+      // https://docs.sentry.io/product/crons/
+      // https://vercel.com/docs/cron-jobs
+      automaticVercelMonitors: true,
+
+      async headers() {
+        return [
+          {
+            source: "/:path*",
+            headers: [
+              {
+                key: "Document-Policy",
+                value: "js-profiling",
+              },
+            ],
+          },
+        ];
+      },
+    });


@@ -4,7 +4,6 @@
   "private": true,
   "scripts": {
     "dev": "next dev",
-    "dev:nosentry": "NODE_ENV=development && DISABLE_SENTRY=true && next dev",
     "dev:test": "NODE_ENV=test && next dev",
     "build": "SKIP_STORYBOOK_TESTS=true next build",
     "start": "next start",
@@ -17,7 +16,7 @@
     "storybook": "storybook dev -p 6006",
     "build-storybook": "storybook build",
     "test-storybook": "test-storybook",
-    "test-storybook:ci": "concurrently -k -s first -n \"SB,TEST\" -c \"magenta,blue\" \"npm run build-storybook -- --quiet && npx http-server storybook-static --port 6006 --silent\" \"wait-on tcp:6006 && npm run test-storybook\""
+    "test-storybook:ci": "concurrently -k -s first -n \"SB,TEST\" -c \"magenta,blue\" \"pnpm run build-storybook -- --quiet && npx http-server storybook-static --port 6006 --silent\" \"wait-on tcp:6006 && pnpm run test-storybook\""
   },
   "browserslist": [
     "defaults"
@@ -81,6 +80,7 @@
     "react-modal": "^3.16.3",
     "react-shepherd": "^6.1.8",
     "recharts": "^2.15.3",
+    "shepherd.js": "^14.5.0",
     "tailwind-merge": "^2.6.0",
     "tailwindcss-animate": "^1.0.7",
     "uuid": "^11.1.0",
@@ -121,10 +121,10 @@
     "tailwindcss": "^3.4.17",
     "typescript": "^5"
   },
-  "packageManager": "yarn@1.22.22+sha512.a6b2f7906b721bba3d67d4aff083df04dad64c399707841b7acf00f6b133b7ac24255f2652fa22ae3534329dc6180534e98d17432037ff6fd140556e2bb3137e",
   "msw": {
     "workerDirectory": [
       "public"
     ]
-  }
+  },
+  "packageManager": "pnpm@10.11.1+sha256.211e9990148495c9fc30b7e58396f7eeda83d9243eb75407ea4f8650fb161f7c"
 }
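Corepack selects the package manager from the `packageManager` field pinned above, whose shape is `<name>@<version>` with an optional `+<hash-algorithm>.<hash>` integrity suffix. A rough sketch of parsing that field (a hypothetical helper, not part of the PR or of corepack's API):

```python
import json
import re

def parse_package_manager(package_json_text: str):
    # The field looks like "pnpm@10.11.1+sha256.<hash>"; the integrity
    # suffix is optional (plain "yarn@1.22.22" is also valid).
    field = json.loads(package_json_text)["packageManager"]
    match = re.fullmatch(r"([a-z]+)@([\d.]+)(?:\+(.+))?", field)
    if match is None:
        raise ValueError(f"unrecognized packageManager field: {field}")
    name, version, integrity = match.groups()
    return name, version, integrity

name, version, _ = parse_package_manager('{"packageManager": "pnpm@10.11.1+sha256.211e"}')
print(name, version)  # pnpm 10.11.1
```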


@@ -22,7 +22,7 @@ export default defineConfig({
   /* Opt out of parallel tests on CI. */
   workers: process.env.CI ? 1 : undefined,
   /* Reporter to use. See https://playwright.dev/docs/test-reporters */
-  reporter: "html",
+  reporter: process.env.CI ? [["html"]] : [["list"], ["html"]],
   /* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
   use: {
     /* Base URL to use in actions like `await page.goto('/')`. */
@@ -76,9 +76,12 @@ export default defineConfig({
   /* Run your local dev server before starting the tests */
   webServer: {
-    command: "npm run build && npm run start",
+    command: "pnpm run build && pnpm run start",
     url: "http://localhost:3000/",
     reuseExistingServer: !process.env.CI,
     timeout: 120 * 1000,
+    env: {
+      NODE_ENV: "test",
+    },
   },
 });

autogpt_platform/frontend/pnpm-lock.yaml: generated file, 16243 lines (diff suppressed because it is too large)


@@ -9,7 +9,6 @@ import { getEnvironmentStr } from "./src/lib/utils";
 Sentry.init({
   dsn: "https://fe4e4aa4a283391808a5da396da20159@o4505260022104064.ingest.us.sentry.io/4507946746380288",
-  enabled: process.env.NODE_ENV !== "development",
   environment: getEnvironmentStr(),
 
   // Define how likely traces are sampled. Adjust this value in production, or use tracesSampler for greater control.


@@ -9,7 +9,6 @@ import * as Sentry from "@sentry/nextjs";
 Sentry.init({
   dsn: "https://fe4e4aa4a283391808a5da396da20159@o4505260022104064.ingest.us.sentry.io/4507946746380288",
-  enabled: process.env.NODE_ENV !== "development",
   environment: getEnvironmentStr(),
 
   // Define how likely traces are sampled. Adjust this value in production, or use tracesSampler for greater control.


@@ -2,6 +2,9 @@ import BackendAPI from "@/lib/autogpt-server-api";
 import { redirect } from "next/navigation";
 import { finishOnboarding } from "./6-congrats/actions";
 
+// Force dynamic rendering to avoid static generation issues with cookies
+export const dynamic = "force-dynamic";
+
 export default async function OnboardingPage() {
   const api = new BackendAPI();


@@ -8,6 +8,9 @@ import { Separator } from "@/components/ui/separator";
 import { Metadata } from "next";
 import getServerUser from "@/lib/supabase/getServerUser";
 
+// Force dynamic rendering to avoid static generation issues with cookies
+export const dynamic = "force-dynamic";
+
 export async function generateMetadata({
   params,
 }: {


@@ -6,6 +6,9 @@ import { CreatorInfoCard } from "@/components/agptui/CreatorInfoCard";
 import { CreatorLinks } from "@/components/agptui/CreatorLinks";
 import { Separator } from "@/components/ui/separator";
 
+// Force dynamic rendering to avoid static generation issues with cookies
+export const dynamic = "force-dynamic";
+
 export async function generateMetadata({
   params,
 }: {


@@ -18,6 +18,9 @@ import {
 } from "@/lib/autogpt-server-api/types";
 import BackendAPI from "@/lib/autogpt-server-api";
 
+// Force dynamic rendering to avoid static generation issues with cookies
+export const dynamic = "force-dynamic";
+
 async function getStoreData() {
   try {
     const api = new BackendAPI();


@@ -4,6 +4,9 @@ import { ProfileInfoForm } from "@/components/agptui/ProfileInfoForm";
 import BackendAPI from "@/lib/autogpt-server-api";
 import { CreatorDetails } from "@/lib/autogpt-server-api/types";
 
+// Force dynamic rendering to avoid static generation issues with cookies
+export const dynamic = "force-dynamic";
+
 async function getProfileData(api: BackendAPI) {
   try {
     const profile = await api.getStoreProfile();


@@ -1,14 +1,14 @@
 /* flow.css or index.css */
 body {
-  font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", "Roboto",
-    "Oxygen", "Ubuntu", "Cantarell", "Fira Sans", "Droid Sans",
-    "Helvetica Neue", sans-serif;
+  font-family:
+    -apple-system, BlinkMacSystemFont, "Segoe UI", "Roboto", "Oxygen", "Ubuntu",
+    "Cantarell", "Fira Sans", "Droid Sans", "Helvetica Neue", sans-serif;
 }
 
 code {
-  font-family: source-code-pro, Menlo, Monaco, Consolas, "Courier New",
-    monospace;
+  font-family:
+    source-code-pro, Menlo, Monaco, Consolas, "Courier New", monospace;
 }
 
 .modal {
File diff suppressed because it is too large


@@ -13,19 +13,19 @@ To run the tests, you can use the following commands:
 Running the tests without the UI, and headless:
 
 ```bash
-yarn test
+pnpm test
 ```
 
 If you want to run the tests in a UI where you can identify each locator used you can use the following command:
 
 ```bash
-yarn test-ui
+pnpm test-ui
 ```
 
-You can also pass `--debug` to the test command to open the browsers in view mode rather than headless. This works with both the `yarn test` and `yarn test-ui` commands.
+You can also pass `--debug` to the test command to open the browsers in view mode rather than headless. This works with both the `pnpm test` and `pnpm test-ui` commands.
 
 ```bash
-yarn test --debug
+pnpm test --debug
 ```
 
 In CI, we run the tests in headless mode, with multiple browsers, and retry a failed test up to 2 times.
@@ -45,7 +45,7 @@ No matter what you do, you should **always** double check that your locators are
 If you need to debug a test, you can use the below command to open the test in the playwright test editor. This is helpful if you want to see the test in the browser and see the state of the page as the test sees it and the locators it uses.
 
 ```bash
-yarn test --debug --test-name-pattern="test-name"
+pnpm test --debug --test-name-pattern="test-name"
 ```
 
 #### Using vscode
@@ -64,7 +64,7 @@ This will save a file called `.auth/gentest-user.json` that can be loaded for al
 ### Saving a session for gen tests to always use
 
 ```bash
-yarn gentests --save-storage .auth/gentest-user.json
+pnpm gentests --save-storage .auth/gentest-user.json
 ```
 
 Stop your session with `CTRL + C` after you are logged in and swap the `--save-storage` flag with `--load-storage` to load the session for all future tests.
@@ -72,7 +72,7 @@ Stop your session with `CTRL + C` after you are logged in and swap the `--save-s
 ### Loading a session for gen tests to always use
 
 ```bash
-yarn gentests --load-storage .auth/gentest-user.json
+pnpm gentests --load-storage .auth/gentest-user.json
 ```
 
 ## How to make a new test


@@ -117,23 +117,27 @@ To run the backend services, follow these steps:
 To run the frontend application open a new terminal and follow these steps:
 
-* Navigate to `frontend` folder within the `autogpt_platform` directory:
+- Navigate to `frontend` folder within the `autogpt_platform` directory:
 
   ```
   cd frontend
   ```
 
-* Copy the `.env.example` file available in the `frontend` directory to `.env` in the same directory:
+- Copy the `.env.example` file available in the `frontend` directory to `.env` in the same directory:
 
   ```
   cp .env.example .env
   ```
 
   You can modify the `.env` within this folder to add your own environment variables for the frontend application.
 
-* Run the following command:
+- Run the following command:
 
   ```
-  npm install
-  npm run dev
+  corepack enable
+  pnpm install
+  pnpm dev
   ```
 
-  This command will install the necessary dependencies and start the frontend application in development mode.
+  This command will enable corepack, install the necessary dependencies with pnpm, and start the frontend application in development mode.
 
 ### Checking if the application is running


@@ -12,7 +12,9 @@ Follow these steps to set up and run Ollama with the AutoGPT platform.
 ## Setup Steps
 
 ### 1. Launch Ollama
+
 Open a new terminal and execute:
+
 ```bash
 ollama run llama3.2
 ```
@@ -20,17 +22,23 @@ ollama run llama3.2
 > **Note**: This will download the [llama3.2](https://ollama.com/library/llama3.2) model and start the service. Keep this terminal running in the background.
 
 ### 2. Start the Backend
+
 Open a new terminal and navigate to the autogpt_platform directory:
+
 ```bash
 cd autogpt_platform
 docker compose up -d --build
 ```
 
 ### 3. Start the Frontend
+
 Open a new terminal and navigate to the frontend directory:
+
 ```bash
 cd autogpt_platform/frontend
-npm run dev
+corepack enable
+pnpm i
+pnpm dev
 ```
 
 Then visit [http://localhost:3000](http://localhost:3000) to see the frontend running. After registering an account/logging in, navigate to the build page at [http://localhost:3000/build](http://localhost:3000/build)
@@ -46,13 +54,13 @@ Now that both Ollama and the AutoGPT platform are running we can move onto using
 ![Select Ollama Model](../imgs/ollama/Ollama-Select-Llama32.png)
 
 3. Now we need to add some prompts then save and then run the graph:
 ![Add Prompt](../imgs/ollama/Ollama-Add-Prompts.png)
 
 That's it! You've successfully set up the AutoGPT platform and made an LLM call to Ollama.
 ![Ollama Output](../imgs/ollama/Ollama-Output.png)
 
 ### Using Ollama on a Remote Server with AutoGPT
 
 For running Ollama on a remote server, simply make sure the Ollama server is running and is accessible from other devices on your network/remotely through port 11434. Then you can use the same steps above, but you need to add the Ollama server's IP address to the "Ollama Host" field in the block settings like so:
 ![Ollama Remote Host](../imgs/ollama/Ollama-Remote-Host.png)
@@ -69,4 +77,4 @@ For common errors:
 1. **Connection Refused**: Make sure Ollama is running and the host address is correct (also make sure the port is correct, its default is 11434)
 2. **Model Not Found**: Try running `ollama pull llama3.2` manually first
 3. **Docker Issues**: Ensure Docker daemon is running with `docker ps`
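For the "Connection Refused" case above, a small probe can confirm whether an Ollama server (local or remote) is reachable before you debug the platform itself. This is a sketch, not part of the platform; it relies on Ollama's `GET /api/tags` endpoint, which lists locally available models and serves as a cheap health check:

```python
import urllib.error
import urllib.request

def ollama_base_url(host: str = "localhost", port: int = 11434) -> str:
    # Ollama's default port is 11434; swap in your server's IP for remote setups.
    return f"http://{host}:{port}"

def ollama_reachable(host: str = "localhost", port: int = 11434, timeout: float = 3.0) -> bool:
    # /api/tags responds with the list of pulled models when the server is up.
    try:
        with urllib.request.urlopen(f"{ollama_base_url(host, port)}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Example: check a local instance, then a (hypothetical) remote one.
# print(ollama_reachable())                 # local default
# print(ollama_reachable("192.168.1.50"))   # remote server on your network
```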