* fix(kb): improve error logging when connector token resolution fails

  The generic "Failed to obtain access token" error hid the actual root cause. Now logs credentialId, userId, authMode, and provider to help diagnose token refresh failures in trigger.dev.

* feat(kb): disable connectors after 10 consecutive sync failures

  Connectors that fail 10 times in a row are set to 'disabled' status, stopping the cron from scheduling further syncs. The UI shows an alert triangle with a reconnect banner. Users can re-enable via the play button or by reconnecting their account, which resets failures.

* fix(kb): disable sync button for disabled connectors, use amber badge variant

  The sync button should be disabled when the connector is in the disabled state, to guide users toward reconnecting first. Badge variant changed from red to amber to match the warning banner styling.

* fix(kb): address PR review comments for disabled connector feature

  - Use `=== undefined` instead of a falsy check for nextSyncAt to preserve explicit null (manual sync only) when syncIntervalMinutes is 0
  - Gate the Reconnect button on serviceId/providerId so it only renders for OAuth connectors; show appropriate copy for API key connectors

* fix(kb): move resolveAccessToken inside try/catch for circuit-breaker coverage

  Token resolution failures (e.g. revoked OAuth tokens) were thrown before the try/catch block, bypassing consecutiveFailures tracking entirely. Also removes dead `if (refreshed)` guards at mid-sync refresh sites, since resolveAccessToken now always returns a string or throws.

* fix(kb): remove dead interval branch when re-enabling connector

  When `updates.nextSyncAt === undefined`, syncIntervalMinutes was not in the request, so `parsed.data.syncIntervalMinutes` is always undefined. Simplify to just schedule an immediate sync; the sync engine sets the proper nextSyncAt based on the connector's DB interval after completion.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to orchestrate agentic workflows.
Build Workflows with Ease
Design agent workflows visually on a canvas—connect agents, tools, and blocks, then run them instantly.
Supercharge with Copilot
Leverage Copilot to generate nodes, fix errors, and iterate on flows directly from natural language.
Integrate Vector Databases
Upload documents to a vector store and let agents answer questions grounded in your specific content.
Quickstart
Cloud-hosted: sim.ai
Self-hosted: NPM Package
npx simstudio
Note
Docker must be installed and running on your machine.
Options
| Flag | Description |
|---|---|
| `-p, --port <port>` | Port to run Sim on (default `3000`) |
| `--no-pull` | Skip pulling latest Docker images |
Self-hosted: Docker Compose
git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.prod.yml up -d
Background worker note
The Docker Compose stack starts a dedicated worker container by default. If REDIS_URL is not configured, the worker will start, log that it is idle, and do no queue processing. This is expected. Queue-backed API, webhook, and schedule execution requires Redis; installs without Redis continue to use the inline execution path.
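The note above amounts to a single line in the environment file. The variable name `REDIS_URL` comes from this section; the URL itself is a placeholder for wherever your Redis instance actually runs:

```shell
# apps/sim/.env (sketch): enable queue-backed execution for the worker.
# Without this line the worker starts, logs that it is idle, and all
# API/webhook/schedule execution continues on the inline path.
REDIS_URL=redis://localhost:6379
```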
Sim also supports local models via Ollama and vLLM — see the Docker self-hosting docs for setup details.
Self-hosted: Manual Setup
Requirements: Bun, Node.js v20+, PostgreSQL 12+ with pgvector
- Clone and install:
git clone https://github.com/simstudioai/sim.git
cd sim
bun install
bun run prepare # Set up pre-commit hooks
- Set up PostgreSQL with pgvector:
docker run --name simstudio-db -e POSTGRES_PASSWORD=your_password -e POSTGRES_DB=simstudio -p 5432:5432 -d pgvector/pgvector:pg17
Or install manually via the pgvector guide.
- Configure environment:
cp apps/sim/.env.example apps/sim/.env
# Create your secrets
perl -i -pe "s/your_encryption_key/$(openssl rand -hex 32)/" apps/sim/.env
perl -i -pe "s/your_internal_api_secret/$(openssl rand -hex 32)/" apps/sim/.env
perl -i -pe "s/your_api_encryption_key/$(openssl rand -hex 32)/" apps/sim/.env
# DB configs for migration
cp packages/db/.env.example packages/db/.env
# Edit both .env files to set DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
- Run migrations:
cd packages/db && bun run db:migrate
- Start development servers:
bun run dev:full # Starts Next.js app, realtime socket server, and the BullMQ worker
If REDIS_URL is not configured, the worker will remain idle and execution continues inline.
Or run separately: bun run dev (Next.js), cd apps/sim && bun run dev:sockets (realtime), and cd apps/sim && bun run worker (BullMQ worker).
Copilot API Keys
Copilot is a Sim-managed service. To use Copilot on a self-hosted instance:
- Go to https://sim.ai → Settings → Copilot and generate a Copilot API key
- Set the COPILOT_API_KEY environment variable in your self-hosted apps/sim/.env file to that value
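Taken together, the two steps reduce to one line in the env file. The value shown is a placeholder; paste the key generated at sim.ai:

```shell
# apps/sim/.env (sketch): key from sim.ai -> Settings -> Copilot
COPILOT_API_KEY=<your-copilot-api-key>
```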
Environment Variables
See the environment variables reference for the full list, or apps/sim/.env.example for defaults.
Tech Stack
- Framework: Next.js (App Router)
- Runtime: Bun
- Database: PostgreSQL with Drizzle ORM
- Authentication: Better Auth
- UI: Shadcn, Tailwind CSS
- State Management: Zustand
- Flow Editor: ReactFlow
- Docs: Fumadocs
- Monorepo: Turborepo
- Realtime: Socket.io
- Background Jobs: Trigger.dev
- Remote Code Execution: E2B
Contributing
We welcome contributions! Please see our Contributing Guide for details.
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Made with ❤️ by the Sim Team


