The open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to orchestrate agentic workflows.
Build Workflows with Ease
Design agent workflows visually on a canvas—connect agents, tools, and blocks, then run them instantly.
Supercharge with Copilot
Leverage Copilot to generate nodes, fix errors, and iterate on flows directly from natural language.
Integrate Vector Databases
Upload documents to a vector store and let agents answer questions grounded in your specific content.
Quickstart
Cloud-hosted: sim.ai
Self-hosted: NPM Package
npx simstudio
Note
Docker must be installed and running on your machine.
Options
| Flag | Description |
|---|---|
| `-p, --port <port>` | Port to run Sim on (default 3000) |
| `--no-pull` | Skip pulling latest Docker images |
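For example, the two flags can be combined to run on a different port without re-pulling images. The sketch below only echoes the command for reference; dropping the `echo` would actually launch Sim, which requires Docker to be running (per the note above):

```shell
# Preview the npx invocation with both flags set
# (3001 is an arbitrary example port, not a Sim default)
PORT=3001
cmd="npx simstudio -p $PORT --no-pull"
echo "$cmd"
```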
Self-hosted: Docker Compose
git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.prod.yml up -d
Sim also supports local models via Ollama and vLLM — see the Docker self-hosting docs for setup details.
Self-hosted: Manual Setup
Requirements: Bun, Node.js v20+, PostgreSQL 12+ with pgvector
- Clone and install:
git clone https://github.com/simstudioai/sim.git
cd sim
bun install
bun run prepare # Set up pre-commit hooks
- Set up PostgreSQL with pgvector:
docker run --name simstudio-db -e POSTGRES_PASSWORD=your_password -e POSTGRES_DB=simstudio -p 5432:5432 -d pgvector/pgvector:pg17
Or install manually via the pgvector guide.
- Configure environment:
cp apps/sim/.env.example apps/sim/.env
# Create your secrets
perl -i -pe "s/your_encryption_key/$(openssl rand -hex 32)/" apps/sim/.env
perl -i -pe "s/your_internal_api_secret/$(openssl rand -hex 32)/" apps/sim/.env
perl -i -pe "s/your_api_encryption_key/$(openssl rand -hex 32)/" apps/sim/.env
# DB configs for migration
cp packages/db/.env.example packages/db/.env
# Edit both .env files to set DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
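A quick sanity check on the values above, assuming `openssl` is on your PATH: each generated secret is 32 random bytes hex-encoded (a 64-character key), and the `DATABASE_URL` placeholder follows the usual `postgresql://user:password@host:port/db` shape:

```shell
# Each secret substituted into .env is 32 random bytes, hex-encoded
key=$(openssl rand -hex 32)
echo "key length: ${#key}"

# Shape check on the connection string both .env files must share
# (user/password/host/db here are the README's placeholders)
DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
case "$DATABASE_URL" in
  postgresql://*:*@*:*/*) echo "URL shape ok" ;;
  *) echo "URL shape unexpected" ;;
esac
```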
- Run migrations:
cd packages/db && bun run db:migrate
- Start development servers:
bun run dev:full # Starts Next.js app and realtime socket server
Or run separately: bun run dev (Next.js) and cd apps/sim && bun run dev:sockets (realtime).
Copilot API Keys
Copilot is a Sim-managed service. To use Copilot on a self-hosted instance:
- Go to https://sim.ai → Settings → Copilot and generate a Copilot API key
- Set the COPILOT_API_KEY environment variable in your self-hosted apps/sim/.env file to that value
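Step 2 amounts to appending one line to the env file. The sketch below uses a temp file in place of apps/sim/.env, and "sk-example" stands in for the real key generated at sim.ai:

```shell
# Append the Copilot key and confirm exactly one entry exists
# (temp file substitutes for apps/sim/.env in this illustration)
env_file=$(mktemp)
echo "COPILOT_API_KEY=sk-example" >> "$env_file"
count=$(grep -c '^COPILOT_API_KEY=' "$env_file")
echo "entries: $count"
rm -f "$env_file"
```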
Environment Variables
See the environment variables reference for the full list, or apps/sim/.env.example for defaults.
Tech Stack
- Framework: Next.js (App Router)
- Runtime: Bun
- Database: PostgreSQL with Drizzle ORM
- Authentication: Better Auth
- UI: Shadcn, Tailwind CSS
- Streaming Markdown: Streamdown
- State Management: Zustand, TanStack Query
- Flow Editor: ReactFlow
- Docs: Fumadocs
- Monorepo: Turborepo
- Realtime: Socket.io
- Background Jobs: Trigger.dev
- Remote Code Execution: E2B
- Isolated Code Execution: isolated-vm
Contributing
We welcome contributions! Please see our Contributing Guide for details.
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Made with ❤️ by the Sim Team