sim/docker/app.Dockerfile
Vikhyath Mondreti 2f932054a7 fix(execution): run pptx/docx/pdf generation inside isolated-vm sandbox (#4217)
* fix(execution): run pptx/docx/pdf generation inside isolated-vm sandbox

Retires the legacy doc-worker.cjs / pptx-worker.cjs pipeline that ran user
DSL via node:vm + full require() in the same UID/PID namespace as the main
Next.js process. User code now runs inside the existing isolated-vm pool
(V8 isolate, no process / require / fs, no /proc/1/environ reachability).

Introduces a first-class SandboxTask abstraction under apps/sim/sandbox-tasks/
that mirrors apps/sim/background/ — one file per task, central typed
registry, kebab-case ids. Adding a new isolate-run task is one file plus
one registry entry.
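
A minimal sketch of what such a registry can look like (the interface fields, helper name, and example task are all illustrative, not the actual apps/sim/sandbox-tasks/ code):

```typescript
// Hypothetical SandboxTask shape and registry; the real definitions live
// under apps/sim/sandbox-tasks/ and may differ.
interface SandboxTask {
  id: string;        // kebab-case id used by callers
  bundles: string[]; // pre-built library bundles to load into the isolate
  bootstrap: string; // source evaluated before the user code
  finalize: string;  // expression whose result becomes the task output
}

const taskRegistry = new Map<string, SandboxTask>();

function registerTask(task: SandboxTask): SandboxTask {
  if (taskRegistry.has(task.id)) {
    throw new Error(`duplicate sandbox task id: ${task.id}`);
  }
  taskRegistry.set(task.id, task);
  return task;
}

// One file per task, one registry entry:
registerTask({
  id: "pptx-generate",
  bundles: ["pptxgenjs"],
  bootstrap: "globalThis.pptx = new PptxGenJS();",
  finalize: "pptx.write({ outputType: 'base64' })",
});
```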

Runtime additions in lib/execution/:
 - task-mode execution in isolated-vm-worker.cjs: load pre-built library
   bundles, run task bootstrap, run user code, run finalize, transfer
   Uint8Array result as base64 via IPC
 - named broker IPC bridge (generalizes the existing fetch bridge) with
   args size, result size, and per-execution call caps
 - cooperative AbortSignal support: cancel IPC disposes the isolate, pool
   slot is freed, pending broker-call timers are swept
 - compiled scripts + references explicitly released per execution
 - isolate.isDisposed used for cancellation detection (no error-string
   substring matching)
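
The size and call caps on the broker bridge can be sketched as a small per-execution guard (cap values below are placeholders, not the limits the worker actually enforces):

```typescript
// Illustrative per-execution caps for the named broker bridge.
const BROKER_CAPS = {
  maxCallsPerExecution: 64,
  maxArgsBytes: 256 * 1024,
  maxResultBytes: 4 * 1024 * 1024,
};

function createBrokerGuard() {
  let calls = 0;
  return {
    // Checked before forwarding a broker call out of the isolate.
    checkArgs(argsBytes: number): void {
      if (++calls > BROKER_CAPS.maxCallsPerExecution) {
        throw new Error("broker call cap exceeded for this execution");
      }
      if (argsBytes > BROKER_CAPS.maxArgsBytes) {
        throw new Error("broker args exceed size cap");
      }
    },
    // Checked before copying the host result back into the isolate.
    checkResult(resultBytes: number): void {
      if (resultBytes > BROKER_CAPS.maxResultBytes) {
        throw new Error("broker result exceeds size cap");
      }
    },
  };
}
```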

Library bundles (pptxgenjs, docx, pdf-lib) are built into isolate-safe
IIFE bundles by apps/sim/lib/execution/sandbox/bundles/build.ts and
committed; next.config.ts / trigger.config.ts / Dockerfile updated to
ship them instead of the deleted dist/*-worker.cjs artifacts.

Call sites migrated:
 - app/api/workspaces/[id]/pptx/preview/route.ts
 - app/api/files/serve/[...path]/route.ts (+ test mock)
 - lib/copilot/tools/server/files/{workspace-file,edit-content}.ts

All migrated call sites pass the owner key user:<userId> for per-user pool
fairness and distributed lease accounting.

Made-with: Cursor

* improvement(sandbox): delegate timers to Node, add phase timings + saturation logs

Follow-ups on top of the isolated-vm migration (da14027b2):

Timer delegation (laverdet/isolated-vm#136 recommended pattern):
 - setTimeout / setInterval / clearTimeout / clearImmediate delegate to
   Node's real timer heap via ivm.Reference. Real delays are honored;
   clearTimeout actually cancels; ms is clamped to the script timeout
   so callbacks can't fire after the isolate is disposed.
 - Per-execution timer tracking + dispose-sweep in finally. Zero stale
   callbacks post-dispose.
 - unwrapPrimitive helper normalizes ivm.Reference-wrapped primitives
   (arguments: { reference: true } applies uniformly to all args).
 - _polyfills.ts shrinks from ~130 lines to the global->globalThis alias.
   Timers / TextEncoder / TextDecoder / console all install per-execution
   from the worker via ivm bridges.

AbortSignal race fix (pre-existing bug surfaced by the timer smoke test):
 - The listener was registered only after await tryAcquireDistributedLease.
   If the signal aborted during that ~200ms window (Redis down), the abort
   was silently missed, because an AbortSignal never fires listeners
   registered after the fact. The fix re-checks signal.aborted synchronously
   after addEventListener.
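
Reduced to its essence, the race-free pattern is: add the listener, then synchronously re-check aborted, since an AbortSignal never fires listeners added after the abort already happened (helper name hypothetical):

```typescript
// Race-free abort subscription. The `fired` flag guards the opposite race:
// an abort landing between addEventListener and the synchronous re-check.
function onAbort(signal: AbortSignal, handler: () => void): void {
  let fired = false;
  const run = () => {
    if (!fired) {
      fired = true;
      handler();
    }
  };
  signal.addEventListener("abort", run, { once: true });
  // If the signal aborted during an earlier await (e.g. while waiting on the
  // distributed lease), the listener above will never fire; run it now.
  if (signal.aborted) run();
}
```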

Observability:
 - executeTask returns IsolatedVMTaskTimings (setup, runtimeBootstrap,
   bundles, brokerInstall, taskBootstrap, harden, userCode, finalize,
   total) in every success + error path. run-task.ts logs these with
   workspaceId + queueMs so 'which tenant is slow' is queryable.
 - Pool saturation events now emit structured logger.warn with reason
   codes: queue_full_global, queue_full_owner, queue_wait_timeout,
   distributed_lease_limit. Matches the existing broker reject pattern.
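
From the field list above, the timings object has roughly this shape (field names are taken from the commit message; the helper below is an invented convenience, not part of the change):

```typescript
// Phase timings reported by executeTask, in milliseconds.
interface IsolatedVMTaskTimings {
  setup: number;
  runtimeBootstrap: number;
  bundles: number;
  brokerInstall: number;
  taskBootstrap: number;
  harden: number;
  userCode: number;
  finalize: number;
  total: number;
}

// Handy when eyeballing logs: the named phases should account for most of
// total; a large remainder means time is going somewhere untracked.
function unattributedMs(t: IsolatedVMTaskTimings): number {
  const phases =
    t.setup + t.runtimeBootstrap + t.bundles + t.brokerInstall +
    t.taskBootstrap + t.harden + t.userCode + t.finalize;
  return t.total - phases;
}
```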

Security policy:
 - New .cursor/rules/sim-sandbox.mdc codifies the hard rules for the
   worker process: no app credentials, all credentialed work goes
   through host-side brokers, every broker scopes by workspaceId.
   Pre-merge checklist for future changes to isolated-vm-worker.cjs.

Measured phase breakdown (local smoke, Redis down): pptx wall=~310ms
with bundles=~16ms, finalize=~83ms; docx ~290ms / 17ms / 70ms; pdf
~235ms / 17ms / 5ms. Bundle compilation is not the bottleneck —
library finalize is.

Made-with: Cursor

* fix(sandbox): thread AbortSignal into runSandboxTask at every call site

Three remaining callers of runSandboxTask were not threading a
cancellation signal, so a client disconnect mid-compile left the pool
slot occupied for the full 60s task timeout. This matches the pattern
the pptx-preview route already uses.

 - apps/sim/app/api/files/serve/[...path]/route.ts — GET forwards
   `request.signal` into handleLocalFile / handleCloudProxy, which
   forward into compileDocumentIfNeeded, which forwards into
   runSandboxTask.
 - apps/sim/lib/copilot/tools/server/files/workspace-file.ts — passes
   `context.abortSignal` (transport/user stop) into runSandboxTask.
 - apps/sim/lib/copilot/tools/server/files/edit-content.ts — same.

Smoke: simulated client disconnect at t=1000ms during a task that would
otherwise have waited 10s. The pool slot unwinds at t=1002ms with
AbortError; previously would have sat 60s until the task-level timeout.
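
A minimal model of the forwarding chain (the function bodies, the task id, and the 10s stand-in delay are invented for illustration; only the "forward, don't swallow" shape matters):

```typescript
// Every layer accepts and forwards the signal; aborting rejects immediately
// instead of holding the slot until the task-level timeout.
function runSandboxTask(
  taskId: string,
  opts: { signal?: AbortSignal } = {}
): Promise<string> {
  const { signal } = opts;
  return new Promise((resolve, reject) => {
    if (signal?.aborted) {
      reject(new DOMException("aborted", "AbortError"));
      return;
    }
    const work = setTimeout(() => resolve(`${taskId}: done`), 10_000); // stand-in for real work
    signal?.addEventListener(
      "abort",
      () => {
        clearTimeout(work); // pool slot unwinds now, not at the 60s timeout
        reject(new DOMException("aborted", "AbortError"));
      },
      { once: true }
    );
  });
}

function compileDocumentIfNeeded(
  path: string,
  signal?: AbortSignal
): Promise<string> {
  return runSandboxTask("docx-generate", { signal }); // forward, don't swallow
}
```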

Made-with: Cursor

* chore(build): raise node heap to 8GB for next build type-check

Next.js's type-check worker OOMs at the default 4GB heap on Node 23 for
this project's type graph size. Bumps the heap to 8GB only for the
`next build` invocation inside `bun run build`.

Docker builds are unaffected — `next.config.ts` sets
`typescript.ignoreBuildErrors: true` when DOCKER_BUILD=1, which skips
the type-check pass entirely. This only fixes local `bun run build`.

No functional code changes.

Made-with: Cursor

* fix lint

* refactor(copilot): dedup getDocumentFormatInfo across copilot file tools

The same extension -> { formatName, sourceMime, taskId } mapping was
duplicated in workspace-file.ts and edit-content.ts. Any future format
or task-id change had to happen in two places.

Exports getDocumentFormatInfo + DocumentFormatInfo from workspace-file.ts
(which already owned the PPTX/DOCX/PDF source MIME constants) and
imports it in edit-content.ts. Same source-of-truth pattern the file
already uses for inferContentType.
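
The shared helper can be sketched like this (the MIME strings are the standard OOXML/PDF types; the formatName and taskId values are placeholders, not the real ids in workspace-file.ts):

```typescript
export interface DocumentFormatInfo {
  formatName: string;
  sourceMime: string;
  taskId: string;
}

// Single source of truth for the extension -> format metadata mapping.
const FORMAT_BY_EXTENSION: Record<string, DocumentFormatInfo> = {
  pptx: {
    formatName: "PowerPoint",
    sourceMime:
      "application/vnd.openxmlformats-officedocument.presentationml.presentation",
    taskId: "pptx-generate",
  },
  docx: {
    formatName: "Word",
    sourceMime:
      "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    taskId: "docx-generate",
  },
  pdf: {
    formatName: "PDF",
    sourceMime: "application/pdf",
    taskId: "pdf-generate",
  },
};

export function getDocumentFormatInfo(
  extension: string
): DocumentFormatInfo | undefined {
  return FORMAT_BY_EXTENSION[extension.toLowerCase().replace(/^\./, "")];
}
```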

Made-with: Cursor

* fix(sandbox): propagate empty-message broker/fetch errors

Both bridges in the isolate used truthiness to detect host-side errors:

    if (response.error) throw new Error(response.error);   // broker
    if (result.error)   throw new Error(result.error);     // fetch

If a host handler ever threw `new Error('')`, err.message would be ''
(falsy), so { error: '' } was silently swallowed and the isolate saw
a successful null result. Existing call sites don't throw empty-message
errors, but the pattern was structurally unsafe.

Switch both bridges to checking typeof error === 'string' and fall back
to a default message when the string is empty, so all host-reported
errors propagate into the isolate regardless of message content.
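
The fixed check, in isolation (the function name and response shape are invented; the real code is inline in the two bridges):

```typescript
// Detect host-reported errors by type, not truthiness, so { error: '' }
// still throws instead of masquerading as a successful null result.
function throwIfHostError(response: { error?: string; value?: unknown }): unknown {
  if (typeof response.error === "string") {
    throw new Error(response.error || "host-side call failed"); // default for ''
  }
  return response.value ?? null;
}
```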

Made-with: Cursor
2026-04-17 19:07:46 -07:00


# ========================================
# Base Stage: Debian-based Bun with Node.js 22
# ========================================
FROM oven/bun:1.3.11-slim AS base
# Install Node.js 22 and common dependencies once in base stage
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install -y --no-install-recommends \
    python3 python3-pip python3-venv make g++ curl ca-certificates bash ffmpeg \
    && curl -fsSL https://deb.nodesource.com/setup_22.x | bash - \
    && apt-get install -y nodejs
# ========================================
# Dependencies Stage: Install Dependencies
# ========================================
FROM base AS deps
WORKDIR /app
COPY package.json bun.lock turbo.json ./
RUN mkdir -p apps packages/db packages/testing packages/logger packages/tsconfig
COPY apps/sim/package.json ./apps/sim/package.json
COPY packages/db/package.json ./packages/db/package.json
COPY packages/testing/package.json ./packages/testing/package.json
COPY packages/logger/package.json ./packages/logger/package.json
COPY packages/tsconfig/package.json ./packages/tsconfig/package.json
# Install dependencies, then rebuild isolated-vm for Node.js
# Use --linker=hoisted for flat node_modules layout (required for Docker multi-stage builds)
RUN --mount=type=cache,id=bun-cache,target=/root/.bun/install/cache \
    --mount=type=cache,id=npm-cache,target=/root/.npm \
    HUSKY=0 bun install --omit=dev --ignore-scripts --linker=hoisted && \
    cd node_modules/isolated-vm && npx node-gyp rebuild --release
# ========================================
# Builder Stage: Build the Application
# ========================================
FROM base AS builder
WORKDIR /app
# Install turbo globally (cached for fast reinstall)
RUN --mount=type=cache,id=bun-cache,target=/root/.bun/install/cache \
    bun install -g turbo
# Copy node_modules from deps stage (cached if dependencies don't change)
COPY --from=deps /app/node_modules ./node_modules
# Copy package configuration files (needed for build)
COPY package.json bun.lock turbo.json ./
COPY apps/sim/package.json ./apps/sim/package.json
COPY packages/db/package.json ./packages/db/package.json
COPY packages/testing/package.json ./packages/testing/package.json
COPY packages/logger/package.json ./packages/logger/package.json
# Copy workspace configuration files (needed for turbo)
COPY apps/sim/next.config.ts ./apps/sim/next.config.ts
COPY apps/sim/tsconfig.json ./apps/sim/tsconfig.json
COPY apps/sim/tailwind.config.ts ./apps/sim/tailwind.config.ts
COPY apps/sim/postcss.config.mjs ./apps/sim/postcss.config.mjs
# Copy source code (changes most frequently - placed last to maximize cache hits)
COPY apps/sim ./apps/sim
COPY packages ./packages
# Required for standalone nextjs build
WORKDIR /app/apps/sim
RUN --mount=type=cache,id=bun-cache,target=/root/.bun/install/cache \
    HUSKY=0 bun install sharp --linker=hoisted
ENV NEXT_TELEMETRY_DISABLED=1 \
    VERCEL_TELEMETRY_DISABLED=1 \
    DOCKER_BUILD=1
WORKDIR /app
# Provide dummy database URLs during image build so server code that imports @sim/db
# can be evaluated without crashing. Runtime environments should override these.
ARG DATABASE_URL="postgresql://user:pass@localhost:5432/dummy"
ENV DATABASE_URL=${DATABASE_URL}
# Provide dummy NEXT_PUBLIC_APP_URL for build-time evaluation
# Runtime environments should override this with the actual URL
ARG NEXT_PUBLIC_APP_URL="http://localhost:3000"
ENV NEXT_PUBLIC_APP_URL=${NEXT_PUBLIC_APP_URL}
RUN bun run build
# ========================================
# Runner Stage: Run the actual app
# ========================================
FROM base AS runner
WORKDIR /app
# Node.js 22, Python, ffmpeg, etc. are already installed in base stage
ENV NODE_ENV=production
# Create non-root user and group
RUN groupadd -g 1001 nodejs && \
    useradd -u 1001 -g nodejs nextjs
# Copy application artifacts from builder
COPY --from=builder --chown=nextjs:nodejs /app/apps/sim/public ./apps/sim/public
COPY --from=builder --chown=nextjs:nodejs /app/apps/sim/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/apps/sim/.next/static ./apps/sim/.next/static
# Copy blog/author content for runtime filesystem reads (not part of the JS bundle)
COPY --from=builder --chown=nextjs:nodejs /app/apps/sim/content ./apps/sim/content
# Copy isolated-vm native module (compiled for Node.js in deps stage)
COPY --from=deps --chown=nextjs:nodejs /app/node_modules/isolated-vm ./node_modules/isolated-vm
# Copy the isolated-vm worker script
COPY --from=builder --chown=nextjs:nodejs /app/apps/sim/lib/execution/isolated-vm-worker.cjs ./apps/sim/lib/execution/isolated-vm-worker.cjs
# Copy the pre-built sandbox library bundles (pptxgenjs, docx, pdf-lib) that
# run inside the V8 isolate. Committed into the repo; see
# apps/sim/lib/execution/sandbox/bundles/build.ts to regenerate.
COPY --from=builder --chown=nextjs:nodejs /app/apps/sim/lib/execution/sandbox/bundles ./apps/sim/lib/execution/sandbox/bundles
# Guardrails setup with pip caching
COPY --from=builder --chown=nextjs:nodejs /app/apps/sim/lib/guardrails/requirements.txt ./apps/sim/lib/guardrails/requirements.txt
COPY --from=builder --chown=nextjs:nodejs /app/apps/sim/lib/guardrails/validate_pii.py ./apps/sim/lib/guardrails/validate_pii.py
# Install Python dependencies with pip cache mount for faster rebuilds
RUN --mount=type=cache,target=/root/.cache/pip \
    python3 -m venv ./apps/sim/lib/guardrails/venv && \
    ./apps/sim/lib/guardrails/venv/bin/pip install --upgrade pip && \
    ./apps/sim/lib/guardrails/venv/bin/pip install -r ./apps/sim/lib/guardrails/requirements.txt && \
    chown -R nextjs:nodejs /app/apps/sim/lib/guardrails
# Create .next/cache directory with correct ownership
RUN mkdir -p apps/sim/.next/cache && \
    chown -R nextjs:nodejs apps/sim/.next/cache
# Switch to non-root user
USER nextjs
EXPOSE 3000
ENV PORT=3000 \
    HOSTNAME="0.0.0.0"
CMD ["bun", "apps/sim/server.js"]