Compare commits

...

29 Commits

Author SHA1 Message Date
Waleed Latif
55f55ff900 fix(layout): render PublicEnvScript as plain inline <script>
next-runtime-env's <PublicEnvScript /> uses next/script beforeInteractive
internally. On Next.js error-page shells (<html id="__next_error__">),
beforeInteractive is not honored — the env script is serialized into the
RSC stream instead of being hoisted as a real <script> in <head>, so it
runs after client chunks evaluate. Any module that calls getBaseUrl()
at module-eval time (e.g. lib/auth/auth-client.ts) then crashes because
window['__ENV'] is still undefined.

disableNextScript renders a plain inline <script> in <head> that the
browser executes synchronously during HTML parse on every render path,
including error pages. Fixes "Application error: a client-side exception"
on 404 routes like /models/anthropic/xxx and /integrations/rand.
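
The safe counterpart on the consumer side can be sketched as a lazy accessor — this is an illustrative sketch, not the real lib/auth/auth-client.ts; `__ENV` and `NEXT_PUBLIC_APP_URL` follow the commit text, the localhost fallback is invented:

```typescript
// Hedged sketch: reading env lazily inside a function (rather than at
// module-eval time) tolerates the inline env script not having run yet.
type PublicEnv = Record<string, string | undefined>

function readPublicEnv(): PublicEnv {
  const win = (globalThis as { window?: { __ENV?: PublicEnv } }).window
  if (!win) {
    return process.env as PublicEnv // server render / Node
  }
  // Populated by the inline <script> in <head>; fall back to an empty map
  // instead of crashing if that script has not executed yet.
  return win.__ENV ?? {}
}

export function getBaseUrl(): string {
  return readPublicEnv().NEXT_PUBLIC_APP_URL ?? 'http://localhost:3000'
}
```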

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-21 11:45:15 -07:00
Waleed Latif
8ad41a2e2b fix(landing): render proper 404 for invalid /models and /integrations routes 2026-04-18 21:10:10 -07:00
Waleed
f91c1b614a chore(docker): add packages/utils to app and realtime Dockerfiles (#4229)
* chore(docker): add packages/utils to app and realtime Dockerfiles

* chore(docker): copy packages/utils in realtime runner stage
2026-04-18 18:00:44 -07:00
Waleed
b5674d9ed4 improvement(codebase): centralize test mocks, extract @sim/utils, remove dead code (#4228)
* improvement(codebase): centralize test mocks, extract @sim/utils, remove dead code

* improvement(codebase): apply @sim/utils conventions to staging-introduced files
2026-04-18 14:39:03 -07:00
Theodore Li
c19187257e fix(ui): stop scrolling on leaving workflow sidebar for drag-drop (#4139)
* fix(ui): stop scrolling on leaving workflow sidebar for drag-drop

* Address comments, fix hover state

* address comments
2026-04-18 15:45:25 -04:00
Vikhyath Mondreti
c246f5c660 improvement(billing): route scope by subscription referenceId, sync plan from Stripe, transfer storage on org join, outbox service (#4219)
* fix(billing): route scope by subscription referenceId, sync plan from Stripe, transfer storage on org join

Route every billing decision (usage limits, credits, storage, rate
limit, threshold billing, webhooks, UI permissions) through the
subscription's `referenceId` instead of plan-name heuristics. Fixes
the production state where a `pro_6000` subscription attached to an
organization was treated as personal Pro by display/edit code while
execution correctly enforced the org cap.

Scope
- Add `isOrgScopedSubscription(sub, userId)` (pure) and
  `isSubscriptionOrgScoped(sub)` (async DB-backed) helpers. One is
  used wherever a user perspective is available; the other in webhook
  handlers that only have a subscription row.
- Replace plan-name scope checks in ~20 files: usage/limit readers,
  credits balance + purchase, threshold billing, storage limits +
  tracking, rate limiter, invoice + subscription webhooks, seat
  management, membership join/leave, `switch-plan` admin gate,
  admin credits/billing routes, copilot 402 handler, UI subscription
  settings + permissions + sidebar indicator, React Query types.
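
The pure helper's core rule can be sketched as follows — the `Subscription` shape and the referenceId comparison are inferred from the commit text, not the real schema:

```typescript
// Hedged sketch of isOrgScopedSubscription: scope is decided by what the
// subscription is attached to, never by plan-name heuristics like
// plan.startsWith('pro').
interface Subscription {
  referenceId: string // the user's id for personal plans, the org's id for org plans
  plan: string
}

export function isOrgScopedSubscription(sub: Subscription, userId: string): boolean {
  return sub.referenceId !== userId
}
```

Under this rule the production bug above resolves correctly: a `pro_6000` row whose referenceId is an organization id is org-scoped regardless of its plan name.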

Plan sync
- Add `syncSubscriptionPlan(subscriptionId, currentPlan, planFromStripe)`
  called from `onSubscriptionComplete` and `onSubscriptionUpdate` so
  the DB `plan` column heals on every Stripe event. Pro->Team upgrades
  previously updated price, seats, and referenceId but left `plan`
  stale — this is what produced the `pro_6000`-on-org row.

Priority + grace period
- `getHighestPrioritySubscription` now prefers org over personal
  within each tier (Enterprise > Team > Pro, org > personal at each).
  A user with a `cancelAtPeriodEnd` personal Pro who joins a paid org
  routes pooled resources to the org through the grace window.
- `calculateSubscriptionOverage` personal-Pro branch reads user_stats
  directly (bypassing priority) and bills only `proPeriodCostSnapshot`
  when the user joined a paid org mid-cycle, so post-join org usage
  isn't double-charged on the personal Pro's final invoice.
  `resetUsageForSubscription` mirrors this: preserves
  `currentPeriodCost` / `currentPeriodCopilotCost` when
  `proPeriodCostSnapshot > 0` so the org's next cycle-close captures
  post-join usage correctly.
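
The selection rule can be sketched as a two-key sort — tier first, org over personal within a tier; the types and field names here are illustrative, not the real implementation:

```typescript
// Hedged sketch of the priority ordering: Enterprise > Team > Pro, and
// org-scoped beats personal within the same tier.
type Tier = 'enterprise' | 'team' | 'pro'
interface Sub { tier: Tier; orgScoped: boolean }

const RANK: Record<Tier, number> = { enterprise: 3, team: 2, pro: 1 }

export function highestPriority(subs: Sub[]): Sub | undefined {
  return [...subs].sort(
    (a, b) => RANK[b.tier] - RANK[a.tier] || Number(b.orgScoped) - Number(a.orgScoped)
  )[0]
}
```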

Uniform base-price formula
- `basePrice × (seats ?? 1)` everywhere: `getOrgUsageLimit`,
  `updateOrganizationUsageLimit`, `setUsageLimitForCredits`,
  `calculateSubscriptionOverage`, threshold billing,
  `syncSubscriptionUsageLimits`, `getOrganizationBillingData`.
  Admin dashboard math now agrees with enforcement math.
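
The formula itself is a one-liner; this sketch only restates it (seats may be undefined on single-seat personal plans, hence the `?? 1`):

```typescript
// Hedged sketch of the uniform base-price formula used across the call sites
// listed above.
export function planBasePrice(basePrice: number, seats?: number | null): number {
  return basePrice * (seats ?? 1)
}
```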

Storage transfer on join
- Invitation-accept flow moves `user_stats.storageUsedBytes` into
  `organization.storageUsedBytes` inside the same transaction when
  the org is paid.
- `syncSubscriptionUsageLimits` runs a bulk-backfill version so
  members who joined before this fix, or orgs that upgraded from
  free to paid after members joined, get pulled into the org pool
  on the next subscription event. Idempotent.

UX polish
- Copilot 402 handler differentiates personal-scoped ("increase your
  usage limit") from org-scoped ("ask an owner or admin to raise the
  limit") while keeping the `increase_limit` action code the parser
  already understands.
- Duplicate-subscription error on team upgrade names the existing
  plan via `getDisplayPlanName`.
- Invitation-accept invalidates subscription + organization React
  Query caches before redirect so settings doesn't flash the user's
  pre-join personal view.

Dead code removal
- Remove unused `calculateUserOverage`, and the following fields on
  `SubscriptionBillingData` / `getSimplifiedBillingSummary` that no
  consumer in the monorepo read: `basePrice`, `overageAmount`,
  `totalProjected`, `tierCredits`, `basePriceCredits`,
  `currentUsageCredits`, `overageAmountCredits`, `totalProjectedCredits`,
  `usageLimitCredits`, `currentCredits`, `limitCredits`,
  `lastPeriodCostCredits`, `lastPeriodCopilotCostCredits`,
  `copilotCostCredits`, and the `organizationData` subobject. Add
  `metadata: unknown` to match what the server returns.

Notes for the triggering customer
- The `pro_6000`-on-org row self-heals on the next Stripe event via
  `syncSubscriptionPlan`. For the one known customer, a direct
  UPDATE is sufficient:
  `UPDATE subscription SET plan='team_6000' WHERE id='aq2...' AND plan='pro_6000'`.

Made-with: Cursor

* fix tests

* address more comments

* progress

* harden further

* outbox service

* address comments

* address comment on check

* simplify

* cleanup code

* minor improvement
2026-04-18 10:46:14 -07:00
Waleed
28b4c4cc67 fix(blocks): resolve variable display in mothership resource preview (#4226)
* fix(blocks): resolve variable display in mothership resource preview

Variables block showed empty assignments in the embedded workflow preview
because currentWorkflowId was read from URL params, which don't contain
workflowId in the mothership route. Fall back to activeWorkflowId from
the workflow registry.

* fix(blocks): narrow currentWorkflowId to string to satisfy strict null checks
2026-04-18 10:26:46 -07:00
Vikhyath Mondreti
32541e79d4 chore(readme): update tech stack section (#4227)
* chore(readme): update tech stack section

* fix
2026-04-18 10:02:54 -07:00
Waleed
a01f80c6a3 feat(tables): column selection, keyboard shortcuts, drag reorder, and undo improvements (#4222)
* feat(tables): add column selection, missing keyboard shortcuts, and Sheets-aligned operations

Click column headers to select entire columns, shift-click to extend to
a column range. Delete, cut, and copy operations work on column
selections with full undo/redo support. Adds Home, End, Ctrl+Home,
Ctrl+End, PageUp, PageDown, Ctrl+Space, and all Shift variants.
Changes Ctrl+A to select all cells instead of checkbox rows. Column
header dropdown menu now opens on right-click instead of left-click.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): chevron opens dropdown, drag header to reorder columns

Split column header into label area (click to select, draggable for
reorder) and chevron button (click to open dropdown menu). Remove
the grip handle — dragging the header itself now reorders columns.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): full-column highlight during drag reorder

Replace the thin 2px line drop indicator with a full-column highlight
that spans the entire table height, matching Google Sheets behavior.
The insertion line is still shown at the drop edge for precision.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): handle drag reorder edge cases, dim source column

Suppress drop indicator when drag would result in no position change
(dragging onto self or adjacent no-op positions). Dim the source
column body cells during drag with a background overlay. Skip the
API call when the computed order is identical to the current order.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(tables): add column reorder undo/redo, body drop targets, and escape cancel

Column drag-and-drop now supports dropping anywhere in a column (not just headers),
pressing Escape to cancel a drag, and full undo/redo integration for column reordering.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): merge partial updates in updateRow to prevent column data loss

When Mothership called updateRow directly (bypassing the PATCH API route),
it passed only the changed fields — which were written as the entire row,
wiping all other columns. Move the merge logic into updateRow itself so
all callers get correct partial-update semantics, and remove the now-redundant
pre-merge from both PATCH routes.

* test(tables): add updateRow partial merge tests

Covers the bug where partial updates wiped unmentioned columns — verifies
that fields not in the update payload are preserved, nulling a field works,
full-row updates are idempotent, and missing rows throw correctly.

* feat(tables): add delete-column undo/redo, rename metadata sync, and comprehensive row ID patching

- Delete column now captures column definition, cell data, order, and width for full undo/redo
- Column rename undo/redo now properly syncs columnWidths and columnOrder metadata
- patchRedoRowId/patchUndoRowId extended to handle all action types containing row IDs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): remove source column dimming during drag reorder

Only show the insertion line at the drop position, matching Google Sheets
behavior. Remove dragSourceBounds memo and isDragging prop.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): preserve selection on right-click, auto-resize on double-click, fix escape during drag

- Right-clicking within an existing selection now preserves it instead of
  resetting to a single cell, so context menu operations apply to the full range
- Double-clicking a column border auto-resizes the column to fit its content
- Escape during column drag now immediately clears refs before state update,
  preventing the dragend handler from executing the reorder

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): add aria-hidden value and aria-label for column header accessibility

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): tighten auto-resize padding to match Google Sheets

Reduce header padding from +48px to +36px (icon + cell padding) and cell
padding from +20px to +17px (cell padding + border) for a snug fit.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): clean drag ghost and clear selection on drag start

- Create a minimal custom drag image showing only the column name instead
  of the browser's default ghost that includes adjacent columns/checkboxes
- Clear any existing cell/column selection when starting a column drag to
  prevent stale highlights from persisting during reorder

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(tables): add Shift+Space row selection and Ctrl+D fill down

Shift+Space now selects the entire row (all columns) instead of toggling
a checkbox, matching Google Sheets behavior. Ctrl+D copies the top cell's
value down through the selected range with full undo/redo support.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): show toast on incompatible column type change

The server validates type compatibility and returns a clear error message
(e.g. "3 row(s) have incompatible values"), but the client was silently
swallowing it. Now surfaces the error via toast notification. Also moved
the undo push to onSuccess so a failed type change doesn't pollute the
undo stack.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): scroll-into-view for selection focus, Home/End origin, delete-column undo timing

- Scroll-into-view now tracks selectionFocus (not just anchor), so
  Shift+Arrow extending selection off-screen properly auto-scrolls
- Shift+Home/End now uses the current focus as origin (matching
  Shift+Arrow behavior) instead of always using anchor
- Delete column undo entry is now pushed in onSuccess, preventing
  a corrupted undo stack if the server rejects the deletion
- Dialog copy updated from "cannot be undone" to "You can undo this
  action" since undo/redo is supported

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: resolve duplicate declarations from rebase against staging

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix file upload

* fix(tables): merge column widths on delete-column undo, try/finally for auto-resize

- Delete-column undo now reads current column widths via getColumnWidths
  callback and merges the restored column's width into the full map,
  preventing other columns' widths from being wiped
- Auto-resize measurement span is now wrapped in try/finally to ensure
  DOM cleanup if an exception occurs during measurement

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: revert accidental home.tsx change from rebase conflict resolution

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): clear isColumnSelection on double-click and right-click, skip scroll for column select

- Clear isColumnSelection when double-clicking a cell to edit, preventing
  the column selection effect from fighting with the editing state
- Clear isColumnSelection when right-clicking outside the current
  selection, preventing stale column selection from re-expanding
- Skip scroll-into-view when isColumnSelection is true, preventing
  the viewport from jumping to the bottom row when clicking a column header

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): remove inline font override in auto-resize, guard undefined columnOrder

- Remove `font:inherit` from measurement span inline style so Tailwind
  classes (font-medium, text-small) control font properties for accurate
  column width measurement
- Only include columnOrder in metadata update when defined, preventing
  handleColumnRename from clearing a persisted column order when
  columnOrderRef is null

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): capture columnRequired in delete-column undo for full restoration

The delete-column undo action captured columnUnique but not columnRequired,
so undoing a delete on a required column would silently drop the constraint.
Now captures and restores both constraints.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): restore width independently of order on delete-column undo, batch fill-down

- Column width restoration in delete-column undo no longer requires
  previousOrder to be non-null — width is restored independently
- Ctrl+D fill-down now uses batchUpdateRef (single API call) instead
  of calling mutateRef per row in a loop

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): multi-column delete, select-all cell model, cut flash, chevron alignment

- Multi-select delete: detect column selection range and delete all selected
  columns sequentially with individual undo entries
- Select all (header checkbox): use cell selection model instead of checkbox
  model for consistent highlighting
- Cut flash: batch cell clears into single mutation to prevent stale data
  flashing from multiple onSettled invalidations
- Chevron alignment: adjust right padding from pr-2 to pr-2.5

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): restore column width locally on delete-column undo

Add onColumnWidthsChange callback to undo hook so restored column
widths update local component state, not just server metadata.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): prevent Ctrl+D bookmark dialog, batch Delete/Backspace mutations

- Move e.preventDefault() before early returns in Ctrl+D handler so
  the browser bookmark dialog is always suppressed
- Replace per-row mutateRef calls with single batchUpdateRef call in
  both Delete/Backspace handlers (checked rows and cell selection),
  consistent with cut and fill-down

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): adjust column positions for multi-column delete undo

Capture original schema positions upfront and adjust each by the
count of previously-deleted columns with lower positions, so undo
restores columns at correct server-side positions in LIFO order.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): only multi-delete when clicked column is within selection

Check that the right-clicked column is within the selected column
range before using multi-column delete. If the click is outside the
selection, delete only the clicked column.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): prevent duplicate undo entry on column drag-drop

Clear dragColumnNameRef immediately in handleColumnDragEnd so the
second invocation (from dragend after drop already fired) is a no-op.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): clean up width on delete-column redo, suppress click during drag

- Redo path for delete-column now removes the column's width from
  metadata and local state, preventing stale width entries
- Add didDragRef to ColumnHeaderMenu to suppress the click event
  that fires after a drag operation, preventing selection flash

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): remove unstable mutation object from useCallback deps

deleteTableMutation is not referentially stable — only .mutateAsync()
is. Including the mutation object causes unnecessary callback recreation
on every mutation state change.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): fix auto-resize header padding, deduplicate rename metadata logic

Increase header text measurement padding from 36px to 57px to account
for the chevron dropdown button (pl-0.5 + 9px icon + pr-2.5) that
always occupies layout space. Prevents header text truncation on
auto-resize.

Deduplicate column rename metadata logic by having columnRename.onSave
call handleColumnRename instead of reimplementing the same width/order
transfer and metadata persist.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): log error on cell data restoration failure during undo

Add onError handler to the batchUpdateRowsMutation inside
delete-column undo so failures are logged instead of silently
swallowed. The column schema restores first, and the cell data
restoration is a separate async call that the outer try/catch
cannot intercept.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): address audit findings across table, undo hook, and store

- Add missing bounds check in handleCopy (c >= cols.length) matching
  handleCut for defensive consistency
- Clear lastCheckboxRowRef in Ctrl+Space and Shift+Space to prevent
  stale shift-click checkbox range after keyboard selection
- Fix stale snapshot race in patchRedoRowId/patchUndoRowId by reading
  state inside the set() updater instead of via get() outside it
- Add metadata cleanup to create-column undo so column width is removed
  from both local state and server, symmetric with delete-column redo
- Remove stale width key from columnWidths on column delete instead of
  persisting orphaned entries
- Normalize undefined vs null in handleInlineSave change detection to
  prevent unnecessary mutations when oldValue is undefined
- Use ghost.parentNode?.removeChild instead of document.body.removeChild
  in drag ghost cleanup to prevent throw on component unmount

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tables): reset didDragRef in handleDragEnd to prevent stale flag

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 01:30:32 -07:00
Vikhyath Mondreti
524f33cc9e fix(pdf): restore PDF previews by adding the missing preview endpoint and allowing same-origin blob URLs in iframe CSP (#4225)
* fix(pdf): restore PDF previews by adding the missing preview endpoint and allowing same-origin blob URLs in iframe CSP

* fixed

* add preview routes and tests

* follow the Next.js route generation strategy
2026-04-17 20:59:38 -07:00
Waleed
47519e34d9 fix(fireflies): support V2 webhook payload format for meetingId mapping (#4221)
* fix(fireflies): support V2 webhook payload format for meetingId mapping

Fireflies V2 webhooks use snake_case field names (meeting_id, event,
client_reference_id) instead of camelCase (meetingId, eventType,
clientReferenceId). The formatInput handler now auto-detects V1 vs V2
payloads and maps fields correctly, fixing empty meetingId on V2 webhooks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(fireflies): guard against NaN timestamp, use stricter V2 detection

Address PR review feedback:
- Use Number.isFinite guard to prevent NaN timestamp propagation
- Use AND instead of OR for V2 detection since both meeting_id and
  event are required fields in every V2 payload
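
The detection-and-mapping logic across both commits can be sketched as follows — field names come from the commit text; the function name and return shape are assumptions:

```typescript
// Hedged sketch of the V1/V2 normalization: V2 is detected by the presence of
// BOTH required snake_case fields (AND, per the review feedback above).
interface NormalizedPayload {
  meetingId?: string
  eventType?: string
  clientReferenceId?: string
}

export function normalizeFirefliesPayload(raw: Record<string, unknown>): NormalizedPayload {
  const isV2 = 'meeting_id' in raw && 'event' in raw
  if (isV2) {
    return {
      meetingId: raw.meeting_id as string | undefined,
      eventType: raw.event as string | undefined,
      clientReferenceId: raw.client_reference_id as string | undefined,
    }
  }
  return {
    meetingId: raw.meetingId as string | undefined,
    eventType: raw.eventType as string | undefined,
    clientReferenceId: raw.clientReferenceId as string | undefined,
  }
}
```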

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-17 19:43:28 -07:00
Vikhyath Mondreti
2f932054a7 fix(execution): run pptx/docx/pdf generation inside isolated-vm sandbox (#4217)
* fix(execution): run pptx/docx/pdf generation inside isolated-vm sandbox

Retires the legacy doc-worker.cjs / pptx-worker.cjs pipeline that ran user
DSL via node:vm + full require() in the same UID/PID namespace as the main
Next.js process. User code now runs inside the existing isolated-vm pool
(V8 isolate, no process / require / fs, no /proc/1/environ reachability).

Introduces a first-class SandboxTask abstraction under apps/sim/sandbox-tasks/
that mirrors apps/sim/background/ — one file per task, central typed
registry, kebab-case ids. Adding a new thing that runs in the isolate is
one file plus one registry entry.

Runtime additions in lib/execution/:
 - task-mode execution in isolated-vm-worker.cjs: load pre-built library
   bundles, run task bootstrap, run user code, run finalize, transfer
   Uint8Array result as base64 via IPC
 - named broker IPC bridge (generalizes the existing fetch bridge) with
   args size, result size, and per-execution call caps
 - cooperative AbortSignal support: cancel IPC disposes the isolate, pool
   slot is freed, pending broker-call timers are swept
 - compiled scripts + references explicitly released per execution
 - isolate.isDisposed used for cancellation detection (no error-string
   substring matching)

Library bundles (pptxgenjs, docx, pdf-lib) are built into isolate-safe
IIFE bundles by apps/sim/lib/execution/sandbox/bundles/build.ts and
committed; next.config.ts / trigger.config.ts / Dockerfile updated to
ship them instead of the deleted dist/*-worker.cjs artifacts.

Call sites migrated:
 - app/api/workspaces/[id]/pptx/preview/route.ts
 - app/api/files/serve/[...path]/route.ts (+ test mock)
 - lib/copilot/tools/server/files/{workspace-file,edit-content}.ts

All pass owner key user:<userId> for per-user pool fairness + distributed
lease accounting.

Made-with: Cursor

* improvement(sandbox): delegate timers to Node, add phase timings + saturation logs

Follow-ups on top of the isolated-vm migration (da14027b2):

Timer delegation (laverdet/isolated-vm#136 recommended pattern):
 - setTimeout / setInterval / clearTimeout / clearImmediate delegate to
   Node's real timer heap via ivm.Reference. Real delays are honored;
   clearTimeout actually cancels; ms is clamped to the script timeout
   so callbacks can't fire after the isolate is disposed.
 - Per-execution timer tracking + dispose-sweep in finally. Zero stale
   callbacks post-dispose.
 - unwrapPrimitive helper normalizes ivm.Reference-wrapped primitives
   (arguments: { reference: true } applies uniformly to all args).
 - _polyfills.ts shrinks from ~130 lines to the global->globalThis alias.
   Timers / TextEncoder / TextDecoder / console all install per-execution
   from the worker via ivm bridges.

AbortSignal race fix (pre-existing bug surfaced by the timer smoke):
 - Listener is registered after await tryAcquireDistributedLease. If the
   signal aborted during that ~200ms window (Redis down), AbortSignal
   doesn't fire listeners registered after the fact — the abort was
   silently missed. Now re-checks signal.aborted synchronously after
   addEventListener.

Observability:
 - executeTask returns IsolatedVMTaskTimings (setup, runtimeBootstrap,
   bundles, brokerInstall, taskBootstrap, harden, userCode, finalize,
   total) in every success + error path. run-task.ts logs these with
   workspaceId + queueMs so 'which tenant is slow' is queryable.
 - Pool saturation events now emit structured logger.warn with reason
   codes: queue_full_global, queue_full_owner, queue_wait_timeout,
   distributed_lease_limit. Matches the existing broker reject pattern.

Security policy:
 - New .cursor/rules/sim-sandbox.mdc codifies the hard rules for the
   worker process: no app credentials, all credentialed work goes
   through host-side brokers, every broker scopes by workspaceId.
   Pre-merge checklist for future changes to isolated-vm-worker.cjs.

Measured phase breakdown (local smoke, Redis down): pptx wall=~310ms
with bundles=~16ms, finalize=~83ms; docx ~290ms / 17ms / 70ms; pdf
~235ms / 17ms / 5ms. Bundle compilation is not the bottleneck —
library finalize is.

Made-with: Cursor

* fix(sandbox): thread AbortSignal into runSandboxTask at every call site

Three remaining callers of runSandboxTask were not threading a
cancellation signal, so a client disconnect mid-compile left the pool
slot occupied for the full 60s task timeout. Matching the pattern the
pptx-preview route already uses.

 - apps/sim/app/api/files/serve/[...path]/route.ts — GET forwards
   `request.signal` into handleLocalFile / handleCloudProxy, which
   forward into compileDocumentIfNeeded, which forwards into
   runSandboxTask.
 - apps/sim/lib/copilot/tools/server/files/workspace-file.ts — passes
   `context.abortSignal` (transport/user stop) into runSandboxTask.
 - apps/sim/lib/copilot/tools/server/files/edit-content.ts — same.

Smoke: simulated client disconnect at t=1000ms during a task that would
otherwise have waited 10s. The pool slot unwinds at t=1002ms with
AbortError; previously would have sat 60s until the task-level timeout.

Made-with: Cursor

* chore(build): raise node heap to 8GB for next build type-check

Next.js's type-check worker OOMs at the default 4GB heap on Node 23 for
this project's type graph size. Bumps the heap to 8GB only for the
`next build` invocation inside `bun run build`.

Docker builds are unaffected — `next.config.ts` sets
`typescript.ignoreBuildErrors: true` when DOCKER_BUILD=1, which skips
the type-check pass entirely. This only fixes local `bun run build`.

No functional code changes.

Made-with: Cursor

* fix lint

* refactor(copilot): dedup getDocumentFormatInfo across copilot file tools

The same extension -> { formatName, sourceMime, taskId } mapping was
duplicated in workspace-file.ts and edit-content.ts. Any future format
or task-id change had to happen in two places.

Exports getDocumentFormatInfo + DocumentFormatInfo from workspace-file.ts
(which already owned the PPTX/DOCX/PDF source MIME constants) and
imports it in edit-content.ts. Same source-of-truth pattern the file
already uses for inferContentType.

Made-with: Cursor

* fix(sandbox): propagate empty-message broker/fetch errors

Both bridges in the isolate used truthiness to detect host-side errors:

    if (response.error) throw new Error(response.error);   // broker
    if (result.error)   throw new Error(result.error);     // fetch

If a host handler ever threw `new Error('')`, err.message would be ''
(falsy), so { error: '' } was silently swallowed and the isolate saw
a successful null result. Existing call sites don't throw empty-message
errors, but the pattern was structurally unsafe.

Switch both to typeof check === 'string' and fall back to a default
message if the string is empty, so all host-reported errors propagate
into the isolate regardless of message content.
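
The fixed check can be sketched as follows — the function name and response shape are illustrative, not the bridge's actual API:

```typescript
// Hedged sketch: key on the *type* of the error field, not its truthiness,
// and supply a default message when the string is empty.
export function unwrapHostResponse<T>(response: { error?: unknown; result?: T }): T | undefined {
  if (typeof response.error === 'string') {
    throw new Error(response.error || 'host call failed without a message')
  }
  return response.result
}
```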

Made-with: Cursor
2026-04-17 19:07:46 -07:00
Vikhyath Mondreti
319e0db732 improvement(mothership): agent model dropdown validations, markers for recommended models (#4213)
* improvement(mothership): agent model dropdown validations, recommendation system

* mark a few more models

* remove regex-based checks

* remove dead code

* remove inherited reseller flags

* fix note

* address bugbot comments

* code cleanup
2026-04-17 18:58:02 -07:00
Waleed
003e931546 improvement(terminal): resize output panel on any layout change via ResizeObserver (#4220)
Replaces MutationObserver on document.documentElement (watching CSS variable
changes) + window resize listener with a ResizeObserver on the terminal element
itself. The terminal now measures its own rendered width directly, so it responds
correctly to all layout changes — sidebar, workflow panel, and mothership resize —
without indirect CSS variable plumbing or cross-component coupling.
2026-04-17 18:23:51 -07:00
Waleed
948cdbcc3f fix(chat): prevent @-mention menu focus loss and stabilize render identity (#4218)
* fix(docs): preserve gif playback position in lightbox and clean up ui components

- Capture currentTime on click and seek lightbox video to match using useLayoutEffect
- Convert lightboxStartTime from useState to useRef (no independent render needed)
- Apply same fix to ActionVideo in action-media.tsx
- Remove dead AnimatedBlocks component (zero imports)
- Fix language-dropdown to derive currentLang during render instead of mirroring into state via effect
- Replace template literals with cn() in faq.tsx and video.tsx

* fix(chat): prevent @-mention menu focus loss and stabilize render identity

Radix DropdownMenu's FocusScope was restoring focus from the search input
to the content root whenever registered menu items mounted or unmounted
inside the content, interrupting typing after a keystroke or two.

- Keep the default tree always mounted under `hidden` instead of swapping
  subtrees when the filter activates.
- Render filtered results as plain <button role="menuitem"> so they do not
  participate in Radix's menu Collection.
- Add activeIndex state with ArrowUp/Down/Enter keyboard nav, mouse-hover
  sync, and scrollIntoView so the highlighted row stays visible and users
  can see what Enter will select.

While tracing the cascade that compounded the bug:

- Hoist `select` in useWorkflowMap / useWorkspacesQuery / useFolderMap to
  module scope so TanStack Query caches the select result across renders.
- Guard setSelectedContexts([]) with a functional updater that bails out
  when already empty, preventing a fresh [] literal from invalidating
  consumers that key on reference identity.
- Wrap WorkspaceHeader in React.memo so it bails out on parent renders
  once its (now-stable) props are unchanged.

Made-with: Cursor

* remove extraneous comments

* cleanup

* fix(chat): apply same setState bail-out to clearContexts for consistency

Matches the invariant we already established for the message effect:
calling setSelectedContexts([]) against an already-empty array emits a
fresh [] reference (a fresh [] literal is never Object.is-identical to the
previous array, so React's bail-out never fires), which cascades through
consumers that key on reference identity.
clearContexts is part of the hook's public API so callers can't know
whether the list is empty — make it safe for them.

Made-with: Cursor
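The bail-out described above can be sketched in plain TypeScript (a minimal model of React's functional-updater semantics, not the actual `clearContexts` code):

```typescript
// React skips an update when Object.is(prev, next) is true, so returning the
// previous reference from a functional updater is an explicit bail-out.
type Updater<T> = T | ((prev: T) => T);

function applyUpdate<T>(prev: T, update: Updater<T>): T {
  return typeof update === 'function' ? (update as (p: T) => T)(prev) : update;
}

const empty: string[] = [];

// Naive clear: always a fresh [] literal, which is a new identity even when
// the array was already empty.
const naive = applyUpdate<string[]>(empty, []);
console.log(Object.is(empty, naive)); // false

// Guarded clear: return prev when there is nothing to clear, so identity-keyed
// consumers are unaffected.
const guarded = applyUpdate<string[]>(empty, (prev) => (prev.length === 0 ? prev : []));
console.log(Object.is(empty, guarded)); // true
```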
2026-04-17 17:38:37 -07:00
Theodore Li
5e716d74bc docs(assets): Add pics and videos for mothership (#4216)
* Add pics and videos for mothership

* Minimal edit
2026-04-17 16:31:34 -04:00
Waleed
e1018f1c72 improvement(utils): add shared utility functions and replace inline patterns (#4214)
* improvement(utils): add shared utility functions and replace inline patterns

Add sleep, toError, safeJsonParse, isNonNull helpers and invariant/assertNever
assertions. Replace all inline implementations across the codebase with these
shared utilities for consistency. Zero behavioral changes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(agiloft): remove import type from .server module to fix client bundle build

Turbopack resolves .server.ts modules even for type-only imports,
pulling dns/promises into client bundles. Define SecureFetchResponse
locally instead.

* fix(agiloft): revert to client-safe imports to fix build

The SSRF upgrade to input-validation.server introduced dns/promises
into client bundles via tools/registry.ts. Revert to the original
client-safe validateExternalUrl + fetch. The SSRF DNS-pinning upgrade
for agiloft directExecution should be done via API routes in a
separate PR.

* feat(agiloft): add API route for retrieve_attachment, matching established file patterns

Convert retrieve_attachment from directExecution to standard API route
pattern, consistent with Slack download and Google Drive download tools.

- Create /api/tools/agiloft/retrieve with DNS validation, auth lifecycle,
  and base64 file response matching the { file: { name, mimeType, data,
  size } } convention
- Update retrieve_attachment tool to use request/transformResponse
  instead of directExecution, removing the dependency on
  executeAgiloftRequest from the tool definition
- File output type: 'file' enables FileToolProcessor to store downloaded
  files in execution filesystem automatically

* shopify

* fix(agiloft): add optional flag to nullable lock record block outputs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(agiloft): revert optional flag on block outputs — property only exists on tool outputs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore(utils): remove unused utilities (asserts, safeJsonParse, isNonNull)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-17 13:29:42 -07:00
Waleed
351873ac04 improvement(sidebar): interleave folders and workflows by sort order in all resource pickers (#4215)
* improvement(sidebar): interleave folders and workflows by sort order in all resource pickers

- Merge folder/workflow submenus into a single Workflows tree sorted by sortOrder in both the @ plus-menu and add-resource dropdowns
- Widen both dropdowns from 240px to 320px and remove type labels from search results
- Fix isOpen/onSwitch regression: WorkflowFolderTreeItems now forwards node.isOpen so already-open tabs are switched to rather than duplicated
- Apply same interleaved sortOrder ordering to the collapsed sidebar's root-level folder+workflow list

* fix(add-resource-dropdown): align sort tiebreaker with compareByOrder, document empty-folder omission

Use id.localeCompare as the sort tiebreaker in buildWorkflowFolderTree to match the sidebar's
compareByOrder fallback (sortOrder → id) instead of name. Add a comment clarifying that empty
folders are intentionally omitted from the tree view.

* chore: remove extraneous inline comment
2026-04-17 13:01:44 -07:00
Waleed
38864fac34 feat(monday): add full Monday.com integration (#4210)
* feat(monday): add full Monday.com integration with tools, block, triggers, and OAuth

Adds a comprehensive Monday.com integration:
- 13 tools: list/get boards, CRUD items, search, subitems, updates, groups, move, archive
- Block with operation dropdown, board/group selectors, OAuth credential, advanced mode
- 9 webhook triggers with auto-subscription lifecycle (create/delete via GraphQL API)
- OAuth config with 7 scopes (boards, updates, webhooks, me:read)
- Provider handler with challenge verification, formatInput, idempotency
- Docs, icon, selectors, and all registry wiring

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(monday): cast userId to string in deleteSubscription fallback

The DeleteSubscriptionContext type has userId as unknown, causing a
TypeScript error when passing it to getOAuthToken which expects string.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(monday): escape string params in GraphQL, align deleteSubscription with established patterns

- Use JSON.stringify() for groupId in get_items.ts (matches create_item.ts
  and move_item_to_group.ts)
- Use JSON.stringify() for notificationUrl in webhook provider
- Remove non-standard getOAuthToken fallback in deleteSubscription to match
  Airtable/Webflow pattern (credential resolution only, warn and return on failure)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
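The escaping technique this commit describes can be sketched as follows. The idea is that `JSON.stringify` on a string produces a quoted, backslash-escaped literal that is also a valid GraphQL string literal, so interpolated user input cannot terminate the string early (`create_item` and `group_id` here are illustrative, taken from the commit message rather than the actual tool code):

```typescript
// Malicious input that tries to close the string and inject fields.
const groupId = 'topics") { boards { id } } # injected';

// JSON.stringify wraps the value in quotes and escapes embedded quotes,
// so the injection attempt stays inside one GraphQL string literal.
const query = `mutation { create_item(group_id: ${JSON.stringify(groupId)}) { id } }`;

console.log(query.includes('\\"')); // true, the inner quote is escaped
```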

* fix(monday): sanitize columns JSON in search_items GraphQL query

Parse and re-stringify the columns param to ensure well-formed JSON
before interpolating into the GraphQL query, preventing injection
via malformed input.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(monday): validate all numeric IDs and sanitize columns in GraphQL queries

- Add sanitizeNumericId() helper to tools/monday/utils.ts for consistent
  validation across all tool body builders
- Apply to all 13 instances of boardId, itemId, parentItemId interpolation
  across 11 tool files, preventing GraphQL injection via crafted IDs
- Wrap JSON.parse in search_items.ts with try-catch for user-friendly
  error on malformed column filter JSON

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
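A hypothetical sketch of the numeric-ID guard described above (the name `sanitizeNumericId` comes from the commit message; the body here is an assumption, not the actual Sim implementation):

```typescript
// Accept only pure digit strings before interpolating IDs into GraphQL.
function sanitizeNumericId(value: string | number, label = 'id'): string {
  const s = String(value).trim();
  if (!/^\d+$/.test(s)) {
    throw new Error(`Invalid ${label}: expected a numeric ID`);
  }
  return s; // now safe to interpolate into a GraphQL document
}

console.log(sanitizeNumericId(12345)); // '12345'
// sanitizeNumericId('123) { boards { id } }') throws instead of injecting
```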

* fix(monday): deduplicate numeric ID validation, sanitize limit/page params

- Refactor sanitizeNumericId to delegate to validateMondayNumericId
  from input-validation.ts, eliminating duplicated regex logic
- Add sanitizeLimit helper for safe integer coercion with bounds
- Apply sanitizeLimit to limit/page params in list_boards, get_items,
  and search_items for consistent validation across all GraphQL params

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(monday): align list_boards limit description with code (max 500)

The param description said "max 100" but sanitizeLimit caps at 500,
which is what Monday.com's API supports for boards. Updated both the
tool description and docs to say "max 500".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 20:27:44 -07:00
Waleed
2266bb384b feat(triggers): add Atlassian triggers for Jira, JSM, and Confluence (#4211)
* feat(triggers): add Atlassian triggers for Jira, JSM, and Confluence

- Jira: add 9 new triggers (sprint created/started/closed, project created, version released, comment updated/deleted, worklog updated/deleted)
- JSM: add 5 triggers from scratch (request created/updated/commented/resolved, generic webhook)
- Confluence: add 7 new triggers (comment updated, attachment updated, page/blog restored, space removed, page permissions updated, user created)
- Add JSM webhook provider handler with HMAC validation and changelog-based event matching
- Add Atlassian webhook identifier to idempotency service for native dedup
- Add extractIdempotencyId to Confluence handler
- Fix Jira generic webhook to pass through full payload for non-issue events
- Fix output schemas: add description (ADF), updateAuthor, resolution, components, fixVersions, worklog timestamps, note emailAddress as Jira Server only

* fix(triggers): replace any with Record<string, unknown> in confluence extract functions

* lint

* fix(triggers): use comment.id in JSM idempotency, fix confluence type cast

JSM extractIdempotencyId now prioritizes comment.id over issue.id for
comment_created events, matching Jira's documented webhook payload
structure. Also fixes type cast for confluence extract function calls.

* fix(triggers): correct comment.body type to json, fix TriggerOutput description type

- JSM webhook comment.body changed from string to json (ADF format)
- Widened TriggerOutput.description to accept TriggerOutput objects,
  removing unsafe `as unknown as string` casts for Jira description fields
2026-04-16 19:59:12 -07:00
Vikhyath Mondreti
a589c8e318 improvement(mothership): whitespace-only deltas need to be preserved, update docs for threshold billing (#4212)
* fix(mothership): content block spaces trimmed

* update overage threshold docs
2026-04-16 19:22:01 -07:00
Vikhyath Mondreti
3d909d5416 fix(resolver): turn off resolver for opaque schema nodes, unrun paths (#4208)
* fix(resolver): turn off resolver for opaque schema nodes, unrun paths

* fix subflows to make them consistent

* fix tests
2026-04-16 18:33:28 -07:00
JaeHyung Jang
3e2a7a2eb1 fix(export): preserve unicode characters in workflow filenames (#4120)
Previously, non-ASCII characters (like Korean) in workflow names were
replaced by dashes during export because of a restrictive regex.
This update uses a Unicode-aware regex to allow letters and numbers
from any language while still sanitizing unsafe filesystem characters.

fixes #4119

Signed-off-by: JaeHyung Jang <jaehyung.jang@navercorp.com>
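A hedged sketch of the Unicode-aware sanitizer this commit describes (the actual regex in the export code may differ in which punctuation it permits):

```typescript
// Keep letters and digits from any script via Unicode property escapes,
// plus a few safe separators; collapse runs of anything else into a dash.
function sanitizeFilename(name: string): string {
  return name.replace(/[^\p{L}\p{N} ._-]+/gu, '-');
}

console.log(sanitizeFilename('워크플로우: v2/final')); // '워크플로우- v2-final'
console.log(sanitizeFilename('a<b>c')); // 'a-b-c'
```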
2026-04-16 18:18:56 -07:00
Waleed
49a1495e15 improvement(logs): fix trigger badge wrapping, time range picker, status filters, and React anti-patterns (#4207)
* improvement(logs): fix trigger badge wrapping, time range picker, status filters, and React anti-patterns

* chore(logs): remove dev mock logs

* fix(logs): prevent DatePicker onOpenChange from reverting time range after Apply
2026-04-16 18:00:57 -07:00
Waleed
1d0e118eef fix(socket): sync deploy button state across collaborators (#4206)
* fix(socket): sync deploy button state across collaborators

Broadcast workflow-deployed events via socket so all connected users
invalidate their deployment query cache when any user deploys, undeploys,
activates a version, or triggers a deploy through chat/form endpoints.

* fix(socket): check response status on deployment notification

Log a warning when the socket server returns a non-2xx status for
deployment notifications, matching the pattern in lifecycle.ts.

* improvement(config): consolidate socket server URL into getSocketServerUrl/getSocketUrl

Replace all inline `env.SOCKET_SERVER_URL || 'http://localhost:3002'` and
`getEnv('NEXT_PUBLIC_SOCKET_URL') || 'http://localhost:3002'` with centralized
utility functions in urls.ts, matching the getBaseUrl() pattern.

* improvement(config): consolidate Ollama URL and CSP socket/Ollama hardcodes

Add getOllamaUrl() to urls.ts and replace inline env.OLLAMA_URL fallbacks
in the provider and API route. Update CSP to use getSocketUrl(),
getOllamaUrl(), and a local toWebSocketUrl() helper instead of hardcoded
localhost strings.
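The consolidated helpers described in these two commits might look roughly like this (a sketch: the real `urls.ts` implementations read Sim's typed env accessors rather than `process.env` directly, and the exact names beyond those in the commit messages are assumptions):

```typescript
// Single source of truth for the socket server origin, mirroring getBaseUrl().
function getSocketServerUrl(): string {
  return process.env.SOCKET_SERVER_URL || 'http://localhost:3002';
}

// Local CSP helper: derive the WebSocket origin from the HTTP origin.
function toWebSocketUrl(httpUrl: string): string {
  return httpUrl.replace(/^http/, 'ws'); // http -> ws, https -> wss
}

console.log(toWebSocketUrl('https://sockets.example.com')); // 'wss://sockets.example.com'
```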

* lint

* fix(tests): add missing mocks for new URL utility exports

Update lifecycle, async execute, and chat manage test mocks to include
getSocketServerUrl, getOllamaUrl, and notifySocketDeploymentChanged.

* fix(csp): remove urls.ts import to fix next.config.ts build

CSP is loaded by next.config.ts which transpiles outside the @/ alias
context. Use local constants instead of importing from urls.ts.

* fix(queries): invalidate chat and form status on deployment change

Add chatStatus and formStatus to invalidateDeploymentQueries so all
deployment-related queries refresh when any user deploys or undeploys.
2026-04-16 17:46:53 -07:00
Waleed
c06361b142 improvement(tables): clean up duplicate types, unnecessary memos, and barrel imports (#4205)
* improvement(tables): clean up duplicate types, unnecessary memos, and barrel imports

* fix(tables): revert barrel import in client component to avoid bundling server-only deps
2026-04-16 16:47:51 -07:00
Waleed
6fd1767c7e improvement(ui): remove React anti-patterns, fix CSP violations (#4203)
* improvement(ui): remove React anti-patterns, fix CSP violations

* fix(ui): restore useMemo on existingKeys — it is observed by useAvailableResources

* improvement(ui): add RefreshCw icon, update Bell SVG, active state styling for header actions

* minor UI improvements
2026-04-16 13:58:15 -07:00
Vikhyath Mondreti
6aa6346330 fix(executor): subflow edge keys mismatch (#4202)
* fix(executor): subflow edge keys mismatch

* improve style
2026-04-16 13:41:11 -07:00
Vikhyath Mondreti
1708bbee35 feat(tables): import csv into existing tables (#4199)
* feat(tables): import csv into existing tables

* update types

* address comments

* address comment
2026-04-16 13:37:54 -07:00
1060 changed files with 41090 additions and 9337 deletions

View File

@@ -16,17 +16,34 @@ User arguments: $ARGUMENTS
Read before analyzing:
1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed
## The one rule that matters
`useCallback` is only useful when **something observes the reference**. Ask: does anything care if this function gets a new identity on re-render?
Observers that care about reference stability:
- A `useEffect` that lists the function in its deps array
- A `useMemo` that lists the function in its deps array
- Another `useCallback` that lists the function in its deps array
- A child component wrapped in `React.memo` that receives the function as a prop
If none of those apply — if the function is only called inline, or passed to a non-memoized child, or assigned to a native element event — the reference is unobserved and `useCallback` adds overhead with zero benefit.
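The "observer" distinction can be made concrete with a sketch of a shallow prop comparison (an assumption: this mirrors what `React.memo` does conceptually, not React's actual implementation):

```typescript
// React.memo skips a re-render only when every prop is Object.is-equal.
function shallowEqual(a: Record<string, unknown>, b: Record<string, unknown>): boolean {
  const keys = Object.keys(a);
  if (keys.length !== Object.keys(b).length) return false;
  return keys.every((k) => Object.is(a[k], b[k]));
}

// Without useCallback: each render creates a fresh function identity,
// so the memoized child sees a "changed" prop and re-renders anyway.
const render1 = { onClick: () => {} };
const render2 = { onClick: () => {} };
console.log(shallowEqual(render1, render2)); // false

// With a stable reference (what useCallback provides): the comparison holds
// and the child can skip its re-render.
const stable = () => {};
console.log(shallowEqual({ onClick: stable }, { onClick: stable })); // true
```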
## Anti-patterns to detect
1. **useCallback on functions not passed as props or deps**: No benefit if only called within the same component.
2. **useCallback with deps that change every render**: Memoization is wasted.
3. **useCallback on handlers passed to native elements**: `<button onClick={fn}>` doesn't benefit from stable references.
4. **useCallback wrapping functions that return new objects/arrays**: Memoization at the wrong level.
5. **useCallback with empty deps when deps are needed**: Stale closures.
6. **Pairing useCallback + React.memo unnecessarily**: Only optimize when you've measured a problem.
7. **useCallback in hooks that don't need stable references**: Not every hook return needs memoization.
1. **No observer tracks the reference**: The function is only called inline in the same component, or passed to a non-memoized child, or used as a native element handler (`<button onClick={fn}>`). Nothing re-runs or bails out based on reference identity. Remove `useCallback`.
2. **useCallback with deps that change every render**: If a dep is a plain object/array created inline, or state that changes on every interaction, memoization buys nothing — the function gets a new identity anyway.
3. **useCallback on handlers passed only to native elements**: `<button onClick={fn}>` — React never does reference equality on native element props. No benefit.
4. **useCallback wrapping functions that return new objects/arrays**: Stable function identity, unstable return value — memoization is at the wrong level. Use `useMemo` on the return value instead, or restructure.
5. **useCallback with empty deps when deps are needed**: Stale closure — reads initial values forever. This is a correctness bug, not just a performance issue.
6. **Pairing useCallback + React.memo on trivially cheap renders**: If the child renders in < 1ms and re-renders rarely, the memo infrastructure costs more than it saves.
Note: This codebase uses a ref pattern for stable callbacks (`useRef` + empty deps). That pattern is correct — don't flag it.
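Anti-pattern 5 (the stale closure) can be demonstrated without React, using a plain closure that captures a value at creation time, the analogue of `useCallback(fn, [])` with a missing dep:

```typescript
function makeCounterReader() {
  let count = 0;
  // Captures count's value once at creation, like an empty-deps closure.
  const readOnce = (() => {
    const captured = count;
    return () => captured;
  })();
  const increment = () => { count += 1; };
  const readLive = () => count; // re-reads the current value each call
  return { readOnce, increment, readLive };
}

const c = makeCounterReader();
c.increment();
c.increment();
console.log(c.readLive()); // 2, the current value
console.log(c.readOnce()); // 0, stale: still the value at creation
```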
## Patterns that ARE correct — do not flag
- `useCallback` whose result is in a `useEffect` dep array — prevents the effect from re-running on every render
- `useCallback` whose result is in a `useMemo` dep array — prevents the memo from recomputing on every render
- `useCallback` whose result is a dep of another `useCallback` — stabilizes a callback chain
- `useCallback` passed to a `React.memo`-wrapped child — the whole point of the pattern
- This codebase's ref pattern: `useRef` + callback with empty deps that reads the ref inside — correct, do not flag
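The ref pattern in the last bullet can be sketched in plain TypeScript (a model of the idea, not the codebase's actual hook): one function identity created once, always reading the latest target through a mutable box, the way a `useRef`-backed callback reads `ref.current`.

```typescript
type Ref<T> = { current: T };

// The wrapper's identity never changes; its behavior tracks ref.current.
function makeStableCallback<A, R>(ref: Ref<(arg: A) => R>): (arg: A) => R {
  return (arg: A) => ref.current(arg);
}

const handlerRef: Ref<(n: number) => number> = { current: (n) => n + 1 };
const stableFn = makeStableCallback(handlerRef);
console.log(stableFn(1)); // 2
handlerRef.current = (n) => n * 10; // a "re-render" swaps the implementation
console.log(stableFn(1)); // 10, same function identity throughout
```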
## Steps

View File

@@ -10,7 +10,7 @@ Use TSDoc for documentation. No `====` separators. No non-TSDoc comments.
Never update global styles. Keep all styling local to components.
## ID Generation
Never use `crypto.randomUUID()`, `nanoid`, or the `uuid` package directly. Use the utilities from `@/lib/core/utils/uuid`:
Never use `crypto.randomUUID()`, `nanoid`, or the `uuid` package directly. Use the utilities from `@sim/utils/id`:
- `generateId()` — UUID v4, use by default
- `generateShortId(size?)` — short URL-safe ID (default 21 chars), for compact identifiers
@@ -24,11 +24,32 @@ import { v4 as uuidv4 } from 'uuid'
const id = crypto.randomUUID()
// ✓ Good
import { generateId, generateShortId } from '@/lib/core/utils/uuid'
import { generateId, generateShortId } from '@sim/utils/id'
const uuid = generateId()
const shortId = generateShortId()
const tiny = generateShortId(8)
```
## Common Utilities
Use shared helpers from `@sim/utils` instead of writing inline implementations:
- `sleep(ms)` — async delay. Never write `new Promise(resolve => setTimeout(resolve, ms))`
- `toError(value)` — normalize unknown caught values to `Error`. Never write `e instanceof Error ? e : new Error(String(e))`
- `toError(value).message` — get error message safely. Never write `e instanceof Error ? e.message : String(e)`
```typescript
// ✗ Bad
await new Promise(resolve => setTimeout(resolve, 1000))
const msg = error instanceof Error ? error.message : String(error)
const err = error instanceof Error ? error : new Error(String(error))
// ✓ Good
import { sleep } from '@sim/utils/helpers'
import { toError } from '@sim/utils/errors'
await sleep(1000)
const msg = toError(error).message
const err = toError(error)
```
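For reference, minimal sketches of what these two helpers typically do (assumptions: the real `@sim/utils` implementations may add extra handling, such as preserving a `cause`):

```typescript
// Normalize an unknown caught value into an Error, passing real Errors through.
function toError(value: unknown): Error {
  return value instanceof Error ? value : new Error(String(value));
}

// Promise-based delay.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

console.log(toError('boom').message); // 'boom'
console.log(toError(new RangeError('oops')) instanceof RangeError); // true
```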
## Package Manager
Use `bun` and `bunx`, not `npm` and `npx`.

View File

@@ -102,10 +102,6 @@ vi.mock('@/lib/workspaces/utils', () => ({
}))
```
### NEVER use `mockAuth()`, `mockConsoleLogger()`, or `setupCommonApiMocks()` from `@sim/testing`
These helpers internally use `vi.doMock()` which is slow. Use direct `vi.hoisted()` + `vi.mock()` instead.
### Mock heavy transitive dependencies
If a module under test imports `@/blocks` (200+ files), `@/tools/registry`, or other heavy modules, mock them:
@@ -135,38 +131,61 @@ await new Promise(r => setTimeout(r, 1))
vi.useFakeTimers()
```
## Mock Pattern Reference
## Centralized Mocks (prefer over local declarations)
`@sim/testing` exports ready-to-use mock modules for common dependencies. Import and pass directly to `vi.mock()` — no `vi.hoisted()` boilerplate needed. Each paired `*MockFns` object exposes the underlying `vi.fn()`s for per-test overrides.
| Module mocked | Import | Factory form |
|---|---|---|
| `@/app/api/auth/oauth/utils` | `authOAuthUtilsMock`, `authOAuthUtilsMockFns` | `vi.mock('@/app/api/auth/oauth/utils', () => authOAuthUtilsMock)` |
| `@/app/api/knowledge/utils` | `knowledgeApiUtilsMock`, `knowledgeApiUtilsMockFns` | `vi.mock('@/app/api/knowledge/utils', () => knowledgeApiUtilsMock)` |
| `@/app/api/workflows/utils` | `workflowsApiUtilsMock`, `workflowsApiUtilsMockFns` | `vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)` |
| `@/lib/audit/log` | `auditMock`, `auditMockFns` | `vi.mock('@/lib/audit/log', () => auditMock)` |
| `@/lib/auth` | `authMock`, `authMockFns` | `vi.mock('@/lib/auth', () => authMock)` |
| `@/lib/auth/hybrid` | `hybridAuthMock`, `hybridAuthMockFns` | `vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)` |
| `@/lib/copilot/request/http` | `copilotHttpMock`, `copilotHttpMockFns` | `vi.mock('@/lib/copilot/request/http', () => copilotHttpMock)` |
| `@/lib/core/config/env` | `envMock`, `createEnvMock(overrides)` | `vi.mock('@/lib/core/config/env', () => envMock)` |
| `@/lib/core/config/feature-flags` | `featureFlagsMock` | `vi.mock('@/lib/core/config/feature-flags', () => featureFlagsMock)` |
| `@/lib/core/config/redis` | `redisConfigMock`, `redisConfigMockFns` | `vi.mock('@/lib/core/config/redis', () => redisConfigMock)` |
| `@/lib/core/security/encryption` | `encryptionMock`, `encryptionMockFns` | `vi.mock('@/lib/core/security/encryption', () => encryptionMock)` |
| `@/lib/core/security/input-validation.server` | `inputValidationMock`, `inputValidationMockFns` | `vi.mock('@/lib/core/security/input-validation.server', () => inputValidationMock)` |
| `@/lib/core/utils/request` | `requestUtilsMock`, `requestUtilsMockFns` | `vi.mock('@/lib/core/utils/request', () => requestUtilsMock)` |
| `@/lib/core/utils/urls` | `urlsMock`, `urlsMockFns` | `vi.mock('@/lib/core/utils/urls', () => urlsMock)` |
| `@/lib/execution/preprocessing` | `executionPreprocessingMock`, `executionPreprocessingMockFns` | `vi.mock('@/lib/execution/preprocessing', () => executionPreprocessingMock)` |
| `@/lib/logs/execution/logging-session` | `loggingSessionMock`, `loggingSessionMockFns`, `LoggingSessionMock` | `vi.mock('@/lib/logs/execution/logging-session', () => loggingSessionMock)` |
| `@/lib/workflows/orchestration` | `workflowsOrchestrationMock`, `workflowsOrchestrationMockFns` | `vi.mock('@/lib/workflows/orchestration', () => workflowsOrchestrationMock)` |
| `@/lib/workflows/persistence/utils` | `workflowsPersistenceUtilsMock`, `workflowsPersistenceUtilsMockFns` | `vi.mock('@/lib/workflows/persistence/utils', () => workflowsPersistenceUtilsMock)` |
| `@/lib/workflows/utils` | `workflowsUtilsMock`, `workflowsUtilsMockFns` | `vi.mock('@/lib/workflows/utils', () => workflowsUtilsMock)` |
| `@/lib/workspaces/permissions/utils` | `permissionsMock`, `permissionsMockFns` | `vi.mock('@/lib/workspaces/permissions/utils', () => permissionsMock)` |
| `@sim/db/schema` | `schemaMock` | `vi.mock('@sim/db/schema', () => schemaMock)` |
### Auth mocking (API routes)
```typescript
const { mockGetSession } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
}))
import { authMock, authMockFns } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@/lib/auth', () => ({
auth: { api: { getSession: vi.fn() } },
getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)
// In tests:
mockGetSession.mockResolvedValue({ user: { id: 'user-1', email: 'test@example.com' } })
mockGetSession.mockResolvedValue(null) // unauthenticated
import { GET } from '@/app/api/my-route/route'
beforeEach(() => {
vi.clearAllMocks()
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
})
```
Only define a local `vi.mock('@/lib/auth', ...)` if the module under test consumes exports outside the centralized shape (e.g., `auth.api.verifyOneTimeToken`, `auth.api.resetPassword`).
### Hybrid auth mocking
```typescript
const { mockCheckSessionOrInternalAuth } = vi.hoisted(() => ({
mockCheckSessionOrInternalAuth: vi.fn(),
}))
import { hybridAuthMock, hybridAuthMockFns } from '@sim/testing'
vi.mock('@/lib/auth/hybrid', () => ({
checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
}))
vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)
// In tests:
mockCheckSessionOrInternalAuth.mockResolvedValue({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValue({
success: true, userId: 'user-1', authType: 'session',
})
```
@@ -197,21 +216,23 @@ Always prefer over local test data.
| Category | Utilities |
|----------|-----------|
| **Mocks** | `loggerMock`, `databaseMock`, `drizzleOrmMock`, `setupGlobalFetchMock()` |
| **Module mocks** | See "Centralized Mocks" table above |
| **Logger helpers** | `loggerMock`, `createMockLogger()`, `getLoggerCalls()`, `clearLoggerMocks()` |
| **Database helpers** | `databaseMock`, `drizzleOrmMock`, `createMockDb()`, `createMockSql()`, `createMockSqlOperators()` |
| **Fetch helpers** | `setupGlobalFetchMock()`, `createMockFetch()`, `createMockResponse()`, `mockFetchError()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutionContext()` |
| **Builders** | `WorkflowBuilder`, `ExecutionContextBuilder` |
| **Assertions** | `expectWorkflowAccessGranted()`, `expectBlockExecuted()` |
| **Requests** | `createMockRequest()`, `createEnvMock()` |
| **Requests** | `createMockRequest()`, `createMockFormDataRequest()` |
## Rules Summary
1. `@vitest-environment node` unless DOM is required
2. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
3. `vi.mock()` calls before importing mocked modules
4. `@sim/testing` utilities over local mocks
2. Prefer centralized mocks from `@sim/testing` (see table above) over local `vi.hoisted()` + `vi.mock()` boilerplate
3. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
4. `vi.mock()` calls before importing mocked modules
5. `beforeEach(() => vi.clearAllMocks())` to reset state — no redundant `afterEach`
6. No `vi.importActual()` — mock everything explicitly
7. No `mockAuth()`, `mockConsoleLogger()`, `setupCommonApiMocks()` — use direct mocks
8. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
9. Use absolute imports in test files
10. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`
7. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
8. Use absolute imports in test files
9. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`

View File

@@ -17,7 +17,7 @@ Use TSDoc for documentation. No `====` separators. No non-TSDoc comments.
Never update global styles. Keep all styling local to components.
## ID Generation
Never use `crypto.randomUUID()`, `nanoid`, or the `uuid` package directly. Use the utilities from `@/lib/core/utils/uuid`:
Never use `crypto.randomUUID()`, `nanoid`, or the `uuid` package directly. Use the utilities from `@sim/utils/id`:
- `generateId()` — UUID v4, use by default
- `generateShortId(size?)` — short URL-safe ID (default 21 chars), for compact identifiers
@@ -31,11 +31,32 @@ import { v4 as uuidv4 } from 'uuid'
const id = crypto.randomUUID()
// ✓ Good
import { generateId, generateShortId } from '@/lib/core/utils/uuid'
import { generateId, generateShortId } from '@sim/utils/id'
const uuid = generateId()
const shortId = generateShortId()
const tiny = generateShortId(8)
```
## Common Utilities
Use shared helpers from `@sim/utils` instead of writing inline implementations:
- `sleep(ms)` — async delay. Never write `new Promise(resolve => setTimeout(resolve, ms))`
- `toError(value)` — normalize unknown caught values to `Error`. Never write `e instanceof Error ? e : new Error(String(e))`
- `toError(value).message` — get error message safely. Never write `e instanceof Error ? e.message : String(e)`
```typescript
// ✗ Bad
await new Promise(resolve => setTimeout(resolve, 1000))
const msg = error instanceof Error ? error.message : String(error)
const err = error instanceof Error ? error : new Error(String(error))
// ✓ Good
import { sleep } from '@sim/utils/helpers'
import { toError } from '@sim/utils/errors'
await sleep(1000)
const msg = toError(error).message
const err = toError(error)
```
## Package Manager
Use `bun` and `bunx`, not `npm` and `npx`.

View File

@@ -0,0 +1,85 @@
---
description: Isolated-vm sandbox worker security policy. Hard rules for anything that lives in the worker child process that runs user code.
globs: ["apps/sim/lib/execution/isolated-vm-worker.cjs", "apps/sim/lib/execution/isolated-vm.ts", "apps/sim/lib/execution/sandbox/**", "apps/sim/sandbox-tasks/**"]
---
# Sim Sandbox — Worker Security Policy
The isolated-vm worker child process at
`apps/sim/lib/execution/isolated-vm-worker.cjs` runs untrusted user code inside
V8 isolates. The process itself is a trust boundary. Everything in this rule is
about what must **never** live in that process.
## Hard rules
1. **No app credentials in the worker process**. The worker must not hold, load,
or receive via IPC: database URLs, Redis URLs, AWS keys, Stripe keys,
session-signing keys, encryption keys, OAuth client secrets, internal API
secrets, or any LLM / email / search provider API keys. If you catch yourself
`require`'ing `@/lib/auth`, `@sim/db`, `@/lib/uploads/core/storage-service`,
or anything that imports `env` directly inside the worker, stop and use a
host-side broker instead.
2. **Host-side brokers own all credentialed work**. The worker can only access
resources through `ivm.Reference` / `ivm.Callback` bridges back to the host
process. Today the only broker is `workspaceFileBroker`
(`apps/sim/lib/execution/sandbox/brokers/workspace-file.ts`); adding a new
one requires co-reviewing this file.
3. **Host-side brokers must scope every resource access to a single tenant**.
The `SandboxBrokerContext` always carries `workspaceId`. Any new broker that
accesses storage, DB, or an external API must use `ctx.workspaceId` to scope
the lookup — never accept a raw path, key, or URL from isolate code without
validation.
4. **Nothing that runs in the isolate is trusted, even if we wrote it**. The
task `bootstrap` and `finalize` strings in `apps/sim/sandbox-tasks/` execute
inside the isolate. They must treat `globalThis` as adversarial — no pulling
values from it that might have been mutated by user code. The hardening
script in `executeTask` undefines dangerous globals before user code runs.
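Rule 3 can be sketched as a broker handler that scopes every lookup by `ctx.workspaceId` and validates isolate-supplied input. The names below are modeled on the rules above, not copied from `workspaceFileBroker`:

```typescript
interface SandboxBrokerContext {
  workspaceId: string;
}

// Host-side: map an isolate-supplied relative path to a tenant-scoped key.
function resolveWorkspaceKey(ctx: SandboxBrokerContext, relativePath: string): string {
  // Never trust a raw path from isolate code: reject traversal and absolute paths.
  if (relativePath.includes('..') || relativePath.startsWith('/')) {
    throw new Error('Invalid workspace-relative path');
  }
  // Every storage lookup is prefixed by the tenant id from host-side context.
  return `${ctx.workspaceId}/${relativePath}`;
}

console.log(resolveWorkspaceKey({ workspaceId: 'ws_1' }, 'docs/a.txt')); // 'ws_1/docs/a.txt'
```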
## Why
A V8 JIT bug (Chrome ships these roughly monthly) gives an attacker a native
code primitive inside the process that owns whatever that process can reach.
If the worker only holds `isolated-vm` + a single narrow workspace-file broker,
a V8 escape leaks one tenant's files. If the worker holds a Stripe key or a DB
connection, a V8 escape leaks the service.
The original `doc-worker.cjs` vulnerability (CVE-class, 225 production secrets
leaked via `/proc/1/environ`) was the forcing function for this architecture.
Keep the blast radius small.
## Checklist for changes to `isolated-vm-worker.cjs`
Before landing any change that adds a new `require(...)` or `process.send(...)`
payload or `ivm.Reference` wrapper in the worker:
- [ ] Does it load a credential, key, connection string, or secret? If yes,
move it host-side and expose as a broker.
- [ ] Does it import from `@/lib/auth`, `@sim/db`, `@/lib/uploads/core/*`,
`@/lib/core/config/env`, or any module that reads `process.env` of the
main app? If yes, same — move host-side.
- [ ] Does it expose a resource that's workspace-scoped without taking a
`workspaceId`? If yes, re-scope.
- [ ] Did you update the broker limits (`IVM_MAX_BROKER_ARGS_JSON_CHARS`,
`IVM_MAX_BROKER_RESULT_JSON_CHARS`, `IVM_MAX_BROKERS_PER_EXECUTION`) if
the new broker can emit large payloads or fire frequently?
## What the worker *may* hold
- `isolated-vm` module
- Node built-ins: `node:fs` (only for reading the checked-in bundle `.cjs`
files) and `node:path`
- The three prebuilt library bundles under
`apps/sim/lib/execution/sandbox/bundles/*.cjs`
- IPC message handlers for `execute`, `cancel`, `fetchResponse`,
`brokerResponse`
The worker deliberately has **no host-side logger**. All errors and
diagnostics flow through IPC back to the host, which has `@sim/logger`. Do
not add `createLogger` or console-based logging to the worker — it would
require pulling the main app's config / env, which is exactly what this
rule is preventing.
Anything else is suspect.

View File

@@ -3,6 +3,7 @@ description: Testing patterns with Vitest and @sim/testing
globs: ["apps/sim/**/*.test.ts", "apps/sim/**/*.test.tsx"]
---
# Testing Patterns
Use Vitest. Test files: `feature.ts` → `feature.test.ts`
@@ -101,10 +102,6 @@ vi.mock('@/lib/workspaces/utils', () => ({
}))
```
### NEVER use `mockAuth()`, `mockConsoleLogger()`, or `setupCommonApiMocks()` from `@sim/testing`
These helpers internally use `vi.doMock()`, which is slow. Use direct `vi.hoisted()` + `vi.mock()` instead.
### Mock heavy transitive dependencies
If a module under test imports `@/blocks` (200+ files), `@/tools/registry`, or other heavy modules, mock them:
@@ -134,38 +131,61 @@ await new Promise(r => setTimeout(r, 1))
vi.useFakeTimers()
```
-## Mock Pattern Reference
+## Centralized Mocks (prefer over local declarations)
+`@sim/testing` exports ready-to-use mock modules for common dependencies. Import and pass directly to `vi.mock()` — no `vi.hoisted()` boilerplate needed. Each paired `*MockFns` object exposes the underlying `vi.fn()`s for per-test overrides.
| Module mocked | Import | Factory form |
|---|---|---|
| `@/app/api/auth/oauth/utils` | `authOAuthUtilsMock`, `authOAuthUtilsMockFns` | `vi.mock('@/app/api/auth/oauth/utils', () => authOAuthUtilsMock)` |
| `@/app/api/knowledge/utils` | `knowledgeApiUtilsMock`, `knowledgeApiUtilsMockFns` | `vi.mock('@/app/api/knowledge/utils', () => knowledgeApiUtilsMock)` |
| `@/app/api/workflows/utils` | `workflowsApiUtilsMock`, `workflowsApiUtilsMockFns` | `vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)` |
| `@/lib/audit/log` | `auditMock`, `auditMockFns` | `vi.mock('@/lib/audit/log', () => auditMock)` |
| `@/lib/auth` | `authMock`, `authMockFns` | `vi.mock('@/lib/auth', () => authMock)` |
| `@/lib/auth/hybrid` | `hybridAuthMock`, `hybridAuthMockFns` | `vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)` |
| `@/lib/copilot/request/http` | `copilotHttpMock`, `copilotHttpMockFns` | `vi.mock('@/lib/copilot/request/http', () => copilotHttpMock)` |
| `@/lib/core/config/env` | `envMock`, `createEnvMock(overrides)` | `vi.mock('@/lib/core/config/env', () => envMock)` |
| `@/lib/core/config/feature-flags` | `featureFlagsMock` | `vi.mock('@/lib/core/config/feature-flags', () => featureFlagsMock)` |
| `@/lib/core/config/redis` | `redisConfigMock`, `redisConfigMockFns` | `vi.mock('@/lib/core/config/redis', () => redisConfigMock)` |
| `@/lib/core/security/encryption` | `encryptionMock`, `encryptionMockFns` | `vi.mock('@/lib/core/security/encryption', () => encryptionMock)` |
| `@/lib/core/security/input-validation.server` | `inputValidationMock`, `inputValidationMockFns` | `vi.mock('@/lib/core/security/input-validation.server', () => inputValidationMock)` |
| `@/lib/core/utils/request` | `requestUtilsMock`, `requestUtilsMockFns` | `vi.mock('@/lib/core/utils/request', () => requestUtilsMock)` |
| `@/lib/core/utils/urls` | `urlsMock`, `urlsMockFns` | `vi.mock('@/lib/core/utils/urls', () => urlsMock)` |
| `@/lib/execution/preprocessing` | `executionPreprocessingMock`, `executionPreprocessingMockFns` | `vi.mock('@/lib/execution/preprocessing', () => executionPreprocessingMock)` |
| `@/lib/logs/execution/logging-session` | `loggingSessionMock`, `loggingSessionMockFns`, `LoggingSessionMock` | `vi.mock('@/lib/logs/execution/logging-session', () => loggingSessionMock)` |
| `@/lib/workflows/orchestration` | `workflowsOrchestrationMock`, `workflowsOrchestrationMockFns` | `vi.mock('@/lib/workflows/orchestration', () => workflowsOrchestrationMock)` |
| `@/lib/workflows/persistence/utils` | `workflowsPersistenceUtilsMock`, `workflowsPersistenceUtilsMockFns` | `vi.mock('@/lib/workflows/persistence/utils', () => workflowsPersistenceUtilsMock)` |
| `@/lib/workflows/utils` | `workflowsUtilsMock`, `workflowsUtilsMockFns` | `vi.mock('@/lib/workflows/utils', () => workflowsUtilsMock)` |
| `@/lib/workspaces/permissions/utils` | `permissionsMock`, `permissionsMockFns` | `vi.mock('@/lib/workspaces/permissions/utils', () => permissionsMock)` |
| `@sim/db/schema` | `schemaMock` | `vi.mock('@sim/db/schema', () => schemaMock)` |
### Auth mocking (API routes)
```typescript
-const { mockGetSession } = vi.hoisted(() => ({
-  mockGetSession: vi.fn(),
-}))
+import { authMock, authMockFns } from '@sim/testing'
+import { beforeEach, describe, expect, it, vi } from 'vitest'
-vi.mock('@/lib/auth', () => ({
-  auth: { api: { getSession: vi.fn() } },
-  getSession: mockGetSession,
-}))
+vi.mock('@/lib/auth', () => authMock)
-// In tests:
-mockGetSession.mockResolvedValue({ user: { id: 'user-1', email: 'test@example.com' } })
-mockGetSession.mockResolvedValue(null) // unauthenticated
+import { GET } from '@/app/api/my-route/route'
+beforeEach(() => {
+  vi.clearAllMocks()
+  authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
+})
```
Only define a local `vi.mock('@/lib/auth', ...)` if the module under test consumes exports outside the centralized shape (e.g., `auth.api.verifyOneTimeToken`, `auth.api.resetPassword`).
### Hybrid auth mocking
```typescript
-const { mockCheckSessionOrInternalAuth } = vi.hoisted(() => ({
-  mockCheckSessionOrInternalAuth: vi.fn(),
-}))
+import { hybridAuthMock, hybridAuthMockFns } from '@sim/testing'
-vi.mock('@/lib/auth/hybrid', () => ({
-  checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
-}))
+vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)
// In tests:
-mockCheckSessionOrInternalAuth.mockResolvedValue({
+hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValue({
  success: true, userId: 'user-1', authType: 'session',
})
```
@@ -196,21 +216,23 @@ Always prefer over local test data.
| Category | Utilities |
|----------|-----------|
-| **Mocks** | `loggerMock`, `databaseMock`, `drizzleOrmMock`, `setupGlobalFetchMock()` |
+| **Module mocks** | See "Centralized Mocks" table above |
+| **Logger helpers** | `loggerMock`, `createMockLogger()`, `getLoggerCalls()`, `clearLoggerMocks()` |
+| **Database helpers** | `databaseMock`, `drizzleOrmMock`, `createMockDb()`, `createMockSql()`, `createMockSqlOperators()` |
+| **Fetch helpers** | `setupGlobalFetchMock()`, `createMockFetch()`, `createMockResponse()`, `mockFetchError()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutionContext()` |
| **Builders** | `WorkflowBuilder`, `ExecutionContextBuilder` |
| **Assertions** | `expectWorkflowAccessGranted()`, `expectBlockExecuted()` |
-| **Requests** | `createMockRequest()`, `createEnvMock()` |
+| **Requests** | `createMockRequest()`, `createMockFormDataRequest()` |
## Rules Summary
1. `@vitest-environment node` unless DOM is required
-2. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
-3. `vi.mock()` calls before importing mocked modules
-4. `@sim/testing` utilities over local mocks
+2. Prefer centralized mocks from `@sim/testing` (see table above) over local `vi.hoisted()` + `vi.mock()` boilerplate
+3. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
+4. `vi.mock()` calls before importing mocked modules
5. `beforeEach(() => vi.clearAllMocks())` to reset state — no redundant `afterEach`
6. No `vi.importActual()` — mock everything explicitly
-7. No `mockAuth()`, `mockConsoleLogger()`, `setupCommonApiMocks()` — use direct mocks
-8. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
-9. Use absolute imports in test files
-10. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`
+7. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
+8. Use absolute imports in test files
+9. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`

View File

@@ -7,7 +7,7 @@ You are a professional software engineer. All code must follow best practices: a
- **Logging**: Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`
- **Comments**: Use TSDoc for documentation. No `====` separators. No non-TSDoc comments
- **Styling**: Never update global styles. Keep all styling local to components
-- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@/lib/core/utils/uuid`
+- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@sim/utils/id`
- **Package Manager**: Use `bun` and `bunx`, not `npm` and `npx`
## Architecture

View File

@@ -7,7 +7,8 @@ You are a professional software engineer. All code must follow best practices: a
- **Logging**: Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`
- **Comments**: Use TSDoc for documentation. No `====` separators. No non-TSDoc comments
- **Styling**: Never update global styles. Keep all styling local to components
-- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@/lib/core/utils/uuid`
+- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@sim/utils/id`
+- **Common Utilities**: Use shared helpers from `@sim/utils` instead of inline implementations. `sleep(ms)` from `@sim/utils/helpers` for delays, `toError(e)` from `@sim/utils/errors` to normalize caught values.
- **Package Manager**: Use `bun` and `bunx`, not `npm` and `npx`
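As a rough sketch of what those two shared helpers do (signatures inferred from the usage described above; the real implementations live in `@sim/utils`):

```typescript
/** Sketch of sleep(ms): resolves after roughly `ms` milliseconds. */
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms))
}

/** Sketch of toError(e): normalizes any caught value into an Error. */
function toError(e: unknown): Error {
  if (e instanceof Error) return e
  return new Error(typeof e === 'string' ? e : JSON.stringify(e))
}

// Typical call site: a catch block receives `unknown`, not `Error`.
async function example(): Promise<string> {
  try {
    await sleep(1)
    throw 'boom'
  } catch (e) {
    return toError(e).message // safe: always an Error
  }
}
```

Centralizing these avoids the scattered `new Promise(r => setTimeout(r, ms))` and `e instanceof Error ? e : ...` one-liners the rule is meant to replace.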
## Architecture

View File

@@ -142,13 +142,15 @@ See the [environment variables reference](https://docs.sim.ai/self-hosting/envir
- **Database**: PostgreSQL with [Drizzle ORM](https://orm.drizzle.team)
- **Authentication**: [Better Auth](https://better-auth.com)
- **UI**: [Shadcn](https://ui.shadcn.com/), [Tailwind CSS](https://tailwindcss.com)
-- **State Management**: [Zustand](https://zustand-demo.pmnd.rs/)
+- **Streaming Markdown**: [Streamdown](https://github.com/vercel/streamdown)
+- **State Management**: [Zustand](https://zustand-demo.pmnd.rs/), [TanStack Query](https://tanstack.com/query)
- **Flow Editor**: [ReactFlow](https://reactflow.dev/)
- **Docs**: [Fumadocs](https://fumadocs.vercel.app/)
- **Monorepo**: [Turborepo](https://turborepo.org/)
- **Realtime**: [Socket.io](https://socket.io/)
- **Background Jobs**: [Trigger.dev](https://trigger.dev/)
- **Remote Code Execution**: [E2B](https://www.e2b.dev/)
+- **Isolated Code Execution**: [isolated-vm](https://github.com/laverdet/isolated-vm)
## Contributing

View File

@@ -3602,6 +3602,29 @@ export function OpenRouterIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function MondayIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
{...props}
viewBox='0 -50 256 256'
xmlns='http://www.w3.org/2000/svg'
preserveAspectRatio='xMidYMid'
>
<g>
<path
d='M31.8458633,153.488694 C20.3244423,153.513586 9.68073708,147.337265 3.98575204,137.321731 C-1.62714067,127.367831 -1.29055839,115.129325 4.86093879,105.498969 L62.2342919,15.4033556 C68.2125882,5.54538256 79.032489,-0.333585033 90.5563073,0.0146553508 C102.071737,0.290611552 112.546041,6.74705604 117.96667,16.9106216 C123.315033,27.0238906 122.646488,39.1914174 116.240607,48.6847625 L58.9037201,138.780375 C52.9943022,147.988884 42.7873202,153.537154 31.8458633,153.488694 L31.8458633,153.488694 Z'
fill='#F62B54'
/>
<path
d='M130.25575,153.488484 C118.683837,153.488484 108.035731,147.301291 102.444261,137.358197 C96.8438154,127.431292 97.1804475,115.223704 103.319447,105.620522 L160.583402,15.7315506 C166.47539,5.73210989 177.327374,-0.284878136 188.929728,0.0146553508 C200.598885,0.269918151 211.174058,6.7973526 216.522421,17.0078646 C221.834319,27.2183766 221.056375,39.4588356 214.456008,48.9278699 L157.204209,138.816842 C151.313487,147.985468 141.153618,153.5168 130.25575,153.488484 Z'
fill='#FFCC00'
/>
<ellipse fill='#00CA72' cx='226.465527' cy='125.324379' rx='29.5375538' ry='28.9176274' />
</g>
</svg>
)
}
export function MongoDBIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 128 128'>

View File

@@ -1,6 +1,6 @@
'use client'
-import { useState } from 'react'
+import { useRef, useState } from 'react'
import { cn, getAssetUrl } from '@/lib/utils'
import { Lightbox } from './lightbox'
@@ -50,11 +50,14 @@ export function ActionImage({ src, alt, enableLightbox = true }: ActionImageProp
}
export function ActionVideo({ src, alt, enableLightbox = true }: ActionVideoProps) {
+const videoRef = useRef<HTMLVideoElement>(null)
+const startTimeRef = useRef(0)
const [isLightboxOpen, setIsLightboxOpen] = useState(false)
const resolvedSrc = getAssetUrl(src)
const handleClick = () => {
if (enableLightbox) {
+startTimeRef.current = videoRef.current?.currentTime ?? 0
setIsLightboxOpen(true)
}
}
@@ -62,6 +65,7 @@ export function ActionVideo({ src, alt, enableLightbox = true }: ActionVideoProp
return (
<>
<video
+ref={videoRef}
src={resolvedSrc}
autoPlay
loop
@@ -80,6 +84,7 @@ export function ActionVideo({ src, alt, enableLightbox = true }: ActionVideoProp
src={src}
alt={alt}
type='video'
+startTime={startTimeRef.current}
/>
)}
</>

View File

@@ -1,195 +0,0 @@
import { memo } from 'react'
const RX = '2.59574'
interface BlockRect {
opacity: number
width: string
height: string
fill: string
x?: string
y?: string
transform?: string
}
const RECTS = {
topRight: [
{ opacity: 1, x: '0', y: '0', width: '16.8626', height: '33.7252', fill: '#2ABBF8' },
{ opacity: 0.6, x: '0', y: '0', width: '85.3433', height: '16.8626', fill: '#2ABBF8' },
{ opacity: 1, x: '0', y: '0', width: '16.8626', height: '16.8626', fill: '#2ABBF8' },
{ opacity: 0.6, x: '34.2403', y: '0', width: '34.2403', height: '33.7252', fill: '#2ABBF8' },
{ opacity: 1, x: '34.2403', y: '0', width: '16.8626', height: '16.8626', fill: '#2ABBF8' },
{
opacity: 1,
x: '51.6188',
y: '16.8626',
width: '16.8626',
height: '16.8626',
fill: '#2ABBF8',
},
{ opacity: 1, x: '68.4812', y: '0', width: '54.6502', height: '16.8626', fill: '#00F701' },
{ opacity: 0.6, x: '106.268', y: '0', width: '34.2403', height: '33.7252', fill: '#00F701' },
{ opacity: 0.6, x: '106.268', y: '0', width: '51.103', height: '16.8626', fill: '#00F701' },
{
opacity: 1,
x: '123.6484',
y: '16.8626',
width: '16.8626',
height: '16.8626',
fill: '#00F701',
},
{ opacity: 0.6, x: '157.371', y: '0', width: '34.2403', height: '16.8626', fill: '#FFCC02' },
{ opacity: 1, x: '157.371', y: '0', width: '16.8626', height: '16.8626', fill: '#FFCC02' },
{ opacity: 0.6, x: '208.993', y: '0', width: '68.4805', height: '16.8626', fill: '#FA4EDF' },
{ opacity: 0.6, x: '209.137', y: '0', width: '16.8626', height: '33.7252', fill: '#FA4EDF' },
{ opacity: 0.6, x: '243.233', y: '0', width: '34.2403', height: '33.7252', fill: '#FA4EDF' },
{ opacity: 1, x: '243.233', y: '0', width: '16.8626', height: '16.8626', fill: '#FA4EDF' },
{ opacity: 0.6, x: '260.096', y: '0', width: '34.04', height: '16.8626', fill: '#FA4EDF' },
{
opacity: 1,
x: '260.611',
y: '16.8626',
width: '16.8626',
height: '16.8626',
fill: '#FA4EDF',
},
],
bottomLeft: [
{ opacity: 1, x: '0', y: '0', width: '16.8626', height: '33.7252', fill: '#2ABBF8' },
{ opacity: 0.6, x: '0', y: '0', width: '85.3433', height: '16.8626', fill: '#2ABBF8' },
{ opacity: 1, x: '0', y: '0', width: '16.8626', height: '16.8626', fill: '#2ABBF8' },
{ opacity: 0.6, x: '34.2403', y: '0', width: '34.2403', height: '33.7252', fill: '#2ABBF8' },
{ opacity: 1, x: '34.2403', y: '0', width: '16.8626', height: '16.8626', fill: '#2ABBF8' },
{
opacity: 1,
x: '51.6188',
y: '16.8626',
width: '16.8626',
height: '16.8626',
fill: '#2ABBF8',
},
{ opacity: 1, x: '68.4812', y: '0', width: '54.6502', height: '16.8626', fill: '#00F701' },
{ opacity: 0.6, x: '106.268', y: '0', width: '34.2403', height: '33.7252', fill: '#00F701' },
{ opacity: 0.6, x: '106.268', y: '0', width: '51.103', height: '16.8626', fill: '#00F701' },
{
opacity: 1,
x: '123.6484',
y: '16.8626',
width: '16.8626',
height: '16.8626',
fill: '#00F701',
},
],
bottomRight: [
{
opacity: 0.6,
width: '16.8626',
height: '33.726',
fill: '#FA4EDF',
transform: 'matrix(0 1 1 0 0 0)',
},
{
opacity: 0.6,
width: '34.241',
height: '16.8626',
fill: '#FA4EDF',
transform: 'matrix(0 1 1 0 16.891 0)',
},
{
opacity: 0.6,
width: '16.8626',
height: '68.482',
fill: '#FA4EDF',
transform: 'matrix(-1 0 0 1 33.739 16.888)',
},
{
opacity: 0.6,
width: '16.8626',
height: '33.726',
fill: '#FA4EDF',
transform: 'matrix(0 1 1 0 0 33.776)',
},
{
opacity: 1,
width: '16.8626',
height: '16.8626',
fill: '#FA4EDF',
transform: 'matrix(-1 0 0 1 33.739 34.272)',
},
{
opacity: 0.6,
width: '16.8626',
height: '34.24',
fill: '#2ABBF8',
transform: 'matrix(-1 0 0 1 33.787 68)',
},
{
opacity: 0.4,
width: '16.8626',
height: '16.8626',
fill: '#1A8FCC',
transform: 'matrix(-1 0 0 1 33.787 85)',
},
],
} as const satisfies Record<string, readonly BlockRect[]>
const GLOBAL_OPACITY = 0.55
const BlockGroup = memo(function BlockGroup({
width,
height,
viewBox,
rects,
}: {
width: number
height: number
viewBox: string
rects: readonly BlockRect[]
}) {
return (
<svg
width={width}
height={height}
viewBox={viewBox}
fill='none'
xmlns='http://www.w3.org/2000/svg'
className='h-auto w-full'
style={{ opacity: GLOBAL_OPACITY }}
>
{rects.map((r, i) => (
<rect
key={i}
x={r.x}
y={r.y}
width={r.width}
height={r.height}
rx={RX}
fill={r.fill}
transform={r.transform}
opacity={r.opacity}
/>
))}
</svg>
)
})
export function AnimatedBlocks() {
return (
<div
className='pointer-events-none fixed inset-0 z-0 hidden overflow-hidden lg:block'
aria-hidden='true'
>
<div className='absolute top-[93px] right-0 w-[calc(140px+10.76vw)] max-w-[295px]'>
<BlockGroup width={295} height={34} viewBox='0 0 295 34' rects={RECTS.topRight} />
</div>
<div className='-left-24 absolute bottom-0 w-[calc(140px+10.76vw)] max-w-[295px] rotate-180'>
<BlockGroup width={295} height={34} viewBox='0 0 295 34' rects={RECTS.bottomLeft} />
</div>
<div className='-bottom-2 absolute right-0 w-[calc(16px+1.25vw)] max-w-[34px]'>
<BlockGroup width={34} height={102} viewBox='0 0 34 102' rects={RECTS.bottomRight} />
</div>
</div>
)
}

View File

@@ -2,6 +2,7 @@
import { useState } from 'react'
import { ChevronRight } from 'lucide-react'
+import { cn } from '@/lib/utils'
interface FAQItem {
question: string
@@ -31,9 +32,10 @@ function FAQItemRow({
className='flex w-full cursor-pointer items-center gap-3 px-4 py-2.5 text-left font-[470] text-[0.875rem] text-[rgba(0,0,0,0.8)] transition-colors hover:bg-[rgba(0,0,0,0.02)] dark:text-[rgba(255,255,255,0.85)] dark:hover:bg-[rgba(255,255,255,0.03)]'
>
<ChevronRight
-className={`h-3.5 w-3.5 shrink-0 text-[rgba(0,0,0,0.3)] transition-transform duration-200 dark:text-[rgba(255,255,255,0.3)] ${
-  isOpen ? 'rotate-90' : ''
-}`}
+className={cn(
+  'h-3.5 w-3.5 shrink-0 text-[rgba(0,0,0,0.3)] transition-transform duration-200 dark:text-[rgba(255,255,255,0.3)]',
+  isOpen && 'rotate-90'
+)}
/>
{item.question}
</button>
@@ -81,11 +83,10 @@ export function FAQ({ items, title = 'Common Questions' }: FAQProps) {
{items.map((item, index) => (
<div
key={index}
-className={
-  index !== items.length - 1
-    ? 'border-[rgba(0,0,0,0.08)] border-b dark:border-[rgba(255,255,255,0.08)]'
-    : ''
-}
+className={cn(
+  index !== items.length - 1 &&
+    'border-[rgba(0,0,0,0.08)] border-b dark:border-[rgba(255,255,255,0.08)]'
+)}
>
<FAQItemRow
item={item}

View File

@@ -119,6 +119,7 @@ import {
MicrosoftSharepointIcon,
MicrosoftTeamsIcon,
MistralIcon,
+MondayIcon,
MongoDBIcon,
MySQLIcon,
Neo4jIcon,
@@ -327,6 +328,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
microsoft_teams: MicrosoftTeamsIcon,
mistral_parse: MistralIcon,
mistral_parse_v3: MistralIcon,
+monday: MondayIcon,
mongodb: MongoDBIcon,
mysql: MySQLIcon,
neo4j: Neo4jIcon,

View File

@@ -1,6 +1,5 @@
'use client'
-import { useEffect, useState } from 'react'
import { Check } from 'lucide-react'
import { useParams, usePathname, useRouter } from 'next/navigation'
import {
@@ -25,24 +24,9 @@ export function LanguageDropdown() {
const params = useParams()
const router = useRouter()
-const [currentLang, setCurrentLang] = useState(() => {
-  const langFromParams = params?.lang as string
-  return langFromParams && Object.keys(languages).includes(langFromParams) ? langFromParams : 'en'
-})
-useEffect(() => {
-  const langFromParams = params?.lang as string
-  if (langFromParams && Object.keys(languages).includes(langFromParams)) {
-    if (langFromParams !== currentLang) {
-      setCurrentLang(langFromParams)
-    }
-  } else {
-    if (currentLang !== 'en') {
-      setCurrentLang('en')
-    }
-  }
-}, [params])
+const langFromParams = params?.lang as string
+const currentLang =
+  langFromParams && Object.keys(languages).includes(langFromParams) ? langFromParams : 'en'
const handleLanguageChange = (locale: string) => {
if (locale === currentLang) return

View File

@@ -1,6 +1,6 @@
'use client'
-import { useEffect, useRef } from 'react'
+import { useEffect, useLayoutEffect, useRef } from 'react'
import { getAssetUrl } from '@/lib/utils'
interface LightboxProps {
@@ -9,10 +9,12 @@ interface LightboxProps {
src: string
alt: string
type: 'image' | 'video'
+startTime?: number
}
-export function Lightbox({ isOpen, onClose, src, alt, type }: LightboxProps) {
+export function Lightbox({ isOpen, onClose, src, alt, type, startTime }: LightboxProps) {
const overlayRef = useRef<HTMLDivElement>(null)
+const videoRef = useRef<HTMLVideoElement>(null)
useEffect(() => {
const handleKeyDown = (event: KeyboardEvent) => {
@@ -40,6 +42,12 @@ export function Lightbox({ isOpen, onClose, src, alt, type }: LightboxProps) {
}
}, [isOpen, onClose])
+useLayoutEffect(() => {
+  if (isOpen && type === 'video' && videoRef.current && startTime != null && startTime > 0) {
+    videoRef.current.currentTime = startTime
+  }
+}, [isOpen, startTime, type])
if (!isOpen) return null
return (
@@ -61,6 +69,7 @@ export function Lightbox({ isOpen, onClose, src, alt, type }: LightboxProps) {
/>
) : (
<video
+ref={videoRef}
src={getAssetUrl(src)}
autoPlay
loop

View File

@@ -1,7 +1,7 @@
'use client'
-import { useState } from 'react'
-import { getAssetUrl } from '@/lib/utils'
+import { useRef, useState } from 'react'
+import { cn, getAssetUrl } from '@/lib/utils'
import { Lightbox } from './lightbox'
interface VideoProps {
@@ -12,6 +12,8 @@ interface VideoProps {
muted?: boolean
playsInline?: boolean
enableLightbox?: boolean
+width?: number
+height?: number
}
export function Video({
@@ -22,11 +24,16 @@ export function Video({
muted = true,
playsInline = true,
enableLightbox = true,
+width,
+height,
}: VideoProps) {
+const videoRef = useRef<HTMLVideoElement>(null)
+const startTimeRef = useRef(0)
const [isLightboxOpen, setIsLightboxOpen] = useState(false)
const handleVideoClick = () => {
if (enableLightbox) {
+startTimeRef.current = videoRef.current?.currentTime ?? 0
setIsLightboxOpen(true)
}
}
@@ -34,11 +41,17 @@ export function Video({
return (
<>
<video
+ref={videoRef}
autoPlay={autoPlay}
loop={loop}
muted={muted}
playsInline={playsInline}
-className={`${className} ${enableLightbox ? 'cursor-pointer transition-opacity hover:opacity-95' : ''}`}
+width={width}
+height={height}
+className={cn(
+  className,
+  enableLightbox && 'cursor-pointer transition-opacity hover:opacity-95'
+)}
src={getAssetUrl(src)}
onClick={handleVideoClick}
/>
@@ -50,6 +63,7 @@ export function Video({
src={src}
alt={`Video: ${src}`}
type='video'
+startTime={startTimeRef.current}
/>
)}
</>

View File

@@ -5,6 +5,7 @@ description: Your per-workflow AI assistant for building and editing workflows.
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
+import { FAQ } from '@/components/ui/faq'
Copilot is the AI assistant built into every workflow editor. It is scoped to the workflow you have open — it reads the current structure, makes changes directly, and saves checkpoints so you can revert if needed.
@@ -15,7 +16,7 @@ For workspace-wide tasks (managing multiple workflows, running research, working
Copilot is a Sim-managed service. For self-hosted deployments, go to [sim.ai](https://sim.ai) → Settings → Copilot, generate a Copilot API key, then set `COPILOT_API_KEY` in your self-hosted environment.
</Callout>
-{/* TODO: Screenshot of the workflow editor with the Copilot panel open on the right side — showing a conversation with a workflow change applied. Ideally shows a message from the user, a response from Copilot, and the checkpoint icon visible on the message. */}
+<Video src="copilot/copilot.mp4" width={700} height={450} />
## What Copilot Can Do

View File

@@ -367,12 +367,12 @@ Sim uses a **base subscription + overage** billing model:
### Threshold Billing
-When on-demand is enabled and unbilled overage reaches $50, Sim automatically bills the full unbilled amount.
+When on-demand is enabled and unbilled overage reaches $100, Sim automatically bills the full unbilled amount.
**Example:**
-- Day 10: $70 overage → Bill $70 immediately
-- Day 15: Additional $35 usage ($105 total) → Already billed, no action
-- Day 20: Another $50 usage ($155 total, $85 unbilled) → Bill $85 immediately
+- Day 10: $120 overage → Bill $120 immediately
+- Day 15: Additional $60 usage ($180 total) → Already billed, no action
+- Day 20: Another $80 usage ($260 total, $140 unbilled) → Bill $140 immediately
This spreads large overage charges throughout the month instead of one large bill at period end.
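The arithmetic in the example can be written down as a tiny pure function. This is a hypothetical sketch, not Sim's billing code; the only state is the running total already billed:

```typescript
/**
 * Sketch of threshold billing: bill the full unbilled overage whenever it
 * reaches the threshold. Hypothetical helper, not Sim's actual billing code.
 */
function thresholdBill(
  totalOverage: number,
  alreadyBilled: number,
  threshold = 100
): { billNow: number; alreadyBilled: number } {
  const unbilled = totalOverage - alreadyBilled
  if (unbilled >= threshold) {
    return { billNow: unbilled, alreadyBilled: totalOverage }
  }
  return { billNow: 0, alreadyBilled }
}

// Replaying the worked example:
let s = thresholdBill(120, 0) // day 10: bills $120
s = thresholdBill(180, s.alreadyBilled) // day 15: $60 unbilled, below threshold, no action
s = thresholdBill(260, s.alreadyBilled) // day 20: $140 unbilled, bills $140
```

Note that each trigger bills the full unbilled amount, not just the portion above the threshold, which is why day 20 bills $140 rather than $100.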
@@ -480,5 +480,5 @@ import { FAQ } from '@/components/ui/faq'
{ question: "What happens when I exceed my plan's credit limit?", answer: "By default, your usage is capped at your plan's included credits and runs will stop. If you enable on-demand billing or manually raise your usage limit in Settings, you can continue running workflows and pay for the overage at the end of the billing period." },
{ question: "How does the 1.1x hosted model multiplier work?", answer: "When you use Sim's hosted API keys (instead of bringing your own), a 1.1x multiplier is applied to the base model pricing for Agent blocks. This covers infrastructure and API management costs. You can avoid this multiplier by using your own API keys via the BYOK feature." },
{ question: "Are there any free options for AI models?", answer: "Yes. If you run local models through Ollama or VLLM, there are no API costs for those model calls. You still pay the base run charge of 1 credit per run." },
-{ question: "When does threshold billing trigger?", answer: "When on-demand billing is enabled and your unbilled overage reaches $50, Sim automatically bills the full unbilled amount. This spreads large charges throughout the month instead of accumulating one large bill at period end." },
+{ question: "When does threshold billing trigger?", answer: "When on-demand billing is enabled and your unbilled overage reaches $100, Sim automatically bills the full unbilled amount. This spreads large charges throughout the month instead of accumulating one large bill at period end." },
]} />

View File

@@ -4,18 +4,17 @@ description: Upload, create, edit, and generate files — documents, presentatio
---
import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
+import { FAQ } from '@/components/ui/faq'
+Describe a document, presentation, image, or visualization and Mothership creates it — streaming the content live into the resource panel as it writes. Attach any file to your message and Mothership reads it, processes it, and saves it to your workspace.
+<Video src="mothership/files-pipeline-deals-summarizer.mp4" width={700} height={450} />
-{/* TODO: Screenshot of Mothership with the File Write subagent active — file content streaming into the resource panel in split or preview mode. Shows the live streaming preview experience as a document is being written. */}
-Describe a document, presentation, image, or visualization and Mothership creates it — streaming the content live into the resource panel as it writes. Attach any file to your message and Mothership reads it, processes it, and saves it to your workspace.
## Uploading Files to the Workspace
Attach any file directly to your Mothership message — drag it into the input, paste it, or click the attachment icon. Mothership reads the file as context and saves it to your workspace.
{/* TODO: Screenshot of the Mothership input area showing a file attached — e.g., a PDF or image thumbnail visible in the input before sending. */}
Use this to:
- Hand Mothership a document and ask it to process, summarize, or extract data from it
- Upload a CSV and have it create a table from it
@@ -48,6 +47,8 @@ Open a file using `@filename` or the **+** menu, then describe the change:
## Presentations
+<Image src="/static/mothership/pptx-example.png" alt="Mothership resource panel showing a generated Mothership-Use-Cases.pptx file open with the title slide and first use case slide visible" width={900} height={500} />
Mothership can generate `.pptx` files:
- "Create a pitch deck for Q3 review — 8 slides covering growth, retention, and roadmap"
@@ -58,8 +59,6 @@ Mothership can generate `.pptx` files:
The file is saved to your workspace and can be downloaded.
-{/* TODO: Screenshot of the resource panel with a generated .pptx file open or a download prompt visible, showing the file name and confirming it was saved to the workspace. */}
## Images
Mothership can generate images using AI, and can use an existing image as a reference to guide the output:
@@ -73,7 +72,7 @@ Mothership can generate images using AI, and can use an existing image as a refe
- Attach an existing image to your message, then describe what you want: "Generate a new version of this banner with a blue color scheme instead of green"
- "Create a variation of this diagram with the boxes rearranged horizontally [attach image]"
-{/* TODO: Screenshot of the resource panel showing a generated image open as a file tab — ideally with the image rendered in the viewer panel. */}
+<Image src="/static/mothership/image-example.png" alt="Mothership resource panel showing a generated hero image of a Mothership-branded blimp flying over San Francisco at golden hour, alongside the chat response linking the file" width={900} height={500} />
Generated images are saved as workspace files.
@@ -85,7 +84,7 @@ Mothership can generate charts and data visualizations from data you describe or
- "Create a line chart of token usage over the past 30 days from this data [paste data]"
- "Generate a pie chart showing the distribution of lead sources from the leads table"
-{/* TODO: Screenshot of a chart or visualization rendered in the resource panel as a file. */}
+<Image src="/static/mothership/chart-example.png" alt="Mothership resource panel showing a generated chart file with bar charts for backend 5xx errors and error rate over time" width={900} height={500} />
Visualizations are saved as files and rendered in the resource panel.
@@ -104,7 +103,7 @@ Results come back directly in the chat. Ask Mothership to save the output as a f
When a file opens in the resource panel, you can switch between three views:
-{/* TODO: Screenshot of the file viewer in the resource panel showing the mode selector (editor/split/preview), ideally in split mode with a markdown file showing raw content on the left and rendered preview on the right. */}
+<Video src="mothership/toggle-file-view.mp4" width={700} height={450} />
| Mode | What it shows |
|------|--------------|

View File

@@ -4,11 +4,12 @@ description: Your AI command center. Build and manage your entire workspace in n
---
import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
+import { FAQ } from '@/components/ui/faq'
+Describe what you want and Mothership handles it. Build a workflow, run research, generate a presentation, query a table, schedule a recurring job, send a Slack message — Mothership knows your entire workspace and takes action directly.
+<Video src="mothership/create-workflow.mp4" width={700} height={450} />
-{/* TODO: Screenshot or GIF of the full Mothership home page — chat pane on the left with a conversation in progress, resource panel on the right with a workflow or file tab open. Hero shot for the page. */}
-Describe what you want and Mothership handles it. Build a workflow, run research, generate a presentation, query a table, schedule a recurring job, send a Slack message — Mothership knows your entire workspace and takes action directly.
## What You Can Do
@@ -44,6 +45,8 @@ For complex tasks, Mothership delegates to specialized subagents automatically.
Bring any workspace object into the conversation via the **+** menu, `@`-mentions, or drag-and-drop from the sidebar. Mothership also opens resources automatically when it creates or modifies them.
<Video src="mothership/context-menu.mp4" width={700} height={450} />
{/* TODO: Screenshot of the resource panel with multiple tabs open — a workflow tab, a table tab, and a file tab — showing different resource types side by side. */}
| What to add | How it appears |
@@ -59,6 +62,8 @@ Bring any workspace object into the conversation via the **+** menu, `@`-mention
Mothership has two panes. On the left: the chat thread, where your messages and Mothership's responses appear. On the right: the resource panel, where workflows, tables, files, and knowledge bases open as tabs. The panel is resizable; tabs are draggable and closeable.
<Video src="mothership/split-view.mp4" width={700} height={450} />
<FAQ items={[
{ question: "How is Mothership different from Copilot?", answer: "Copilot is scoped to a single workflow — it helps you build and edit that workflow. Mothership has access to your entire workspace and can build workflows, manage data, run research, schedule jobs, take actions across integrations, and more." },
{ question: "What model does Mothership use?", answer: "Mothership always uses Claude Opus 4.6. There is no model selector." },

View File

@@ -4,11 +4,12 @@ description: Create, populate, and query knowledge bases from Mothership.
---
import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
import { FAQ } from '@/components/ui/faq'
Create a knowledge base, add documents to it, and query it in plain language — all through conversation. Knowledge bases you create in Mothership are immediately available to Agent blocks in any workflow.
<Video src="mothership/kb.mp4" width={700} height={450} />
## Creating Knowledge Bases

View File

@@ -4,11 +4,12 @@ description: Ask Mothership to research anything — it searches, reads, and syn
---
import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
import { FAQ } from '@/components/ui/faq'
Ask Mothership to research anything and it figures out the best approach — searching the web, reading specific pages, crawling sites, looking up technical docs. Just describe what you want to know.
<Video src="mothership/research-agent.mp4" width={700} height={450} />
## Asking Questions

View File

@@ -6,9 +6,9 @@ description: Create, query, and manage workspace tables from Mothership.
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Create a table from a description or a CSV, query it in plain language, add or update rows, and export the results — all through conversation. Tables open in the resource panel when created or referenced.
<Image src="/static/mothership/table-example.png" alt="Mothership resource panel showing the pipeline_deals table with company, deal_owner, stage, and amount columns, alongside a chat summary of total pipeline value and breakdown by stage" width={900} height={500} />
## Creating Tables

View File

@@ -5,16 +5,17 @@ description: Schedule recurring jobs, take immediate actions, connect integratio
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
import { FAQ } from '@/components/ui/faq'
<Video src="mothership/job-create.mp4" width={700} height={450} />
Mothership can act on your behalf right now — send a message, create an issue, call an API — or on a schedule, running a prompt automatically every hour, day, or week. It can also connect integrations, set environment variables, add MCP servers, and create custom tools.
## Scheduled Jobs
A scheduled job is a Mothership task that runs on a cron schedule. On each run, Mothership reads the current workspace state and executes the job's prompt as if you had just sent it.
{/* TODO: Screenshot of Mothership chat confirming a scheduled job was created — showing the job name, schedule, and what it will do. If there's a jobs list view in the sidebar, include that as a second screenshot here. */}
### Creating a Job
Describe the recurring task and how often it should run:

View File

@@ -3,13 +3,13 @@ title: Workflows
description: Create, edit, run, debug, deploy, and organize workflows from Mothership.
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
import { FAQ } from '@/components/ui/faq'
Describe a workflow and Mothership builds it. Reference an existing one by name and it edits it. No canvas navigation required — every change appears in the resource panel in real time.
<Video src="mothership/create-workflow.mp4" width={700} height={450} />
## Creating Workflows
@@ -33,7 +33,7 @@ Open an existing workflow with `@workflow-name` or the **+** menu, then describe
## Running Workflows
<Video src="mothership/run-workflow.mp4" width={700} height={450} />
Ask Mothership to run a workflow and it handles the execution:
@@ -110,10 +110,6 @@ Variables set this way are available via `<variable.VARIABLE_NAME>` syntax insid
- "Delete the old_api_prototype workflow"
- "Delete all workflows in the deprecated folder"
<Callout type="warn">
Workflow deletion is permanent. Deployed versions are also removed. There is no recycle bin.
</Callout>
<FAQ items={[
{ question: "Can Mothership edit a workflow while it's deployed?", answer: "Yes. Editing a workflow does not affect the live deployment. The deployed version is a snapshot — you need to ask Mothership to redeploy to push changes to production." },
{ question: "Can I run a workflow with specific inputs from Mothership?", answer: "Yes. Describe the inputs in your message and Mothership passes them to the workflow's start block." },

View File

@@ -115,6 +115,7 @@
"microsoft_planner",
"microsoft_teams",
"mistral_parse",
"monday",
"mongodb",
"mysql",
"neo4j",

View File

@@ -0,0 +1,387 @@
---
title: Monday
description: Manage Monday.com boards, items, and groups
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="monday"
color="#FFFFFF"
/>
## Usage Instructions
Integrate with Monday.com to list boards, get board details, fetch and search items, create and update items, archive or delete items, create subitems, move items between groups, add updates, and create groups.
## Tools
### `monday_list_boards`
List boards from your Monday.com account
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `limit` | number | No | Maximum number of boards to return \(default 25, max 500\) |
| `page` | number | No | Page number for pagination \(starts at 1\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `boards` | array | List of Monday.com boards |
| ↳ `id` | string | Board ID |
| ↳ `name` | string | Board name |
| ↳ `description` | string | Board description |
| ↳ `state` | string | Board state \(active, archived, deleted\) |
| ↳ `boardKind` | string | Board kind \(public, private, share\) |
| ↳ `itemsCount` | number | Number of items on the board |
| ↳ `url` | string | Board URL |
| ↳ `updatedAt` | string | Last updated timestamp |
| `count` | number | Number of boards returned |
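Putting `limit` and `page` together, paging through every board can be sketched as below. This is only a sketch: `call_tool` is a hypothetical stand-in for however your runtime invokes `monday_list_boards`, not part of the integration itself.

```python
from typing import Callable, Iterator


def iter_boards(call_tool: Callable[[dict], dict], page_size: int = 25) -> Iterator[dict]:
    """Page through monday_list_boards results.

    Requests `page_size` boards per call (default 25, max 500 per the
    tool's `limit` parameter) and stops once a short page comes back.
    """
    page = 1  # `page` starts at 1 per the tool's input table
    while True:
        result = call_tool({"limit": page_size, "page": page})
        yield from result["boards"]
        if result["count"] < page_size:  # short page means we've reached the end
            return
        page += 1
```

The stop condition relies on `count` being the number of boards actually returned, as documented in the output table.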
### `monday_get_board`
Get a specific Monday.com board with its groups and columns
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `boardId` | string | Yes | The ID of the board to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `board` | json | Board details |
| ↳ `id` | string | Board ID |
| ↳ `name` | string | Board name |
| ↳ `description` | string | Board description |
| ↳ `state` | string | Board state |
| ↳ `boardKind` | string | Board kind \(public, private, share\) |
| ↳ `itemsCount` | number | Number of items |
| ↳ `url` | string | Board URL |
| ↳ `updatedAt` | string | Last updated timestamp |
| `groups` | array | Groups on the board |
| ↳ `id` | string | Group ID |
| ↳ `title` | string | Group title |
| ↳ `color` | string | Group color \(hex\) |
| ↳ `archived` | boolean | Whether the group is archived |
| ↳ `deleted` | boolean | Whether the group is deleted |
| ↳ `position` | string | Group position |
| `columns` | array | Columns on the board |
| ↳ `id` | string | Column ID |
| ↳ `title` | string | Column title |
| ↳ `type` | string | Column type |
### `monday_get_item`
Get a specific item by ID from Monday.com
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `itemId` | string | Yes | The ID of the item to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `item` | json | The requested item |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |
### `monday_get_items`
Get items from a Monday.com board
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `boardId` | string | Yes | The ID of the board to get items from |
| `groupId` | string | No | Filter items by group ID |
| `limit` | number | No | Maximum number of items to return \(default 25, max 500\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `items` | array | List of items from the board |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state \(active, archived, deleted\) |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values for the item |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Human-readable text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |
| `count` | number | Number of items returned |
### `monday_search_items`
Search for items on a Monday.com board by column values
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `boardId` | string | Yes | The ID of the board to search |
| `columns` | string | Yes | JSON array of column filters, e.g. \[\{"column_id":"status","column_values":\["Done"\]\}\] |
| `limit` | number | No | Maximum number of items to return \(default 25, max 500\) |
| `cursor` | string | No | Pagination cursor from a previous search response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `items` | array | Matching items |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |
| `count` | number | Number of items returned |
| `cursor` | string | Pagination cursor for fetching the next page |
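The `columns` parameter is a JSON-encoded array of `{"column_id": ..., "column_values": [...]}` filters. A minimal helper for building that string (the helper name is illustrative, not part of the integration):

```python
import json


def build_column_filters(filters: dict[str, list[str]]) -> str:
    """Serialize {column_id: accepted_values} into the JSON array string
    that monday_search_items expects for its `columns` parameter."""
    return json.dumps(
        [{"column_id": col, "column_values": values} for col, values in filters.items()]
    )
```

For example, `build_column_filters({"status": ["Done"]})` produces a string equivalent to the `[{"column_id":"status","column_values":["Done"]}]` shape shown in the input table.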
### `monday_create_item`
Create a new item on a Monday.com board
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `boardId` | string | Yes | The ID of the board to create the item on |
| `itemName` | string | Yes | The name of the new item |
| `groupId` | string | No | The group ID to create the item in |
| `columnValues` | string | No | JSON string of column values to set \(e.g., \{"status":"Done","date":"2024-01-01"\}\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `item` | json | The created item |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |
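The `columnValues` parameter here (and in `monday_update_item` below) is a JSON object serialized to a string. A small sketch of building it, assuming string-valued columns as in the documented example:

```python
import json


def build_column_values(**values: str) -> str:
    """Serialize keyword arguments into the JSON object string that
    monday_create_item and monday_update_item accept as `columnValues`."""
    return json.dumps(values)
```

`build_column_values(status="Done", date="2024-01-01")` yields a string equivalent to the `{"status":"Done","date":"2024-01-01"}` example in the input table. Note that some Monday.com column types expect nested objects rather than plain strings; consult the column type before serializing.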
### `monday_update_item`
Update column values of an item on a Monday.com board
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `boardId` | string | Yes | The ID of the board containing the item |
| `itemId` | string | Yes | The ID of the item to update |
| `columnValues` | string | Yes | JSON string of column values to update \(e.g., \{"status":"Done","date":"2024-01-01"\}\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `item` | json | The updated item |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |
### `monday_delete_item`
Delete an item from a Monday.com board
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `itemId` | string | Yes | The ID of the item to delete |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | The ID of the deleted item |
### `monday_archive_item`
Archive an item on a Monday.com board
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `itemId` | string | Yes | The ID of the item to archive |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | The ID of the archived item |
### `monday_move_item_to_group`
Move an item to a different group on a Monday.com board
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `itemId` | string | Yes | The ID of the item to move |
| `groupId` | string | Yes | The ID of the target group |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `item` | json | The moved item with updated group |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |
### `monday_create_subitem`
Create a subitem under a parent item on Monday.com
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `parentItemId` | string | Yes | The ID of the parent item |
| `itemName` | string | Yes | The name of the new subitem |
| `columnValues` | string | No | JSON string of column values to set |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `item` | json | The created subitem |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |
### `monday_create_update`
Add an update (comment) to a Monday.com item
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `itemId` | string | Yes | The ID of the item to add the update to |
| `body` | string | Yes | The update text content \(supports HTML\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `update` | json | The created update |
| ↳ `id` | string | Update ID |
| ↳ `body` | string | Update body \(HTML\) |
| ↳ `textBody` | string | Plain text body |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `creatorId` | string | Creator user ID |
| ↳ `itemId` | string | Item ID |
### `monday_create_group`
Create a new group on a Monday.com board
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `boardId` | string | Yes | The ID of the board to create the group on |
| `groupName` | string | Yes | The name of the new group \(max 255 characters\) |
| `groupColor` | string | No | The group color as a hex code \(e.g., "#ff642e"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `group` | json | The created group |
| ↳ `id` | string | Group ID |
| ↳ `title` | string | Group title |
| ↳ `color` | string | Group color \(hex\) |
| ↳ `archived` | boolean | Whether archived |
| ↳ `deleted` | boolean | Whether deleted |
| ↳ `position` | string | Group position |

View File

@@ -10,7 +10,7 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
color="#E0E0E0"
/>
Confluence provides 23 triggers for automating workflows based on events.
## Triggers
@@ -98,6 +98,49 @@ Trigger workflow when an attachment is removed in Confluence
| `files` | file[] | Attachment file content downloaded from Confluence \(if includeFileContent is enabled with credentials\) |
---
### Confluence Attachment Updated
Trigger workflow when an attachment is updated in Confluence
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |
| `confluenceEmail` | string | No | Your Atlassian account email. Required together with API token to download attachment files. |
| `confluenceApiToken` | string | No | API token from https://id.atlassian.com/manage-profile/security/api-tokens. Required to download attachment file content. |
| `includeFileContent` | boolean | No | Download and include actual file content from attachments. Requires email, API token, and domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `id` | number | Content ID |
| `title` | string | Content title |
| `contentType` | string | Content type \(page, blogpost, comment, attachment\) |
| `version` | number | Version number |
| `spaceKey` | string | Space key the content belongs to |
| `creatorAccountId` | string | Account ID of the creator |
| `lastModifierAccountId` | string | Account ID of the last modifier |
| `self` | string | URL link to the content |
| `creationDate` | number | Creation timestamp \(Unix epoch milliseconds\) |
| `modificationDate` | number | Last modification timestamp \(Unix epoch milliseconds\) |
| `attachment` | object | Attachment details |
| ↳ `mediaType` | string | MIME type of the attachment |
| ↳ `fileSize` | number | File size in bytes |
| ↳ `parent` | object | Parent page or blog post of the attachment |
| ↳ `id` | number | Container page/blog ID |
| ↳ `title` | string | Container page/blog title |
| ↳ `contentType` | string | Container content type |
| `files` | file[] | Attachment file content downloaded from Confluence \(if includeFileContent is enabled with credentials\) |
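Receiver-side validation with `webhookSecret` can be sketched as follows. This assumes the common `sha256=<hexdigest>` HMAC header convention; the actual header name and digest scheme your Confluence app sends are assumptions to verify against your webhook configuration:

```python
import hashlib
import hmac


def verify_webhook_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Check an incoming webhook delivery against the configured webhookSecret.

    Computes HMAC-SHA256 over the raw request body and compares it to the
    signature header in constant time. The "sha256=" prefix is an assumed
    convention, not confirmed by these docs.
    """
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Always compare with `hmac.compare_digest` rather than `==` to avoid timing side channels.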
---
### Confluence Blog Post Created
@@ -142,6 +185,28 @@ Trigger workflow when a blog post is removed in Confluence
| `accountType` | string | Account type \(e.g., customer\) |
---
### Confluence Blog Post Restored
Trigger workflow when a blog post is restored from trash in Confluence
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
---
### Confluence Blog Post Updated
@@ -242,6 +307,45 @@ Trigger workflow when a comment is removed in Confluence
| ↳ `self` | string | URL link to the parent content |
---
### Confluence Comment Updated
Trigger workflow when a comment is updated in Confluence
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `id` | number | Content ID |
| `title` | string | Content title |
| `contentType` | string | Content type \(page, blogpost, comment, attachment\) |
| `version` | number | Version number |
| `spaceKey` | string | Space key the content belongs to |
| `creatorAccountId` | string | Account ID of the creator |
| `lastModifierAccountId` | string | Account ID of the last modifier |
| `self` | string | URL link to the content |
| `creationDate` | number | Creation timestamp \(Unix epoch milliseconds\) |
| `modificationDate` | number | Last modification timestamp \(Unix epoch milliseconds\) |
| `comment` | object | Comment details |
| ↳ `parent` | object | Parent content of the comment |
| ↳ `id` | number | Parent page/blog ID |
| ↳ `title` | string | Parent page/blog title |
| ↳ `contentType` | string | Parent content type \(page or blogpost\) |
| ↳ `spaceKey` | string | Space key of the parent |
| ↳ `self` | string | URL link to the parent content |
---
### Confluence Label Added
@@ -346,6 +450,40 @@ Trigger workflow when a page is moved in Confluence
| `accountType` | string | Account type \(e.g., customer\) |
---
### Confluence Page Permissions Updated
Trigger workflow when page permissions are changed in Confluence
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `id` | number | Content ID |
| `title` | string | Content title |
| `contentType` | string | Content type \(page, blogpost, comment, attachment\) |
| `version` | number | Version number |
| `spaceKey` | string | Space key the content belongs to |
| `creatorAccountId` | string | Account ID of the creator |
| `lastModifierAccountId` | string | Account ID of the last modifier |
| `self` | string | URL link to the content |
| `creationDate` | number | Creation timestamp \(Unix epoch milliseconds\) |
| `modificationDate` | number | Last modification timestamp \(Unix epoch milliseconds\) |
| `page` | object | Page details |
| ↳ `permissions` | json | Updated permissions object for the page |
---
### Confluence Page Removed
@@ -368,6 +506,28 @@ Trigger workflow when a page is removed or trashed in Confluence
| `accountType` | string | Account type \(e.g., customer\) |
---
### Confluence Page Restored
Trigger workflow when a page is restored from trash in Confluence
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
---
### Confluence Page Updated
@@ -416,6 +576,32 @@ Trigger workflow when a new space is created in Confluence
| ↳ `self` | string | URL link to the space |
---
### Confluence Space Removed
Trigger workflow when a space is removed in Confluence
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `space` | object | Space details |
| ↳ `key` | string | Space key |
| ↳ `name` | string | Space name |
| ↳ `self` | string | URL link to the space |
---
### Confluence Space Updated
@@ -442,6 +628,35 @@ Trigger workflow when a space is updated in Confluence
| ↳ `self` | string | URL link to the space |
---
### Confluence User Created
Trigger workflow when a new user is added to Confluence
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `user` | object | Details of the newly created user |
| ↳ `accountId` | string | Account ID of the new user |
| ↳ `accountType` | string | Account type \(e.g., atlassian, app\) |
| ↳ `displayName` | string | Display name of the user |
| ↳ `emailAddress` | string | Email address of the user \(may not be available due to GDPR/privacy settings\) |
| ↳ `publicName` | string | Public name of the user |
| ↳ `self` | string | URL link to the user profile |
---
### Confluence Webhook (All Events)
@@ -472,5 +687,6 @@ Trigger workflow on any Confluence webhook event
| `space` | json | Space object \(present in space events\) |
| `label` | json | Label object \(present in label events\) |
| `content` | json | Content object \(present in label events\) |
| `user` | json | User object \(present in user events\) |
| `files` | file[] | Attachment file content \(present in attachment events when includeFileContent is enabled\) |

View File

@@ -10,10 +10,182 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
color="#E0E0E0"
/>
Jira provides 15 triggers for automating workflows based on events.
## Triggers
### Jira Comment Deleted
Trigger workflow when a comment is deleted from a Jira issue
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
| `jqlFilter` | string | No | Filter which comment deletions trigger this workflow using JQL |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, comment_created, worklog_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Jira issue key \(e.g., PROJ-123\) |
| ↳ `self` | string | REST API URL for this issue |
| ↳ `fields` | object | fields output from the tool |
| ↳ `votes` | json | Votes on this issue |
| ↳ `labels` | array | Array of labels applied to this issue |
| ↳ `status` | object | status output from the tool |
| ↳ `name` | string | Status name |
| ↳ `id` | string | Status ID |
| ↳ `statusCategory` | json | Status category information |
| ↳ `created` | string | Issue creation date \(ISO format\) |
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `duedate` | string | Due date for the issue |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `summary` | string | Issue summary/title |
| ↳ `description` | json | Issue description in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `watches` | json | Watchers information |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
| ↳ `progress` | json | Progress tracking information |
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `security` | string | Security level |
| ↳ `subtasks` | array | Array of subtask objects |
| ↳ `versions` | array | Array of affected versions |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name |
| ↳ `id` | string | Issue type ID |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| ↳ `components` | array | Array of component objects associated with this issue |
| ↳ `fixVersions` | array | Array of fix version objects for this issue |
| `comment` | object | comment output from the tool |
| ↳ `id` | string | Comment ID |
| ↳ `body` | json | Comment body in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `author` | object | author output from the tool |
| ↳ `displayName` | string | Comment author display name |
| ↳ `accountId` | string | Comment author account ID |
| ↳ `emailAddress` | string | Comment author email address |
| ↳ `updateAuthor` | object | updateAuthor output from the tool |
| ↳ `displayName` | string | Display name of the user who last updated the comment |
| ↳ `accountId` | string | Account ID of the user who last updated the comment |
| ↳ `created` | string | Comment creation date \(ISO format\) |
| ↳ `updated` | string | Comment last updated date \(ISO format\) |
| ↳ `self` | string | REST API URL for this comment |
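As the table notes, the comment `body` arrives as an Atlassian Document Format (ADF) tree on Jira Cloud but may be a plain string on Jira Server. A small sketch that handles both shapes by walking ADF's nested `content` nodes and concatenating the text leaves:

```typescript
// Minimal ADF node shape: only the fields this walker reads.
type AdfNode = { type?: string; text?: string; content?: AdfNode[] };

// Flatten a comment body to plain text. Strings (Jira Server) pass
// through unchanged; ADF trees (Jira Cloud) are walked depth-first.
function commentBodyToText(body: AdfNode | string | null | undefined): string {
  if (body == null) return "";
  if (typeof body === "string") return body; // Jira Server plain-text body
  const parts: string[] = [];
  const walk = (node: AdfNode): void => {
    if (typeof node.text === "string") parts.push(node.text);
    for (const child of node.content ?? []) walk(child);
  };
  walk(body);
  return parts.join("");
}
```

This joins text nodes with no separator, so a production version would likely insert newlines between block-level nodes (paragraphs, list items) to preserve readability.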
---
### Jira Comment Updated
Trigger workflow when a comment is updated on a Jira issue
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
| `jqlFilter` | string | No | Filter which comment updates trigger this workflow using JQL |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, comment_created, worklog_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Jira issue key \(e.g., PROJ-123\) |
| ↳ `self` | string | REST API URL for this issue |
| ↳ `fields` | object | fields output from the tool |
| ↳ `votes` | json | Votes on this issue |
| ↳ `labels` | array | Array of labels applied to this issue |
| ↳ `status` | object | status output from the tool |
| ↳ `name` | string | Status name |
| ↳ `id` | string | Status ID |
| ↳ `statusCategory` | json | Status category information |
| ↳ `created` | string | Issue creation date \(ISO format\) |
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `duedate` | string | Due date for the issue |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `summary` | string | Issue summary/title |
| ↳ `description` | json | Issue description in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `watches` | json | Watchers information |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
| ↳ `progress` | json | Progress tracking information |
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `security` | string | Security level |
| ↳ `subtasks` | array | Array of subtask objects |
| ↳ `versions` | array | Array of affected versions |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name |
| ↳ `id` | string | Issue type ID |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| ↳ `components` | array | Array of component objects associated with this issue |
| ↳ `fixVersions` | array | Array of fix version objects for this issue |
| `comment` | object | comment output from the tool |
| ↳ `id` | string | Comment ID |
| ↳ `body` | json | Comment body in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `author` | object | author output from the tool |
| ↳ `displayName` | string | Comment author display name |
| ↳ `accountId` | string | Comment author account ID |
| ↳ `emailAddress` | string | Comment author email address |
| ↳ `updateAuthor` | object | updateAuthor output from the tool |
| ↳ `displayName` | string | Display name of the user who last updated the comment |
| ↳ `accountId` | string | Account ID of the user who last updated the comment |
| ↳ `created` | string | Comment creation date \(ISO format\) |
| ↳ `updated` | string | Comment last updated date \(ISO format\) |
| ↳ `self` | string | REST API URL for this comment |
---
### Jira Issue Commented
Trigger workflow when a comment is added to a Jira issue
@@ -31,6 +203,10 @@ Trigger workflow when a comment is added to a Jira issue
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, comment_created, worklog_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Jira issue key \(e.g., PROJ-123\) |
@@ -46,19 +222,20 @@ Trigger workflow when a comment is added to a Jira issue
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `duedate` | string | Due date for the issue |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `summary` | string | Issue summary/title |
| ↳ `description` | json | Issue description in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `watches` | json | Watchers information |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
@@ -66,22 +243,31 @@ Trigger workflow when a comment is added to a Jira issue
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `security` | string | Security level |
| ↳ `subtasks` | array | Array of subtask objects |
| ↳ `versions` | array | Array of affected versions |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name |
| ↳ `id` | string | Issue type ID |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| ↳ `components` | array | Array of component objects associated with this issue |
| ↳ `fixVersions` | array | Array of fix version objects for this issue |
| `comment` | object | comment output from the tool |
| ↳ `id` | string | Comment ID |
| ↳ `body` | json | Comment body in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `author` | object | author output from the tool |
| ↳ `displayName` | string | Comment author display name |
| ↳ `accountId` | string | Comment author account ID |
| ↳ `emailAddress` | string | Comment author email address |
| ↳ `updateAuthor` | object | updateAuthor output from the tool |
| ↳ `displayName` | string | Display name of the user who last updated the comment |
| ↳ `accountId` | string | Account ID of the user who last updated the comment |
| ↳ `created` | string | Comment creation date \(ISO format\) |
| ↳ `updated` | string | Comment last updated date \(ISO format\) |
| ↳ `self` | string | REST API URL for this comment |
---
@@ -103,6 +289,10 @@ Trigger workflow when a new issue is created in Jira
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, comment_created, worklog_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Jira issue key \(e.g., PROJ-123\) |
@@ -118,19 +308,20 @@ Trigger workflow when a new issue is created in Jira
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `duedate` | string | Due date for the issue |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `summary` | string | Issue summary/title |
| ↳ `description` | json | Issue description in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `watches` | json | Watchers information |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
@@ -138,13 +329,18 @@ Trigger workflow when a new issue is created in Jira
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `security` | string | Security level |
| ↳ `subtasks` | array | Array of subtask objects |
| ↳ `versions` | array | Array of affected versions |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name |
| ↳ `id` | string | Issue type ID |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| ↳ `components` | array | Array of component objects associated with this issue |
| ↳ `fixVersions` | array | Array of fix version objects for this issue |
| `issue_event_type_name` | string | Issue event type name from Jira \(only present in issue events\) |
@@ -167,6 +363,10 @@ Trigger workflow when an issue is deleted in Jira
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, comment_created, worklog_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Jira issue key \(e.g., PROJ-123\) |
@@ -182,19 +382,20 @@ Trigger workflow when an issue is deleted in Jira
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `duedate` | string | Due date for the issue |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `summary` | string | Issue summary/title |
| ↳ `description` | json | Issue description in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `watches` | json | Watchers information |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
@@ -202,13 +403,18 @@ Trigger workflow when an issue is deleted in Jira
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `security` | string | Security level |
| ↳ `subtasks` | array | Array of subtask objects |
| ↳ `versions` | array | Array of affected versions |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name |
| ↳ `id` | string | Issue type ID |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| ↳ `components` | array | Array of component objects associated with this issue |
| ↳ `fixVersions` | array | Array of fix version objects for this issue |
| `issue_event_type_name` | string | Issue event type name from Jira \(only present in issue events\) |
@@ -232,6 +438,10 @@ Trigger workflow when an issue is updated in Jira
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, comment_created, worklog_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Jira issue key \(e.g., PROJ-123\) |
@@ -247,19 +457,20 @@ Trigger workflow when an issue is updated in Jira
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `duedate` | string | Due date for the issue |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `summary` | string | Issue summary/title |
| ↳ `description` | json | Issue description in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `watches` | json | Watchers information |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
@@ -267,18 +478,194 @@ Trigger workflow when an issue is updated in Jira
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `security` | string | Security level |
| ↳ `subtasks` | array | Array of subtask objects |
| ↳ `versions` | array | Array of affected versions |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name |
| ↳ `id` | string | Issue type ID |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| ↳ `components` | array | Array of component objects associated with this issue |
| ↳ `fixVersions` | array | Array of fix version objects for this issue |
| `issue_event_type_name` | string | Issue event type name from Jira \(only present in issue events\) |
| `changelog` | object | changelog output from the tool |
| ↳ `id` | string | Changelog ID |
---
### Jira Project Created
Trigger workflow when a project is created in Jira
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(project_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `project` | object | project output from the tool |
| ↳ `id` | string | Project ID |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `self` | string | REST API URL for this project |
| ↳ `projectTypeKey` | string | Project type \(e.g., software, business\) |
| ↳ `lead` | object | lead output from the tool |
| ↳ `displayName` | string | Project lead display name |
| ↳ `accountId` | string | Project lead account ID |
---
### Jira Sprint Closed
Trigger workflow when a sprint is closed in Jira
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., sprint_started, sprint_closed\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `sprint` | object | sprint output from the tool |
| ↳ `id` | number | Sprint ID |
| ↳ `self` | string | REST API URL for this sprint |
| ↳ `state` | string | Sprint state \(future, active, closed\) |
| ↳ `name` | string | Sprint name |
| ↳ `startDate` | string | Sprint start date \(ISO format\) |
| ↳ `endDate` | string | Sprint end date \(ISO format\) |
| ↳ `completeDate` | string | Sprint completion date \(ISO format\) |
| ↳ `originBoardId` | number | Board ID the sprint belongs to |
| ↳ `goal` | string | Sprint goal |
| ↳ `createdDate` | string | Sprint creation date \(ISO format\) |
---
### Jira Sprint Created
Trigger workflow when a sprint is created in Jira
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., sprint_started, sprint_closed\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `sprint` | object | sprint output from the tool |
| ↳ `id` | number | Sprint ID |
| ↳ `self` | string | REST API URL for this sprint |
| ↳ `state` | string | Sprint state \(future, active, closed\) |
| ↳ `name` | string | Sprint name |
| ↳ `startDate` | string | Sprint start date \(ISO format\) |
| ↳ `endDate` | string | Sprint end date \(ISO format\) |
| ↳ `completeDate` | string | Sprint completion date \(ISO format\) |
| ↳ `originBoardId` | number | Board ID the sprint belongs to |
| ↳ `goal` | string | Sprint goal |
| ↳ `createdDate` | string | Sprint creation date \(ISO format\) |
---
### Jira Sprint Started
Trigger workflow when a sprint is started in Jira
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., sprint_started, sprint_closed\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `sprint` | object | sprint output from the tool |
| ↳ `id` | number | Sprint ID |
| ↳ `self` | string | REST API URL for this sprint |
| ↳ `state` | string | Sprint state \(future, active, closed\) |
| ↳ `name` | string | Sprint name |
| ↳ `startDate` | string | Sprint start date \(ISO format\) |
| ↳ `endDate` | string | Sprint end date \(ISO format\) |
| ↳ `completeDate` | string | Sprint completion date \(ISO format\) |
| ↳ `originBoardId` | number | Board ID the sprint belongs to |
| ↳ `goal` | string | Sprint goal |
| ↳ `createdDate` | string | Sprint creation date \(ISO format\) |
---
### Jira Version Released
Trigger workflow when a version is released in Jira
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(jira:version_released\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `version` | object | version output from the tool |
| ↳ `id` | string | Version ID |
| ↳ `name` | string | Version name |
| ↳ `self` | string | REST API URL for this version |
| ↳ `released` | boolean | Whether the version is released |
| ↳ `releaseDate` | string | Release date \(ISO format\) |
| ↳ `projectId` | number | Project ID the version belongs to |
| ↳ `description` | string | Version description |
| ↳ `archived` | boolean | Whether the version is archived |
---
### Jira Webhook (All Events)
@@ -337,6 +724,10 @@ Trigger workflow when time is logged on a Jira issue
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, comment_created, worklog_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Jira issue key \(e.g., PROJ-123\) |
@@ -352,19 +743,20 @@ Trigger workflow when time is logged on a Jira issue
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `duedate` | string | Due date for the issue |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `summary` | string | Issue summary/title |
| ↳ `description` | json | Issue description in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `watches` | json | Watchers information |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
@@ -372,21 +764,213 @@ Trigger workflow when time is logged on a Jira issue
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `security` | string | Security level |
| ↳ `subtasks` | array | Array of subtask objects |
| ↳ `versions` | array | Array of affected versions |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name |
| ↳ `id` | string | Issue type ID |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| ↳ `components` | array | Array of component objects associated with this issue |
| ↳ `fixVersions` | array | Array of fix version objects for this issue |
| `worklog` | object | worklog output from the tool |
| ↳ `id` | string | Worklog entry ID |
| ↳ `author` | object | author output from the tool |
| ↳ `displayName` | string | Worklog author display name |
| ↳ `accountId` | string | Worklog author account ID |
| ↳ `emailAddress` | string | Worklog author email address |
| ↳ `updateAuthor` | object | updateAuthor output from the tool |
| ↳ `displayName` | string | Display name of the user who last updated the worklog |
| ↳ `accountId` | string | Account ID of the user who last updated the worklog |
| ↳ `timeSpent` | string | Time spent \(e.g., "2h 30m"\) |
| ↳ `timeSpentSeconds` | number | Time spent in seconds |
| ↳ `comment` | string | Worklog comment/description |
| ↳ `started` | string | When the work was started \(ISO format\) |
| ↳ `created` | string | When the worklog entry was created \(ISO format\) |
| ↳ `updated` | string | When the worklog entry was last updated \(ISO format\) |
| ↳ `issueId` | string | ID of the issue this worklog belongs to |
| ↳ `self` | string | REST API URL for this worklog entry |
---
### Jira Worklog Deleted
Trigger workflow when a worklog entry is deleted from a Jira issue
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
| `jqlFilter` | string | No | Filter which worklog deletions trigger this workflow using JQL |
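When `webhookSecret` is set, each delivery can be checked against an HMAC-SHA256 signature before the workflow runs. A minimal verification sketch — the `sha256=<hex>` signature format and header handling are assumptions; match them to how your webhook deliveries are actually signed:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto"

// Compare an incoming signature header against the HMAC-SHA256 of the raw
// request body. The "sha256=<hex>" format is an assumption — adjust it to
// whatever your webhook registration actually sends.
function verifySignature(rawBody: string, secret: string, signatureHeader: string): boolean {
  const expected = "sha256=" + createHmac("sha256", secret).update(rawBody).digest("hex")
  const a = Buffer.from(expected)
  const b = Buffer.from(signatureHeader)
  // timingSafeEqual throws on length mismatch, so check length first.
  return a.length === b.length && timingSafeEqual(a, b)
}
```

Use a constant-time comparison (as above) rather than `===` so signature checks do not leak timing information.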
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, comment_created, worklog_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Jira issue key \(e.g., PROJ-123\) |
| ↳ `self` | string | REST API URL for this issue |
| ↳ `fields` | object | fields output from the tool |
| ↳ `votes` | json | Votes on this issue |
| ↳ `labels` | array | Array of labels applied to this issue |
| ↳ `status` | object | status output from the tool |
| ↳ `name` | string | Status name |
| ↳ `id` | string | Status ID |
| ↳ `statusCategory` | json | Status category information |
| ↳ `created` | string | Issue creation date \(ISO format\) |
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `duedate` | string | Due date for the issue |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `summary` | string | Issue summary/title |
| ↳ `description` | json | Issue description in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `watches` | json | Watchers information |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
| ↳ `progress` | json | Progress tracking information |
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `security` | string | Security level |
| ↳ `subtasks` | array | Array of subtask objects |
| ↳ `versions` | array | Array of affected versions |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name |
| ↳ `id` | string | Issue type ID |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| ↳ `components` | array | Array of component objects associated with this issue |
| ↳ `fixVersions` | array | Array of fix version objects for this issue |
| `worklog` | object | worklog output from the tool |
| ↳ `id` | string | Worklog entry ID |
| ↳ `author` | object | author output from the tool |
| ↳ `displayName` | string | Worklog author display name |
| ↳ `accountId` | string | Worklog author account ID |
| ↳ `emailAddress` | string | Worklog author email address |
| ↳ `updateAuthor` | object | updateAuthor output from the tool |
| ↳ `displayName` | string | Display name of the user who last updated the worklog |
| ↳ `accountId` | string | Account ID of the user who last updated the worklog |
| ↳ `timeSpent` | string | Time spent \(e.g., "2h 30m"\) |
| ↳ `timeSpentSeconds` | number | Time spent in seconds |
| ↳ `comment` | string | Worklog comment/description |
| ↳ `started` | string | When the work was started \(ISO format\) |
| ↳ `created` | string | When the worklog entry was created \(ISO format\) |
| ↳ `updated` | string | When the worklog entry was last updated \(ISO format\) |
| ↳ `issueId` | string | ID of the issue this worklog belongs to |
| ↳ `self` | string | REST API URL for this worklog entry |
---
### Jira Worklog Updated
Trigger workflow when a worklog entry is updated on a Jira issue
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
| `jqlFilter` | string | No | Filter which worklog updates trigger this workflow using JQL |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, comment_created, worklog_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| ↳ `emailAddress` | string | Email address of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Jira issue key \(e.g., PROJ-123\) |
| ↳ `self` | string | REST API URL for this issue |
| ↳ `fields` | object | fields output from the tool |
| ↳ `votes` | json | Votes on this issue |
| ↳ `labels` | array | Array of labels applied to this issue |
| ↳ `status` | object | status output from the tool |
| ↳ `name` | string | Status name |
| ↳ `id` | string | Status ID |
| ↳ `statusCategory` | json | Status category information |
| ↳ `created` | string | Issue creation date \(ISO format\) |
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `duedate` | string | Due date for the issue |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `summary` | string | Issue summary/title |
| ↳ `description` | json | Issue description in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `watches` | json | Watchers information |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
| ↳ `progress` | json | Progress tracking information |
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `security` | string | Security level |
| ↳ `subtasks` | array | Array of subtask objects |
| ↳ `versions` | array | Array of affected versions |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name |
| ↳ `id` | string | Issue type ID |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| ↳ `components` | array | Array of component objects associated with this issue |
| ↳ `fixVersions` | array | Array of fix version objects for this issue |
| `worklog` | object | worklog output from the tool |
| ↳ `id` | string | Worklog entry ID |
| ↳ `author` | object | author output from the tool |
| ↳ `displayName` | string | Worklog author display name |
| ↳ `accountId` | string | Worklog author account ID |
| ↳ `emailAddress` | string | Worklog author email address |
| ↳ `updateAuthor` | object | updateAuthor output from the tool |
| ↳ `displayName` | string | Display name of the user who last updated the worklog |
| ↳ `accountId` | string | Account ID of the user who last updated the worklog |
| ↳ `timeSpent` | string | Time spent \(e.g., "2h 30m"\) |
| ↳ `timeSpentSeconds` | number | Time spent in seconds |
| ↳ `comment` | string | Worklog comment/description |
| ↳ `started` | string | When the work was started \(ISO format\) |
| ↳ `created` | string | When the worklog entry was created \(ISO format\) |
| ↳ `updated` | string | When the worklog entry was last updated \(ISO format\) |
| ↳ `issueId` | string | ID of the issue this worklog belongs to |
| ↳ `self` | string | REST API URL for this worklog entry |


@@ -0,0 +1,314 @@
---
title: JSM
description: Available JSM triggers for automating workflows
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="jsm"
color="#6B7280"
/>
JSM (Jira Service Management) provides 5 triggers for automating workflows based on events.
## Triggers
### JSM Request Commented
Trigger workflow when a comment is added to a Jira Service Management request
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
| `jqlFilter` | string | No | Filter which service desk requests trigger this workflow using JQL \(Jira Query Language\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, jira:issue_updated, comment_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Issue key \(e.g., SD-123\) |
| ↳ `self` | string | REST API URL for this issue |
| ↳ `fields` | object | fields output from the tool |
| ↳ `summary` | string | Request summary/title |
| ↳ `status` | object | status output from the tool |
| ↳ `name` | string | Current status name |
| ↳ `id` | string | Status ID |
| ↳ `statusCategory` | json | Status category information |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name \(e.g., Service Request, Incident\) |
| ↳ `id` | string | Issue type ID |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `created` | string | Request creation date \(ISO format\) |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `duedate` | string | Due date for the request |
| ↳ `labels` | array | Array of labels applied to this request |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| `comment` | object | comment output from the tool |
| ↳ `id` | string | Comment ID |
| ↳ `body` | json | Comment body in Atlassian Document Format \(ADF\). On Jira Server this may be a plain string. |
| ↳ `author` | object | author output from the tool |
| ↳ `displayName` | string | Comment author display name |
| ↳ `accountId` | string | Comment author account ID |
| ↳ `emailAddress` | string | Comment author email address |
| ↳ `updateAuthor` | object | updateAuthor output from the tool |
| ↳ `displayName` | string | Display name of the user who last updated the comment |
| ↳ `accountId` | string | Account ID of the user who last updated the comment |
| ↳ `created` | string | Comment creation date \(ISO format\) |
| ↳ `updated` | string | Comment last updated date \(ISO format\) |
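Since `comment.body` arrives as ADF on Jira Cloud but may be a plain string on Jira Server (as the table notes), downstream steps often want a plain-text view. A minimal sketch that only collects `text` leaves — real ADF has many more node types (mentions, media, marks) that this deliberately ignores:

```typescript
type AdfNode = { type?: string; text?: string; content?: AdfNode[] }

// Collapse an ADF document (or a plain string, as Jira Server may send)
// into plain text by concatenating every `text` leaf in document order.
function adfToText(body: AdfNode | string): string {
  if (typeof body === "string") return body
  const parts: string[] = []
  const walk = (node: AdfNode): void => {
    if (node.text) parts.push(node.text)
    for (const child of node.content ?? []) walk(child)
  }
  walk(body)
  return parts.join("")
}
```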
---
### JSM Request Created
Trigger workflow when a new service request is created in Jira Service Management
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
| `jqlFilter` | string | No | Filter which service desk requests trigger this workflow using JQL \(Jira Query Language\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, jira:issue_updated, comment_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Issue key \(e.g., SD-123\) |
| ↳ `self` | string | REST API URL for this issue |
| ↳ `fields` | object | fields output from the tool |
| ↳ `summary` | string | Request summary/title |
| ↳ `status` | object | status output from the tool |
| ↳ `name` | string | Current status name |
| ↳ `id` | string | Status ID |
| ↳ `statusCategory` | json | Status category information |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name \(e.g., Service Request, Incident\) |
| ↳ `id` | string | Issue type ID |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `created` | string | Request creation date \(ISO format\) |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `duedate` | string | Due date for the request |
| ↳ `labels` | array | Array of labels applied to this request |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| `issue_event_type_name` | string | Issue event type name from Jira |
---
### JSM Request Resolved
Trigger workflow when a service request is resolved in Jira Service Management
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
| `jqlFilter` | string | No | Filter which service desk requests trigger this workflow using JQL \(Jira Query Language\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, jira:issue_updated, comment_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Issue key \(e.g., SD-123\) |
| ↳ `self` | string | REST API URL for this issue |
| ↳ `fields` | object | fields output from the tool |
| ↳ `summary` | string | Request summary/title |
| ↳ `status` | object | status output from the tool |
| ↳ `name` | string | Current status name |
| ↳ `id` | string | Status ID |
| ↳ `statusCategory` | json | Status category information |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name \(e.g., Service Request, Incident\) |
| ↳ `id` | string | Issue type ID |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `created` | string | Request creation date \(ISO format\) |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `duedate` | string | Due date for the request |
| ↳ `labels` | array | Array of labels applied to this request |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| `issue_event_type_name` | string | Issue event type name from Jira |
| `changelog` | object | changelog output from the tool |
| ↳ `id` | string | Changelog ID |
---
### JSM Request Updated
Trigger workflow when a service request is updated in Jira Service Management
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
| `jqlFilter` | string | No | Filter which service desk requests trigger this workflow using JQL \(Jira Query Language\) |
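The `jqlFilter` takes a standard JQL query; only requests matching it fire the trigger. A couple of illustrative filters — the project key `SD` is an assumption, so substitute your own service desk project key:

```typescript
// Fire only for high-priority incidents in the "SD" service desk project.
const highPriorityIncidents = "project = SD AND issuetype = Incident AND priority in (Highest, High)"

// Fire only for requests that nobody has picked up yet.
const unassignedRequests = "project = SD AND assignee IS EMPTY"
```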
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `webhookEvent` | string | The webhook event type \(e.g., jira:issue_created, jira:issue_updated, comment_created\) |
| `timestamp` | number | Timestamp of the webhook event |
| `user` | object | user output from the tool |
| ↳ `displayName` | string | Display name of the user who triggered the event |
| ↳ `accountId` | string | Account ID of the user who triggered the event |
| `issue` | object | issue output from the tool |
| ↳ `id` | string | Jira issue ID |
| ↳ `key` | string | Issue key \(e.g., SD-123\) |
| ↳ `self` | string | REST API URL for this issue |
| ↳ `fields` | object | fields output from the tool |
| ↳ `summary` | string | Request summary/title |
| ↳ `status` | object | status output from the tool |
| ↳ `name` | string | Current status name |
| ↳ `id` | string | Status ID |
| ↳ `statusCategory` | json | Status category information |
| ↳ `priority` | object | priority output from the tool |
| ↳ `name` | string | Priority name |
| ↳ `id` | string | Priority ID |
| ↳ `issuetype` | object | issuetype output from the tool |
| ↳ `name` | string | Issue type name \(e.g., Service Request, Incident\) |
| ↳ `id` | string | Issue type ID |
| ↳ `project` | object | project output from the tool |
| ↳ `key` | string | Project key |
| ↳ `name` | string | Project name |
| ↳ `id` | string | Project ID |
| ↳ `reporter` | object | reporter output from the tool |
| ↳ `displayName` | string | Reporter display name |
| ↳ `accountId` | string | Reporter account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `assignee` | object | assignee output from the tool |
| ↳ `displayName` | string | Assignee display name |
| ↳ `accountId` | string | Assignee account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `creator` | object | creator output from the tool |
| ↳ `displayName` | string | Creator display name |
| ↳ `accountId` | string | Creator account ID |
| ↳ `emailAddress` | string | Email address \(Jira Server only — not available in Jira Cloud webhook payloads\) |
| ↳ `created` | string | Request creation date \(ISO format\) |
| ↳ `updated` | string | Last updated date \(ISO format\) |
| ↳ `duedate` | string | Due date for the request |
| ↳ `labels` | array | Array of labels applied to this request |
| ↳ `resolution` | object | resolution output from the tool |
| ↳ `name` | string | Resolution name \(e.g., Done, Fixed\) |
| ↳ `id` | string | Resolution ID |
| `issue_event_type_name` | string | Issue event type name from Jira |
| `changelog` | object | changelog output from the tool |
| ↳ `id` | string | Changelog ID |
---
### JSM Webhook (All Events)
Trigger workflow on any Jira Service Management webhook event
#### Configuration
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Jira using HMAC signature |
| `jqlFilter` | string | No | Filter which service desk requests trigger this workflow using JQL \(Jira Query Language\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `changelog` | object | changelog output from the tool |
| ↳ `id` | string | Changelog ID |
| `comment` | object | comment output from the tool |
| ↳ `id` | string | Comment ID |
| ↳ `body` | string | Comment text/body |
| ↳ `author` | object | author output from the tool |
| ↳ `displayName` | string | Comment author display name |
| ↳ `accountId` | string | Comment author account ID |
| ↳ `emailAddress` | string | Comment author email address |
| ↳ `created` | string | Comment creation date \(ISO format\) |
| ↳ `updated` | string | Comment last updated date \(ISO format\) |


@@ -27,9 +27,11 @@
"imap",
"intercom",
"jira",
"jsm",
"lemlist",
"linear",
"microsoft-teams",
"monday",
"notion",
"outlook",
"resend",


@@ -0,0 +1,215 @@
---
title: Monday
description: Available Monday triggers for automating workflows
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="monday"
color="#FFFFFF"
/>
Monday provides 9 triggers for automating workflows based on events.
## Triggers
### Monday Column Value Changed
Trigger workflow when any column value changes on a Monday.com board
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `boardId` | string | The board ID where the event occurred |
| `itemId` | string | The item ID \(pulseId\) |
| `itemName` | string | The item name \(pulseName\) |
| `groupId` | string | The group ID of the item |
| `userId` | string | The ID of the user who triggered the event |
| `triggerTime` | string | ISO timestamp of when the event occurred |
| `triggerUuid` | string | Unique identifier for this event |
| `subscriptionId` | string | The webhook subscription ID |
| `columnId` | string | The ID of the changed column |
| `columnType` | string | The type of the changed column |
| `columnTitle` | string | The title of the changed column |
| `value` | json | The new value of the column |
| `previousValue` | json | The previous value of the column |
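Monday.com webhook subscriptions begin with a handshake: the first POST to your endpoint carries a `challenge` token that must be echoed back verbatim before events like the one above are delivered. A minimal handler sketch — the response shape for normal events is illustrative, not Monday's requirement:

```typescript
// Monday.com verifies a webhook URL by POSTing { "challenge": "<token>" }
// and expecting the same JSON echoed back; regular deliveries arrive under
// an "event" key carrying the fields listed above (boardId, pulseId, ...).
type MondayWebhookBody =
  | { challenge: string }
  | { event: { boardId: number; pulseId: number; pulseName?: string; [k: string]: unknown } }

function handleMondayWebhook(body: MondayWebhookBody): { status: number; json: unknown } {
  if ("challenge" in body) {
    // Handshake: echo the challenge so Monday activates the subscription.
    return { status: 200, json: { challenge: body.challenge } }
  }
  // Normal event delivery — hand the item off to the workflow trigger.
  return { status: 200, json: { ok: true, itemId: String(body.event.pulseId) } }
}
```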
---
### Monday Item Archived
Trigger workflow when an item is archived on a Monday.com board
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `boardId` | string | The board ID where the event occurred |
| `itemId` | string | The item ID \(pulseId\) |
| `itemName` | string | The item name \(pulseName\) |
| `groupId` | string | The group ID of the item |
| `userId` | string | The ID of the user who triggered the event |
| `triggerTime` | string | ISO timestamp of when the event occurred |
| `triggerUuid` | string | Unique identifier for this event |
| `subscriptionId` | string | The webhook subscription ID |
---
### Monday Item Created
Trigger workflow when a new item is created on a Monday.com board
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `boardId` | string | The board ID where the event occurred |
| `itemId` | string | The item ID \(pulseId\) |
| `itemName` | string | The item name \(pulseName\) |
| `groupId` | string | The group ID of the item |
| `userId` | string | The ID of the user who triggered the event |
| `triggerTime` | string | ISO timestamp of when the event occurred |
| `triggerUuid` | string | Unique identifier for this event |
| `subscriptionId` | string | The webhook subscription ID |
---
### Monday Item Deleted
Trigger workflow when an item is deleted on a Monday.com board
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `boardId` | string | The board ID where the event occurred |
| `itemId` | string | The item ID \(pulseId\) |
| `itemName` | string | The item name \(pulseName\) |
| `groupId` | string | The group ID of the item |
| `userId` | string | The ID of the user who triggered the event |
| `triggerTime` | string | ISO timestamp of when the event occurred |
| `triggerUuid` | string | Unique identifier for this event |
| `subscriptionId` | string | The webhook subscription ID |
---
### Monday Item Moved to Group
Trigger workflow when an item is moved to any group on a Monday.com board
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `boardId` | string | The board ID where the event occurred |
| `itemId` | string | The item ID \(pulseId\) |
| `itemName` | string | The item name \(pulseName\) |
| `groupId` | string | The group ID of the item |
| `userId` | string | The ID of the user who triggered the event |
| `triggerTime` | string | ISO timestamp of when the event occurred |
| `triggerUuid` | string | Unique identifier for this event |
| `subscriptionId` | string | The webhook subscription ID |
| `destGroupId` | string | The destination group ID the item was moved to |
| `sourceGroupId` | string | The source group ID the item was moved from |
---
### Monday Item Name Changed
Trigger workflow when an item name changes on a Monday.com board
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `boardId` | string | The board ID where the event occurred |
| `itemId` | string | The item ID \(pulseId\) |
| `itemName` | string | The item name \(pulseName\) |
| `groupId` | string | The group ID of the item |
| `userId` | string | The ID of the user who triggered the event |
| `triggerTime` | string | ISO timestamp of when the event occurred |
| `triggerUuid` | string | Unique identifier for this event |
| `subscriptionId` | string | The webhook subscription ID |
| `columnId` | string | The ID of the changed column |
| `columnType` | string | The type of the changed column |
| `columnTitle` | string | The title of the changed column |
| `value` | json | The new value of the column |
| `previousValue` | json | The previous value of the column |
---
### Monday Status Changed
Trigger workflow when a status column value changes on a Monday.com board
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `boardId` | string | The board ID where the event occurred |
| `itemId` | string | The item ID \(pulseId\) |
| `itemName` | string | The item name \(pulseName\) |
| `groupId` | string | The group ID of the item |
| `userId` | string | The ID of the user who triggered the event |
| `triggerTime` | string | ISO timestamp of when the event occurred |
| `triggerUuid` | string | Unique identifier for this event |
| `subscriptionId` | string | The webhook subscription ID |
| `columnId` | string | The ID of the changed column |
| `columnType` | string | The type of the changed column |
| `columnTitle` | string | The title of the changed column |
| `value` | json | The new value of the column |
| `previousValue` | json | The previous value of the column |
---
### Monday Subitem Created
Trigger workflow when a subitem is created on a Monday.com board
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `boardId` | string | The board ID where the event occurred |
| `itemId` | string | The item ID \(pulseId\) |
| `itemName` | string | The item name \(pulseName\) |
| `groupId` | string | The group ID of the item |
| `userId` | string | The ID of the user who triggered the event |
| `triggerTime` | string | ISO timestamp of when the event occurred |
| `triggerUuid` | string | Unique identifier for this event |
| `subscriptionId` | string | The webhook subscription ID |
| `parentItemId` | string | The parent item ID |
| `parentItemBoardId` | string | The parent item board ID |
---
### Monday Update Posted
Trigger workflow when an update or comment is posted on a Monday.com item
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `boardId` | string | The board ID where the event occurred |
| `itemId` | string | The item ID (pulseId) |
| `itemName` | string | The item name (pulseName) |
| `groupId` | string | The group ID of the item |
| `userId` | string | The ID of the user who triggered the event |
| `triggerTime` | string | ISO timestamp of when the event occurred |
| `triggerUuid` | string | Unique identifier for this event |
| `subscriptionId` | string | The webhook subscription ID |
| `updateId` | string | The ID of the created update |
| `body` | string | The HTML body of the update |
| `textBody` | string | The plain text body of the update |
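The output fields above can be sketched as a TypeScript shape. This is a hypothetical interface assembled from the table; the name and the example values are illustrative, not part of the Monday.com or Sim APIs:

```typescript
// Hypothetical shape of the "Monday Update Posted" trigger output,
// built field-by-field from the table above.
interface MondayUpdatePostedOutput {
  boardId: string // board where the event occurred
  itemId: string // pulseId
  itemName: string // pulseName
  groupId: string
  userId: string
  triggerTime: string // ISO timestamp
  triggerUuid: string
  subscriptionId: string
  updateId: string
  body: string // HTML body of the update
  textBody: string // plain-text body
}

// Example payload (all values fabricated for illustration).
const example: MondayUpdatePostedOutput = {
  boardId: '123',
  itemId: '456',
  itemName: 'Launch checklist',
  groupId: 'topics',
  userId: '789',
  triggerTime: '2026-04-18T21:10:10Z',
  triggerUuid: 'uuid-1',
  subscriptionId: 'sub-1',
  updateId: 'upd-1',
  body: '<p>Done</p>',
  textBody: 'Done',
}
```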

Binary file not shown. (After: 562 KiB)

Binary file not shown. (After: 1.8 MiB)

Binary file not shown. (After: 726 KiB)

Binary file not shown. (After: 268 KiB)

Binary file not shown. (After: 462 KiB)

View File

@@ -119,6 +119,7 @@ import {
MicrosoftSharepointIcon,
MicrosoftTeamsIcon,
MistralIcon,
MondayIcon,
MongoDBIcon,
MySQLIcon,
Neo4jIcon,
@@ -312,6 +313,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
microsoft_planner: MicrosoftPlannerIcon,
microsoft_teams: MicrosoftTeamsIcon,
mistral_parse_v3: MistralIcon,
monday: MondayIcon,
mongodb: MongoDBIcon,
mysql: MySQLIcon,
neo4j: Neo4jIcon,

View File

@@ -2522,6 +2522,16 @@
"name": "Confluence Page Moved",
"description": "Trigger workflow when a page is moved in Confluence"
},
{
"id": "confluence_page_restored",
"name": "Confluence Page Restored",
"description": "Trigger workflow when a page is restored from trash in Confluence"
},
{
"id": "confluence_page_permissions_updated",
"name": "Confluence Page Permissions Updated",
"description": "Trigger workflow when page permissions are changed in Confluence"
},
{
"id": "confluence_comment_created",
"name": "Confluence Comment Created",
@@ -2532,6 +2542,11 @@
"name": "Confluence Comment Removed",
"description": "Trigger workflow when a comment is removed in Confluence"
},
{
"id": "confluence_comment_updated",
"name": "Confluence Comment Updated",
"description": "Trigger workflow when a comment is updated in Confluence"
},
{
"id": "confluence_blog_created",
"name": "Confluence Blog Post Created",
@@ -2547,6 +2562,11 @@
"name": "Confluence Blog Post Removed",
"description": "Trigger workflow when a blog post is removed in Confluence"
},
{
"id": "confluence_blog_restored",
"name": "Confluence Blog Post Restored",
"description": "Trigger workflow when a blog post is restored from trash in Confluence"
},
{
"id": "confluence_attachment_created",
"name": "Confluence Attachment Created",
@@ -2557,6 +2577,11 @@
"name": "Confluence Attachment Removed",
"description": "Trigger workflow when an attachment is removed in Confluence"
},
{
"id": "confluence_attachment_updated",
"name": "Confluence Attachment Updated",
"description": "Trigger workflow when an attachment is updated in Confluence"
},
{
"id": "confluence_space_created",
"name": "Confluence Space Created",
@@ -2567,6 +2592,11 @@
"name": "Confluence Space Updated",
"description": "Trigger workflow when a space is updated in Confluence"
},
{
"id": "confluence_space_removed",
"name": "Confluence Space Removed",
"description": "Trigger workflow when a space is removed in Confluence"
},
{
"id": "confluence_label_added",
"name": "Confluence Label Added",
@@ -2577,13 +2607,18 @@
"name": "Confluence Label Removed",
"description": "Trigger workflow when a label is removed from content in Confluence"
},
{
"id": "confluence_user_created",
"name": "Confluence User Created",
"description": "Trigger workflow when a new user is added to Confluence"
},
{
"id": "confluence_webhook",
"name": "Confluence Webhook (All Events)",
"description": "Trigger workflow on any Confluence webhook event"
}
],
"triggerCount": 16,
"triggerCount": 23,
"authType": "oauth",
"category": "tools",
"integrationTypes": ["documents", "productivity", "search"],
@@ -6797,18 +6832,63 @@
"name": "Jira Issue Commented",
"description": "Trigger workflow when a comment is added to a Jira issue"
},
{
"id": "jira_comment_updated",
"name": "Jira Comment Updated",
"description": "Trigger workflow when a comment is updated on a Jira issue"
},
{
"id": "jira_comment_deleted",
"name": "Jira Comment Deleted",
"description": "Trigger workflow when a comment is deleted from a Jira issue"
},
{
"id": "jira_worklog_created",
"name": "Jira Worklog Created",
"description": "Trigger workflow when time is logged on a Jira issue"
},
{
"id": "jira_worklog_updated",
"name": "Jira Worklog Updated",
"description": "Trigger workflow when a worklog entry is updated on a Jira issue"
},
{
"id": "jira_worklog_deleted",
"name": "Jira Worklog Deleted",
"description": "Trigger workflow when a worklog entry is deleted from a Jira issue"
},
{
"id": "jira_sprint_created",
"name": "Jira Sprint Created",
"description": "Trigger workflow when a sprint is created in Jira"
},
{
"id": "jira_sprint_started",
"name": "Jira Sprint Started",
"description": "Trigger workflow when a sprint is started in Jira"
},
{
"id": "jira_sprint_closed",
"name": "Jira Sprint Closed",
"description": "Trigger workflow when a sprint is closed in Jira"
},
{
"id": "jira_project_created",
"name": "Jira Project Created",
"description": "Trigger workflow when a project is created in Jira"
},
{
"id": "jira_version_released",
"name": "Jira Version Released",
"description": "Trigger workflow when a version is released in Jira"
},
{
"id": "jira_webhook",
"name": "Jira Webhook (All Events)",
"description": "Trigger workflow on any Jira webhook event"
}
],
"triggerCount": 6,
"triggerCount": 15,
"authType": "oauth",
"category": "tools",
"integrationTypes": ["productivity", "customer-support"],
@@ -6962,8 +7042,34 @@
}
],
"operationCount": 34,
"triggers": [],
"triggerCount": 0,
"triggers": [
{
"id": "jsm_request_created",
"name": "JSM Request Created",
"description": "Trigger workflow when a new service request is created in Jira Service Management"
},
{
"id": "jsm_request_updated",
"name": "JSM Request Updated",
"description": "Trigger workflow when a service request is updated in Jira Service Management"
},
{
"id": "jsm_request_commented",
"name": "JSM Request Commented",
"description": "Trigger workflow when a comment is added to a Jira Service Management request"
},
{
"id": "jsm_request_resolved",
"name": "JSM Request Resolved",
"description": "Trigger workflow when a service request is resolved in Jira Service Management"
},
{
"id": "jsm_webhook",
"name": "JSM Webhook (All Events)",
"description": "Trigger workflow on any Jira Service Management webhook event"
}
],
"triggerCount": 5,
"authType": "oauth",
"category": "tools",
"integrationTypes": ["customer-support", "developer-tools", "productivity"],
@@ -8603,6 +8709,123 @@
"integrationTypes": ["ai", "documents"],
"tags": ["document-processing", "ocr"]
},
{
"type": "monday",
"slug": "monday",
"name": "Monday",
"description": "Manage Monday.com boards, items, and groups",
"longDescription": "Integrate with Monday.com to list boards, get board details, fetch and search items, create and update items, archive or delete items, create subitems, move items between groups, add updates, and create groups.",
"bgColor": "#FFFFFF",
"iconName": "MondayIcon",
"docsUrl": "https://docs.sim.ai/tools/monday",
"operations": [
{
"name": "List Boards",
"description": "List boards from your Monday.com account"
},
{
"name": "Get Board",
"description": "Get a specific Monday.com board with its groups and columns"
},
{
"name": "Get Item",
"description": "Get a specific item by ID from Monday.com"
},
{
"name": "Get Items",
"description": "Get items from a Monday.com board"
},
{
"name": "Search Items",
"description": "Search for items on a Monday.com board by column values"
},
{
"name": "Create Item",
"description": "Create a new item on a Monday.com board"
},
{
"name": "Update Item",
"description": "Update column values of an item on a Monday.com board"
},
{
"name": "Delete Item",
"description": "Delete an item from a Monday.com board"
},
{
"name": "Archive Item",
"description": "Archive an item on a Monday.com board"
},
{
"name": "Move Item to Group",
"description": "Move an item to a different group on a Monday.com board"
},
{
"name": "Create Subitem",
"description": "Create a subitem under a parent item on Monday.com"
},
{
"name": "Create Update",
"description": "Add an update (comment) to a Monday.com item"
},
{
"name": "Create Group",
"description": "Create a new group on a Monday.com board"
}
],
"operationCount": 13,
"triggers": [
{
"id": "monday_item_created",
"name": "Monday Item Created",
"description": "Trigger workflow when a new item is created on a Monday.com board"
},
{
"id": "monday_column_changed",
"name": "Monday Column Value Changed",
"description": "Trigger workflow when any column value changes on a Monday.com board"
},
{
"id": "monday_status_changed",
"name": "Monday Status Changed",
"description": "Trigger workflow when a status column value changes on a Monday.com board"
},
{
"id": "monday_item_name_changed",
"name": "Monday Item Name Changed",
"description": "Trigger workflow when an item name changes on a Monday.com board"
},
{
"id": "monday_item_archived",
"name": "Monday Item Archived",
"description": "Trigger workflow when an item is archived on a Monday.com board"
},
{
"id": "monday_item_deleted",
"name": "Monday Item Deleted",
"description": "Trigger workflow when an item is deleted on a Monday.com board"
},
{
"id": "monday_item_moved",
"name": "Monday Item Moved to Group",
"description": "Trigger workflow when an item is moved to any group on a Monday.com board"
},
{
"id": "monday_subitem_created",
"name": "Monday Subitem Created",
"description": "Trigger workflow when a subitem is created on a Monday.com board"
},
{
"id": "monday_update_created",
"name": "Monday Update Posted",
"description": "Trigger workflow when an update or comment is posted on a Monday.com item"
}
],
"triggerCount": 9,
"authType": "oauth",
"category": "tools",
"integrationTypes": ["productivity"],
"tags": ["project-management"]
},
{
"type": "mongodb",
"slug": "mongodb",

View File

@@ -0,0 +1,28 @@
import type { Metadata } from 'next'
import Link from 'next/link'
import { AUTH_PRIMARY_CTA_BASE } from '@/app/(auth)/components/auth-button-classes'
export const metadata: Metadata = {
title: 'Page Not Found',
robots: { index: false, follow: true },
}
export default function IntegrationsNotFound() {
return (
<div className='flex min-h-[60vh] items-center justify-center px-4 py-24'>
<div className='flex flex-col items-center gap-3'>
<h1 className='text-balance font-[430] font-season text-[40px] text-white leading-[110%] tracking-[-0.02em]'>
Page not found
</h1>
<p className='font-[430] font-season text-[color-mix(in_srgb,var(--landing-text-subtle)_60%,transparent)] text-lg leading-[125%] tracking-[0.02em]'>
The page you&apos;re looking for doesn&apos;t exist or has been moved.
</p>
<div className='mt-3 flex items-center gap-2'>
<Link href='/' className={AUTH_PRIMARY_CTA_BASE}>
Return to Home
</Link>
</div>
</div>
</div>
)
}

View File

@@ -0,0 +1,28 @@
import type { Metadata } from 'next'
import Link from 'next/link'
import { AUTH_PRIMARY_CTA_BASE } from '@/app/(auth)/components/auth-button-classes'
export const metadata: Metadata = {
title: 'Page Not Found',
robots: { index: false, follow: true },
}
export default function ModelsNotFound() {
return (
<div className='flex min-h-[60vh] items-center justify-center px-4 py-24'>
<div className='flex flex-col items-center gap-3'>
<h1 className='text-balance font-[430] font-season text-[40px] text-white leading-[110%] tracking-[-0.02em]'>
Page not found
</h1>
<p className='font-[430] font-season text-[color-mix(in_srgb,var(--landing-text-subtle)_60%,transparent)] text-lg leading-[125%] tracking-[0.02em]'>
The page you&apos;re looking for doesn&apos;t exist or has been moved.
</p>
<div className='mt-3 flex items-center gap-2'>
<Link href='/' className={AUTH_PRIMARY_CTA_BASE}>
Return to Home
</Link>
</div>
</div>
</div>
)
}

View File

@@ -2,6 +2,7 @@
import { useCallback, useEffect, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
import { sleep } from '@sim/utils/helpers'
import type { Edge } from 'reactflow'
import { buildMockExecutionPlan } from '@/lib/academy/mock-execution'
import type {
@@ -323,7 +324,7 @@ export function SandboxCanvasProvider({
for (let i = 0; i < plan.length; i++) {
const step = plan[i]
setActiveBlocks(workflowId, new Set([step.blockId]))
await new Promise((resolve) => setTimeout(resolve, step.delay))
await sleep(step.delay)
addConsole({
workflowId,
blockId: step.blockId,
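The hunk above swaps the inline `new Promise((resolve) => setTimeout(resolve, step.delay))` for `sleep(step.delay)` from `@sim/utils/helpers`. A minimal sketch of such a helper (the actual package implementation may differ):

```typescript
/** Resolve after `ms` milliseconds; a thin promise wrapper over setTimeout. */
export function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms))
}
```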

View File

@@ -7,13 +7,13 @@
import { db } from '@sim/db'
import { a2aAgent, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { generateId } from '@sim/utils/id'
import { and, eq, isNull, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { generateSkillsFromWorkflow } from '@/lib/a2a/agent-card'
import { A2A_DEFAULT_CAPABILITIES } from '@/lib/a2a/constants'
import { sanitizeAgentName } from '@/lib/a2a/utils'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateId } from '@/lib/core/utils/uuid'
import { captureServerEvent } from '@/lib/posthog/server'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { hasValidStartBlockInState } from '@/lib/workflows/triggers/trigger-utils'

View File

@@ -2,6 +2,7 @@ import type { Artifact, Message, PushNotificationConfig, TaskState } from '@a2a-
import { db } from '@sim/db'
import { a2aAgent, a2aPushNotificationConfig, a2aTask, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { generateId } from '@sim/utils/id'
import { and, eq, isNull } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { A2A_DEFAULT_TIMEOUT, A2A_MAX_HISTORY_LENGTH } from '@/lib/a2a/constants'
@@ -18,7 +19,6 @@ import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { getClientIp } from '@/lib/core/utils/request'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { generateId } from '@/lib/core/utils/uuid'
import { markExecutionCancelled } from '@/lib/execution/cancellation'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
import { getWorkspaceBilledAccountUserId } from '@/lib/workspaces/utils'
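Several of the hunks above move `generateId` from the app-local `@/lib/core/utils/uuid` module to the shared `@sim/utils/id` package. A minimal sketch of what such a helper might look like, assuming a UUID v4 implementation (the real package may differ):

```typescript
import { randomUUID } from 'node:crypto'

/** Generate a unique identifier; sketched here as a UUID v4. */
export function generateId(): string {
  return randomUUID()
}
```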

View File

@@ -1,7 +1,7 @@
import type { Artifact, Message, PushNotificationConfig, Task, TaskState } from '@a2a-js/sdk'
import { generateId } from '@sim/utils/id'
import { generateInternalToken } from '@/lib/auth/internal'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { generateId } from '@/lib/core/utils/uuid'
/** A2A v0.3 JSON-RPC method names */
export const A2A_METHODS = {

View File

@@ -1,6 +1,7 @@
import { db } from '@sim/db'
import { academyCertificate, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { generateShortId } from '@sim/utils/id'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
@@ -9,7 +10,6 @@ import type { CertificateMetadata } from '@/lib/academy/types'
import { getSession } from '@/lib/auth'
import type { TokenBucketConfig } from '@/lib/core/rate-limiter'
import { RateLimiter } from '@/lib/core/rate-limiter'
import { generateShortId } from '@/lib/core/utils/uuid'
const logger = createLogger('AcademyCertificatesAPI')

View File

@@ -3,40 +3,35 @@
*
* @vitest-environment node
*/
import { createMockRequest } from '@sim/testing'
import { authMock, authMockFns, createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const { mockGetSession, mockDb, mockLogger, mockParseProvider, mockJwtDecode, mockEq } = vi.hoisted(
() => {
const db = {
select: vi.fn().mockReturnThis(),
from: vi.fn().mockReturnThis(),
where: vi.fn().mockReturnThis(),
limit: vi.fn(),
}
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockGetSession: vi.fn(),
mockDb: db,
mockLogger: logger,
mockParseProvider: vi.fn(),
mockJwtDecode: vi.fn(),
mockEq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
}
const { mockDb, mockLogger, mockParseProvider, mockJwtDecode, mockEq } = vi.hoisted(() => {
const db = {
select: vi.fn().mockReturnThis(),
from: vi.fn().mockReturnThis(),
where: vi.fn().mockReturnThis(),
limit: vi.fn(),
}
)
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockDb: db,
mockLogger: logger,
mockParseProvider: vi.fn(),
mockJwtDecode: vi.fn(),
mockEq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
}
})
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)
vi.mock('@sim/db', () => ({
db: mockDb,
@@ -78,7 +73,7 @@ describe('OAuth Connections API Route', () => {
})
it('should return connections successfully', async () => {
mockGetSession.mockResolvedValueOnce({
authMockFns.mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
})
@@ -134,7 +129,7 @@ describe('OAuth Connections API Route', () => {
})
it('should handle unauthenticated user', async () => {
mockGetSession.mockResolvedValueOnce(null)
authMockFns.mockGetSession.mockResolvedValueOnce(null)
const req = createMockRequest('GET')
@@ -147,7 +142,7 @@ describe('OAuth Connections API Route', () => {
})
it('should handle user with no connections', async () => {
mockGetSession.mockResolvedValueOnce({
authMockFns.mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
})
@@ -170,7 +165,7 @@ describe('OAuth Connections API Route', () => {
})
it('should handle database error', async () => {
mockGetSession.mockResolvedValueOnce({
authMockFns.mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
})
@@ -189,7 +184,7 @@ describe('OAuth Connections API Route', () => {
})
it('should decode ID token for display name', async () => {
mockGetSession.mockResolvedValueOnce({
authMockFns.mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
})

View File

@@ -4,10 +4,18 @@
* @vitest-environment node
*/
import {
hybridAuthMock,
hybridAuthMockFns,
permissionsMock,
requestUtilsMock,
schemaMock,
workflowsUtilsMock,
} from '@sim/testing'
import { NextRequest } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const { mockCheckSessionOrInternalAuth, mockLogger } = vi.hoisted(() => {
const { mockLogger } = vi.hoisted(() => {
const logger = {
info: vi.fn(),
warn: vi.fn(),
@@ -18,56 +26,23 @@ const { mockCheckSessionOrInternalAuth, mockLogger } = vi.hoisted(() => {
child: vi.fn(),
}
return {
mockCheckSessionOrInternalAuth: vi.fn(),
mockLogger: logger,
}
})
vi.mock('@/lib/auth/hybrid', () => ({
AuthType: { SESSION: 'session', API_KEY: 'api_key', INTERNAL_JWT: 'internal_jwt' },
checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
}))
vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('mock-request-id'),
}))
vi.mock('@/lib/core/utils/request', () => requestUtilsMock)
vi.mock('@/lib/credentials/oauth', () => ({
syncWorkspaceOAuthCredentialsForUser: vi.fn(),
}))
vi.mock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: vi.fn(),
}))
vi.mock('@/lib/workflows/utils', () => workflowsUtilsMock)
vi.mock('@/lib/workspaces/permissions/utils', () => ({
checkWorkspaceAccess: vi.fn(),
}))
vi.mock('@/lib/workspaces/permissions/utils', () => permissionsMock)
vi.mock('@sim/db/schema', () => ({
account: {
userId: 'userId',
providerId: 'providerId',
id: 'id',
scope: 'scope',
updatedAt: 'updatedAt',
},
credential: {
id: 'id',
workspaceId: 'workspaceId',
type: 'type',
displayName: 'displayName',
providerId: 'providerId',
accountId: 'accountId',
},
credentialMember: {
id: 'id',
credentialId: 'credentialId',
userId: 'userId',
status: 'status',
},
user: { email: 'email', id: 'id' },
}))
vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
@@ -86,7 +61,7 @@ describe('OAuth Credentials API Route', () => {
})
it('should handle unauthenticated user', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: false,
error: 'Authentication required',
})
@@ -102,7 +77,7 @@ describe('OAuth Credentials API Route', () => {
})
it('should handle missing provider parameter', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
userId: 'user-123',
authType: 'session',
@@ -119,7 +94,7 @@ describe('OAuth Credentials API Route', () => {
})
it('should handle no credentials found', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
userId: 'user-123',
authType: 'session',
@@ -135,7 +110,7 @@ describe('OAuth Credentials API Route', () => {
})
it('should return empty credentials when no workspace context', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
userId: 'user-123',
authType: 'session',

View File

@@ -3,11 +3,18 @@
*
* @vitest-environment node
*/
import { createMockRequest } from '@sim/testing'
import {
auditMock,
authMock,
authMockFns,
createMockRequest,
requestUtilsMock,
schemaMock,
} from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const { mockGetSession, mockDb, mockSelectChain, mockLogger, mockSyncAllWebhooksForCredentialSet } =
vi.hoisted(() => {
const { mockDb, mockSelectChain, mockLogger, mockSyncAllWebhooksForCredentialSet } = vi.hoisted(
() => {
const selectChain = {
from: vi.fn().mockReturnThis(),
innerJoin: vi.fn().mockReturnThis(),
@@ -28,32 +35,21 @@ const { mockGetSession, mockDb, mockSelectChain, mockLogger, mockSyncAllWebhooks
child: vi.fn(),
}
return {
mockGetSession: vi.fn(),
mockDb: db,
mockSelectChain: selectChain,
mockLogger: logger,
mockSyncAllWebhooksForCredentialSet: vi.fn().mockResolvedValue({}),
}
})
}
)
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)
vi.mock('@sim/db', () => ({
db: mockDb,
}))
vi.mock('@sim/db/schema', () => ({
account: { userId: 'userId', providerId: 'providerId' },
credentialSetMember: {
id: 'id',
credentialSetId: 'credentialSetId',
userId: 'userId',
status: 'status',
},
credentialSet: { id: 'id', providerId: 'providerId' },
}))
vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
@@ -66,28 +62,13 @@ vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('test-request-id'),
}))
vi.mock('@/lib/core/utils/request', () => requestUtilsMock)
vi.mock('@/lib/webhooks/utils.server', () => ({
syncAllWebhooksForCredentialSet: mockSyncAllWebhooksForCredentialSet,
}))
vi.mock('@/lib/audit/log', () => ({
recordAudit: vi.fn(),
AuditAction: {
CREDENTIAL_SET_CREATED: 'credential_set.created',
CREDENTIAL_SET_UPDATED: 'credential_set.updated',
CREDENTIAL_SET_DELETED: 'credential_set.deleted',
OAUTH_CONNECTED: 'oauth.connected',
OAUTH_DISCONNECTED: 'oauth.disconnected',
},
AuditResourceType: {
CREDENTIAL_SET: 'credential_set',
OAUTH_CONNECTION: 'oauth_connection',
},
}))
vi.mock('@/lib/audit/log', () => auditMock)
import { POST } from '@/app/api/auth/oauth/disconnect/route'
@@ -102,7 +83,7 @@ describe('OAuth Disconnect API Route', () => {
})
it('should disconnect provider successfully', async () => {
mockGetSession.mockResolvedValueOnce({
authMockFns.mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
})
@@ -122,7 +103,7 @@ describe('OAuth Disconnect API Route', () => {
})
it('should disconnect specific provider ID successfully', async () => {
mockGetSession.mockResolvedValueOnce({
authMockFns.mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
})
@@ -143,7 +124,7 @@ describe('OAuth Disconnect API Route', () => {
})
it('should handle unauthenticated user', async () => {
mockGetSession.mockResolvedValueOnce(null)
authMockFns.mockGetSession.mockResolvedValueOnce(null)
const req = createMockRequest('POST', {
provider: 'google',
@@ -158,7 +139,7 @@ describe('OAuth Disconnect API Route', () => {
})
it('should handle missing provider', async () => {
mockGetSession.mockResolvedValueOnce({
authMockFns.mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
})
@@ -173,7 +154,7 @@ describe('OAuth Disconnect API Route', () => {
})
it('should handle database error', async () => {
mockGetSession.mockResolvedValueOnce({
authMockFns.mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
})

View File

@@ -3,20 +3,17 @@
*
* @vitest-environment node
*/
import { createMockRequest } from '@sim/testing'
import {
authOAuthUtilsMock,
authOAuthUtilsMockFns,
createMockRequest,
hybridAuthMock,
hybridAuthMockFns,
requestUtilsMock,
} from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockGetUserId,
mockGetCredential,
mockRefreshTokenIfNeeded,
mockGetOAuthToken,
mockResolveOAuthAccountId,
mockGetServiceAccountToken,
mockAuthorizeCredentialUse,
mockCheckSessionOrInternalAuth,
mockLogger,
} = vi.hoisted(() => {
const { mockAuthorizeCredentialUse, mockLogger } = vi.hoisted(() => {
const logger = {
info: vi.fn(),
warn: vi.fn(),
@@ -27,26 +24,12 @@ const {
child: vi.fn(),
}
return {
mockGetUserId: vi.fn(),
mockGetCredential: vi.fn(),
mockRefreshTokenIfNeeded: vi.fn(),
mockGetOAuthToken: vi.fn(),
mockResolveOAuthAccountId: vi.fn(),
mockGetServiceAccountToken: vi.fn(),
mockAuthorizeCredentialUse: vi.fn(),
mockCheckSessionOrInternalAuth: vi.fn(),
mockLogger: logger,
}
})
vi.mock('@/app/api/auth/oauth/utils', () => ({
getUserId: mockGetUserId,
getCredential: mockGetCredential,
refreshTokenIfNeeded: mockRefreshTokenIfNeeded,
getOAuthToken: mockGetOAuthToken,
resolveOAuthAccountId: mockResolveOAuthAccountId,
getServiceAccountToken: mockGetServiceAccountToken,
}))
vi.mock('@/app/api/auth/oauth/utils', () => authOAuthUtilsMock)
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
@@ -56,23 +39,16 @@ vi.mock('@/lib/auth/credential-access', () => ({
authorizeCredentialUse: mockAuthorizeCredentialUse,
}))
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('test-request-id'),
}))
vi.mock('@/lib/core/utils/request', () => requestUtilsMock)
vi.mock('@/lib/auth/hybrid', () => ({
AuthType: { SESSION: 'session', API_KEY: 'api_key', INTERNAL_JWT: 'internal_jwt' },
checkHybridAuth: vi.fn(),
checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
checkInternalAuth: vi.fn(),
}))
vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)
import { GET, POST } from '@/app/api/auth/oauth/token/route'
describe('OAuth Token API Routes', () => {
beforeEach(() => {
vi.clearAllMocks()
mockResolveOAuthAccountId.mockResolvedValue(null)
authOAuthUtilsMockFns.mockResolveOAuthAccountId.mockResolvedValue(null)
})
/**
@@ -86,14 +62,14 @@ describe('OAuth Token API Routes', () => {
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'owner-user-id',
})
mockGetCredential.mockResolvedValueOnce({
authOAuthUtilsMockFns.mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
accessToken: 'test-token',
refreshToken: 'refresh-token',
accessTokenExpiresAt: new Date(Date.now() + 3600 * 1000),
providerId: 'google',
})
mockRefreshTokenIfNeeded.mockResolvedValueOnce({
authOAuthUtilsMockFns.mockRefreshTokenIfNeeded.mockResolvedValueOnce({
accessToken: 'fresh-token',
refreshed: false,
})
@@ -109,8 +85,8 @@ describe('OAuth Token API Routes', () => {
expect(data).toHaveProperty('accessToken', 'fresh-token')
expect(mockAuthorizeCredentialUse).toHaveBeenCalled()
expect(mockGetCredential).toHaveBeenCalled()
expect(mockRefreshTokenIfNeeded).toHaveBeenCalled()
expect(authOAuthUtilsMockFns.mockGetCredential).toHaveBeenCalled()
expect(authOAuthUtilsMockFns.mockRefreshTokenIfNeeded).toHaveBeenCalled()
})
it('should handle workflowId for server-side authentication', async () => {
@@ -120,14 +96,14 @@ describe('OAuth Token API Routes', () => {
requesterUserId: 'workflow-owner-id',
credentialOwnerUserId: 'workflow-owner-id',
})
mockGetCredential.mockResolvedValueOnce({
authOAuthUtilsMockFns.mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
accessToken: 'test-token',
refreshToken: 'refresh-token',
accessTokenExpiresAt: new Date(Date.now() + 3600 * 1000),
providerId: 'google',
})
mockRefreshTokenIfNeeded.mockResolvedValueOnce({
authOAuthUtilsMockFns.mockRefreshTokenIfNeeded.mockResolvedValueOnce({
accessToken: 'fresh-token',
refreshed: false,
})
@@ -144,7 +120,7 @@ describe('OAuth Token API Routes', () => {
expect(data).toHaveProperty('accessToken', 'fresh-token')
expect(mockAuthorizeCredentialUse).toHaveBeenCalled()
expect(mockGetCredential).toHaveBeenCalled()
expect(authOAuthUtilsMockFns.mockGetCredential).toHaveBeenCalled()
})
it('should handle missing credentialId', async () => {
@@ -199,7 +175,7 @@ describe('OAuth Token API Routes', () => {
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'owner-user-id',
})
mockGetCredential.mockResolvedValueOnce(undefined)
authOAuthUtilsMockFns.mockGetCredential.mockResolvedValueOnce(undefined)
const req = createMockRequest('POST', {
credentialId: 'nonexistent-credential-id',
@@ -219,14 +195,16 @@ describe('OAuth Token API Routes', () => {
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'owner-user-id',
})
mockGetCredential.mockResolvedValueOnce({
authOAuthUtilsMockFns.mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
accessToken: 'test-token',
refreshToken: 'refresh-token',
accessTokenExpiresAt: new Date(Date.now() - 3600 * 1000), // Expired
providerId: 'google',
})
mockRefreshTokenIfNeeded.mockRejectedValueOnce(new Error('Refresh failure'))
authOAuthUtilsMockFns.mockRefreshTokenIfNeeded.mockRejectedValueOnce(
new Error('Refresh failure')
)
const req = createMockRequest('POST', {
credentialId: 'credential-id',
@@ -241,7 +219,7 @@ describe('OAuth Token API Routes', () => {
describe('credentialAccountUserId + providerId path', () => {
it('should reject unauthenticated requests', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: false,
error: 'Authentication required',
})
@@ -256,11 +234,11 @@ describe('OAuth Token API Routes', () => {
expect(response.status).toBe(401)
expect(data).toHaveProperty('error', 'User not authenticated')
expect(mockGetOAuthToken).not.toHaveBeenCalled()
expect(authOAuthUtilsMockFns.mockGetOAuthToken).not.toHaveBeenCalled()
})
it('should reject internal JWT authentication', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
authType: 'internal_jwt',
userId: 'test-user-id',
@@ -276,11 +254,11 @@ describe('OAuth Token API Routes', () => {
expect(response.status).toBe(401)
expect(data).toHaveProperty('error', 'User not authenticated')
expect(mockGetOAuthToken).not.toHaveBeenCalled()
expect(authOAuthUtilsMockFns.mockGetOAuthToken).not.toHaveBeenCalled()
})
it('should reject requests for other users credentials', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'attacker-user-id',
@@ -296,16 +274,16 @@ describe('OAuth Token API Routes', () => {
expect(response.status).toBe(403)
expect(data).toHaveProperty('error', 'Unauthorized')
expect(mockGetOAuthToken).not.toHaveBeenCalled()
expect(authOAuthUtilsMockFns.mockGetOAuthToken).not.toHaveBeenCalled()
})
it('should allow session-authenticated users to access their own credentials', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'test-user-id',
})
mockGetOAuthToken.mockResolvedValueOnce('valid-access-token')
authOAuthUtilsMockFns.mockGetOAuthToken.mockResolvedValueOnce('valid-access-token')
const req = createMockRequest('POST', {
credentialAccountUserId: 'test-user-id',
@@ -317,16 +295,19 @@ describe('OAuth Token API Routes', () => {
expect(response.status).toBe(200)
expect(data).toHaveProperty('accessToken', 'valid-access-token')
expect(mockGetOAuthToken).toHaveBeenCalledWith('test-user-id', 'google')
expect(authOAuthUtilsMockFns.mockGetOAuthToken).toHaveBeenCalledWith(
'test-user-id',
'google'
)
})
it('should return 404 when credential not found for user', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
authType: 'session',
userId: 'test-user-id',
})
mockGetOAuthToken.mockResolvedValueOnce(null)
authOAuthUtilsMockFns.mockGetOAuthToken.mockResolvedValueOnce(null)
const req = createMockRequest('POST', {
credentialAccountUserId: 'test-user-id',
@@ -353,14 +334,14 @@ describe('OAuth Token API Routes', () => {
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'test-user-id',
})
mockGetCredential.mockResolvedValueOnce({
authOAuthUtilsMockFns.mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
accessToken: 'test-token',
refreshToken: 'refresh-token',
accessTokenExpiresAt: new Date(Date.now() + 3600 * 1000),
providerId: 'google',
})
mockRefreshTokenIfNeeded.mockResolvedValueOnce({
authOAuthUtilsMockFns.mockRefreshTokenIfNeeded.mockResolvedValueOnce({
accessToken: 'fresh-token',
refreshed: false,
})
@@ -376,8 +357,8 @@ describe('OAuth Token API Routes', () => {
expect(data).toHaveProperty('accessToken', 'fresh-token')
expect(mockAuthorizeCredentialUse).toHaveBeenCalled()
expect(mockGetCredential).toHaveBeenCalled()
expect(mockRefreshTokenIfNeeded).toHaveBeenCalled()
expect(authOAuthUtilsMockFns.mockGetCredential).toHaveBeenCalled()
expect(authOAuthUtilsMockFns.mockRefreshTokenIfNeeded).toHaveBeenCalled()
})
it('should handle missing credentialId', async () => {
@@ -415,7 +396,7 @@ describe('OAuth Token API Routes', () => {
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'test-user-id',
})
mockGetCredential.mockResolvedValueOnce(undefined)
authOAuthUtilsMockFns.mockGetCredential.mockResolvedValueOnce(undefined)
const req = new Request(
'http://localhost:3000/api/auth/oauth/token?credentialId=nonexistent-credential-id'
@@ -435,7 +416,7 @@ describe('OAuth Token API Routes', () => {
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'test-user-id',
})
mockGetCredential.mockResolvedValueOnce({
authOAuthUtilsMockFns.mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
accessToken: null,
refreshToken: 'refresh-token',
@@ -460,14 +441,16 @@ describe('OAuth Token API Routes', () => {
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'test-user-id',
})
mockGetCredential.mockResolvedValueOnce({
authOAuthUtilsMockFns.mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
accessToken: 'test-token',
refreshToken: 'refresh-token',
accessTokenExpiresAt: new Date(Date.now() - 3600 * 1000), // Expired
providerId: 'google',
})
mockRefreshTokenIfNeeded.mockRejectedValueOnce(new Error('Refresh failure'))
authOAuthUtilsMockFns.mockRefreshTokenIfNeeded.mockRejectedValueOnce(
new Error('Refresh failure')
)
const req = new Request(
'http://localhost:3000/api/auth/oauth/token?credentialId=credential-id'

View File

@@ -4,18 +4,13 @@
* @vitest-environment node
*/
import { databaseMock, loggerMock } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@sim/db', () => databaseMock)
vi.mock('@/lib/oauth/oauth', () => ({
refreshOAuthToken: vi.fn(),
OAUTH_PROVIDERS: {},
}))
vi.mock('@sim/logger', () => loggerMock)
import { db } from '@sim/db'
import { refreshOAuthToken } from '@/lib/oauth'
import {

View File

@@ -2,6 +2,7 @@ import { createSign } from 'crypto'
import { db } from '@sim/db'
import { account, credential, credentialSetMember } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { toError } from '@sim/utils/errors'
import { and, desc, eq, inArray } from 'drizzle-orm'
import { decryptSecret } from '@/lib/core/security/encryption'
import { refreshOAuthToken } from '@/lib/oauth'
@@ -331,7 +332,7 @@ export async function getOAuthToken(userId: string, providerId: string): Promise
return accessToken
} catch (error) {
logger.error(`Error refreshing token for user ${userId}, provider ${providerId}`, {
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
stack: error instanceof Error ? error.stack : undefined,
providerId,
userId,
@@ -460,7 +461,7 @@ export async function refreshAccessTokenIfNeeded(
return refreshedToken.accessToken
} catch (error) {
logger.error(`[${requestId}] Error refreshing token for credential`, {
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
stack: error instanceof Error ? error.stack : undefined,
providerId: credential.providerId,
credentialId,
@@ -664,7 +665,7 @@ export async function getCredentialsForCredentialSet(
}
} catch (error) {
logger.error(`Failed to refresh token for user ${cred.userId}, provider ${providerId}`, {
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
})
continue
}
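
The hunks above replace ad-hoc `error instanceof Error ? error.message : String(error)` ternaries with `toError(error).message`. The diff imports `toError` from `@sim/utils/errors` but never shows its body; a minimal sketch of such a normalizer (name and behavior assumed from the call sites, not the actual implementation):

```typescript
// Hypothetical sketch of a toError normalizer like the one assumed above.
// Coerces any thrown value into a real Error so .message and .stack are safe.
function toError(value: unknown): Error {
  if (value instanceof Error) return value
  if (typeof value === 'string') return new Error(value)
  try {
    // Objects (e.g. { code: 1 }) get a JSON representation as the message.
    return new Error(JSON.stringify(value))
  } catch {
    // Fallback for values JSON.stringify cannot handle (circular refs, BigInt).
    return new Error(String(value))
  }
}
```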

View File

@@ -5,6 +5,7 @@ import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { isSameOrigin } from '@/lib/core/utils/validation'
import { processCredentialDraft } from '@/lib/credentials/draft-processor'
import { safeAccountInsert } from '@/app/api/auth/oauth/utils'
@@ -113,7 +114,7 @@ export async function GET(request: NextRequest) {
const returnUrl = request.cookies.get('shopify_return_url')?.value
const redirectUrl = returnUrl || `${baseUrl}/workspace`
const redirectUrl = returnUrl && isSameOrigin(returnUrl) ? returnUrl : `${baseUrl}/workspace`
const finalUrl = new URL(redirectUrl)
finalUrl.searchParams.set('shopify_connected', 'true')
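
Both Shopify routes now gate `returnUrl` behind `isSameOrigin`, closing an open redirect through the `shopify_return_url` cookie. The helper's body is not in this diff; a hedged sketch of what such a same-origin check might look like (the `baseUrl` default is purely illustrative):

```typescript
// Hypothetical sketch of an isSameOrigin guard like the one assumed above.
// Accepts relative paths and absolute URLs on the app's own origin;
// rejects anything that resolves to a different origin or fails to parse.
function isSameOrigin(url: string, baseUrl = 'https://app.example.com'): boolean {
  try {
    // Relative URLs resolve against baseUrl and therefore share its origin.
    return new URL(url, baseUrl).origin === new URL(baseUrl).origin
  } catch {
    // Unparseable input is treated as unsafe.
    return false
  }
}
```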

View File

@@ -1,9 +1,10 @@
import { createLogger } from '@sim/logger'
import { generateId } from '@sim/utils/id'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { env } from '@/lib/core/config/env'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { generateId } from '@/lib/core/utils/uuid'
import { isSameOrigin } from '@/lib/core/utils/validation'
import { getScopesForService } from '@/lib/oauth/utils'
const logger = createLogger('ShopifyAuthorize')
@@ -192,7 +193,7 @@ export async function GET(request: NextRequest) {
path: '/',
})
if (returnUrl) {
if (returnUrl && isSameOrigin(returnUrl)) {
response.cookies.set('shopify_return_url', returnUrl, {
httpOnly: true,
secure: process.env.NODE_ENV === 'production',

View File

@@ -1,4 +1,5 @@
import { createLogger } from '@sim/logger'
import { toError } from '@sim/utils/errors'
import { headers } from 'next/headers'
import { NextResponse } from 'next/server'
import { auth } from '@/lib/auth'
@@ -36,7 +37,7 @@ export async function POST() {
}
logger.error('Failed to generate socket token', {
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
stack: error instanceof Error ? error.stack : undefined,
})
return NextResponse.json({ error: 'Failed to generate token' }, { status: 500 })

View File

@@ -147,6 +147,32 @@ export async function POST(request: NextRequest) {
oidcConfig.userInfoEndpoint = userInfoEndpoint
oidcConfig.jwksEndpoint = jwksEndpoint
const userProvidedEndpoints: Record<string, string | undefined> = {
authorizationEndpoint,
tokenEndpoint,
userInfoEndpoint,
jwksEndpoint,
}
for (const [name, endpointUrl] of Object.entries(userProvidedEndpoints)) {
if (endpointUrl) {
const endpointValidation = await validateUrlWithDNS(endpointUrl, `OIDC ${name}`)
if (!endpointValidation.isValid) {
logger.warn('Explicitly provided OIDC endpoint failed SSRF validation', {
endpoint: name,
url: endpointUrl,
error: endpointValidation.error,
})
return NextResponse.json(
{
error: `OIDC ${name} failed security validation: ${endpointValidation.error}`,
},
{ status: 400 }
)
}
}
}
const needsDiscovery =
!oidcConfig.authorizationEndpoint || !oidcConfig.tokenEndpoint || !oidcConfig.jwksEndpoint
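
The new loop validates every explicitly supplied OIDC endpoint with `validateUrlWithDNS` before falling back to discovery. That function's internals are not shown in the diff; a deliberately simplified sketch of the static half of such a check (a real SSRF defense must also resolve DNS and re-check every resolved IP, since a public hostname can point at a private address):

```typescript
// Hedged sketch of the kind of filtering validateUrlWithDNS is assumed to do.
// Requires https and rejects obvious loopback/private/internal hostnames.
function isLikelySafeEndpoint(raw: string): boolean {
  let url: URL
  try {
    url = new URL(raw)
  } catch {
    return false // unparseable input is rejected outright
  }
  if (url.protocol !== 'https:') return false
  const host = url.hostname
  if (host === 'localhost' || host === '127.0.0.1' || host.endsWith('.internal')) return false
  // RFC 1918 private ranges: 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12
  if (/^10\.|^192\.168\.|^172\.(1[6-9]|2\d|3[01])\./.test(host)) return false
  return true
}
```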

View File

@@ -7,8 +7,6 @@ import { getSession } from '@/lib/auth'
import { getEffectiveBillingStatus } from '@/lib/billing/core/access'
import { getSimplifiedBillingSummary } from '@/lib/billing/core/billing'
import { getOrganizationBillingData } from '@/lib/billing/core/organization'
import { dollarsToCredits } from '@/lib/billing/credits/conversion'
import { getPlanTierCredits } from '@/lib/billing/plan-helpers'
const logger = createLogger('UnifiedBillingAPI')
@@ -47,7 +45,20 @@ export async function GET(request: NextRequest) {
let billingData
if (context === 'user') {
// Get user billing and billing blocked status in parallel
if (contextId) {
const membership = await db
.select({ role: member.role })
.from(member)
.where(and(eq(member.organizationId, contextId), eq(member.userId, session.user.id)))
.limit(1)
if (membership.length === 0) {
return NextResponse.json(
{ error: 'Access denied - not a member of this organization' },
{ status: 403 }
)
}
}
const [billingResult, billingStatus] = await Promise.all([
getSimplifiedBillingSummary(session.user.id, contextId || undefined),
getEffectiveBillingStatus(session.user.id),
@@ -107,7 +118,6 @@ export async function GET(request: NextRequest) {
)
}
// Transform data to match component expectations
billingData = {
organizationId: rawBillingData.organizationId,
organizationName: rawBillingData.organizationName,
@@ -122,17 +132,10 @@ export async function GET(request: NextRequest) {
averageUsagePerMember: rawBillingData.averageUsagePerMember,
billingPeriodStart: rawBillingData.billingPeriodStart?.toISOString() || null,
billingPeriodEnd: rawBillingData.billingPeriodEnd?.toISOString() || null,
tierCredits: getPlanTierCredits(rawBillingData.subscriptionPlan),
totalCurrentUsageCredits: dollarsToCredits(rawBillingData.totalCurrentUsage),
totalUsageLimitCredits: dollarsToCredits(rawBillingData.totalUsageLimit),
minimumBillingAmountCredits: dollarsToCredits(rawBillingData.minimumBillingAmount),
averageUsagePerMemberCredits: dollarsToCredits(rawBillingData.averageUsagePerMember),
members: rawBillingData.members.map((m) => ({
...m,
joinedAt: m.joinedAt.toISOString(),
lastActive: m.lastActive?.toISOString() || null,
currentUsageCredits: dollarsToCredits(m.currentUsage),
usageLimitCredits: dollarsToCredits(m.usageLimit),
})),
}

View File

@@ -1,6 +1,7 @@
import { db } from '@sim/db'
import { subscription as subscriptionTable } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { toError } from '@sim/utils/errors'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
@@ -9,12 +10,13 @@ import { getEffectiveBillingStatus } from '@/lib/billing/core/access'
import { isOrganizationOwnerOrAdmin } from '@/lib/billing/core/organization'
import { getHighestPrioritySubscription } from '@/lib/billing/core/plan'
import { writeBillingInterval } from '@/lib/billing/core/subscription'
import { getPlanType, isEnterprise, isOrgPlan } from '@/lib/billing/plan-helpers'
import { getPlanType, isEnterprise } from '@/lib/billing/plan-helpers'
import { getPlanByName } from '@/lib/billing/plans'
import { requireStripeClient } from '@/lib/billing/stripe-client'
import {
hasUsableSubscriptionAccess,
hasUsableSubscriptionStatus,
isOrgScopedSubscription,
} from '@/lib/billing/subscriptions/utils'
import { isBillingEnabled } from '@/lib/core/config/feature-flags'
import { captureServerEvent } from '@/lib/posthog/server'
@@ -92,7 +94,7 @@ export async function POST(request: NextRequest) {
)
}
if (isOrgPlan(sub.plan)) {
if (isOrgScopedSubscription(sub, userId)) {
const hasPermission = await isOrganizationOwnerOrAdmin(userId, sub.referenceId)
if (!hasPermission) {
return NextResponse.json({ error: 'Only team admins can change the plan' }, { status: 403 })
@@ -185,7 +187,7 @@ export async function POST(request: NextRequest) {
} catch (error) {
logger.error('Failed to switch subscription', {
userId: session?.user?.id,
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
})
return NextResponse.json(
{ error: error instanceof Error ? error.message : 'Failed to switch plan' },

View File

@@ -1,4 +1,5 @@
import { createLogger } from '@sim/logger'
import { toError } from '@sim/utils/errors'
import { sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
@@ -170,7 +171,7 @@ export async function POST(req: NextRequest) {
const duration = Date.now() - startTime
logger.error(`[${requestId}] Cost update failed`, {
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
stack: error instanceof Error ? error.stack : undefined,
duration,
})
@@ -180,7 +181,7 @@ export async function POST(req: NextRequest) {
.release(claim.normalizedKey, claim.storageMethod)
.catch((releaseErr) => {
logger.warn(`[${requestId}] Failed to release idempotency claim`, {
error: releaseErr instanceof Error ? releaseErr.message : String(releaseErr),
error: toError(releaseErr).message,
normalizedKey: claim?.normalizedKey,
})
})

View File

@@ -3,6 +3,13 @@
*
* @vitest-environment node
*/
import {
redisConfigMock,
redisConfigMockFns,
schemaMock,
workflowsApiUtilsMock,
workflowsApiUtilsMockFns,
} from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
@@ -12,7 +19,6 @@ const {
mockRedisDel,
mockRedisTtl,
mockRedisEval,
mockGetRedisClient,
mockRedisClient,
mockDbSelect,
mockDbInsert,
@@ -21,8 +27,6 @@ const {
mockSendEmail,
mockRenderOTPEmail,
mockAddCorsHeaders,
mockCreateSuccessResponse,
mockCreateErrorResponse,
mockSetChatAuthCookie,
mockGenerateRequestId,
mockGetStorageMethod,
@@ -41,7 +45,6 @@ const {
ttl: mockRedisTtl,
eval: mockRedisEval,
}
const mockGetRedisClient = vi.fn()
const mockDbSelect = vi.fn()
const mockDbInsert = vi.fn()
const mockDbDelete = vi.fn()
@@ -49,8 +52,6 @@ const {
const mockSendEmail = vi.fn()
const mockRenderOTPEmail = vi.fn()
const mockAddCorsHeaders = vi.fn()
const mockCreateSuccessResponse = vi.fn()
const mockCreateErrorResponse = vi.fn()
const mockSetChatAuthCookie = vi.fn()
const mockGenerateRequestId = vi.fn()
const mockGetStorageMethod = vi.fn()
@@ -63,7 +64,6 @@ const {
mockRedisDel,
mockRedisTtl,
mockRedisEval,
mockGetRedisClient,
mockRedisClient,
mockDbSelect,
mockDbInsert,
@@ -72,8 +72,6 @@ const {
mockSendEmail,
mockRenderOTPEmail,
mockAddCorsHeaders,
mockCreateSuccessResponse,
mockCreateErrorResponse,
mockSetChatAuthCookie,
mockGenerateRequestId,
mockGetStorageMethod,
@@ -82,9 +80,11 @@ const {
}
})
vi.mock('@/lib/core/config/redis', () => ({
getRedisClient: mockGetRedisClient,
}))
const mockGetRedisClient = redisConfigMockFns.mockGetRedisClient
const mockCreateSuccessResponse = workflowsApiUtilsMockFns.mockCreateSuccessResponse
const mockCreateErrorResponse = workflowsApiUtilsMockFns.mockCreateErrorResponse
vi.mock('@/lib/core/config/redis', () => redisConfigMock)
vi.mock('@sim/db', () => ({
db: {
@@ -103,25 +103,7 @@ vi.mock('@sim/db', () => ({
},
}))
vi.mock('@sim/db/schema', () => ({
chat: {
id: 'id',
identifier: 'identifier',
authType: 'authType',
allowedEmails: 'allowedEmails',
title: 'title',
isActive: 'isActive',
archivedAt: 'archivedAt',
},
verification: {
id: 'id',
identifier: 'identifier',
value: 'value',
expiresAt: 'expiresAt',
createdAt: 'createdAt',
updatedAt: 'updatedAt',
},
}))
vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('drizzle-orm', () => ({
eq: vi.fn((field: string, value: string) => ({ field, value, type: 'eq' })),
@@ -160,19 +142,7 @@ vi.mock('@/app/api/chat/utils', () => ({
setChatAuthCookie: mockSetChatAuthCookie,
}))
vi.mock('@/app/api/workflows/utils', () => ({
createSuccessResponse: mockCreateSuccessResponse,
createErrorResponse: mockCreateErrorResponse,
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue({
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn(),
}),
}))
vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)
vi.mock('@/lib/core/config/env', () => ({
env: {

View File

@@ -2,6 +2,7 @@ import { randomInt } from 'crypto'
import { db } from '@sim/db'
import { chat, verification } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { generateId } from '@sim/utils/id'
import { and, eq, gt, isNull } from 'drizzle-orm'
import type { NextRequest } from 'next/server'
import { z } from 'zod'
@@ -10,7 +11,6 @@ import { getRedisClient } from '@/lib/core/config/redis'
import { addCorsHeaders, isEmailAllowed } from '@/lib/core/security/deployment'
import { getStorageMethod } from '@/lib/core/storage'
import { generateRequestId } from '@/lib/core/utils/request'
import { generateId } from '@/lib/core/utils/uuid'
import { sendEmail } from '@/lib/messaging/email/mailer'
import { setChatAuthCookie } from '@/app/api/chat/utils'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'

View File

@@ -3,8 +3,16 @@
*
* @vitest-environment node
*/
import { loggerMock, requestUtilsMock } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import {
encryptionMock,
executionPreprocessingMock,
executionPreprocessingMockFns,
loggingSessionMock,
requestUtilsMock,
workflowsApiUtilsMock,
workflowsApiUtilsMockFns,
} from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
/**
* Creates a mock NextRequest with cookies support for testing.
@@ -57,30 +65,17 @@ const {
mockValidateChatAuth,
mockSetChatAuthCookie,
mockValidateAuthToken,
mockCreateErrorResponse,
mockCreateSuccessResponse,
} = vi.hoisted(() => ({
mockDbSelect: vi.fn(),
mockAddCorsHeaders: vi.fn().mockImplementation((response: Response) => response),
mockValidateChatAuth: vi.fn().mockResolvedValue({ authorized: true }),
mockSetChatAuthCookie: vi.fn(),
mockValidateAuthToken: vi.fn().mockReturnValue(false),
mockCreateErrorResponse: vi
.fn()
.mockImplementation((message: string, status: number, code?: string) => {
return new Response(
JSON.stringify({
error: code || 'Error',
message,
}),
{ status }
)
}),
mockCreateSuccessResponse: vi.fn().mockImplementation((data: unknown) => {
return new Response(JSON.stringify(data), { status: 200 })
}),
}))
const mockCreateErrorResponse = workflowsApiUtilsMockFns.mockCreateErrorResponse
const mockCreateSuccessResponse = workflowsApiUtilsMockFns.mockCreateSuccessResponse
vi.mock('@sim/db', () => ({
db: { select: mockDbSelect },
chat: {},
@@ -99,42 +94,11 @@ vi.mock('@/app/api/chat/utils', () => ({
setChatAuthCookie: mockSetChatAuthCookie,
}))
vi.mock('@sim/logger', () => loggerMock)
vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)
vi.mock('@/app/api/workflows/utils', () => ({
createErrorResponse: mockCreateErrorResponse,
createSuccessResponse: mockCreateSuccessResponse,
}))
vi.mock('@/lib/execution/preprocessing', () => executionPreprocessingMock)
vi.mock('@/lib/execution/preprocessing', () => ({
preprocessExecution: vi.fn().mockResolvedValue({
success: true,
actorUserId: 'test-user-id',
workflowRecord: {
id: 'test-workflow-id',
userId: 'test-user-id',
isDeployed: true,
workspaceId: 'test-workspace-id',
variables: {},
},
userSubscription: {
plan: 'pro',
status: 'active',
},
rateLimitInfo: {
allowed: true,
remaining: 100,
resetAt: new Date(),
},
}),
}))
vi.mock('@/lib/logs/execution/logging-session', () => ({
LoggingSession: vi.fn().mockImplementation(() => ({
safeStart: vi.fn().mockResolvedValue(undefined),
safeCompleteWithError: vi.fn().mockResolvedValue(undefined),
})),
}))
vi.mock('@/lib/logs/execution/logging-session', () => loggingSessionMock)
vi.mock('@/lib/workflows/streaming/streaming', () => ({
createStreamingResponse: vi.fn().mockImplementation(async () => createMockStream()),
@@ -155,9 +119,7 @@ vi.mock('@/lib/core/utils/sse', () => ({
vi.mock('@/lib/core/utils/request', () => requestUtilsMock)
vi.mock('@/lib/core/security/encryption', () => ({
decryptSecret: vi.fn().mockResolvedValue({ decrypted: 'test-password' }),
}))
vi.mock('@/lib/core/security/encryption', () => encryptionMock)
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { createStreamingResponse } from '@/lib/workflows/streaming/streaming'
@@ -202,6 +164,27 @@ describe('Chat Identifier API Route', () => {
beforeEach(() => {
vi.clearAllMocks()
executionPreprocessingMockFns.mockPreprocessExecution.mockResolvedValue({
success: true,
actorUserId: 'test-user-id',
workflowRecord: {
id: 'test-workflow-id',
userId: 'test-user-id',
isDeployed: true,
workspaceId: 'test-workspace-id',
variables: {},
},
userSubscription: {
plan: 'pro',
status: 'active',
},
rateLimitInfo: {
allowed: true,
remaining: 100,
resetAt: new Date(),
},
})
mockAddCorsHeaders.mockImplementation((response: Response) => response)
mockValidateChatAuth.mockResolvedValue({ authorized: true })
mockValidateAuthToken.mockReturnValue(false)
@@ -238,10 +221,6 @@ describe('Chat Identifier API Route', () => {
})
})
afterEach(() => {
vi.clearAllMocks()
})
describe('GET endpoint', () => {
it('should return chat info for a valid identifier', async () => {
const req = createMockNextRequest('GET')

View File

@@ -1,12 +1,12 @@
import { db } from '@sim/db'
import { chat, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { generateId } from '@sim/utils/id'
import { and, eq, isNull } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { addCorsHeaders, validateAuthToken } from '@/lib/core/security/deployment'
import { generateRequestId } from '@/lib/core/utils/request'
import { generateId } from '@/lib/core/utils/uuid'
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { LoggingSession } from '@/lib/logs/execution/logging-session'
import { ChatFiles } from '@/lib/uploads'

View File

@@ -3,52 +3,43 @@
*
* @vitest-environment node
*/
import { auditMock } from '@sim/testing'
import {
auditMock,
authMock,
authMockFns,
encryptionMock,
encryptionMockFns,
schemaMock,
workflowsApiUtilsMock,
workflowsApiUtilsMockFns,
workflowsOrchestrationMock,
workflowsOrchestrationMockFns,
workflowsPersistenceUtilsMock,
workflowsPersistenceUtilsMockFns,
} from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockGetSession,
mockSelect,
mockFrom,
mockWhere,
mockLimit,
mockUpdate,
mockSet,
mockCreateSuccessResponse,
mockCreateErrorResponse,
mockEncryptSecret,
mockCheckChatAccess,
mockDeployWorkflow,
mockPerformChatUndeploy,
mockLogger,
} = vi.hoisted(() => {
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockGetSession: vi.fn(),
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockLimit: vi.fn(),
mockUpdate: vi.fn(),
mockSet: vi.fn(),
mockCreateSuccessResponse: vi.fn(),
mockCreateErrorResponse: vi.fn(),
mockEncryptSecret: vi.fn(),
mockCheckChatAccess: vi.fn(),
mockDeployWorkflow: vi.fn(),
mockPerformChatUndeploy: vi.fn(),
mockLogger: logger,
}
})
const { mockSelect, mockFrom, mockWhere, mockLimit, mockUpdate, mockSet, mockCheckChatAccess } =
vi.hoisted(() => {
return {
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockLimit: vi.fn(),
mockUpdate: vi.fn(),
mockSet: vi.fn(),
mockCheckChatAccess: vi.fn(),
}
})
const mockCreateSuccessResponse = workflowsApiUtilsMockFns.mockCreateSuccessResponse
const mockCreateErrorResponse = workflowsApiUtilsMockFns.mockCreateErrorResponse
const mockEncryptSecret = encryptionMockFns.mockEncryptSecret
const mockDeployWorkflow = workflowsPersistenceUtilsMockFns.mockDeployWorkflow
const mockPerformChatUndeploy = workflowsOrchestrationMockFns.mockPerformChatUndeploy
const mockNotifySocketDeploymentChanged =
workflowsOrchestrationMockFns.mockNotifySocketDeploymentChanged
vi.mock('@/lib/audit/log', () => auditMock)
vi.mock('@/lib/core/config/feature-flags', () => ({
@@ -56,40 +47,24 @@ vi.mock('@/lib/core/config/feature-flags', () => ({
isHosted: false,
isProd: false,
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.mock('@/lib/auth', () => authMock)
vi.mock('@sim/db', () => ({
db: {
select: mockSelect,
update: mockUpdate,
},
}))
vi.mock('@sim/db/schema', () => ({
chat: { id: 'id', identifier: 'identifier', userId: 'userId', archivedAt: 'archivedAt' },
}))
vi.mock('@/app/api/workflows/utils', () => ({
createSuccessResponse: mockCreateSuccessResponse,
createErrorResponse: mockCreateErrorResponse,
}))
vi.mock('@/lib/core/security/encryption', () => ({
encryptSecret: mockEncryptSecret,
}))
vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)
vi.mock('@/lib/core/security/encryption', () => encryptionMock)
vi.mock('@/lib/core/utils/urls', () => ({
getEmailDomain: vi.fn().mockReturnValue('localhost:3000'),
}))
vi.mock('@/app/api/chat/utils', () => ({
checkChatAccess: mockCheckChatAccess,
}))
vi.mock('@/lib/workflows/persistence/utils', () => ({
deployWorkflow: mockDeployWorkflow,
}))
vi.mock('@/lib/workflows/orchestration', () => ({
performChatUndeploy: mockPerformChatUndeploy,
}))
vi.mock('@/lib/workflows/persistence/utils', () => workflowsPersistenceUtilsMock)
vi.mock('@/lib/workflows/orchestration', () => workflowsOrchestrationMock)
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ type: 'and', conditions })),
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
@@ -125,15 +100,12 @@ describe('Chat Edit API Route', () => {
mockEncryptSecret.mockResolvedValue({ encrypted: 'encrypted-password' })
mockDeployWorkflow.mockResolvedValue({ success: true, version: 1 })
})
afterEach(() => {
vi.clearAllMocks()
mockNotifySocketDeploymentChanged.mockResolvedValue(undefined)
})
describe('GET', () => {
it('should return 401 when user is not authenticated', async () => {
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
const req = new NextRequest('http://localhost:3000/api/chat/manage/chat-123')
const response = await GET(req, { params: Promise.resolve({ id: 'chat-123' }) })
@@ -144,7 +116,7 @@ describe('Chat Edit API Route', () => {
})
it('should return 404 when chat not found or access denied', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -160,7 +132,7 @@ describe('Chat Edit API Route', () => {
})
it('should return chat details when user has access', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -190,7 +162,7 @@ describe('Chat Edit API Route', () => {
describe('PATCH', () => {
it('should return 401 when user is not authenticated', async () => {
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
const req = new NextRequest('http://localhost:3000/api/chat/manage/chat-123', {
method: 'PATCH',
@@ -204,7 +176,7 @@ describe('Chat Edit API Route', () => {
})
it('should return 404 when chat not found or access denied', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -223,7 +195,7 @@ describe('Chat Edit API Route', () => {
})
it('should update chat when user has access', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -256,7 +228,7 @@ describe('Chat Edit API Route', () => {
})
it('should handle identifier conflicts', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -285,7 +257,7 @@ describe('Chat Edit API Route', () => {
})
it('should validate password requirement for password auth', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -312,7 +284,7 @@ describe('Chat Edit API Route', () => {
})
it('should keep the existing password when updating a password-protected chat', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -352,7 +324,7 @@ describe('Chat Edit API Route', () => {
})
it('should allow access when user has workspace admin permission', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'admin-user-id' },
})
@@ -383,7 +355,7 @@ describe('Chat Edit API Route', () => {
describe('DELETE', () => {
it('should return 401 when user is not authenticated', async () => {
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
const req = new NextRequest('http://localhost:3000/api/chat/manage/chat-123', {
method: 'DELETE',
@@ -396,7 +368,7 @@ describe('Chat Edit API Route', () => {
})
it('should return 404 when chat not found or access denied', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -414,7 +386,7 @@ describe('Chat Edit API Route', () => {
})
it('should delete chat when user has access', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -440,7 +412,7 @@ describe('Chat Edit API Route', () => {
})
it('should allow deletion when user has workspace admin permission', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'admin-user-id' },
})

View File

@@ -9,7 +9,7 @@ import { getSession } from '@/lib/auth'
import { isDev } from '@/lib/core/config/feature-flags'
import { encryptSecret } from '@/lib/core/security/encryption'
import { getEmailDomain } from '@/lib/core/utils/urls'
import { performChatUndeploy } from '@/lib/workflows/orchestration'
import { notifySocketDeploymentChanged, performChatUndeploy } from '@/lib/workflows/orchestration'
import { deployWorkflow } from '@/lib/workflows/persistence/utils'
import { checkChatAccess } from '@/app/api/chat/utils'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
@@ -155,6 +155,7 @@ export async function PATCH(request: NextRequest, { params }: { params: Promise<
logger.info(
`Redeployed workflow ${existingChat[0].workflowId} for chat update (v${deployResult.version})`
)
await notifySocketDeploymentChanged(existingChat[0].workflowId)
}
let encryptedPassword


@@ -3,31 +3,31 @@
*
* @vitest-environment node
*/
import { createEnvMock } from '@sim/testing'
import {
authMock,
authMockFns,
createEnvMock,
schemaMock,
workflowsApiUtilsMock,
workflowsApiUtilsMockFns,
workflowsOrchestrationMock,
workflowsOrchestrationMockFns,
} from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockSelect,
mockFrom,
mockWhere,
mockLimit,
mockCreateSuccessResponse,
mockCreateErrorResponse,
mockCheckWorkflowAccessForChatCreation,
mockPerformChatDeploy,
mockGetSession,
} = vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockLimit: vi.fn(),
mockCreateSuccessResponse: vi.fn(),
mockCreateErrorResponse: vi.fn(),
mockCheckWorkflowAccessForChatCreation: vi.fn(),
mockPerformChatDeploy: vi.fn(),
mockGetSession: vi.fn(),
}))
const { mockSelect, mockFrom, mockWhere, mockLimit, mockCheckWorkflowAccessForChatCreation } =
vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockLimit: vi.fn(),
mockCheckWorkflowAccessForChatCreation: vi.fn(),
}))
const mockCreateSuccessResponse = workflowsApiUtilsMockFns.mockCreateSuccessResponse
const mockCreateErrorResponse = workflowsApiUtilsMockFns.mockCreateErrorResponse
const mockPerformChatDeploy = workflowsOrchestrationMockFns.mockPerformChatDeploy
vi.mock('@sim/db', () => ({
db: {
@@ -35,10 +35,7 @@ vi.mock('@sim/db', () => ({
},
}))
vi.mock('@sim/db/schema', () => ({
chat: { userId: 'userId', identifier: 'identifier', archivedAt: 'archivedAt' },
workflow: { id: 'id', userId: 'userId', isDeployed: 'isDeployed' },
}))
vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ type: 'and', conditions })),
@@ -46,22 +43,15 @@ vi.mock('drizzle-orm', () => ({
isNull: vi.fn((field: unknown) => ({ type: 'isNull', field })),
}))
vi.mock('@/app/api/workflows/utils', () => ({
createSuccessResponse: mockCreateSuccessResponse,
createErrorResponse: mockCreateErrorResponse,
}))
vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)
vi.mock('@/app/api/chat/utils', () => ({
checkWorkflowAccessForChatCreation: mockCheckWorkflowAccessForChatCreation,
}))
vi.mock('@/lib/workflows/orchestration', () => ({
performChatDeploy: mockPerformChatDeploy,
}))
vi.mock('@/lib/workflows/orchestration', () => workflowsOrchestrationMock)
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)
vi.mock('@/lib/core/config/env', () =>
createEnvMock({
@@ -101,13 +91,9 @@ describe('Chat API Route', () => {
})
})
afterEach(() => {
vi.clearAllMocks()
})
describe('GET', () => {
it('should return 401 when user is not authenticated', async () => {
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
const req = new NextRequest('http://localhost:3000/api/chat')
const response = await GET(req)
@@ -117,7 +103,7 @@ describe('Chat API Route', () => {
})
it('should return chat deployments for authenticated user', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -133,7 +119,7 @@ describe('Chat API Route', () => {
})
it('should handle errors when fetching deployments', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -149,7 +135,7 @@ describe('Chat API Route', () => {
describe('POST', () => {
it('should return 401 when user is not authenticated', async () => {
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
const req = new NextRequest('http://localhost:3000/api/chat', {
method: 'POST',
@@ -162,7 +148,7 @@ describe('Chat API Route', () => {
})
it('should validate request data', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -178,7 +164,7 @@ describe('Chat API Route', () => {
})
it('should reject if identifier already exists', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -206,7 +192,7 @@ describe('Chat API Route', () => {
})
it('should reject if workflow not found', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -237,7 +223,7 @@ describe('Chat API Route', () => {
})
it('should allow chat deployment when user owns workflow directly', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },
})
@@ -275,7 +261,7 @@ describe('Chat API Route', () => {
})
it('passes chat customizations and outputConfigs through in the API request shape', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },
})
@@ -319,7 +305,7 @@ describe('Chat API Route', () => {
})
it('should allow chat deployment when user has workspace admin permission', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },
})
@@ -356,7 +342,7 @@ describe('Chat API Route', () => {
})
it('should reject when workflow is in workspace but user lacks admin permission', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -390,7 +376,7 @@ describe('Chat API Route', () => {
})
it('should handle workspace permission check errors gracefully', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
@@ -418,7 +404,7 @@ describe('Chat API Route', () => {
})
it('should call performChatDeploy for undeployed workflow', async () => {
mockGetSession.mockResolvedValue({
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },
})


@@ -3,12 +3,17 @@
*
* @vitest-environment node
*/
import { databaseMock, loggerMock, requestUtilsMock } from '@sim/testing'
import {
encryptionMock,
encryptionMockFns,
loggingSessionMock,
requestUtilsMock,
workflowsUtilsMock,
} from '@sim/testing'
import type { NextResponse } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockDecryptSecret,
mockMergeSubblockStateWithValues,
mockMergeSubBlockValues,
mockValidateAuthToken,
@@ -16,7 +21,6 @@ const {
mockAddCorsHeaders,
mockIsEmailAllowed,
} = vi.hoisted(() => ({
mockDecryptSecret: vi.fn(),
mockMergeSubblockStateWithValues: vi.fn().mockReturnValue({}),
mockMergeSubBlockValues: vi.fn().mockReturnValue({}),
mockValidateAuthToken: vi.fn().mockReturnValue(false),
@@ -25,16 +29,9 @@ const {
mockIsEmailAllowed: vi.fn(),
}))
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
const mockDecryptSecret = encryptionMockFns.mockDecryptSecret
vi.mock('@/lib/logs/execution/logging-session', () => ({
LoggingSession: vi.fn().mockImplementation(() => ({
safeStart: vi.fn().mockResolvedValue(undefined),
safeComplete: vi.fn().mockResolvedValue(undefined),
safeCompleteWithError: vi.fn().mockResolvedValue(undefined),
})),
}))
vi.mock('@/lib/logs/execution/logging-session', () => loggingSessionMock)
vi.mock('@/executor', () => ({
Executor: vi.fn(),
@@ -49,9 +46,7 @@ vi.mock('@/lib/workflows/subblocks', () => ({
mergeSubBlockValues: mockMergeSubBlockValues,
}))
vi.mock('@/lib/core/security/encryption', () => ({
decryptSecret: mockDecryptSecret,
}))
vi.mock('@/lib/core/security/encryption', () => encryptionMock)
vi.mock('@/lib/core/utils/request', () => requestUtilsMock)
@@ -68,9 +63,7 @@ vi.mock('@/lib/core/config/feature-flags', () => ({
isProd: false,
}))
vi.mock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: vi.fn(),
}))
vi.mock('@/lib/workflows/utils', () => workflowsUtilsMock)
import { decryptSecret } from '@/lib/core/security/encryption'
import { setChatAuthCookie, validateChatAuth } from '@/app/api/chat/utils'


@@ -3,33 +3,22 @@
*
* @vitest-environment node
*/
import { authMock, authMockFns, createEnvMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const { mockGetSession, mockFetch } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
const { mockFetch } = vi.hoisted(() => ({
mockFetch: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)
vi.mock('@/lib/copilot/constants', () => ({
SIM_AGENT_API_URL_DEFAULT: 'https://agent.sim.example.com',
SIM_AGENT_API_URL: 'https://agent.sim.example.com',
}))
vi.mock('@/lib/core/config/env', () => ({
env: {
COPILOT_API_KEY: 'test-api-key',
},
getEnv: vi.fn(),
isTruthy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value.toLowerCase() === 'true' || value === '1' : Boolean(value),
isFalsy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value.toLowerCase() === 'false' || value === '0' : value === false,
}))
vi.mock('@/lib/core/config/env', () => createEnvMock({ COPILOT_API_KEY: 'test-api-key' }))
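The removed inline env mock above spells out the `isTruthy`/`isFalsy` semantics that each test file previously duplicated. A plausible sketch of what `createEnvMock` centralizes (the exact signature is assumed; the predicate bodies are copied from the deleted mock):

```typescript
// Sketch of @sim/testing's createEnvMock: merge per-test overrides into the
// env module shape while keeping one canonical isTruthy/isFalsy.
type EnvValue = string | boolean | number | undefined

function createEnvMock(overrides: Record<string, EnvValue> = {}) {
  return {
    env: { ...overrides },
    getEnv: (key: string) => overrides[key],
    // Same semantics as the inline mocks this helper replaces.
    isTruthy: (value: EnvValue) =>
      typeof value === 'string' ? value.toLowerCase() === 'true' || value === '1' : Boolean(value),
    isFalsy: (value: EnvValue) =>
      typeof value === 'string' ? value.toLowerCase() === 'false' || value === '0' : value === false,
  }
}
```

With this shape, the module factory collapses to `vi.mock('@/lib/core/config/env', () => createEnvMock({ COPILOT_API_KEY: 'test-api-key' }))` and the predicate logic lives in exactly one place.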
import { DELETE, GET } from '@/app/api/copilot/api-keys/route'
@@ -41,7 +30,7 @@ describe('Copilot API Keys API Route', () => {
describe('GET', () => {
it('should return 401 when user is not authenticated', async () => {
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys')
const response = await GET(request)
@@ -52,7 +41,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should return list of API keys with masked values', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
const mockApiKeys = [
{
@@ -90,7 +81,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should return empty array when user has no API keys', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
mockFetch.mockResolvedValueOnce({
ok: true,
@@ -106,7 +99,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should forward userId to Sim Agent', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
mockFetch.mockResolvedValueOnce({
ok: true,
@@ -130,7 +125,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should return error when Sim Agent returns non-ok response', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
mockFetch.mockResolvedValueOnce({
ok: false,
@@ -147,7 +144,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should return 500 when Sim Agent returns invalid response', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
mockFetch.mockResolvedValueOnce({
ok: true,
@@ -163,7 +162,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should handle network errors gracefully', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
mockFetch.mockRejectedValueOnce(new Error('Network error'))
@@ -176,7 +177,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should handle API keys with empty apiKey string', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
const mockApiKeys = [
{
@@ -202,7 +205,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should handle JSON parsing errors from Sim Agent', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
mockFetch.mockResolvedValueOnce({
ok: true,
@@ -220,7 +225,7 @@ describe('Copilot API Keys API Route', () => {
describe('DELETE', () => {
it('should return 401 when user is not authenticated', async () => {
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys?id=key-123')
const response = await DELETE(request)
@@ -231,7 +236,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should return 400 when id parameter is missing', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys')
const response = await DELETE(request)
@@ -242,7 +249,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should successfully delete an API key', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
mockFetch.mockResolvedValueOnce({
ok: true,
@@ -270,7 +279,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should return error when Sim Agent returns non-ok response', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
mockFetch.mockResolvedValueOnce({
ok: false,
@@ -287,7 +298,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should return 500 when Sim Agent returns invalid response', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
mockFetch.mockResolvedValueOnce({
ok: true,
@@ -303,7 +316,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should handle network errors gracefully', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
mockFetch.mockRejectedValueOnce(new Error('Network error'))
@@ -316,7 +331,9 @@ describe('Copilot API Keys API Route', () => {
})
it('should handle JSON parsing errors from Sim Agent on delete', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
authMockFns.mockGetSession.mockResolvedValue({
user: { id: 'user-123', email: 'test@example.com' },
})
mockFetch.mockResolvedValueOnce({
ok: true,


@@ -1,4 +1,5 @@
import { createLogger } from '@sim/logger'
import { toError } from '@sim/utils/errors'
import { NextResponse } from 'next/server'
import { getLatestRunForStream } from '@/lib/copilot/async-runs/repository'
import { SIM_AGENT_API_URL } from '@/lib/copilot/constants'
@@ -20,7 +21,7 @@ export async function POST(request: Request) {
const body = await request.json().catch((err) => {
logger.warn('Abort request body parse failed; continuing with empty object', {
error: err instanceof Error ? err.message : String(err),
error: toError(err).message,
})
return {}
})
@@ -35,7 +36,7 @@ export async function POST(request: Request) {
const run = await getLatestRunForStream(streamId, authenticatedUserId).catch((err) => {
logger.warn('getLatestRunForStream failed while resolving chatId for abort', {
streamId,
error: err instanceof Error ? err.message : String(err),
error: toError(err).message,
})
return null
})
@@ -70,7 +71,7 @@ export async function POST(request: Request) {
} catch (err) {
logger.warn('Explicit abort marker request failed; proceeding with local abort', {
streamId,
error: err instanceof Error ? err.message : String(err),
error: toError(err).message,
})
}
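Each replacement above folds the repeated `err instanceof Error ? err.message : String(err)` ternary into `toError(err).message`. The diff only shows the call sites; a minimal sketch of such a normalizer, consistent with how it is used here:

```typescript
// Sketch of @sim/utils/errors' toError (implementation assumed): normalize
// an unknown caught value into a real Error so .message is always safe.
function toError(value: unknown): Error {
  return value instanceof Error ? value : new Error(String(value))
}
```

This keeps `catch (err: unknown)` handlers one expression long: `logger.warn(..., { error: toError(err).message })`.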


@@ -3,19 +3,17 @@
*
* @vitest-environment node
*/
import { authMock, authMockFns, schemaMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
const { mockDelete, mockWhere, mockGetSession, mockGetAccessibleCopilotChat } = vi.hoisted(() => ({
const { mockDelete, mockWhere, mockGetAccessibleCopilotChat } = vi.hoisted(() => ({
mockDelete: vi.fn(),
mockWhere: vi.fn(),
mockGetSession: vi.fn(),
mockGetAccessibleCopilotChat: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)
vi.mock('@sim/db', () => ({
db: {
@@ -23,13 +21,7 @@ vi.mock('@sim/db', () => ({
},
}))
vi.mock('@sim/db/schema', () => ({
copilotChats: {
id: 'id',
userId: 'userId',
workspaceId: 'workspaceId',
},
}))
vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
@@ -58,7 +50,7 @@ describe('Copilot Chat Delete API Route', () => {
beforeEach(() => {
vi.clearAllMocks()
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
const mockReturning = vi.fn().mockResolvedValue([{ workspaceId: 'ws-1' }])
mockWhere.mockReturnValue({ returning: mockReturning })
@@ -72,7 +64,7 @@ describe('Copilot Chat Delete API Route', () => {
describe('DELETE', () => {
it('should return 401 when user is not authenticated', async () => {
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
const req = createMockRequest('DELETE', {
chatId: 'chat-123',
@@ -86,7 +78,7 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should successfully delete a chat', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('DELETE', {
chatId: 'chat-123',
@@ -103,7 +95,7 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should return 500 for invalid request body - missing chatId', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('DELETE', {})
@@ -115,7 +107,7 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should return 500 for invalid request body - chatId is not a string', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('DELETE', {
chatId: 12345,
@@ -129,7 +121,7 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should handle database errors gracefully', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockWhere.mockRejectedValueOnce(new Error('Database connection failed'))
@@ -145,7 +137,7 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should handle JSON parsing errors in request body', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = new NextRequest('http://localhost:3000/api/copilot/chat/delete', {
method: 'DELETE',
@@ -163,7 +155,7 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should delete chat even if it does not exist (idempotent)', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockGetAccessibleCopilotChat.mockResolvedValueOnce(null)
@@ -179,7 +171,7 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should delete chat with empty string chatId (validation should fail)', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('DELETE', {
chatId: '',


@@ -1,6 +1,7 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { toError } from '@sim/utils/errors'
import { and, desc, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getLatestRunForStream } from '@/lib/copilot/async-runs/repository'
@@ -82,7 +83,7 @@ export async function GET(req: NextRequest) {
logger.warn('Failed to read preview sessions for copilot chat', {
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
})
return []
}),
@@ -90,7 +91,7 @@ export async function GET(req: NextRequest) {
logger.warn('Failed to fetch latest run for copilot chat snapshot', {
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
})
return null
}),
@@ -110,7 +111,7 @@ export async function GET(req: NextRequest) {
logger.warn('Failed to load copilot chat stream snapshot', {
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
})
}
}


@@ -1,11 +1,11 @@
/**
* @vitest-environment node
*/
import { authMock, authMockFns, schemaMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockGetSession,
mockSelect,
mockFrom,
mockWhereSelect,
@@ -17,7 +17,6 @@ const {
mockPublishStatusChanged,
mockSql,
} = vi.hoisted(() => ({
mockGetSession: vi.fn(),
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhereSelect: vi.fn(),
@@ -30,9 +29,7 @@ const {
mockSql: vi.fn((strings: TemplateStringsArray, ...values: unknown[]) => ({ strings, values })),
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)
vi.mock('@sim/db', () => ({
db: {
@@ -41,15 +38,7 @@ vi.mock('@sim/db', () => ({
},
}))
vi.mock('@sim/db/schema', () => ({
copilotChats: {
id: 'id',
userId: 'userId',
workspaceId: 'workspaceId',
messages: 'messages',
conversationId: 'conversationId',
},
}))
vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
@@ -77,7 +66,7 @@ describe('copilot chat stop route', () => {
beforeEach(() => {
vi.clearAllMocks()
mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
mockLimit.mockResolvedValue([
{
@@ -96,7 +85,7 @@ describe('copilot chat stop route', () => {
})
it('returns 401 when unauthenticated', async () => {
mockGetSession.mockResolvedValueOnce(null)
authMockFns.mockGetSession.mockResolvedValueOnce(null)
const response = await POST(
createRequest({


@@ -1,13 +1,13 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { generateId } from '@sim/utils/id'
import { and, eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { normalizeMessage, type PersistedMessage } from '@/lib/copilot/chat/persisted-message'
import { taskPubSub } from '@/lib/copilot/tasks'
import { generateId } from '@/lib/core/utils/uuid'
const logger = createLogger('CopilotChatStopAPI')
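The import above moves `generateId` from the app-local `@/lib/core/utils/uuid` into the extracted `@sim/utils/id` package. The diff does not show its implementation; a minimal sketch under the assumption that it wraps the platform UUID generator:

```typescript
import { randomUUID } from 'node:crypto'

// Hypothetical sketch of @sim/utils/id's generateId — the real package may
// differ (e.g. a shorter or prefixed id format).
function generateId(): string {
  return randomUUID()
}
```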


@@ -2,6 +2,7 @@
* @vitest-environment node
*/
import { copilotHttpMock, copilotHttpMockFns } from '@sim/testing'
import { NextRequest } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import {
@@ -9,19 +10,13 @@ import {
MothershipStreamV1EventType,
} from '@/lib/copilot/generated/mothership-stream-v1'
const {
getLatestRunForStream,
readEvents,
readFilePreviewSessions,
checkForReplayGap,
authenticateCopilotRequestSessionOnly,
} = vi.hoisted(() => ({
getLatestRunForStream: vi.fn(),
readEvents: vi.fn(),
readFilePreviewSessions: vi.fn(),
checkForReplayGap: vi.fn(),
authenticateCopilotRequestSessionOnly: vi.fn(),
}))
const { getLatestRunForStream, readEvents, readFilePreviewSessions, checkForReplayGap } =
vi.hoisted(() => ({
getLatestRunForStream: vi.fn(),
readEvents: vi.fn(),
readFilePreviewSessions: vi.fn(),
checkForReplayGap: vi.fn(),
}))
vi.mock('@/lib/copilot/async-runs/repository', () => ({
getLatestRunForStream,
@@ -48,9 +43,7 @@ vi.mock('@/lib/copilot/request/session', () => ({
},
}))
vi.mock('@/lib/copilot/request/http', () => ({
authenticateCopilotRequestSessionOnly,
}))
vi.mock('@/lib/copilot/request/http', () => copilotHttpMock)
import { GET } from './route'
@@ -72,7 +65,7 @@ async function readAllChunks(response: Response): Promise<string[]> {
describe('copilot chat stream replay route', () => {
beforeEach(() => {
vi.clearAllMocks()
authenticateCopilotRequestSessionOnly.mockResolvedValue({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValue({
userId: 'user-1',
isAuthenticated: true,
})


@@ -1,4 +1,6 @@
import { createLogger } from '@sim/logger'
import { toError } from '@sim/utils/errors'
import { sleep } from '@sim/utils/helpers'
import { type NextRequest, NextResponse } from 'next/server'
import { getLatestRunForStream } from '@/lib/copilot/async-runs/repository'
import {
@@ -97,7 +99,7 @@ export async function GET(request: NextRequest) {
const run = await getLatestRunForStream(streamId, authenticatedUserId).catch((err) => {
logger.warn('Failed to fetch latest run for stream', {
streamId,
error: err instanceof Error ? err.message : String(err),
error: toError(err).message,
})
return null
})
@@ -119,7 +121,7 @@ export async function GET(request: NextRequest) {
readFilePreviewSessions(streamId).catch((error) => {
logger.warn('Failed to read preview sessions for stream batch', {
streamId,
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
})
return []
}),
@@ -235,7 +237,7 @@ export async function GET(request: NextRequest) {
(err) => {
logger.warn('Failed to poll latest run for stream', {
streamId,
error: err instanceof Error ? err.message : String(err),
error: toError(err).message,
})
return null
}
@@ -273,7 +275,7 @@ export async function GET(request: NextRequest) {
break
}
await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS))
await sleep(POLL_INTERVAL_MS)
}
if (!controllerClosed && Date.now() - startTime >= MAX_STREAM_MS) {
emitTerminalIfMissing(MothershipStreamV1CompletionStatus.error, {
@@ -286,7 +288,7 @@ export async function GET(request: NextRequest) {
if (!controllerClosed && !request.signal.aborted) {
logger.warn('Stream replay failed', {
streamId,
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
})
emitTerminalIfMissing(MothershipStreamV1CompletionStatus.error, {
message: 'The stream replay failed before completion.',
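The polling loop above replaces the inlined `new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS))` with `sleep(POLL_INTERVAL_MS)` from `@sim/utils/helpers`. A sketch of that helper, assumed to be the same idiom the route previously spelled out:

```typescript
// Sketch of @sim/utils/helpers' sleep (implementation assumed): resolve
// after ms milliseconds, for use in async polling loops.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms))
}
```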


@@ -3,32 +3,22 @@
*
* @vitest-environment node
*/
import { authMock, authMockFns, schemaMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockSelect,
mockFrom,
mockWhere,
mockLimit,
mockUpdate,
mockSet,
mockUpdateWhere,
mockGetSession,
} = vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockLimit: vi.fn(),
mockUpdate: vi.fn(),
mockSet: vi.fn(),
mockUpdateWhere: vi.fn(),
mockGetSession: vi.fn(),
}))
const { mockSelect, mockFrom, mockWhere, mockLimit, mockUpdate, mockSet, mockUpdateWhere } =
vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockLimit: vi.fn(),
mockUpdate: vi.fn(),
mockSet: vi.fn(),
mockUpdateWhere: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)
vi.mock('@sim/db', () => ({
db: {
@@ -37,14 +27,7 @@ vi.mock('@sim/db', () => ({
},
}))
vi.mock('@sim/db/schema', () => ({
copilotChats: {
id: 'id',
userId: 'userId',
messages: 'messages',
updatedAt: 'updatedAt',
},
}))
vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
@@ -65,7 +48,7 @@ describe('Copilot Chat Update Messages API Route', () => {
beforeEach(() => {
vi.clearAllMocks()
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
@@ -82,7 +65,7 @@ describe('Copilot Chat Update Messages API Route', () => {
describe('POST', () => {
it('should return 401 when user is not authenticated', async () => {
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
const req = createMockRequest('POST', {
chatId: 'chat-123',
@@ -104,7 +87,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should return 400 for invalid request body - missing chatId', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('POST', {
messages: [
@@ -125,7 +108,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should return 400 for invalid request body - missing messages', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('POST', {
chatId: 'chat-123',
@@ -139,7 +122,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should return 400 for invalid message structure - missing required fields', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('POST', {
chatId: 'chat-123',
@@ -158,7 +141,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should return 400 for invalid message role', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('POST', {
chatId: 'chat-123',
@@ -180,7 +163,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should return 404 when chat is not found', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockLimit.mockResolvedValueOnce([])
@@ -204,7 +187,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should return 404 when chat belongs to different user', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockLimit.mockResolvedValueOnce([])
@@ -228,7 +211,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should successfully update chat messages', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const existingChat = {
id: 'chat-123',
@@ -275,7 +258,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should successfully update chat messages with optional fields', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const existingChat = {
id: 'chat-456',
@@ -361,7 +344,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should handle empty messages array', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const existingChat = {
id: 'chat-789',
@@ -391,7 +374,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should handle database errors during chat lookup', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockLimit.mockRejectedValueOnce(new Error('Database connection failed'))
@@ -415,7 +398,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should handle database errors during update operation', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const existingChat = {
id: 'chat-123',
@@ -448,7 +431,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should handle JSON parsing errors in request body', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = new NextRequest('http://localhost:3000/api/copilot/chat/update-messages', {
method: 'POST',
@@ -466,7 +449,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should handle large message arrays', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const existingChat = {
id: 'chat-large',
@@ -503,7 +486,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should handle messages with both user and assistant roles', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const existingChat = {
id: 'chat-mixed',


@@ -3,26 +3,15 @@
*
* @vitest-environment node
*/
import { copilotHttpMock, copilotHttpMockFns, schemaMock } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockSelectDistinctOn,
mockFrom,
mockLeftJoin,
mockWhere,
mockOrderBy,
mockAuthenticate,
mockCreateUnauthorizedResponse,
mockCreateInternalServerErrorResponse,
} = vi.hoisted(() => ({
const { mockSelectDistinctOn, mockFrom, mockLeftJoin, mockWhere, mockOrderBy } = vi.hoisted(() => ({
mockSelectDistinctOn: vi.fn(),
mockFrom: vi.fn(),
mockLeftJoin: vi.fn(),
mockWhere: vi.fn(),
mockOrderBy: vi.fn(),
mockAuthenticate: vi.fn(),
mockCreateUnauthorizedResponse: vi.fn(),
mockCreateInternalServerErrorResponse: vi.fn(),
}))
vi.mock('@sim/db', () => ({
@@ -31,31 +20,7 @@ vi.mock('@sim/db', () => ({
},
}))
vi.mock('@sim/db/schema', () => ({
copilotChats: {
id: 'id',
title: 'title',
workflowId: 'workflowId',
workspaceId: 'workspaceId',
userId: 'userId',
updatedAt: 'updatedAt',
},
workflow: {
id: 'id',
workspaceId: 'workspaceId',
archivedAt: 'archivedAt',
},
workspace: {
id: 'id',
archivedAt: 'archivedAt',
},
permissions: {
id: 'id',
entityType: 'entityType',
entityId: 'entityId',
userId: 'userId',
},
}))
vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
@@ -66,11 +31,7 @@ vi.mock('drizzle-orm', () => ({
sql: vi.fn(),
}))
vi.mock('@/lib/copilot/request/http', () => ({
authenticateCopilotRequestSessionOnly: mockAuthenticate,
createUnauthorizedResponse: mockCreateUnauthorizedResponse,
createInternalServerErrorResponse: mockCreateInternalServerErrorResponse,
}))
vi.mock('@/lib/copilot/request/http', () => copilotHttpMock)
import { GET } from '@/app/api/copilot/chats/route'
@@ -83,13 +44,6 @@ describe('Copilot Chats List API Route', () => {
mockLeftJoin.mockReturnValue({ leftJoin: mockLeftJoin, where: mockWhere })
mockWhere.mockReturnValue({ orderBy: mockOrderBy })
mockOrderBy.mockResolvedValue([])
mockCreateUnauthorizedResponse.mockReturnValue(
new Response(JSON.stringify({ error: 'Unauthorized' }), { status: 401 })
)
mockCreateInternalServerErrorResponse.mockImplementation(
(message: string) => new Response(JSON.stringify({ error: message }), { status: 500 })
)
})
afterEach(() => {
@@ -98,7 +52,7 @@ describe('Copilot Chats List API Route', () => {
describe('GET', () => {
it('should return 401 when user is not authenticated', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: null,
isAuthenticated: false,
})
@@ -112,7 +66,7 @@ describe('Copilot Chats List API Route', () => {
})
it('should return empty chats array when user has no chats', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -131,7 +85,7 @@ describe('Copilot Chats List API Route', () => {
})
it('should return list of chats for authenticated user', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -165,7 +119,7 @@ describe('Copilot Chats List API Route', () => {
})
it('should return chats ordered by updatedAt descending', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -202,7 +156,7 @@ describe('Copilot Chats List API Route', () => {
})
it('should handle chats with null workflowId', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -226,7 +180,7 @@ describe('Copilot Chats List API Route', () => {
})
it('should handle database errors gracefully', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -242,7 +196,7 @@ describe('Copilot Chats List API Route', () => {
})
it('should only return chats belonging to authenticated user', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -265,7 +219,7 @@ describe('Copilot Chats List API Route', () => {
})
it('should return 401 when userId is null despite isAuthenticated being true', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: null,
isAuthenticated: true,
})


@@ -3,6 +3,13 @@
*
* @vitest-environment node
*/
import {
authMock,
authMockFns,
schemaMock,
workflowsUtilsMock,
workflowsUtilsMockFns,
} from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
@@ -13,8 +20,6 @@ const {
mockThen,
mockDelete,
mockDeleteWhere,
mockAuthorize,
mockGetSession,
mockGetAccessibleCopilotChat,
} = vi.hoisted(() => ({
mockSelect: vi.fn(),
@@ -23,14 +28,10 @@ const {
mockThen: vi.fn(),
mockDelete: vi.fn(),
mockDeleteWhere: vi.fn(),
mockAuthorize: vi.fn(),
mockGetSession: vi.fn(),
mockGetAccessibleCopilotChat: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)
vi.mock('@/lib/core/utils/urls', () => ({
getBaseUrl: vi.fn(() => 'http://localhost:3000'),
@@ -39,9 +40,7 @@ vi.mock('@/lib/core/utils/urls', () => ({
getEmailDomain: vi.fn(() => 'localhost:3000'),
}))
vi.mock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: mockAuthorize,
}))
vi.mock('@/lib/workflows/utils', () => workflowsUtilsMock)
vi.mock('@/lib/copilot/chat/lifecycle', () => ({
getAccessibleCopilotChat: mockGetAccessibleCopilotChat,
@@ -54,18 +53,7 @@ vi.mock('@sim/db', () => ({
},
}))
vi.mock('@sim/db/schema', () => ({
workflowCheckpoints: {
id: 'id',
userId: 'userId',
workflowId: 'workflowId',
workflowState: 'workflowState',
},
workflow: {
id: 'id',
userId: 'userId',
},
}))
vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
@@ -83,9 +71,9 @@ describe('Copilot Checkpoints Revert API Route', () => {
thenResults = []
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
mockAuthorize.mockResolvedValue({
workflowsUtilsMockFns.mockAuthorizeWorkflowByWorkspacePermission.mockResolvedValue({
allowed: true,
status: 200,
})
@@ -134,12 +122,12 @@ describe('Copilot Checkpoints Revert API Route', () => {
/** Helper to set authenticated state */
function setAuthenticated(user = { id: 'user-123', email: 'test@example.com' }) {
mockGetSession.mockResolvedValue({ user })
authMockFns.mockGetSession.mockResolvedValue({ user })
}
/** Helper to set unauthenticated state */
function setUnauthenticated() {
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
}
describe('POST', () => {
@@ -273,7 +261,7 @@ describe('Copilot Checkpoints Revert API Route', () => {
thenResults.push(mockCheckpoint) // Checkpoint found
thenResults.push(mockWorkflow) // Workflow found but different user
mockAuthorize.mockResolvedValueOnce({
workflowsUtilsMockFns.mockAuthorizeWorkflowByWorkspacePermission.mockResolvedValueOnce({
allowed: false,
status: 403,
})


@@ -3,6 +3,13 @@
*
* @vitest-environment node
*/
import {
authMock,
authMockFns,
schemaMock,
workflowsUtilsMock,
workflowsUtilsMockFns,
} from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
@@ -15,9 +22,7 @@ const {
mockInsert,
mockValues,
mockReturning,
mockGetSession,
mockGetAccessibleCopilotChat,
mockAuthorizeWorkflowByWorkspacePermission,
} = vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
@@ -27,14 +32,10 @@ const {
mockInsert: vi.fn(),
mockValues: vi.fn(),
mockReturning: vi.fn(),
mockGetSession: vi.fn(),
mockGetAccessibleCopilotChat: vi.fn(),
mockAuthorizeWorkflowByWorkspacePermission: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)
vi.mock('@sim/db', () => ({
db: {
@@ -43,18 +44,7 @@ vi.mock('@sim/db', () => ({
},
}))
vi.mock('@sim/db/schema', () => ({
copilotChats: { id: 'id', userId: 'userId' },
workflowCheckpoints: {
id: 'id',
userId: 'userId',
workflowId: 'workflowId',
chatId: 'chatId',
messageId: 'messageId',
createdAt: 'createdAt',
updatedAt: 'updatedAt',
},
}))
vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
@@ -66,9 +56,7 @@ vi.mock('@/lib/copilot/chat/lifecycle', () => ({
getAccessibleCopilotChat: mockGetAccessibleCopilotChat,
}))
vi.mock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: mockAuthorizeWorkflowByWorkspacePermission,
}))
vi.mock('@/lib/workflows/utils', () => workflowsUtilsMock)
import { GET, POST } from './route'
@@ -84,7 +72,7 @@ describe('Copilot Checkpoints API Route', () => {
beforeEach(() => {
vi.clearAllMocks()
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
@@ -101,7 +89,9 @@ describe('Copilot Checkpoints API Route', () => {
userId: 'user-123',
workflowId: 'workflow-123',
})
mockAuthorizeWorkflowByWorkspacePermission.mockResolvedValue({ allowed: true })
workflowsUtilsMockFns.mockAuthorizeWorkflowByWorkspacePermission.mockResolvedValue({
allowed: true,
})
})
afterEach(() => {
@@ -110,7 +100,7 @@ describe('Copilot Checkpoints API Route', () => {
describe('POST', () => {
it('should return 401 when user is not authenticated', async () => {
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
const req = createMockRequest('POST', {
workflowId: 'workflow-123',
@@ -126,7 +116,7 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should return 500 for invalid request body', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('POST', {
workflowId: 'workflow-123',
@@ -140,7 +130,7 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should return 400 when chat not found or unauthorized', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockGetAccessibleCopilotChat.mockResolvedValueOnce(null)
const req = createMockRequest('POST', {
@@ -157,7 +147,7 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should return 400 for invalid workflow state JSON', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('POST', {
workflowId: 'workflow-123',
@@ -173,7 +163,7 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should successfully create a checkpoint', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const checkpoint = {
id: 'checkpoint-123',
@@ -222,7 +212,7 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should create checkpoint without messageId', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const checkpoint = {
id: 'checkpoint-123',
@@ -251,7 +241,7 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should handle database errors during checkpoint creation', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockReturning.mockRejectedValue(new Error('Database insert failed'))
@@ -269,7 +259,7 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should handle database errors during chat lookup', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockGetAccessibleCopilotChat.mockRejectedValueOnce(new Error('Database query failed'))
@@ -289,7 +279,7 @@ describe('Copilot Checkpoints API Route', () => {
describe('GET', () => {
it('should return 401 when user is not authenticated', async () => {
mockGetSession.mockResolvedValue(null)
authMockFns.mockGetSession.mockResolvedValue(null)
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints?chatId=chat-123')
@@ -301,7 +291,7 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should return 400 when chatId is missing', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints')
@@ -313,7 +303,7 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should return checkpoints for authenticated user and chat', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const mockCheckpoints = [
{
@@ -374,7 +364,7 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should handle database errors when fetching checkpoints', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockOrderBy.mockRejectedValue(new Error('Database query failed'))
@@ -388,7 +378,7 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should return empty array when no checkpoints found', async () => {
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockOrderBy.mockResolvedValue([])


@@ -1,36 +1,17 @@
/**
* @vitest-environment node
*/
import { copilotHttpMock, copilotHttpMockFns } from '@sim/testing'
import { NextRequest } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createNotFoundResponse,
createRequestTracker,
createUnauthorizedResponse,
getAsyncToolCall,
getRunSegment,
upsertAsyncToolCall,
completeAsyncToolCall,
publishToolConfirmation,
} = vi.hoisted(() => ({
authenticateCopilotRequestSessionOnly: vi.fn(),
createBadRequestResponse: vi.fn((message: string) =>
Response.json({ error: message }, { status: 400 })
),
createInternalServerErrorResponse: vi.fn((message: string) =>
Response.json({ error: message }, { status: 500 })
),
createNotFoundResponse: vi.fn((message: string) =>
Response.json({ error: message }, { status: 404 })
),
createRequestTracker: vi.fn(() => ({ requestId: 'req-1', getDuration: () => 1 })),
createUnauthorizedResponse: vi.fn(() =>
Response.json({ error: 'Unauthorized' }, { status: 401 })
),
getAsyncToolCall: vi.fn(),
getRunSegment: vi.fn(),
upsertAsyncToolCall: vi.fn(),
@@ -38,14 +19,7 @@ const {
publishToolConfirmation: vi.fn(),
}))
vi.mock('@/lib/copilot/request/http', () => ({
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createNotFoundResponse,
createRequestTracker,
createUnauthorizedResponse,
}))
vi.mock('@/lib/copilot/request/http', () => copilotHttpMock)
vi.mock('@/lib/copilot/async-runs/repository', () => ({
getAsyncToolCall,
@@ -71,7 +45,7 @@ describe('Copilot Confirm API Route', () => {
beforeEach(() => {
vi.clearAllMocks()
authenticateCopilotRequestSessionOnly.mockResolvedValue({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValue({
userId: 'user-1',
isAuthenticated: true,
})
@@ -90,7 +64,7 @@ describe('Copilot Confirm API Route', () => {
}
it('returns 401 when the session is unauthenticated', async () => {
authenticateCopilotRequestSessionOnly.mockResolvedValue({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValue({
userId: null,
isAuthenticated: false,
})


@@ -1,4 +1,5 @@
import { createLogger } from '@sim/logger'
import { toError } from '@sim/utils/errors'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import {
@@ -106,7 +107,7 @@ async function updateToolCallStatus(
logger.error('Failed to update tool call status', {
toolCallId,
status,
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
})
return false
}
@@ -133,7 +134,7 @@ export async function POST(req: NextRequest) {
const existing = await getAsyncToolCall(toolCallId).catch((err) => {
logger.warn('Failed to fetch async tool call', {
toolCallId,
error: err instanceof Error ? err.message : String(err),
error: toError(err).message,
})
return null
})
@@ -145,7 +146,7 @@ export async function POST(req: NextRequest) {
const run = await getRunSegment(existing.runId).catch((err) => {
logger.warn('Failed to fetch run segment', {
runId: existing.runId,
error: err instanceof Error ? err.message : String(err),
error: toError(err).message,
})
return null
})


@@ -3,34 +3,20 @@
*
* @vitest-environment node
*/
import { copilotHttpMock, copilotHttpMockFns, schemaMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockInsert,
mockValues,
mockReturning,
mockSelect,
mockFrom,
mockWhere,
mockAuthenticate,
mockCreateUnauthorizedResponse,
mockCreateBadRequestResponse,
mockCreateInternalServerErrorResponse,
mockCreateRequestTracker,
} = vi.hoisted(() => ({
mockInsert: vi.fn(),
mockValues: vi.fn(),
mockReturning: vi.fn(),
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockAuthenticate: vi.fn(),
mockCreateUnauthorizedResponse: vi.fn(),
mockCreateBadRequestResponse: vi.fn(),
mockCreateInternalServerErrorResponse: vi.fn(),
mockCreateRequestTracker: vi.fn(),
}))
const { mockInsert, mockValues, mockReturning, mockSelect, mockFrom, mockWhere } = vi.hoisted(
() => ({
mockInsert: vi.fn(),
mockValues: vi.fn(),
mockReturning: vi.fn(),
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
})
)
vi.mock('@sim/db', () => ({
db: {
@@ -39,31 +25,13 @@ vi.mock('@sim/db', () => ({
},
}))
vi.mock('@sim/db/schema', () => ({
copilotFeedback: {
feedbackId: 'feedbackId',
userId: 'userId',
chatId: 'chatId',
userQuery: 'userQuery',
agentResponse: 'agentResponse',
isPositive: 'isPositive',
feedback: 'feedback',
workflowYaml: 'workflowYaml',
createdAt: 'createdAt',
},
}))
vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('drizzle-orm', () => ({
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
}))
vi.mock('@/lib/copilot/request/http', () => ({
authenticateCopilotRequestSessionOnly: mockAuthenticate,
createUnauthorizedResponse: mockCreateUnauthorizedResponse,
createBadRequestResponse: mockCreateBadRequestResponse,
createInternalServerErrorResponse: mockCreateInternalServerErrorResponse,
createRequestTracker: mockCreateRequestTracker,
}))
vi.mock('@/lib/copilot/request/http', () => copilotHttpMock)
import { GET, POST } from '@/app/api/copilot/feedback/route'
@@ -85,20 +53,6 @@ describe('Copilot Feedback API Route', () => {
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
mockWhere.mockResolvedValue([])
mockCreateRequestTracker.mockReturnValue({
requestId: 'test-request-id',
getDuration: vi.fn().mockReturnValue(100),
})
mockCreateUnauthorizedResponse.mockReturnValue(
new Response(JSON.stringify({ error: 'Unauthorized' }), { status: 401 })
)
mockCreateBadRequestResponse.mockImplementation(
(message: string) => new Response(JSON.stringify({ error: message }), { status: 400 })
)
mockCreateInternalServerErrorResponse.mockImplementation(
(message: string) => new Response(JSON.stringify({ error: message }), { status: 500 })
)
})
afterEach(() => {
@@ -107,7 +61,7 @@ describe('Copilot Feedback API Route', () => {
describe('POST', () => {
it('should return 401 when user is not authenticated', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: null,
isAuthenticated: false,
})
@@ -127,7 +81,7 @@ describe('Copilot Feedback API Route', () => {
})
it('should successfully submit positive feedback', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -162,7 +116,7 @@ describe('Copilot Feedback API Route', () => {
})
it('should successfully submit negative feedback with text', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -197,7 +151,7 @@ describe('Copilot Feedback API Route', () => {
})
it('should successfully submit feedback with workflow YAML', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -248,7 +202,7 @@ edges:
})
it('should return 400 for invalid chatId format', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -268,7 +222,7 @@ edges:
})
it('should return 400 for empty userQuery', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -288,7 +242,7 @@ edges:
})
it('should return 400 for empty agentResponse', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -308,7 +262,7 @@ edges:
})
it('should return 400 for missing isPositiveFeedback', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -327,7 +281,7 @@ edges:
})
it('should handle database errors gracefully', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -349,7 +303,7 @@ edges:
})
it('should handle JSON parsing errors in request body', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -370,7 +324,7 @@ edges:
describe('GET', () => {
it('should return 401 when user is not authenticated', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: null,
isAuthenticated: false,
})
@@ -384,7 +338,7 @@ edges:
})
it('should return empty feedback array when no feedback exists', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -401,7 +355,7 @@ edges:
})
it('should only return feedback records for the authenticated user', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -438,7 +392,7 @@ edges:
})
it('should handle database errors gracefully', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -454,7 +408,7 @@ edges:
})
it('should return metadata with response', async () => {
mockAuthenticate.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})


@@ -1,4 +1,5 @@
import { createLogger } from '@sim/logger'
import { toError } from '@sim/utils/errors'
import { type NextRequest, NextResponse } from 'next/server'
import { SIM_AGENT_API_URL } from '@/lib/copilot/constants'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request/http'
@@ -76,7 +77,7 @@ export async function GET(_req: NextRequest) {
return NextResponse.json({ success: true, models })
} catch (error) {
logger.error('Error fetching available models', {
error: error instanceof Error ? error.message : String(error),
error: toError(error).message,
})
return NextResponse.json(
{


@@ -3,54 +3,22 @@
*
* @vitest-environment node
*/
import { createMockRequest } from '@sim/testing'
import { copilotHttpMock, copilotHttpMockFns, createEnvMock, createMockRequest } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockAuthenticateCopilotRequestSessionOnly,
mockCreateUnauthorizedResponse,
mockCreateBadRequestResponse,
mockCreateInternalServerErrorResponse,
mockCreateRequestTracker,
mockFetch,
} = vi.hoisted(() => ({
mockAuthenticateCopilotRequestSessionOnly: vi.fn(),
mockCreateUnauthorizedResponse: vi.fn(),
mockCreateBadRequestResponse: vi.fn(),
mockCreateInternalServerErrorResponse: vi.fn(),
mockCreateRequestTracker: vi.fn(),
const { mockFetch } = vi.hoisted(() => ({
mockFetch: vi.fn(),
}))
vi.mock('@/lib/copilot/request/http', () => ({
authenticateCopilotRequestSessionOnly: mockAuthenticateCopilotRequestSessionOnly,
createUnauthorizedResponse: mockCreateUnauthorizedResponse,
createBadRequestResponse: mockCreateBadRequestResponse,
createInternalServerErrorResponse: mockCreateInternalServerErrorResponse,
createRequestTracker: mockCreateRequestTracker,
}))
vi.mock('@/lib/copilot/request/http', () => copilotHttpMock)
vi.mock('@/lib/copilot/constants', () => ({
SIM_AGENT_API_URL_DEFAULT: 'https://agent.sim.example.com',
SIM_AGENT_API_URL: 'https://agent.sim.example.com',
}))
vi.mock('@/lib/core/config/env', () => ({
env: {
COPILOT_API_KEY: 'test-api-key',
},
getEnv: vi.fn((key: string) => {
const vals: Record<string, string | undefined> = {
COPILOT_API_KEY: 'test-api-key',
}
return vals[key]
}),
isTruthy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value.toLowerCase() === 'true' || value === '1' : Boolean(value),
isFalsy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value.toLowerCase() === 'false' || value === '0' : value === false,
}))
vi.mock('@/lib/core/config/env', () => createEnvMock({ COPILOT_API_KEY: 'test-api-key' }))
import { POST } from '@/app/api/copilot/stats/route'
@@ -58,20 +26,6 @@ describe('Copilot Stats API Route', () => {
beforeEach(() => {
vi.clearAllMocks()
global.fetch = mockFetch
mockCreateUnauthorizedResponse.mockReturnValue(
new Response(JSON.stringify({ error: 'Unauthorized' }), { status: 401 })
)
mockCreateBadRequestResponse.mockImplementation(
(message: string) => new Response(JSON.stringify({ error: message }), { status: 400 })
)
mockCreateInternalServerErrorResponse.mockImplementation(
(message: string) => new Response(JSON.stringify({ error: message }), { status: 500 })
)
mockCreateRequestTracker.mockReturnValue({
requestId: 'test-request-id',
getDuration: vi.fn().mockReturnValue(100),
})
})
afterEach(() => {
@@ -80,7 +34,7 @@ describe('Copilot Stats API Route', () => {
describe('POST', () => {
it('should return 401 when user is not authenticated', async () => {
mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: null,
isAuthenticated: false,
})
@@ -99,7 +53,7 @@ describe('Copilot Stats API Route', () => {
})
it('should successfully forward stats to Sim Agent', async () => {
mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -139,7 +93,7 @@ describe('Copilot Stats API Route', () => {
})
it('should return 400 for invalid request body - missing messageId', async () => {
mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -157,7 +111,7 @@ describe('Copilot Stats API Route', () => {
})
it('should return 400 for invalid request body - missing diffCreated', async () => {
mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -175,7 +129,7 @@ describe('Copilot Stats API Route', () => {
})
it('should return 400 for invalid request body - missing diffAccepted', async () => {
-mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
+copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -193,7 +147,7 @@ describe('Copilot Stats API Route', () => {
})
it('should return 400 when upstream Sim Agent returns error', async () => {
-mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
+copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -217,7 +171,7 @@ describe('Copilot Stats API Route', () => {
})
it('should handle upstream error with message field', async () => {
-mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
+copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -241,7 +195,7 @@ describe('Copilot Stats API Route', () => {
})
it('should handle upstream error with no JSON response', async () => {
-mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
+copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -265,7 +219,7 @@ describe('Copilot Stats API Route', () => {
})
it('should handle network errors gracefully', async () => {
-mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
+copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -286,7 +240,7 @@ describe('Copilot Stats API Route', () => {
})
it('should handle JSON parsing errors in request body', async () => {
-mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
+copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -307,7 +261,7 @@ describe('Copilot Stats API Route', () => {
})
it('should forward stats with diffCreated=false and diffAccepted=false', async () => {
-mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
+copilotHttpMockFns.mockAuthenticateCopilotRequestSessionOnly.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})


@@ -1,12 +1,12 @@
import { db } from '@sim/db'
import { member, templateCreators } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { generateId } from '@sim/utils/id'
import { and, eq, or } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
-import { generateId } from '@/lib/core/utils/uuid'
import type { CreatorProfileDetails } from '@/app/_types/creator-profile'
const logger = createLogger('CreatorProfilesAPI')


@@ -1,6 +1,7 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetInvitation, member, organization, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { generateId } from '@sim/utils/id'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
@@ -9,7 +10,6 @@ import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
import { getBaseUrl } from '@/lib/core/utils/urls'
-import { generateId } from '@/lib/core/utils/uuid'
import { sendEmail } from '@/lib/messaging/email/mailer'
const logger = createLogger('CredentialSetInvite')


@@ -1,12 +1,12 @@
import { db } from '@sim/db'
import { account, credentialSet, credentialSetMember, member, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { generateId } from '@sim/utils/id'
import { and, eq, inArray } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
-import { generateId } from '@/lib/core/utils/uuid'
import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
const logger = createLogger('CredentialSetMembers')


@@ -6,11 +6,11 @@ import {
organization,
} from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { generateId } from '@sim/utils/id'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
-import { generateId } from '@/lib/core/utils/uuid'
import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
const logger = createLogger('CredentialSetInviteToken')


@@ -1,11 +1,11 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetMember, organization } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { generateId } from '@sim/utils/id'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
-import { generateId } from '@/lib/core/utils/uuid'
import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
const logger = createLogger('CredentialSetMemberships')


@@ -1,13 +1,13 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetMember, member, organization, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { generateId } from '@sim/utils/id'
import { and, count, desc, eq } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { z } from 'zod'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
-import { generateId } from '@/lib/core/utils/uuid'
const logger = createLogger('CredentialSets')


@@ -1,11 +1,11 @@
import { db } from '@sim/db'
import { credential, credentialMember, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { generateId } from '@sim/utils/id'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
-import { generateId } from '@/lib/core/utils/uuid'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('CredentialMembersAPI')


@@ -1,13 +1,13 @@
import { db } from '@sim/db'
import { credential, credentialMember, environment, workspaceEnvironment } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { generateId } from '@sim/utils/id'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
import { encryptSecret } from '@/lib/core/security/encryption'
-import { generateId } from '@/lib/core/utils/uuid'
import { getCredentialActorContext } from '@/lib/credentials/access'
import {
syncPersonalEnvCredentialsForUser,


@@ -1,11 +1,11 @@
import { db } from '@sim/db'
import { credential, credentialMember, pendingCredentialDraft } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { generateId } from '@sim/utils/id'
import { and, eq, lt } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
-import { generateId } from '@/lib/core/utils/uuid'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('CredentialDraftAPI')


@@ -1,6 +1,7 @@
import { db } from '@sim/db'
import { account, credential, credentialMember, workspace } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { generateId } from '@sim/utils/id'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
@@ -8,7 +9,6 @@ import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
import { encryptSecret } from '@/lib/core/security/encryption'
import { generateRequestId } from '@/lib/core/utils/request'
-import { generateId } from '@/lib/core/utils/uuid'
import { getWorkspaceMemberUserIds } from '@/lib/credentials/environment'
import { syncWorkspaceOAuthCredentialsForUser } from '@/lib/credentials/oauth'
import { getServiceConfigByProviderId } from '@/lib/oauth'


@@ -1,6 +1,7 @@
import { asyncJobs, db } from '@sim/db'
import { workflowExecutionLogs } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { toError } from '@sim/utils/errors'
import { and, eq, inArray, lt, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { verifyCronAuth } from '@/lib/auth/internal'
@@ -73,7 +74,7 @@ export async function GET(request: NextRequest) {
cleaned++
} catch (error) {
logger.error(`Failed to clean up execution ${execution.executionId}:`, {
-error: error instanceof Error ? error.message : String(error),
+error: toError(error).message,
})
failed++
}
@@ -104,7 +105,7 @@ export async function GET(request: NextRequest) {
}
} catch (error) {
logger.error('Failed to clean up stale async jobs:', {
-error: error instanceof Error ? error.message : String(error),
+error: toError(error).message,
})
}
@@ -131,7 +132,7 @@ export async function GET(request: NextRequest) {
}
} catch (error) {
logger.error('Failed to clean up stale pending jobs:', {
-error: error instanceof Error ? error.message : String(error),
+error: toError(error).message,
})
}
@@ -158,7 +159,7 @@ export async function GET(request: NextRequest) {
}
} catch (error) {
logger.error('Failed to delete old async jobs:', {
-error: error instanceof Error ? error.message : String(error),
+error: toError(error).message,
})
}
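The four hunks above swap the repeated `error instanceof Error ? error.message : String(error)` normalization for a shared `toError` helper from `@sim/utils/errors`. That helper's source is not shown in this diff; a minimal sketch of what it presumably does, matching the inline behavior it replaces:

```typescript
// Hypothetical sketch of @sim/utils/errors' toError (not shown in this diff):
// keep real Error instances as-is, wrap anything else so .message is always safe.
export function toError(value: unknown): Error {
  if (value instanceof Error) return value
  return new Error(String(value))
}
```

With this shape, `toError(error).message` is a drop-in replacement for the old ternary at every `logger.error` call site.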


@@ -1,6 +1,7 @@
import { db } from '@sim/db'
import { environment } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { generateId } from '@sim/utils/id'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
@@ -8,7 +9,6 @@ import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
import { decryptSecret, encryptSecret } from '@/lib/core/security/encryption'
import { generateRequestId } from '@/lib/core/utils/request'
-import { generateId } from '@/lib/core/utils/uuid'
import { syncPersonalEnvCredentialsForUser } from '@/lib/credentials/environment'
import type { EnvironmentVariable } from '@/lib/environment/api'


@@ -1,13 +1,10 @@
/**
* @vitest-environment node
*/
+import { authMock, authMockFns, hybridAuthMock, hybridAuthMockFns, schemaMock } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const mocks = vi.hoisted(() => {
-const mockGetSession = vi.fn()
-const mockCheckHybridAuth = vi.fn()
-const mockCheckSessionOrInternalAuth = vi.fn()
-const mockCheckInternalAuth = vi.fn()
const mockVerifyFileAccess = vi.fn()
const mockVerifyWorkspaceFileAccess = vi.fn()
const mockDeleteFile = vi.fn()
@@ -18,10 +15,6 @@ const mocks = vi.hoisted(() => {
const mockDownloadFile = vi.fn()
return {
-mockGetSession,
-mockCheckHybridAuth,
-mockCheckSessionOrInternalAuth,
-mockCheckInternalAuth,
mockVerifyFileAccess,
mockVerifyWorkspaceFileAccess,
mockDeleteFile,
@@ -33,29 +26,7 @@ const mocks = vi.hoisted(() => {
}
})
-vi.mock('@sim/logger', () => ({
-createLogger: vi.fn().mockReturnValue({
-info: vi.fn(),
-warn: vi.fn(),
-error: vi.fn(),
-debug: vi.fn(),
-}),
-}))
-vi.mock('@sim/db/schema', () => ({
-workflowFolder: {
-id: 'id',
-userId: 'userId',
-parentId: 'parentId',
-updatedAt: 'updatedAt',
-workspaceId: 'workspaceId',
-sortOrder: 'sortOrder',
-createdAt: 'createdAt',
-},
-workflow: { id: 'id', folderId: 'folderId', userId: 'userId', updatedAt: 'updatedAt' },
-account: { userId: 'userId', providerId: 'providerId' },
-user: { email: 'email', id: 'id' },
-}))
+vi.mock('@sim/db/schema', () => schemaMock)
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
@@ -82,7 +53,7 @@ vi.mock('drizzle-orm', () => ({
sql: vi.fn((strings: unknown, ...values: unknown[]) => ({ type: 'sql', sql: strings, values })),
}))
-vi.mock('@/lib/core/utils/uuid', () => ({
+vi.mock('@sim/utils/id', () => ({
generateId: vi.fn(() => 'test-uuid'),
generateShortId: vi.fn(() => 'mock-short-id'),
isValidUuid: vi.fn((v: string) =>
@@ -90,16 +61,9 @@ vi.mock('@/lib/core/utils/uuid', () => ({
),
}))
-vi.mock('@/lib/auth', () => ({
-getSession: mocks.mockGetSession,
-}))
+vi.mock('@/lib/auth', () => authMock)
-vi.mock('@/lib/auth/hybrid', () => ({
-AuthType: { SESSION: 'session', API_KEY: 'api_key', INTERNAL_JWT: 'internal_jwt' },
-checkHybridAuth: mocks.mockCheckHybridAuth,
-checkSessionOrInternalAuth: mocks.mockCheckSessionOrInternalAuth,
-checkInternalAuth: mocks.mockCheckInternalAuth,
-}))
+vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)
vi.mock('@/app/api/files/authorization', () => ({
verifyFileAccess: mocks.mockVerifyFileAccess,
@@ -151,8 +115,8 @@ describe('File Delete API Route', () => {
randomUUID: vi.fn().mockReturnValue('mock-uuid-1234-5678'),
})
-mocks.mockGetSession.mockResolvedValue({ user: { id: 'test-user-id' } })
-mocks.mockCheckSessionOrInternalAuth.mockResolvedValue({
+authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'test-user-id' } })
+hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValue({
success: true,
userId: 'test-user-id',
error: undefined,

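The test above mocks `@sim/utils/id` with `generateId`, `generateShortId`, and `isValidUuid`. The package itself is not shown in this diff; a sketch of the API surface the tests assume (the implementations here are guesses for illustration, not the real `@sim/utils` code):

```typescript
import { randomUUID } from 'node:crypto'

// Assumed: generateId is a thin wrapper over a UUID v4 generator.
export function generateId(): string {
  return randomUUID()
}

// Assumed: a compact alphanumeric id; the real helper may use another alphabet or length.
export function generateShortId(length = 8): string {
  const alphabet = 'abcdefghijklmnopqrstuvwxyz0123456789'
  let out = ''
  for (let i = 0; i < length; i++) {
    out += alphabet[Math.floor(Math.random() * alphabet.length)]
  }
  return out
}

// Matches the 8-4-4-4-12 hex shape the test mock's validity regex suggests.
export function isValidUuid(value: string): boolean {
  return /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i.test(value)
}
```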

@@ -3,7 +3,12 @@
*
* @vitest-environment node
*/
-import { createMockRequest } from '@sim/testing'
+import {
+createMockRequest,
+hybridAuthMock,
+hybridAuthMockFns,
+inputValidationMock,
+} from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
@@ -23,9 +28,6 @@ const {
mockFsWriteFile,
mockJoin,
mockGetSession,
-mockCheckInternalAuth,
-mockCheckHybridAuth,
-mockCheckSessionOrInternalAuth,
actualPath,
} = vi.hoisted(() => {
// eslint-disable-next-line @typescript-eslint/no-require-imports
@@ -57,9 +59,6 @@ const {
return actualPath.join(...args)
}),
mockGetSession: vi.fn(),
-mockCheckInternalAuth: vi.fn(),
-mockCheckHybridAuth: vi.fn(),
-mockCheckSessionOrInternalAuth: vi.fn(),
actualPath,
}
})
@@ -105,17 +104,9 @@ vi.mock('@/lib/auth', () => ({
signUp: vi.fn(),
}))
-vi.mock('@/lib/auth/hybrid', () => ({
-AuthType: { SESSION: 'session', API_KEY: 'api_key', INTERNAL_JWT: 'internal_jwt' },
-checkInternalAuth: mockCheckInternalAuth,
-checkHybridAuth: mockCheckHybridAuth,
-checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
-}))
+vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)
-vi.mock('@/lib/core/security/input-validation.server', () => ({
-secureFetchWithPinnedIP: vi.fn(),
-validateUrlWithDNS: vi.fn(),
-}))
+vi.mock('@/lib/core/security/input-validation.server', () => inputValidationMock)
vi.mock('@/lib/core/utils/logging', () => ({
sanitizeUrlForLog: vi.fn((url: string) => url),
@@ -165,19 +156,19 @@ function setupFileApiMocks(
mockGetSession.mockResolvedValue(null)
}
-mockCheckInternalAuth.mockResolvedValue({
+hybridAuthMockFns.mockCheckInternalAuth.mockResolvedValue({
success: authenticated,
userId: authenticated ? 'test-user-id' : undefined,
error: authenticated ? undefined : 'Unauthorized',
})
-mockCheckHybridAuth.mockResolvedValue({
+hybridAuthMockFns.mockCheckHybridAuth.mockResolvedValue({
success: authenticated,
userId: authenticated ? 'test-user-id' : undefined,
error: authenticated ? undefined : 'Unauthorized',
})
-mockCheckSessionOrInternalAuth.mockResolvedValue({
+hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValue({
success: authenticated,
userId: authenticated ? 'test-user-id' : undefined,
error: authenticated ? undefined : 'Unauthorized',

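The test files in this diff all follow the same migration: per-file `vi.hoisted` mocks are replaced by shared `authMock`/`authMockFns` (and hybrid, input-validation, schema equivalents) from `@sim/testing`. The likely design is one set of mock functions exported twice, once as the module-mock object and once as a handle bag for per-test configuration. A self-contained sketch of that wiring (a hand-rolled stub stands in for `vi.fn()`, names mirror the diff, and the real package may differ):

```typescript
// Minimal stand-in for vi.fn() so this sketch runs without vitest.
type Mock<T> = (() => Promise<T | undefined>) & { mockResolvedValue(v: T): void }

function createMock<T>(): Mock<T> {
  let value: T | undefined
  const fn = (() => Promise.resolve(value)) as Mock<T>
  fn.mockResolvedValue = (v) => {
    value = v
  }
  return fn
}

// Handle bag tests configure per-case,
// e.g. authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'test-user-id' } })
export const authMockFns = {
  mockGetSession: createMock<{ user: { id: string } } | null>(),
}

// Module-mock object wired to the same functions,
// passed to vi.mock('@/lib/auth', () => authMock) as in the hunks above.
export const authMock = {
  getSession: authMockFns.mockGetSession,
}
```

Because both exports share the same function objects, configuring `authMockFns` in a test is immediately visible to route code importing the mocked `@/lib/auth`.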

@@ -4,11 +4,11 @@
* @vitest-environment node
*/
+import { authMock, authMockFns } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
const {
-mockGetSession,
mockVerifyFileAccess,
mockVerifyWorkspaceFileAccess,
mockUseBlobStorage,
@@ -25,7 +25,6 @@ const {
mockGetStorageProviderUploads,
mockIsUsingCloudStorageUploads,
} = vi.hoisted(() => ({
-mockGetSession: vi.fn(),
mockVerifyFileAccess: vi.fn().mockResolvedValue(true),
mockVerifyWorkspaceFileAccess: vi.fn().mockResolvedValue(true),
mockUseBlobStorage: { value: false },
@@ -46,9 +45,7 @@ const {
mockIsUsingCloudStorageUploads: vi.fn(),
}))
-vi.mock('@/lib/auth', () => ({
-getSession: mockGetSession,
-}))
+vi.mock('@/lib/auth', () => authMock)
vi.mock('@/app/api/files/authorization', () => ({
verifyFileAccess: mockVerifyFileAccess,
@@ -108,9 +105,9 @@ function setupFileApiMocks(
const { authenticated = true, storageProvider = 's3', cloudEnabled = true } = options
if (authenticated) {
-mockGetSession.mockResolvedValue({ user: defaultMockUser })
+authMockFns.mockGetSession.mockResolvedValue({ user: defaultMockUser })
} else {
-mockGetSession.mockResolvedValue(null)
+authMockFns.mockGetSession.mockResolvedValue(null)
}
const useBlobStorage = storageProvider === 'blob' && cloudEnabled

Some files were not shown because too many files have changed in this diff.